* Re: [PATCH 6/7] ext4: Convert to buffered_write_operations
From: kernel test robot @ 2024-05-29 14:56 UTC (permalink / raw)
To: oe-kbuild; +Cc: lkp, Dan Carpenter
BCC: lkp@intel.com
CC: oe-kbuild-all@lists.linux.dev
In-Reply-To: <20240528164829.2105447-7-willy@infradead.org>
References: <20240528164829.2105447-7-willy@infradead.org>
TO: "Matthew Wilcox (Oracle)" <willy@infradead.org>
TO: Christoph Hellwig <hch@lst.de>
CC: "Matthew Wilcox (Oracle)" <willy@infradead.org>
CC: linux-fsdevel@vger.kernel.org
CC: linux-ext4@vger.kernel.org
Hi Matthew,
kernel test robot noticed the following build warnings:
[auto build test WARNING on linus/master]
[also build test WARNING on v6.10-rc1 next-20240529]
[cannot apply to tytso-ext4/dev jack-fs/for_next hch-configfs/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Matthew-Wilcox-Oracle/fs-Introduce-buffered_write_operations/20240529-005213
base: linus/master
patch link: https://lore.kernel.org/r/20240528164829.2105447-7-willy%40infradead.org
patch subject: [PATCH 6/7] ext4: Convert to buffered_write_operations
:::::: branch date: 22 hours ago
:::::: commit date: 22 hours ago
config: openrisc-randconfig-r081-20240529 (https://download.01.org/0day-ci/archive/20240529/202405292201.wdsK3rFE-lkp@intel.com/config)
compiler: or1k-linux-gcc (GCC) 13.2.0
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <error27@gmail.com>
| Closes: https://lore.kernel.org/r/202405292201.wdsK3rFE-lkp@intel.com/
smatch warnings:
fs/ext4/inline.c:652 ext4_convert_inline_data_to_extent() warn: passing zero to 'ERR_PTR'
fs/ext4/inline.c:951 ext4_da_write_inline_data_begin() warn: passing zero to 'ERR_PTR'
fs/ext4/inline.c:956 ext4_da_write_inline_data_begin() error: uninitialized symbol 'folio'.
vim +/ERR_PTR +652 fs/ext4/inline.c
46c7f254543ded Tao Ma 2012-12-10 540
8ca000469995a1 Matthew Wilcox (Oracle 2024-05-28 541) /* Returns NULL on success, ERR_PTR on failure */
8ca000469995a1 Matthew Wilcox (Oracle 2024-05-28 542) static void *ext4_convert_inline_data_to_extent(struct address_space *mapping,
832ee62d992d9b Matthew Wilcox (Oracle 2022-02-22 543) struct inode *inode)
f19d5870cbf72d Tao Ma 2012-12-10 544 {
c755e251357a0c Theodore Ts'o 2017-01-11 545 int ret, needed_blocks, no_expand;
f19d5870cbf72d Tao Ma 2012-12-10 546 handle_t *handle = NULL;
f19d5870cbf72d Tao Ma 2012-12-10 547 int retries = 0, sem_held = 0;
83eba701cf6e58 Matthew Wilcox 2023-03-24 548 struct folio *folio = NULL;
f19d5870cbf72d Tao Ma 2012-12-10 549 unsigned from, to;
f19d5870cbf72d Tao Ma 2012-12-10 550 struct ext4_iloc iloc;
f19d5870cbf72d Tao Ma 2012-12-10 551
f19d5870cbf72d Tao Ma 2012-12-10 552 if (!ext4_has_inline_data(inode)) {
f19d5870cbf72d Tao Ma 2012-12-10 553 /*
f19d5870cbf72d Tao Ma 2012-12-10 554 * clear the flag so that no new write
f19d5870cbf72d Tao Ma 2012-12-10 555 * will trap here again.
f19d5870cbf72d Tao Ma 2012-12-10 556 */
f19d5870cbf72d Tao Ma 2012-12-10 557 ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
8ca000469995a1 Matthew Wilcox (Oracle 2024-05-28 558) return NULL;
f19d5870cbf72d Tao Ma 2012-12-10 559 }
f19d5870cbf72d Tao Ma 2012-12-10 560
f19d5870cbf72d Tao Ma 2012-12-10 561 needed_blocks = ext4_writepage_trans_blocks(inode);
f19d5870cbf72d Tao Ma 2012-12-10 562
f19d5870cbf72d Tao Ma 2012-12-10 563 ret = ext4_get_inode_loc(inode, &iloc);
f19d5870cbf72d Tao Ma 2012-12-10 564 if (ret)
8ca000469995a1 Matthew Wilcox (Oracle 2024-05-28 565) return ERR_PTR(ret);
f19d5870cbf72d Tao Ma 2012-12-10 566
f19d5870cbf72d Tao Ma 2012-12-10 567 retry:
9924a92a8c2175 Theodore Ts'o 2013-02-08 568 handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks);
f19d5870cbf72d Tao Ma 2012-12-10 569 if (IS_ERR(handle)) {
f19d5870cbf72d Tao Ma 2012-12-10 570 ret = PTR_ERR(handle);
f19d5870cbf72d Tao Ma 2012-12-10 571 handle = NULL;
f19d5870cbf72d Tao Ma 2012-12-10 572 goto out;
f19d5870cbf72d Tao Ma 2012-12-10 573 }
f19d5870cbf72d Tao Ma 2012-12-10 574
f19d5870cbf72d Tao Ma 2012-12-10 575 /* We cannot recurse into the filesystem as the transaction is already
f19d5870cbf72d Tao Ma 2012-12-10 576 * started */
83eba701cf6e58 Matthew Wilcox 2023-03-24 577 folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
83eba701cf6e58 Matthew Wilcox 2023-03-24 578 mapping_gfp_mask(mapping));
7fa8a8ee9400fe Linus Torvalds 2023-04-27 579 if (IS_ERR(folio)) {
7fa8a8ee9400fe Linus Torvalds 2023-04-27 580 ret = PTR_ERR(folio);
7fa8a8ee9400fe Linus Torvalds 2023-04-27 581 goto out_nofolio;
f19d5870cbf72d Tao Ma 2012-12-10 582 }
f19d5870cbf72d Tao Ma 2012-12-10 583
c755e251357a0c Theodore Ts'o 2017-01-11 584 ext4_write_lock_xattr(inode, &no_expand);
f19d5870cbf72d Tao Ma 2012-12-10 585 sem_held = 1;
f19d5870cbf72d Tao Ma 2012-12-10 586 /* If some one has already done this for us, just exit. */
f19d5870cbf72d Tao Ma 2012-12-10 587 if (!ext4_has_inline_data(inode)) {
f19d5870cbf72d Tao Ma 2012-12-10 588 ret = 0;
f19d5870cbf72d Tao Ma 2012-12-10 589 goto out;
f19d5870cbf72d Tao Ma 2012-12-10 590 }
f19d5870cbf72d Tao Ma 2012-12-10 591
f19d5870cbf72d Tao Ma 2012-12-10 592 from = 0;
f19d5870cbf72d Tao Ma 2012-12-10 593 to = ext4_get_inline_size(inode);
83eba701cf6e58 Matthew Wilcox 2023-03-24 594 if (!folio_test_uptodate(folio)) {
6b87fbe4155007 Matthew Wilcox 2023-03-24 595 ret = ext4_read_inline_folio(inode, folio);
f19d5870cbf72d Tao Ma 2012-12-10 596 if (ret < 0)
f19d5870cbf72d Tao Ma 2012-12-10 597 goto out;
f19d5870cbf72d Tao Ma 2012-12-10 598 }
f19d5870cbf72d Tao Ma 2012-12-10 599
f19d5870cbf72d Tao Ma 2012-12-10 600 ret = ext4_destroy_inline_data_nolock(handle, inode);
f19d5870cbf72d Tao Ma 2012-12-10 601 if (ret)
f19d5870cbf72d Tao Ma 2012-12-10 602 goto out;
f19d5870cbf72d Tao Ma 2012-12-10 603
705965bd6dfadc Jan Kara 2016-03-08 604 if (ext4_should_dioread_nolock(inode)) {
83eba701cf6e58 Matthew Wilcox 2023-03-24 605 ret = __block_write_begin(&folio->page, from, to,
705965bd6dfadc Jan Kara 2016-03-08 606 ext4_get_block_unwritten);
705965bd6dfadc Jan Kara 2016-03-08 607 } else
83eba701cf6e58 Matthew Wilcox 2023-03-24 608 ret = __block_write_begin(&folio->page, from, to, ext4_get_block);
f19d5870cbf72d Tao Ma 2012-12-10 609
f19d5870cbf72d Tao Ma 2012-12-10 610 if (!ret && ext4_should_journal_data(inode)) {
83eba701cf6e58 Matthew Wilcox 2023-03-24 611 ret = ext4_walk_page_buffers(handle, inode,
83eba701cf6e58 Matthew Wilcox 2023-03-24 612 folio_buffers(folio), from, to,
83eba701cf6e58 Matthew Wilcox 2023-03-24 613 NULL, do_journal_get_write_access);
f19d5870cbf72d Tao Ma 2012-12-10 614 }
f19d5870cbf72d Tao Ma 2012-12-10 615
f19d5870cbf72d Tao Ma 2012-12-10 616 if (ret) {
83eba701cf6e58 Matthew Wilcox 2023-03-24 617 folio_unlock(folio);
83eba701cf6e58 Matthew Wilcox 2023-03-24 618 folio_put(folio);
83eba701cf6e58 Matthew Wilcox 2023-03-24 619 folio = NULL;
f19d5870cbf72d Tao Ma 2012-12-10 620 ext4_orphan_add(handle, inode);
c755e251357a0c Theodore Ts'o 2017-01-11 621 ext4_write_unlock_xattr(inode, &no_expand);
f19d5870cbf72d Tao Ma 2012-12-10 622 sem_held = 0;
f19d5870cbf72d Tao Ma 2012-12-10 623 ext4_journal_stop(handle);
f19d5870cbf72d Tao Ma 2012-12-10 624 handle = NULL;
f19d5870cbf72d Tao Ma 2012-12-10 625 ext4_truncate_failed_write(inode);
f19d5870cbf72d Tao Ma 2012-12-10 626 /*
f19d5870cbf72d Tao Ma 2012-12-10 627 * If truncate failed early the inode might
f19d5870cbf72d Tao Ma 2012-12-10 628 * still be on the orphan list; we need to
f19d5870cbf72d Tao Ma 2012-12-10 629 * make sure the inode is removed from the
f19d5870cbf72d Tao Ma 2012-12-10 630 * orphan list in that case.
f19d5870cbf72d Tao Ma 2012-12-10 631 */
f19d5870cbf72d Tao Ma 2012-12-10 632 if (inode->i_nlink)
f19d5870cbf72d Tao Ma 2012-12-10 633 ext4_orphan_del(NULL, inode);
f19d5870cbf72d Tao Ma 2012-12-10 634 }
f19d5870cbf72d Tao Ma 2012-12-10 635
f19d5870cbf72d Tao Ma 2012-12-10 636 if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
f19d5870cbf72d Tao Ma 2012-12-10 637 goto retry;
f19d5870cbf72d Tao Ma 2012-12-10 638
83eba701cf6e58 Matthew Wilcox 2023-03-24 639 if (folio)
83eba701cf6e58 Matthew Wilcox 2023-03-24 640 block_commit_write(&folio->page, from, to);
f19d5870cbf72d Tao Ma 2012-12-10 641 out:
83eba701cf6e58 Matthew Wilcox 2023-03-24 642 if (folio) {
83eba701cf6e58 Matthew Wilcox 2023-03-24 643 folio_unlock(folio);
83eba701cf6e58 Matthew Wilcox 2023-03-24 644 folio_put(folio);
f19d5870cbf72d Tao Ma 2012-12-10 645 }
7fa8a8ee9400fe Linus Torvalds 2023-04-27 646 out_nofolio:
f19d5870cbf72d Tao Ma 2012-12-10 647 if (sem_held)
c755e251357a0c Theodore Ts'o 2017-01-11 648 ext4_write_unlock_xattr(inode, &no_expand);
f19d5870cbf72d Tao Ma 2012-12-10 649 if (handle)
f19d5870cbf72d Tao Ma 2012-12-10 650 ext4_journal_stop(handle);
f19d5870cbf72d Tao Ma 2012-12-10 651 brelse(iloc.bh);
8ca000469995a1 Matthew Wilcox (Oracle 2024-05-28 @652) return ERR_PTR(ret);
f19d5870cbf72d Tao Ma 2012-12-10 653 }
f19d5870cbf72d Tao Ma 2012-12-10 654
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [PATCH v6 03/11] filemap: allocate mapping_min_order folios in the page cache
From: Pankaj Raghav (Samsung) @ 2024-05-29 13:45 UTC (permalink / raw)
To: david, chandan.babu, akpm, brauner, willy, djwong
Cc: linux-kernel, hare, john.g.garry, gost.dev, yang, p.raghav, cl,
linux-xfs, hch, mcgrof, linux-mm, linux-fsdevel
From: Pankaj Raghav <p.raghav@samsung.com>
filemap_create_folio() and do_read_cache_folio() were always allocating
folios of order 0. __filemap_get_folio() tried to allocate higher-order
folios when fgp_flags had a higher-order hint set, but it would fall back
to an order-0 folio if the higher-order memory allocation failed.
Supporting mapping_min_order implies that we guarantee each folio in the
page cache has at least an order of mapping_min_order. When adding new
folios to the page cache we must also ensure the index used is aligned to
the mapping_min_order as the page cache requires the index to be aligned
to the order of the folio.
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Co-developed-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
include/linux/pagemap.h | 20 ++++++++++++++++++++
mm/filemap.c | 24 +++++++++++++++++-------
2 files changed, 37 insertions(+), 7 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 228275e7049f..899b8d751768 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -439,6 +439,26 @@ unsigned int mapping_min_folio_order(const struct address_space *mapping)
return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}
+static inline unsigned long mapping_min_folio_nrpages(struct address_space *mapping)
+{
+ return 1UL << mapping_min_folio_order(mapping);
+}
+
+/**
+ * mapping_align_start_index() - Align starting index based on the min
+ * folio order of the page cache.
+ * @mapping: The address_space.
+ *
+ * Ensure the index used is aligned to the minimum folio order when adding
+ * new folios to the page cache by rounding down to the nearest minimum
+ * folio number of pages.
+ */
+static inline pgoff_t mapping_align_start_index(struct address_space *mapping,
+ pgoff_t index)
+{
+ return round_down(index, mapping_min_folio_nrpages(mapping));
+}
+
/*
* Large folio support currently depends on THP. These dependencies are
* being worked on but are not yet fixed.
diff --git a/mm/filemap.c b/mm/filemap.c
index 308714a44a0f..0914ef2e8256 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -859,6 +859,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
mapping_set_update(&xas, mapping);
VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
@@ -1919,8 +1921,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
+ index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
gfp |= __GFP_WRITE;
@@ -1958,7 +1962,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2447,13 +2451,16 @@ static int filemap_update_page(struct kiocb *iocb,
}
static int filemap_create_folio(struct file *file,
- struct address_space *mapping, pgoff_t index,
+ struct address_space *mapping, loff_t pos,
struct folio_batch *fbatch)
{
struct folio *folio;
int error;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t index;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ min_order);
if (!folio)
return -ENOMEM;
@@ -2471,6 +2478,8 @@ static int filemap_create_folio(struct file *file,
* well to keep locking rules simple.
*/
filemap_invalidate_lock_shared(mapping);
+ /* index in PAGE units but aligned to min_order number of pages. */
+ index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
error = filemap_add_folio(mapping, folio, index,
mapping_gfp_constraint(mapping, GFP_KERNEL));
if (error == -EEXIST)
@@ -2531,8 +2540,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- err = filemap_create_folio(filp, mapping,
- iocb->ki_pos >> PAGE_SHIFT, fbatch);
+ err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
if (err == AOP_TRUNCATED_PAGE)
goto retry;
return err;
@@ -3748,9 +3756,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
+ index = mapping_align_start_index(mapping, index);
err = filemap_add_folio(mapping, folio, index, gfp);
if (unlikely(err)) {
folio_put(folio);
--
2.34.1
* [PATCH v6 02/11] fs: Allow fine-grained control of folio sizes
From: Pankaj Raghav (Samsung) @ 2024-05-29 13:45 UTC (permalink / raw)
To: david, chandan.babu, akpm, brauner, willy, djwong
Cc: linux-kernel, hare, john.g.garry, gost.dev, yang, p.raghav, cl,
linux-xfs, hch, mcgrof, linux-mm, linux-fsdevel
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
We need filesystems to be able to communicate acceptable folio sizes
to the pagecache for a variety of uses (e.g. large block sizes).
Support a range of folio sizes between order-0 and order-31.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
include/linux/pagemap.h | 86 ++++++++++++++++++++++++++++++++++-------
mm/filemap.c | 6 +--
mm/readahead.c | 4 +-
3 files changed, 77 insertions(+), 19 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8f09ed4a4451..228275e7049f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,14 +204,21 @@ enum mapping_flags {
AS_EXITING = 4, /* final truncate in progress */
/* writeback related tags are not used */
AS_NO_WRITEBACK_TAGS = 5,
- AS_LARGE_FOLIO_SUPPORT = 6,
- AS_RELEASE_ALWAYS, /* Call ->release_folio(), even if no private data */
- AS_STABLE_WRITES, /* must wait for writeback before modifying
+ AS_RELEASE_ALWAYS = 6, /* Call ->release_folio(), even if no private data */
+ AS_STABLE_WRITES = 7, /* must wait for writeback before modifying
folio contents */
- AS_UNMOVABLE, /* The mapping cannot be moved, ever */
- AS_INACCESSIBLE, /* Do not attempt direct R/W access to the mapping */
+ AS_UNMOVABLE = 8, /* The mapping cannot be moved, ever */
+ AS_INACCESSIBLE = 9, /* Do not attempt direct R/W access to the mapping */
+ /* Bits 16-25 are used for FOLIO_ORDER */
+ AS_FOLIO_ORDER_BITS = 5,
+ AS_FOLIO_ORDER_MIN = 16,
+ AS_FOLIO_ORDER_MAX = AS_FOLIO_ORDER_MIN + AS_FOLIO_ORDER_BITS,
};
+#define AS_FOLIO_ORDER_MASK ((1u << AS_FOLIO_ORDER_BITS) - 1)
+#define AS_FOLIO_ORDER_MIN_MASK (AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MIN)
+#define AS_FOLIO_ORDER_MAX_MASK (AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MAX)
+
/**
* mapping_set_error - record a writeback error in the address_space
* @mapping: the mapping in which an error should be set
@@ -360,9 +367,49 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
#define MAX_PAGECACHE_ORDER 8
#endif
+/*
+ * mapping_set_folio_order_range() - Set the orders supported by a file.
+ * @mapping: The address space of the file.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between @min-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size (min) and maximum size (max) of folio the VFS
+ * can use to cache the contents of the file. This should only be used
+ * if the filesystem needs special handling of folio sizes (ie there is
+ * something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_order_range(struct address_space *mapping,
+ unsigned int min,
+ unsigned int max)
+{
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return;
+
+ if (min > MAX_PAGECACHE_ORDER)
+ min = MAX_PAGECACHE_ORDER;
+ if (max > MAX_PAGECACHE_ORDER)
+ max = MAX_PAGECACHE_ORDER;
+ if (max < min)
+ max = min;
+
+ mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+ (min << AS_FOLIO_ORDER_MIN) | (max << AS_FOLIO_ORDER_MAX);
+}
+
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+ unsigned int min)
+{
+ mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+}
+
/**
* mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address space of the file.
*
* The filesystem should call this function in its inode constructor to
* indicate that the VFS can use large folios to cache the contents of
@@ -373,7 +420,23 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
*/
static inline void mapping_set_large_folios(struct address_space *mapping)
{
- __set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline
+unsigned int mapping_max_folio_order(const struct address_space *mapping)
+{
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return 0;
+ return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline
+unsigned int mapping_min_folio_order(const struct address_space *mapping)
+{
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return 0;
+ return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}
/*
@@ -382,16 +445,13 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
*/
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
- return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ return mapping_max_folio_order(mapping) > 0;
}
/* Return the maximum folio size for this pagecache mapping, in bytes. */
-static inline size_t mapping_max_folio_size(struct address_space *mapping)
+static inline size_t mapping_max_folio_size(const struct address_space *mapping)
{
- if (mapping_large_folio_support(mapping))
- return PAGE_SIZE << MAX_PAGECACHE_ORDER;
- return PAGE_SIZE;
+ return PAGE_SIZE << mapping_max_folio_order(mapping);
}
static inline int filemap_nr_thps(struct address_space *mapping)
diff --git a/mm/filemap.c b/mm/filemap.c
index ba06237b942d..308714a44a0f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1933,10 +1933,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
fgp_flags |= FGP_LOCK;
- if (!mapping_large_folio_support(mapping))
- order = 0;
- if (order > MAX_PAGECACHE_ORDER)
- order = MAX_PAGECACHE_ORDER;
+ if (order > mapping_max_folio_order(mapping))
+ order = mapping_max_folio_order(mapping);
/* If we're not aligned, allocate a smaller folio */
if (index & ((1UL << order) - 1))
order = __ffs(index);
diff --git a/mm/readahead.c b/mm/readahead.c
index 75e934a1fd78..da34b28da02c 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -504,9 +504,9 @@ void page_cache_ra_order(struct readahead_control *ractl,
limit = min(limit, index + ra->size - 1);
- if (new_order < MAX_PAGECACHE_ORDER) {
+ if (new_order < mapping_max_folio_order(mapping)) {
new_order += 2;
- new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+ new_order = min(mapping_max_folio_order(mapping), new_order);
new_order = min_t(unsigned int, new_order, ilog2(ra->size));
}
--
2.34.1
* Re: [amir73il:sb_write_barrier] [fanotify] 9d1fd61f1d: unixbench.throughput -7.9% regression
From: Amir Goldstein @ 2024-05-29 11:17 UTC (permalink / raw)
To: Jan Kara, oe-lkp; +Cc: lkp, kernel test robot
On Wed, May 29, 2024 at 11:26 AM kernel test robot
<oliver.sang@intel.com> wrote:
>
>
>
> Hello,
>
> kernel test robot noticed a -7.9% regression of unixbench.throughput on:
>
>
> commit: 9d1fd61f1d9bb74e44bdcc8767ba7008a08c6075 ("fanotify: pass optional file access range in pre-content event")
> https://github.com/amir73il/linux sb_write_barrier
>
Jan,
I speculate that the regression is due to the fact that we store and pass the
path information in struct file_range on the stack before the optimizations
in fsnotify_parent(), so rw_verify_area() pays some price for the stores
and __fsnotify_parent() pays a bigger price for the fetches.
Luckily, we already have a way to check
fsnotify_sb_has_priority_watchers(inode->i_sb, FSNOTIFY_PRIO_PRE_CONTENT),
so now I have used it to optimize out the fsnotify_file_range() inline
code entirely.
Oliver,
Can you please re-test with the fixed branch (also rebased on v6.10-rc1):
* a82fd282befc - (fan_pre_content) fanotify: report file range info
with pre-content events
* f301cd18006c - fanotify: rename a misnamed constant
* 64108c0b47db - fanotify: pass optional file access range in pre-content event
* 94167e071109 - fanotify: introduce FAN_PRE_MODIFY permission event
* 68e04c2451ba - fanotify: introduce FAN_PRE_ACCESS permission event
* 83af0c89527a - fsnotify: generate pre-content permission event on exec
* aca408421327 - fsnotify: generate pre-content permission event on open
* 93656e196b00 - fsnotify: introduce pre-content permission event
The optimization was done in the first commit (fsnotify: introduce
pre-content permission event), but it impacts the regressing commit
(fanotify: pass optional file access range in pre-content event).
There is no need to test all the middle commits.
Thanks,
Amir.
> testcase: unixbench
> test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
> parameters:
>
> runtime: 300s
> nr_task: 100%
> test: fsbuffer-w
> cpufreq_governor: performance
>
>
>
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <oliver.sang@intel.com>
> | Closes: https://lore.kernel.org/oe-lkp/202405291640.2016ebfe-oliver.sang@intel.com
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> The kernel config and materials to reproduce are available at:
> https://download.01.org/0day-ci/archive/20240529/202405291640.2016ebfe-oliver.sang@intel.com
>
> =========================================================================================
> compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
> gcc-13/performance/x86_64-rhel-8.3/100%/debian-12-x86_64-20240206.cgz/300s/lkp-spr-r02/fsbuffer-w/unixbench
>
> commit:
> 00c423c0d8 ("fanotify: introduce FAN_PRE_MODIFY permission event")
> 9d1fd61f1d ("fanotify: pass optional file access range in pre-content event")
>
> 00c423c0d82eabad 9d1fd61f1d9bb74e44bdcc8767b
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 1.23e+08 -7.9% 1.133e+08 unixbench.throughput
> 6169 -7.7% 5694 unixbench.time.user_time
> 4.566e+10 -7.9% 4.206e+10 unixbench.workload
> 1.513e+11 -4.5% 1.445e+11 perf-stat.i.branch-instructions
> 6891152 +4.8% 7221484 perf-stat.i.branch-misses
> 29764445 ± 2% -7.4% 27565609 ± 3% perf-stat.i.cache-references
> 0.91 +2.0% 0.93 perf-stat.i.cpi
> 7.187e+11 -2.7% 6.996e+11 perf-stat.i.instructions
> 1.26 -2.6% 1.23 perf-stat.i.ipc
> 0.00 +0.0 0.01 perf-stat.overall.branch-miss-rate%
> 0.73 +2.7% 0.75 perf-stat.overall.cpi
> 1.37 -2.6% 1.34 perf-stat.overall.ipc
> 5828 +5.7% 6162 perf-stat.overall.path-length
> 1.505e+11 -4.5% 1.437e+11 perf-stat.ps.branch-instructions
> 6873687 +4.8% 7203107 perf-stat.ps.branch-misses
> 29721957 ± 2% -7.3% 27538369 ± 3% perf-stat.ps.cache-references
> 7.148e+11 -2.6% 6.96e+11 perf-stat.ps.instructions
> 2.662e+14 -2.6% 2.592e+14 perf-stat.total.instructions
> 57.79 -2.0 55.78 perf-profile.calltrace.cycles-pp.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 37.58 -2.0 35.63 perf-profile.calltrace.cycles-pp.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
> 13.06 -1.0 12.04 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
> 13.81 -1.0 12.83 perf-profile.calltrace.cycles-pp.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
> 12.72 -0.9 11.78 perf-profile.calltrace.cycles-pp.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write
> 7.00 -0.5 6.47 perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
> 6.53 -0.5 6.02 perf-profile.calltrace.cycles-pp.filemap_get_entry.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
> 5.36 -0.5 4.89 perf-profile.calltrace.cycles-pp.simple_write_end.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
> 3.66 -0.4 3.28 perf-profile.calltrace.cycles-pp.security_file_permission.rw_verify_area.vfs_write.ksys_write.do_syscall_64
> 2.68 -0.3 2.36 perf-profile.calltrace.cycles-pp.apparmor_file_permission.security_file_permission.rw_verify_area.vfs_write.ksys_write
> 6.57 -0.2 6.34 perf-profile.calltrace.cycles-pp.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
> 2.36 ± 2% -0.2 2.18 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
> 1.83 -0.2 1.66 perf-profile.calltrace.cycles-pp.folio_unlock.simple_write_end.generic_perform_write.generic_file_write_iter.vfs_write
> 2.92 -0.2 2.76 perf-profile.calltrace.cycles-pp.down_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
> 2.65 -0.2 2.49 perf-profile.calltrace.cycles-pp.xas_load.filemap_get_entry.__filemap_get_folio.simple_write_begin.generic_perform_write
> 3.95 -0.1 3.83 perf-profile.calltrace.cycles-pp.security_inode_need_killpriv.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter.vfs_write
> 1.62 -0.1 1.50 perf-profile.calltrace.cycles-pp.up_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
> 0.74 -0.1 0.64 perf-profile.calltrace.cycles-pp.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 3.26 -0.1 3.17 perf-profile.calltrace.cycles-pp.cap_inode_need_killpriv.security_inode_need_killpriv.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter
> 3.57 -0.1 3.49 perf-profile.calltrace.cycles-pp.__fsnotify_parent.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 1.61 -0.1 1.53 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
> 0.93 -0.1 0.85 perf-profile.calltrace.cycles-pp.balance_dirty_pages_ratelimited_flags.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
> 1.05 -0.1 0.99 perf-profile.calltrace.cycles-pp.xas_descend.xas_load.filemap_get_entry.__filemap_get_folio.simple_write_begin
> 0.61 -0.1 0.55 perf-profile.calltrace.cycles-pp.w_test
> 0.64 -0.1 0.58 perf-profile.calltrace.cycles-pp.x64_sys_call.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
* [amir73il:sb_write_barrier] [fanotify] 9d1fd61f1d: unixbench.throughput -7.9% regression
@ 2024-05-29 8:25 9% kernel test robot
2024-05-29 11:17 0% ` Amir Goldstein
0 siblings, 1 reply; 200+ results
From: kernel test robot @ 2024-05-29 8:25 UTC (permalink / raw)
To: Amir Goldstein; +Cc: oe-lkp, lkp, oliver.sang
Hello,
kernel test robot noticed a -7.9% regression of unixbench.throughput on:
commit: 9d1fd61f1d9bb74e44bdcc8767ba7008a08c6075 ("fanotify: pass optional file access range in pre-content event")
https://github.com/amir73il/linux sb_write_barrier
testcase: unixbench
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
parameters:
runtime: 300s
nr_task: 100%
test: fsbuffer-w
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202405291640.2016ebfe-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240529/202405291640.2016ebfe-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-13/performance/x86_64-rhel-8.3/100%/debian-12-x86_64-20240206.cgz/300s/lkp-spr-r02/fsbuffer-w/unixbench
commit:
00c423c0d8 ("fanotify: introduce FAN_PRE_MODIFY permission event")
9d1fd61f1d ("fanotify: pass optional file access range in pre-content event")
00c423c0d82eabad 9d1fd61f1d9bb74e44bdcc8767b
---------------- ---------------------------
%stddev %change %stddev
\ | \
1.23e+08 -7.9% 1.133e+08 unixbench.throughput
6169 -7.7% 5694 unixbench.time.user_time
4.566e+10 -7.9% 4.206e+10 unixbench.workload
1.513e+11 -4.5% 1.445e+11 perf-stat.i.branch-instructions
6891152 +4.8% 7221484 perf-stat.i.branch-misses
29764445 ± 2% -7.4% 27565609 ± 3% perf-stat.i.cache-references
0.91 +2.0% 0.93 perf-stat.i.cpi
7.187e+11 -2.7% 6.996e+11 perf-stat.i.instructions
1.26 -2.6% 1.23 perf-stat.i.ipc
0.00 +0.0 0.01 perf-stat.overall.branch-miss-rate%
0.73 +2.7% 0.75 perf-stat.overall.cpi
1.37 -2.6% 1.34 perf-stat.overall.ipc
5828 +5.7% 6162 perf-stat.overall.path-length
1.505e+11 -4.5% 1.437e+11 perf-stat.ps.branch-instructions
6873687 +4.8% 7203107 perf-stat.ps.branch-misses
29721957 ± 2% -7.3% 27538369 ± 3% perf-stat.ps.cache-references
7.148e+11 -2.6% 6.96e+11 perf-stat.ps.instructions
2.662e+14 -2.6% 2.592e+14 perf-stat.total.instructions
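A quick sanity check on the path-length rows above (a sketch; it assumes the
usual lkp convention that path-length is total retired instructions divided by
the benchmark operation count, here unixbench.workload):

```python
# Hypothetical cross-check: derive perf-stat.overall.path-length from
# perf-stat.total.instructions and unixbench.workload, assuming
# path-length = instructions retired per benchmark operation.
def path_length(total_instructions, workload_ops):
    return total_instructions / workload_ops

base_pl = path_length(2.662e14, 4.566e10)     # close to the reported 5828
patched_pl = path_length(2.592e14, 4.206e10)  # close to the reported 6162
delta_pct = (patched_pl / base_pl - 1) * 100  # roughly +5.7%, as reported
```

The derived values land within rounding of the reported 5828 -> 6162 (+5.7%),
consistent with the commit adding work per write operation.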
57.79 -2.0 55.78 perf-profile.calltrace.cycles-pp.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
37.58 -2.0 35.63 perf-profile.calltrace.cycles-pp.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
13.06 -1.0 12.04 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
13.81 -1.0 12.83 perf-profile.calltrace.cycles-pp.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
12.72 -0.9 11.78 perf-profile.calltrace.cycles-pp.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write
7.00 -0.5 6.47 perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
6.53 -0.5 6.02 perf-profile.calltrace.cycles-pp.filemap_get_entry.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
5.36 -0.5 4.89 perf-profile.calltrace.cycles-pp.simple_write_end.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
3.66 -0.4 3.28 perf-profile.calltrace.cycles-pp.security_file_permission.rw_verify_area.vfs_write.ksys_write.do_syscall_64
2.68 -0.3 2.36 perf-profile.calltrace.cycles-pp.apparmor_file_permission.security_file_permission.rw_verify_area.vfs_write.ksys_write
6.57 -0.2 6.34 perf-profile.calltrace.cycles-pp.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
2.36 ± 2% -0.2 2.18 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.83 -0.2 1.66 perf-profile.calltrace.cycles-pp.folio_unlock.simple_write_end.generic_perform_write.generic_file_write_iter.vfs_write
2.92 -0.2 2.76 perf-profile.calltrace.cycles-pp.down_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
2.65 -0.2 2.49 perf-profile.calltrace.cycles-pp.xas_load.filemap_get_entry.__filemap_get_folio.simple_write_begin.generic_perform_write
3.95 -0.1 3.83 perf-profile.calltrace.cycles-pp.security_inode_need_killpriv.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter.vfs_write
1.62 -0.1 1.50 perf-profile.calltrace.cycles-pp.up_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
0.74 -0.1 0.64 perf-profile.calltrace.cycles-pp.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.26 -0.1 3.17 perf-profile.calltrace.cycles-pp.cap_inode_need_killpriv.security_inode_need_killpriv.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter
3.57 -0.1 3.49 perf-profile.calltrace.cycles-pp.__fsnotify_parent.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.61 -0.1 1.53 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.93 -0.1 0.85 perf-profile.calltrace.cycles-pp.balance_dirty_pages_ratelimited_flags.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
1.05 -0.1 0.99 perf-profile.calltrace.cycles-pp.xas_descend.xas_load.filemap_get_entry.__filemap_get_folio.simple_write_begin
0.61 -0.1 0.55 perf-profile.calltrace.cycles-pp.w_test
0.64 -0.1 0.58 perf-profile.calltrace.cycles-pp.x64_sys_call.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.87 -0.1 0.82 perf-profile.calltrace.cycles-pp.aa_file_perm.apparmor_file_permission.security_file_permission.rw_verify_area.vfs_write
2.50 -0.1 2.44 perf-profile.calltrace.cycles-pp.__vfs_getxattr.cap_inode_need_killpriv.security_inode_need_killpriv.file_remove_privs_flags.__generic_file_write_iter
0.62 -0.1 0.56 perf-profile.calltrace.cycles-pp.xas_start.xas_load.filemap_get_entry.__filemap_get_folio.simple_write_begin
0.74 -0.0 0.69 perf-profile.calltrace.cycles-pp.setattr_should_drop_suidgid.file_remove_privs_flags.__generic_file_write_iter.generic_file_write_iter.vfs_write
0.91 -0.0 0.86 perf-profile.calltrace.cycles-pp.folio_wait_stable.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
0.84 -0.0 0.79 perf-profile.calltrace.cycles-pp.xattr_resolve_name.__vfs_getxattr.cap_inode_need_killpriv.security_inode_need_killpriv.file_remove_privs_flags
0.68 -0.0 0.64 perf-profile.calltrace.cycles-pp.__cond_resched.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
0.74 -0.0 0.71 perf-profile.calltrace.cycles-pp.folio_mark_dirty.simple_write_end.generic_perform_write.generic_file_write_iter.vfs_write
0.62 -0.0 0.59 perf-profile.calltrace.cycles-pp.__cond_resched.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
0.97 +0.0 1.00 perf-profile.calltrace.cycles-pp.strcmp.__vfs_getxattr.cap_inode_need_killpriv.security_inode_need_killpriv.file_remove_privs_flags
0.91 +0.1 0.97 perf-profile.calltrace.cycles-pp.timestamp_truncate.inode_needs_update_time.file_update_time.__generic_file_write_iter.generic_file_write_iter
0.86 ± 3% +0.1 0.94 perf-profile.calltrace.cycles-pp.generic_write_check_limits.generic_write_checks.generic_file_write_iter.vfs_write.ksys_write
0.58 ± 2% +0.1 0.66 ± 7% perf-profile.calltrace.cycles-pp.ktime_get_coarse_real_ts64.inode_needs_update_time.file_update_time.__generic_file_write_iter.generic_file_write_iter
11.24 +0.1 11.36 perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
2.01 ± 2% +0.1 2.14 perf-profile.calltrace.cycles-pp.generic_write_checks.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
6.04 +0.2 6.24 perf-profile.calltrace.cycles-pp.fault_in_iov_iter_readable.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
5.17 +0.2 5.42 perf-profile.calltrace.cycles-pp.fault_in_readable.fault_in_iov_iter_readable.generic_perform_write.generic_file_write_iter.vfs_write
96.75 +0.3 97.03 perf-profile.calltrace.cycles-pp.write
2.57 +0.4 2.92 perf-profile.calltrace.cycles-pp.inode_needs_update_time.file_update_time.__generic_file_write_iter.generic_file_write_iter.vfs_write
3.20 +0.4 3.57 perf-profile.calltrace.cycles-pp.file_update_time.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
84.82 +1.1 85.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
83.38 +1.2 84.56 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
78.73 +1.5 80.20 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
74.54 +1.8 76.32 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +4.0 3.99 perf-profile.calltrace.cycles-pp.__fsnotify_parent.rw_verify_area.vfs_write.ksys_write.do_syscall_64
5.32 +4.2 9.48 perf-profile.calltrace.cycles-pp.rw_verify_area.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
58.42 -2.0 56.38 perf-profile.children.cycles-pp.generic_file_write_iter
38.46 -2.0 36.50 perf-profile.children.cycles-pp.generic_perform_write
13.99 -1.0 13.01 perf-profile.children.cycles-pp.simple_write_begin
13.11 -1.0 12.15 perf-profile.children.cycles-pp.__filemap_get_folio
7.23 -0.6 6.66 perf-profile.children.cycles-pp.entry_SYSCALL_64
7.12 -0.5 6.59 perf-profile.children.cycles-pp.copy_page_from_iter_atomic
6.73 -0.5 6.21 perf-profile.children.cycles-pp.filemap_get_entry
5.76 -0.5 5.26 perf-profile.children.cycles-pp.simple_write_end
4.05 -0.4 3.64 perf-profile.children.cycles-pp.security_file_permission
2.93 -0.3 2.59 perf-profile.children.cycles-pp.apparmor_file_permission
4.32 -0.3 4.04 perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
4.20 -0.3 3.92 perf-profile.children.cycles-pp.__cond_resched
6.91 -0.2 6.67 perf-profile.children.cycles-pp.file_remove_privs_flags
2.43 -0.2 2.24 perf-profile.children.cycles-pp.rcu_all_qs
3.10 -0.2 2.92 perf-profile.children.cycles-pp.xas_load
2.47 ± 2% -0.2 2.29 ± 2% perf-profile.children.cycles-pp.__fdget_pos
1.92 -0.2 1.74 perf-profile.children.cycles-pp.folio_unlock
3.11 -0.2 2.94 perf-profile.children.cycles-pp.down_write
4.18 -0.1 4.04 perf-profile.children.cycles-pp.security_inode_need_killpriv
1.68 -0.1 1.56 perf-profile.children.cycles-pp.up_write
3.48 -0.1 3.38 perf-profile.children.cycles-pp.cap_inode_need_killpriv
1.96 -0.1 1.87 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.28 -0.1 1.18 perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
0.92 -0.1 0.84 perf-profile.children.cycles-pp.w_test
3.14 -0.1 3.06 perf-profile.children.cycles-pp.__vfs_getxattr
1.00 -0.1 0.92 perf-profile.children.cycles-pp.aa_file_perm
1.29 -0.1 1.22 perf-profile.children.cycles-pp.xas_descend
0.76 -0.1 0.70 perf-profile.children.cycles-pp.x64_sys_call
0.87 -0.1 0.80 perf-profile.children.cycles-pp.setattr_should_drop_suidgid
1.07 -0.1 1.01 perf-profile.children.cycles-pp.xattr_resolve_name
1.10 -0.1 1.04 perf-profile.children.cycles-pp.folio_wait_stable
1.05 -0.1 1.00 perf-profile.children.cycles-pp.folio_mapping
0.73 -0.1 0.67 perf-profile.children.cycles-pp.xas_start
0.93 -0.1 0.88 perf-profile.children.cycles-pp.folio_mark_dirty
0.50 -0.0 0.46 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.60 -0.0 0.56 perf-profile.children.cycles-pp.inode_to_bdi
0.43 -0.0 0.39 perf-profile.children.cycles-pp.write@plt
0.36 -0.0 0.33 perf-profile.children.cycles-pp.amd_clear_divider
0.37 -0.0 0.35 perf-profile.children.cycles-pp.__x64_sys_write
0.33 -0.0 0.31 perf-profile.children.cycles-pp.noop_dirty_folio
0.36 -0.0 0.34 perf-profile.children.cycles-pp.is_bad_inode
0.24 -0.0 0.23 ± 2% perf-profile.children.cycles-pp.file_remove_privs
1.18 +0.0 1.21 perf-profile.children.cycles-pp.strcmp
1.02 +0.1 1.08 perf-profile.children.cycles-pp.timestamp_truncate
99.01 +0.1 99.09 perf-profile.children.cycles-pp.write
0.98 ± 3% +0.1 1.06 perf-profile.children.cycles-pp.generic_write_check_limits
0.68 ± 2% +0.1 0.77 ± 6% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
11.58 +0.1 11.69 perf-profile.children.cycles-pp.__generic_file_write_iter
2.36 ± 2% +0.1 2.50 perf-profile.children.cycles-pp.generic_write_checks
5.57 +0.2 5.75 perf-profile.children.cycles-pp.fault_in_readable
6.28 +0.2 6.49 perf-profile.children.cycles-pp.fault_in_iov_iter_readable
2.98 +0.4 3.33 perf-profile.children.cycles-pp.inode_needs_update_time
3.51 +0.4 3.89 perf-profile.children.cycles-pp.file_update_time
85.24 +1.1 86.31 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
84.05 +1.2 85.21 perf-profile.children.cycles-pp.do_syscall_64
79.32 +1.5 80.78 perf-profile.children.cycles-pp.ksys_write
75.49 +1.7 77.21 perf-profile.children.cycles-pp.vfs_write
3.64 +4.0 7.64 perf-profile.children.cycles-pp.__fsnotify_parent
5.68 +4.3 10.03 perf-profile.children.cycles-pp.rw_verify_area
6.96 -0.5 6.44 perf-profile.self.cycles-pp.copy_page_from_iter_atomic
6.52 -0.5 6.01 perf-profile.self.cycles-pp.write
6.92 -0.4 6.48 perf-profile.self.cycles-pp.vfs_write
3.59 -0.3 3.24 perf-profile.self.cycles-pp.filemap_get_entry
4.41 -0.3 4.09 perf-profile.self.cycles-pp.__filemap_get_folio
4.23 -0.3 3.95 perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
2.79 -0.3 2.52 perf-profile.self.cycles-pp.simple_write_end
1.76 -0.2 1.52 perf-profile.self.cycles-pp.apparmor_file_permission
2.32 ± 2% -0.2 2.16 ± 2% perf-profile.self.cycles-pp.__fdget_pos
1.79 -0.2 1.62 perf-profile.self.cycles-pp.folio_unlock
2.05 -0.2 1.89 perf-profile.self.cycles-pp.down_write
2.35 -0.1 2.22 perf-profile.self.cycles-pp.__cond_resched
1.89 -0.1 1.77 perf-profile.self.cycles-pp.do_syscall_64
1.38 -0.1 1.26 perf-profile.self.cycles-pp.entry_SYSCALL_64
1.56 -0.1 1.45 perf-profile.self.cycles-pp.up_write
1.30 -0.1 1.19 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.42 -0.1 1.31 perf-profile.self.cycles-pp.rcu_all_qs
1.12 -0.1 1.02 perf-profile.self.cycles-pp.security_file_permission
1.46 -0.1 1.38 perf-profile.self.cycles-pp.ksys_write
0.90 -0.1 0.83 perf-profile.self.cycles-pp.aa_file_perm
1.29 -0.1 1.22 perf-profile.self.cycles-pp.xas_load
0.74 -0.1 0.67 perf-profile.self.cycles-pp.w_test
1.08 -0.1 1.01 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
1.98 -0.1 1.92 perf-profile.self.cycles-pp.file_remove_privs_flags
1.30 -0.1 1.24 perf-profile.self.cycles-pp.__vfs_getxattr
1.06 -0.1 1.00 perf-profile.self.cycles-pp.xas_descend
0.80 -0.1 0.74 perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
0.63 -0.1 0.58 perf-profile.self.cycles-pp.x64_sys_call
0.74 -0.1 0.69 perf-profile.self.cycles-pp.setattr_should_drop_suidgid
0.63 -0.0 0.58 perf-profile.self.cycles-pp.xas_start
0.87 -0.0 0.83 perf-profile.self.cycles-pp.folio_mapping
0.50 -0.0 0.46 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.60 -0.0 0.57 perf-profile.self.cycles-pp.xattr_resolve_name
0.48 -0.0 0.44 perf-profile.self.cycles-pp.folio_mark_dirty
0.68 -0.0 0.65 perf-profile.self.cycles-pp.security_inode_need_killpriv
0.36 -0.0 0.33 ± 2% perf-profile.self.cycles-pp.inode_to_bdi
0.52 -0.0 0.49 perf-profile.self.cycles-pp.folio_wait_stable
0.34 -0.0 0.32 perf-profile.self.cycles-pp.cap_inode_need_killpriv
0.89 -0.0 0.87 perf-profile.self.cycles-pp.simple_write_begin
0.25 -0.0 0.23 perf-profile.self.cycles-pp.__x64_sys_write
0.23 ± 2% -0.0 0.22 ± 2% perf-profile.self.cycles-pp.amd_clear_divider
0.23 ± 2% -0.0 0.21 perf-profile.self.cycles-pp.noop_dirty_folio
0.12 ± 4% -0.0 0.10 ± 3% perf-profile.self.cycles-pp.write@plt
0.24 -0.0 0.23 ± 2% perf-profile.self.cycles-pp.is_bad_inode
0.62 +0.0 0.65 perf-profile.self.cycles-pp.file_update_time
0.86 +0.0 0.90 perf-profile.self.cycles-pp.strcmp
0.69 +0.0 0.74 perf-profile.self.cycles-pp.fault_in_iov_iter_readable
0.75 ± 3% +0.1 0.81 perf-profile.self.cycles-pp.generic_write_check_limits
1.42 ± 2% +0.1 1.48 perf-profile.self.cycles-pp.generic_write_checks
0.82 +0.1 0.89 perf-profile.self.cycles-pp.timestamp_truncate
0.58 ± 3% +0.1 0.66 ± 6% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
5.44 +0.2 5.60 perf-profile.self.cycles-pp.fault_in_readable
1.36 +0.2 1.55 perf-profile.self.cycles-pp.inode_needs_update_time
1.76 ± 3% +0.9 2.64 perf-profile.self.cycles-pp.rw_verify_area
3.46 +3.8 7.25 perf-profile.self.cycles-pp.__fsnotify_parent
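Reading the self-cycles table above, almost all of the new CPU time lands in
the fsnotify pre-content check on the write path. A rough tally (a sketch that
just sums the two largest self-cycle gains from the report; percentage points
of total cycles):

```python
# Rough attribution of the regression: self-cycle deltas of the two
# functions touched by the fanotify pre-content change, taken from the
# perf-profile.self rows above.
gains_pp = {
    "__fsnotify_parent": 7.25 - 3.46,  # +3.79 pp
    "rw_verify_area":    2.64 - 1.76,  # +0.88 pp
}
extra_cpu_pp = sum(gains_pp.values())  # ~4.7 pp of extra self cycles
```

That ~4.7 pp of added hot-path CPU, plus the knock-on shifts elsewhere in the
write path, is of the right magnitude for the -7.9% throughput drop.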
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [relevance 9%]
* [hch-misc:nfs-large-folio] [nfs] 9349d8ed5c: fsmark.files_per_sec 20.2% improvement
@ 2024-05-29 7:40 5% kernel test robot
0 siblings, 0 replies; 200+ results
From: kernel test robot @ 2024-05-29 7:40 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: oe-lkp, lkp, Christoph Hellwig, oliver.sang
Hello,
kernel test robot noticed a 20.2% improvement of fsmark.files_per_sec on:
commit: 9349d8ed5c28fa454a373af3435762d6828e6c2d ("nfs: add support for large folios")
git://git.infradead.org/users/hch/misc.git nfs-large-folio
testcase: fsmark
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
fs: ext4
fs2: nfsv4
filesize: 4M
test_size: 24G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240529/202405291523.d708cc69-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-13/performance/1BRD_48G/4M/nfsv4/ext4/1x/x86_64-rhel-8.3/1t/debian-12-x86_64-20240206.cgz/fsyncBeforeClose/lkp-csl-2sp3/24G/fsmark
commit:
84c6edb7ce ("filemap: Convert generic_perform_write() to support large folios")
9349d8ed5c ("nfs: add support for large folios")
84c6edb7ce79f62e 9349d8ed5c28fa454a373af3435
---------------- ---------------------------
%stddev %change %stddev
\ | \
3.943e+09 ± 4% -16.9% 3.277e+09 ± 5% cpuidle..time
2448614 ± 2% -23.6% 1869696 ± 3% cpuidle..usage
0.23 ± 11% +0.1 0.34 ± 5% mpstat.cpu.all.iowait%
0.05 ± 12% +0.0 0.06 ± 7% mpstat.cpu.all.soft%
79007 ± 4% -19.2% 63858 ± 3% meminfo.Active
24660 ± 8% -53.6% 11438 ± 9% meminfo.Active(anon)
37977 ± 6% -47.0% 20135 ± 12% meminfo.Shmem
907413 ±100% +193.0% 2658813 ± 29% numa-meminfo.node0.Unevictable
20125 ± 47% -68.0% 6437 ± 57% numa-meminfo.node1.Mapped
2232220 ± 41% -78.5% 480821 ±161% numa-meminfo.node1.Unevictable
582180 ± 4% +19.2% 693739 ± 5% vmstat.io.bo
0.24 ± 22% +34.8% 0.32 ± 8% vmstat.procs.b
60361 ± 4% -40.3% 36063 ± 5% vmstat.system.cs
226853 ±100% +193.0% 664703 ± 29% numa-vmstat.node0.nr_unevictable
226853 ±100% +193.0% 664703 ± 29% numa-vmstat.node0.nr_zone_unevictable
5156 ± 47% -67.3% 1688 ± 53% numa-vmstat.node1.nr_mapped
558055 ± 41% -78.5% 120205 ±161% numa-vmstat.node1.nr_unevictable
558055 ± 41% -78.5% 120205 ±161% numa-vmstat.node1.nr_zone_unevictable
156.97 ± 4% +20.2% 188.70 ± 5% fsmark.files_per_sec
40.32 ± 4% -17.9% 33.11 ± 5% fsmark.time.elapsed_time
40.32 ± 4% -17.9% 33.11 ± 5% fsmark.time.elapsed_time.max
38.33 -37.0% 24.17 ± 2% fsmark.time.percent_of_cpu_this_job_got
15.16 ± 2% -49.5% 7.66 ± 2% fsmark.time.system_time
901484 ± 4% -72.6% 247009 ± 3% fsmark.time.voluntary_context_switches
6167 ± 8% -55.7% 2731 ± 10% proc-vmstat.nr_active_anon
7167212 ± 3% -4.4% 6851507 proc-vmstat.nr_file_pages
21549200 +2.1% 22004157 proc-vmstat.nr_free_pages
6359856 ± 4% -4.9% 6049312 ± 2% proc-vmstat.nr_inactive_file
9508 ± 6% -48.0% 4941 ± 10% proc-vmstat.nr_shmem
6167 ± 8% -55.7% 2731 ± 10% proc-vmstat.nr_zone_active_anon
6359866 ± 4% -4.9% 6049314 ± 2% proc-vmstat.nr_zone_inactive_file
21779997 ± 8% -14.5% 18618378 ± 3% proc-vmstat.numa_hit
21641144 ± 8% -14.6% 18487630 ± 3% proc-vmstat.numa_local
868.50 ±121% +1028.5% 9800 ± 92% proc-vmstat.numa_pages_migrated
28100 ± 17% -87.0% 3660 proc-vmstat.pgactivate
1.626e+09 ± 2% -11.6% 1.437e+09 ± 5% perf-stat.i.branch-instructions
64021 ± 4% -39.7% 38588 ± 5% perf-stat.i.context-switches
1.25 +11.4% 1.39 ± 2% perf-stat.i.cpi
8.21e+09 ± 2% -11.1% 7.3e+09 ± 5% perf-stat.i.instructions
0.84 -9.5% 0.76 ± 2% perf-stat.i.ipc
5314 ± 4% +13.7% 6042 ± 4% perf-stat.i.minor-faults
5314 ± 4% +13.7% 6042 ± 4% perf-stat.i.page-faults
3.33 ± 6% +0.3 3.64 ± 2% perf-stat.overall.branch-miss-rate%
1.18 ± 2% +9.4% 1.30 ± 2% perf-stat.overall.cpi
0.85 ± 2% -8.6% 0.77 ± 2% perf-stat.overall.ipc
1.586e+09 ± 2% -12.2% 1.393e+09 ± 5% perf-stat.ps.branch-instructions
62456 ± 4% -40.0% 37443 ± 5% perf-stat.ps.context-switches
8.008e+09 ± 2% -11.6% 7.076e+09 ± 5% perf-stat.ps.instructions
5160 ± 4% +13.0% 5832 ± 4% perf-stat.ps.minor-faults
5160 ± 4% +13.0% 5832 ± 4% perf-stat.ps.page-faults
3.37e+11 ± 6% -27.8% 2.434e+11 ± 3% perf-stat.total.instructions
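The %change column in these tables is the usual relative delta against the
base commit; a minimal sketch reproducing the headline fsmark numbers above:

```python
# Reproduce the %change column: change = (patched / base - 1) * 100.
def pct_change(base, patched):
    return (patched / base - 1) * 100

throughput_pct = pct_change(156.97, 188.70)  # ~ +20.2% files_per_sec
elapsed_pct = pct_change(40.32, 33.11)       # ~ -17.9% elapsed_time
```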
3.20 ± 9% -2.2 1.04 ± 6% perf-profile.calltrace.cycles-pp.rpc_async_release.process_one_work.worker_thread.kthread.ret_from_fork
3.20 ± 9% -2.2 1.04 ± 6% perf-profile.calltrace.cycles-pp.rpc_free_task.rpc_async_release.process_one_work.worker_thread.kthread
2.64 ± 4% -2.0 0.62 ± 5% perf-profile.calltrace.cycles-pp.nfs_write_end.generic_perform_write.nfs_file_write.vfs_write.ksys_write
6.16 ± 3% -2.0 4.19 ± 2% perf-profile.calltrace.cycles-pp.generic_perform_write.nfs_file_write.vfs_write.ksys_write.do_syscall_64
6.22 ± 3% -2.0 4.25 ± 2% perf-profile.calltrace.cycles-pp.nfs_file_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.74 ± 4% -1.8 0.90 ± 4% perf-profile.calltrace.cycles-pp.fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.nfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe.fsync
11.34 ± 2% -1.7 9.68 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
11.40 ± 2% -1.7 9.75 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
11.41 ± 2% -1.6 9.76 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
11.37 ± 2% -1.6 9.72 ± 2% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
11.69 ± 2% -1.6 10.08 ± 2% perf-profile.calltrace.cycles-pp.write
2.26 ± 13% -1.5 0.73 ± 8% perf-profile.calltrace.cycles-pp.nfs_write_completion.rpc_free_task.rpc_async_release.process_one_work.worker_thread
2.27 ± 2% -1.5 0.76 ± 5% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.nfs_file_fsync.__x64_sys_fsync.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.44 ± 4% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.nfs_file_fsync.__x64_sys_fsync.do_syscall_64
1.44 ± 4% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.nfs_file_fsync.__x64_sys_fsync
1.43 ± 4% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.nfs_file_fsync
1.43 ± 4% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.nfs_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
1.42 ± 4% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.write_cache_pages.nfs_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
1.35 ± 4% -0.8 0.59 ± 3% perf-profile.calltrace.cycles-pp.nfs_write_begin.generic_perform_write.nfs_file_write.vfs_write.ksys_write
1.29 ± 4% -0.7 0.57 ± 3% perf-profile.calltrace.cycles-pp.__filemap_get_folio.nfs_write_begin.generic_perform_write.nfs_file_write.vfs_write
1.43 ± 4% -0.6 0.88 ± 13% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.55 ± 4% +0.1 0.67 ± 7% perf-profile.calltrace.cycles-pp.folio_end_writeback.ext4_finish_bio.ext4_release_io_end.ext4_put_io_end.ext4_do_writepages
0.74 ± 7% +0.1 0.86 ± 4% perf-profile.calltrace.cycles-pp.mpage_submit_folio.mpage_map_and_submit_buffers.mpage_map_and_submit_extent.ext4_do_writepages.ext4_writepages
0.72 ± 6% +0.1 0.86 ± 7% perf-profile.calltrace.cycles-pp.filemap_add_folio.__filemap_get_folio.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
0.98 ± 7% +0.2 1.14 ± 4% perf-profile.calltrace.cycles-pp.mpage_map_and_submit_buffers.mpage_map_and_submit_extent.ext4_do_writepages.ext4_writepages.do_writepages
0.76 ± 6% +0.2 0.93 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
0.74 ± 6% +0.2 0.91 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_map_blocks.ext4_da_get_block_prep.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write
1.04 ± 6% +0.2 1.21 ± 3% perf-profile.calltrace.cycles-pp.mpage_map_and_submit_extent.ext4_do_writepages.ext4_writepages.do_writepages.filemap_fdatawrite_wbc
0.75 ± 3% +0.2 0.94 ± 5% perf-profile.calltrace.cycles-pp.ext4_finish_bio.ext4_release_io_end.ext4_put_io_end.ext4_do_writepages.ext4_writepages
0.76 ± 3% +0.2 0.94 ± 5% perf-profile.calltrace.cycles-pp.ext4_release_io_end.ext4_put_io_end.ext4_do_writepages.ext4_writepages.do_writepages
0.64 ± 4% +0.2 0.83 ± 2% perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.ext4_da_write_end.generic_perform_write.ext4_buffered_write_iter
0.78 ± 2% +0.2 0.97 ± 5% perf-profile.calltrace.cycles-pp.ext4_put_io_end.ext4_do_writepages.ext4_writepages.do_writepages.filemap_fdatawrite_wbc
0.64 ± 4% +0.2 0.84 ± 2% perf-profile.calltrace.cycles-pp.block_write_end.ext4_da_write_end.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev
1.21 ± 4% +0.3 1.46 ± 3% perf-profile.calltrace.cycles-pp.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev
0.48 ± 45% +0.3 0.74 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_noprof.alloc_pages_mpol_noprof.brd_insert_page.brd_submit_bio.__submit_bio
0.74 ± 5% +0.3 1.00 ± 2% perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev.vfs_iter_write
1.27 ± 5% +0.3 1.54 ± 6% perf-profile.calltrace.cycles-pp.__filemap_get_folio.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev
0.52 ± 45% +0.3 0.80 ± 3% perf-profile.calltrace.cycles-pp.alloc_pages_mpol_noprof.brd_insert_page.brd_submit_bio.__submit_bio.__submit_bio_noacct
5.29 ± 6% +0.4 5.68 ± 2% perf-profile.calltrace.cycles-pp.drm_fb_memcpy.ast_primary_plane_helper_atomic_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.calltrace.cycles-pp.ast_primary_plane_helper_atomic_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_commit
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.calltrace.cycles-pp.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.calltrace.cycles-pp.commit_tail.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.calltrace.cycles-pp.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.calltrace.cycles-pp.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work.worker_thread
5.49 ± 6% +0.5 5.95 perf-profile.calltrace.cycles-pp.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
5.49 ± 6% +0.5 5.95 perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
2.08 ± 6% +0.5 2.57 ± 7% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
0.18 ±141% +0.5 0.67 ± 6% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_noprof.alloc_pages_mpol_noprof.brd_insert_page.brd_submit_bio
2.16 ± 6% +0.5 2.66 ± 8% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet6_recvmsg
2.29 ± 6% +0.5 2.79 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet6_recvmsg.sock_recvmsg.svc_tcp_sock_recv_cmsg
2.16 ± 6% +0.5 2.66 ± 8% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet6_recvmsg.sock_recvmsg
2.19 ± 4% +0.5 2.71 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.brd_insert_page.brd_submit_bio.__submit_bio.__submit_bio_noacct
2.43 ± 6% +0.5 2.96 ± 7% perf-profile.calltrace.cycles-pp.svc_tcp_read_msg.svc_tcp_recvfrom.svc_handle_xprt.svc_recv.nfsd
2.41 ± 6% +0.5 2.94 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet6_recvmsg.sock_recvmsg.svc_tcp_sock_recv_cmsg.svc_tcp_read_msg
2.41 ± 6% +0.5 2.94 ± 7% perf-profile.calltrace.cycles-pp.inet6_recvmsg.sock_recvmsg.svc_tcp_sock_recv_cmsg.svc_tcp_read_msg.svc_tcp_recvfrom
2.42 ± 6% +0.5 2.95 ± 7% perf-profile.calltrace.cycles-pp.sock_recvmsg.svc_tcp_sock_recv_cmsg.svc_tcp_read_msg.svc_tcp_recvfrom.svc_handle_xprt
2.42 ± 6% +0.5 2.95 ± 7% perf-profile.calltrace.cycles-pp.svc_tcp_sock_recv_cmsg.svc_tcp_read_msg.svc_tcp_recvfrom.svc_handle_xprt.svc_recv
2.52 ± 4% +0.5 3.05 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev.vfs_iter_write
1.65 ± 3% +0.6 2.20 ± 7% perf-profile.calltrace.cycles-pp.copy_to_brd.brd_submit_bio.__submit_bio.__submit_bio_noacct.ext4_io_submit
2.66 ± 6% +0.6 3.24 ± 7% perf-profile.calltrace.cycles-pp.svc_tcp_recvfrom.svc_handle_xprt.svc_recv.nfsd.kthread
0.00 +0.6 0.58 ± 7% perf-profile.calltrace.cycles-pp.__folio_end_writeback.folio_end_writeback.ext4_finish_bio.ext4_release_io_end.ext4_put_io_end
3.56 ± 15% +0.7 4.24 ± 5% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.ext4_buffered_write_iter
2.99 ± 4% +0.7 3.74 perf-profile.calltrace.cycles-pp.brd_insert_page.brd_submit_bio.__submit_bio.__submit_bio_noacct.ext4_io_submit
2.30 ± 7% +0.9 3.16 ± 7% perf-profile.calltrace.cycles-pp.memcpy_orig.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev
2.36 ± 7% +0.9 3.24 ± 7% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev.vfs_iter_write
1.88 +0.9 2.80 ± 2% perf-profile.calltrace.cycles-pp.rep_movs_alternative.copy_page_from_iter_atomic.generic_perform_write.nfs_file_write.vfs_write
1.96 +0.9 2.89 ± 2% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.nfs_file_write.vfs_write.ksys_write
4.69 ± 3% +1.3 5.99 ± 2% perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_do_writepages.ext4_writepages.do_writepages.filemap_fdatawrite_wbc
4.69 ± 3% +1.3 5.99 ± 2% perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.ext4_io_submit.ext4_do_writepages.ext4_writepages
4.69 ± 3% +1.3 5.99 ± 2% perf-profile.calltrace.cycles-pp.__submit_bio_noacct.ext4_io_submit.ext4_do_writepages.ext4_writepages.do_writepages
4.69 ± 3% +1.3 5.99 ± 2% perf-profile.calltrace.cycles-pp.brd_submit_bio.__submit_bio.__submit_bio_noacct.ext4_io_submit.ext4_do_writepages
5.67 ± 5% +1.7 7.37 ± 2% perf-profile.calltrace.cycles-pp.generic_perform_write.ext4_buffered_write_iter.do_iter_readv_writev.vfs_iter_write.nfsd_vfs_write
7.04 ± 3% +1.8 8.82 perf-profile.calltrace.cycles-pp.ext4_sync_file.nfsd_commit.nfsd4_commit.nfsd4_proc_compound.nfsd_dispatch
7.04 ± 3% +1.8 8.82 perf-profile.calltrace.cycles-pp.nfsd_commit.nfsd4_commit.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
7.02 ± 3% +1.8 8.80 perf-profile.calltrace.cycles-pp.file_write_and_wait_range.ext4_sync_file.nfsd_commit.nfsd4_commit.nfsd4_proc_compound
7.02 ± 4% +1.8 8.80 perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.ext4_sync_file.nfsd_commit.nfsd4_commit
7.02 ± 4% +1.8 8.80 perf-profile.calltrace.cycles-pp.ext4_do_writepages.ext4_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
7.02 ± 4% +1.8 8.80 perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.ext4_sync_file.nfsd_commit
7.06 ± 4% +1.8 8.84 perf-profile.calltrace.cycles-pp.nfsd4_commit.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
7.02 ± 4% +1.8 8.80 perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.ext4_sync_file
7.02 ± 4% +1.8 8.80 perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
10.10 ± 11% +2.5 12.63 ± 2% perf-profile.calltrace.cycles-pp.do_iter_readv_writev.vfs_iter_write.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound
10.11 ± 11% +2.5 12.64 ± 2% perf-profile.calltrace.cycles-pp.vfs_iter_write.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch
10.10 ± 11% +2.5 12.63 ± 2% perf-profile.calltrace.cycles-pp.ext4_buffered_write_iter.do_iter_readv_writev.vfs_iter_write.nfsd_vfs_write.nfsd4_write
10.11 ± 11% +2.5 12.64 ± 2% perf-profile.calltrace.cycles-pp.nfsd_vfs_write.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch.svc_process_common
10.14 ± 11% +2.5 12.68 ± 2% perf-profile.calltrace.cycles-pp.nfsd4_write.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process
33.94 +4.4 38.31 ± 2% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork.ret_from_fork_asm
33.94 +4.4 38.31 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork.ret_from_fork_asm
33.94 +4.4 38.31 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork_asm
17.51 ± 7% +4.4 21.91 perf-profile.calltrace.cycles-pp.nfsd4_proc_compound.nfsd_dispatch.svc_process_common.svc_process.svc_handle_xprt
17.60 ± 7% +4.4 22.02 perf-profile.calltrace.cycles-pp.nfsd_dispatch.svc_process_common.svc_process.svc_handle_xprt.svc_recv
17.65 ± 7% +4.4 22.08 perf-profile.calltrace.cycles-pp.svc_process.svc_handle_xprt.svc_recv.nfsd.kthread
17.65 ± 7% +4.4 22.08 perf-profile.calltrace.cycles-pp.svc_process_common.svc_process.svc_handle_xprt.svc_recv.nfsd
20.62 ± 6% +5.1 25.69 perf-profile.calltrace.cycles-pp.svc_handle_xprt.svc_recv.nfsd.kthread.ret_from_fork
20.84 ± 6% +5.1 25.95 perf-profile.calltrace.cycles-pp.svc_recv.nfsd.kthread.ret_from_fork.ret_from_fork_asm
20.84 ± 6% +5.1 25.96 perf-profile.calltrace.cycles-pp.nfsd.kthread.ret_from_fork.ret_from_fork_asm
14.97 ± 3% -3.5 11.49 ± 3% perf-profile.children.cycles-pp.do_syscall_64
14.98 ± 3% -3.5 11.51 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
3.20 ± 9% -2.2 1.05 ± 6% perf-profile.children.cycles-pp.rpc_free_task
3.20 ± 9% -2.2 1.04 ± 6% perf-profile.children.cycles-pp.rpc_async_release
2.65 ± 4% -2.0 0.62 ± 5% perf-profile.children.cycles-pp.nfs_write_end
6.22 ± 3% -2.0 4.25 ± 2% perf-profile.children.cycles-pp.nfs_file_write
2.74 ± 4% -1.8 0.90 ± 4% perf-profile.children.cycles-pp.fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.children.cycles-pp.__x64_sys_fsync
2.70 ± 4% -1.8 0.88 ± 4% perf-profile.children.cycles-pp.nfs_file_fsync
11.36 ± 2% -1.7 9.70 ± 2% perf-profile.children.cycles-pp.vfs_write
11.39 ± 2% -1.6 9.74 ± 2% perf-profile.children.cycles-pp.ksys_write
11.72 ± 2% -1.6 10.12 ± 2% perf-profile.children.cycles-pp.write
2.27 ± 13% -1.5 0.73 ± 9% perf-profile.children.cycles-pp.nfs_write_completion
1.40 ± 3% -0.9 0.45 ± 4% perf-profile.children.cycles-pp.nfs_update_folio
1.43 ± 4% -0.9 0.50 ± 7% perf-profile.children.cycles-pp.nfs_writepages
1.42 ± 4% -0.9 0.48 ± 6% perf-profile.children.cycles-pp.write_cache_pages
1.32 ± 3% -0.9 0.43 ± 5% perf-profile.children.cycles-pp.nfs_writepage_setup
1.19 ± 9% -0.8 0.39 ± 10% perf-profile.children.cycles-pp.nfs_page_end_writeback
1.35 ± 4% -0.8 0.59 ± 3% perf-profile.children.cycles-pp.nfs_write_begin
0.96 ± 4% -0.6 0.33 ± 10% perf-profile.children.cycles-pp.nfs_writepages_callback
0.93 ± 2% -0.6 0.31 ± 5% perf-profile.children.cycles-pp.nfs_commit_release
0.93 ± 2% -0.6 0.31 ± 5% perf-profile.children.cycles-pp.nfs_commit_release_pages
0.91 ± 4% -0.6 0.31 ± 10% perf-profile.children.cycles-pp.nfs_page_async_flush
1.63 ± 6% -0.6 1.03 ± 4% perf-profile.children.cycles-pp.folio_end_writeback
0.82 ± 19% -0.6 0.23 ± 12% perf-profile.children.cycles-pp.nfs_scan_commit
0.83 ± 19% -0.6 0.24 ± 12% perf-profile.children.cycles-pp.__nfs_commit_inode
1.46 ± 5% -0.6 0.88 ± 13% perf-profile.children.cycles-pp.poll_idle
0.83 ± 10% -0.6 0.26 ± 8% perf-profile.children.cycles-pp.__filemap_fdatawait_range
2.58 ± 4% -0.4 2.14 ± 5% perf-profile.children.cycles-pp.__filemap_get_folio
0.50 -0.4 0.14 ± 9% perf-profile.children.cycles-pp.nfs_page_create_from_folio
0.51 ± 10% -0.3 0.17 ± 8% perf-profile.children.cycles-pp.folio_wait_writeback
0.45 ± 6% -0.3 0.15 ± 14% perf-profile.children.cycles-pp.writeback_iter
0.42 ± 20% -0.3 0.12 ± 17% perf-profile.children.cycles-pp.__mutex_lock
0.42 ± 19% -0.3 0.12 ± 17% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.43 ± 11% -0.3 0.14 ± 4% perf-profile.children.cycles-pp.nfs_page_group_sync_on_bit
0.40 ± 19% -0.3 0.11 ± 13% perf-profile.children.cycles-pp.nfs_scan_commit_list
1.48 ± 5% -0.3 1.20 ± 6% perf-profile.children.cycles-pp.filemap_add_folio
0.42 ± 4% -0.3 0.14 ± 11% perf-profile.children.cycles-pp.nfs_page_group_destroy
0.40 ± 19% -0.3 0.12 ± 12% perf-profile.children.cycles-pp.nfs_io_completion_put
0.41 ± 6% -0.3 0.13 ± 12% perf-profile.children.cycles-pp.folio_wait_bit_common
0.44 ± 7% -0.3 0.17 ± 10% perf-profile.children.cycles-pp.filemap_dirty_folio
1.06 ± 5% -0.3 0.80 ± 5% perf-profile.children.cycles-pp.__folio_end_writeback
0.34 ± 5% -0.3 0.08 ± 10% perf-profile.children.cycles-pp.nfs_page_create
0.39 ± 9% -0.3 0.13 ± 14% perf-profile.children.cycles-pp.writeback_get_folio
0.36 ± 9% -0.2 0.12 ± 15% perf-profile.children.cycles-pp.folio_wake_bit
0.34 ± 26% -0.2 0.10 ± 12% perf-profile.children.cycles-pp.nfs_request_add_commit_list
0.33 ± 4% -0.2 0.10 ± 17% perf-profile.children.cycles-pp.nfs_lock_and_join_requests
0.32 ± 4% -0.2 0.11 ± 13% perf-profile.children.cycles-pp.nfs_inode_remove_request
0.29 ± 5% -0.2 0.10 ± 13% perf-profile.children.cycles-pp.io_schedule
0.34 ± 9% -0.2 0.14 ± 17% perf-profile.children.cycles-pp.__wake_up_common
0.92 ± 8% -0.2 0.73 ± 6% perf-profile.children.cycles-pp.__schedule
0.44 ± 11% -0.2 0.25 ± 6% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.28 ± 14% -0.2 0.10 ± 10% perf-profile.children.cycles-pp.nfs_page_group_sync_on_bit_locked
0.27 ± 7% -0.2 0.10 ± 13% perf-profile.children.cycles-pp.wake_page_function
0.26 ± 32% -0.2 0.08 ± 19% perf-profile.children.cycles-pp.nfs_request_remove_commit_list
0.68 ± 6% -0.2 0.50 ± 8% perf-profile.children.cycles-pp.__folio_start_writeback
0.77 ± 4% -0.2 0.62 ± 6% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
0.23 ± 14% -0.2 0.08 ± 16% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.43 ± 8% -0.1 0.28 ± 11% perf-profile.children.cycles-pp.try_to_wake_up
0.18 ± 11% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.nfs_page_clear_headlock
0.34 ± 10% -0.1 0.20 ± 3% perf-profile.children.cycles-pp.sched_ttwu_pending
0.19 ± 10% -0.1 0.06 ± 13% perf-profile.children.cycles-pp.nfs_inode_add_request
0.15 ± 7% -0.1 0.02 ± 99% perf-profile.children.cycles-pp.nfs_unlock_request
0.71 ± 7% -0.1 0.59 ± 7% perf-profile.children.cycles-pp.schedule
0.17 ± 12% -0.1 0.05 ± 7% perf-profile.children.cycles-pp.nfs_page_set_headlock
0.22 ± 11% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.nfs_pageio_add_request
0.68 ± 7% -0.1 0.56 ± 4% perf-profile.children.cycles-pp.__filemap_add_folio
0.45 ± 12% -0.1 0.34 ± 7% perf-profile.children.cycles-pp.folio_clear_dirty_for_io
0.17 ± 17% -0.1 0.05 ± 46% perf-profile.children.cycles-pp.sysvec_call_function_single
0.37 ± 24% -0.1 0.26 ± 9% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.17 ± 31% -0.1 0.06 ± 13% perf-profile.children.cycles-pp.mutex_lock
0.20 ± 9% -0.1 0.09 ± 11% perf-profile.children.cycles-pp.__nfs_pageio_add_request
0.44 ± 12% -0.1 0.34 ± 5% perf-profile.children.cycles-pp.filemap_get_folios_tag
0.20 ± 7% -0.1 0.10 ± 18% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.62 ± 6% -0.1 0.53 ± 5% perf-profile.children.cycles-pp.__folio_mark_dirty
0.48 ± 7% -0.1 0.38 ± 3% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.27 ± 10% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.22 ± 9% -0.1 0.13 ± 5% perf-profile.children.cycles-pp.ttwu_do_activate
0.18 ± 11% -0.1 0.09 ± 14% perf-profile.children.cycles-pp.dequeue_task_fair
0.19 ± 12% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.enqueue_task_fair
0.21 ± 11% -0.1 0.12 ± 19% perf-profile.children.cycles-pp.kmem_cache_free
0.38 ± 6% -0.1 0.30 ± 8% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.19 ± 12% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.activate_task
0.16 ± 12% -0.1 0.08 ± 13% perf-profile.children.cycles-pp.dequeue_entity
0.24 ± 8% -0.1 0.16 ± 11% perf-profile.children.cycles-pp.schedule_idle
0.16 ± 14% -0.1 0.09 ± 12% perf-profile.children.cycles-pp.enqueue_entity
0.40 ± 6% -0.1 0.33 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc_noprof
0.14 ± 8% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.__smp_call_single_queue
0.13 ± 12% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.__slab_free
0.09 ± 7% -0.1 0.03 ±101% perf-profile.children.cycles-pp.llist_add_batch
0.15 ± 17% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.update_load_avg
0.22 ± 4% -0.1 0.17 ± 9% perf-profile.children.cycles-pp.___slab_alloc
0.27 ± 6% -0.1 0.22 ± 9% perf-profile.children.cycles-pp.lru_add_fn
0.28 ± 3% -0.0 0.24 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_state
0.24 ± 6% -0.0 0.20 ± 10% perf-profile.children.cycles-pp.__mod_node_page_state
0.14 ± 11% -0.0 0.10 ± 10% perf-profile.children.cycles-pp.cgroup_rstat_updated
0.16 ± 7% -0.0 0.11 ± 14% perf-profile.children.cycles-pp.allocate_slab
0.14 ± 12% -0.0 0.10 ± 6% perf-profile.children.cycles-pp.xas_find_marked
0.13 ± 7% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.shuffle_freelist
0.08 ± 19% -0.0 0.05 ± 47% perf-profile.children.cycles-pp.prepare_task_switch
0.08 ± 8% -0.0 0.06 ± 48% perf-profile.children.cycles-pp.__count_memcg_events
0.14 ± 5% -0.0 0.11 ± 10% perf-profile.children.cycles-pp.__xa_set_mark
0.08 ± 8% -0.0 0.06 ± 8% perf-profile.children.cycles-pp.xas_find_conflict
0.05 ± 8% +0.0 0.08 ± 16% perf-profile.children.cycles-pp.__es_insert_extent
0.04 ± 45% +0.0 0.07 ± 13% perf-profile.children.cycles-pp.svc_pool_wake_idle_thread
0.12 ± 12% +0.0 0.15 ± 10% perf-profile.children.cycles-pp.__xa_insert
0.17 ± 7% +0.0 0.21 ± 5% perf-profile.children.cycles-pp.fast_imageblit
0.18 ± 7% +0.0 0.21 ± 5% perf-profile.children.cycles-pp.sys_imageblit
0.22 ± 7% +0.0 0.26 ± 4% perf-profile.children.cycles-pp.bit_putcs
0.18 ± 7% +0.0 0.21 ± 6% perf-profile.children.cycles-pp.drm_fbdev_generic_defio_imageblit
0.26 ± 6% +0.0 0.31 ± 9% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.22 ± 8% +0.0 0.26 ± 4% perf-profile.children.cycles-pp.fbcon_putcs
0.25 ± 8% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.vt_console_print
0.24 ± 8% +0.0 0.28 ± 3% perf-profile.children.cycles-pp.fbcon_redraw
0.24 ± 8% +0.0 0.29 ± 2% perf-profile.children.cycles-pp.con_scroll
0.24 ± 8% +0.0 0.29 ± 2% perf-profile.children.cycles-pp.fbcon_scroll
0.24 ± 8% +0.0 0.29 ± 2% perf-profile.children.cycles-pp.lf
0.27 ± 10% +0.1 0.32 ± 10% perf-profile.children.cycles-pp.svc_tcp_sendmsg
0.17 ± 7% +0.1 0.22 ± 9% perf-profile.children.cycles-pp.drm_fbdev_generic_damage_blit_real
0.13 ± 16% +0.1 0.19 ± 11% perf-profile.children.cycles-pp.__dquot_alloc_space
0.27 ± 9% +0.1 0.33 ± 9% perf-profile.children.cycles-pp.svc_tcp_sendto
0.24 ± 12% +0.1 0.30 ± 4% perf-profile.children.cycles-pp.ext4_da_reserve_space
0.39 ± 4% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.create_empty_buffers
0.18 ± 23% +0.1 0.26 ± 9% perf-profile.children.cycles-pp.irq_work_run_list
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp.irq_work_single
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp.__sysvec_irq_work
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp._printk
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp.irq_work_run
0.18 ± 24% +0.1 0.25 ± 9% perf-profile.children.cycles-pp.sysvec_irq_work
0.39 ± 10% +0.1 0.48 ± 3% perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
0.41 ± 6% +0.1 0.53 ± 4% perf-profile.children.cycles-pp.mark_buffer_dirty
0.74 ± 7% +0.1 0.86 ± 3% perf-profile.children.cycles-pp.mpage_submit_folio
1.17 ± 6% +0.1 1.30 ± 3% perf-profile.children.cycles-pp.get_page_from_freelist
0.98 ± 6% +0.2 1.14 ± 4% perf-profile.children.cycles-pp.mpage_map_and_submit_buffers
0.76 ± 6% +0.2 0.93 ± 4% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.74 ± 6% +0.2 0.92 ± 4% perf-profile.children.cycles-pp.ext4_da_map_blocks
1.04 ± 6% +0.2 1.21 ± 3% perf-profile.children.cycles-pp.mpage_map_and_submit_extent
0.75 ± 3% +0.2 0.94 ± 5% perf-profile.children.cycles-pp.ext4_finish_bio
0.76 ± 3% +0.2 0.94 ± 5% perf-profile.children.cycles-pp.ext4_release_io_end
0.64 ± 4% +0.2 0.83 ± 2% perf-profile.children.cycles-pp.__block_commit_write
0.78 ± 2% +0.2 0.97 ± 5% perf-profile.children.cycles-pp.ext4_put_io_end
0.65 ± 4% +0.2 0.84 ± 2% perf-profile.children.cycles-pp.block_write_end
1.21 ± 4% +0.3 1.46 ± 3% perf-profile.children.cycles-pp.ext4_block_write_begin
0.74 ± 5% +0.3 1.00 ± 2% perf-profile.children.cycles-pp.ext4_da_write_end
5.28 ± 6% +0.4 5.66 ± 2% perf-profile.children.cycles-pp.memcpy_toio
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.children.cycles-pp.ast_primary_plane_helper_atomic_update
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail_rpm
5.32 ± 6% +0.4 5.71 ± 2% perf-profile.children.cycles-pp.drm_fb_memcpy
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.children.cycles-pp.ast_mode_config_helper_atomic_commit_tail
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.children.cycles-pp.commit_tail
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.children.cycles-pp.drm_atomic_commit
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.children.cycles-pp.drm_atomic_helper_commit
5.32 ± 6% +0.4 5.72 ± 2% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
5.49 ± 6% +0.5 5.95 perf-profile.children.cycles-pp.drm_fbdev_generic_helper_fb_dirty
5.49 ± 6% +0.5 5.95 perf-profile.children.cycles-pp.drm_fb_helper_damage_work
3.00 ± 4% +0.5 3.48 perf-profile.children.cycles-pp._raw_spin_lock
2.10 ± 6% +0.5 2.58 ± 7% perf-profile.children.cycles-pp._copy_to_iter
2.18 ± 6% +0.5 2.68 ± 8% perf-profile.children.cycles-pp.skb_copy_datagram_iter
2.18 ± 6% +0.5 2.68 ± 8% perf-profile.children.cycles-pp.__skb_datagram_iter
2.38 ± 5% +0.5 2.89 ± 7% perf-profile.children.cycles-pp.tcp_recvmsg_locked
2.43 ± 6% +0.5 2.96 ± 7% perf-profile.children.cycles-pp.svc_tcp_read_msg
2.52 ± 4% +0.5 3.05 ± 4% perf-profile.children.cycles-pp.ext4_da_write_begin
2.45 ± 6% +0.5 3.00 ± 7% perf-profile.children.cycles-pp.svc_tcp_sock_recv_cmsg
2.52 ± 6% +0.5 3.07 ± 7% perf-profile.children.cycles-pp.inet6_recvmsg
2.52 ± 6% +0.5 3.07 ± 7% perf-profile.children.cycles-pp.tcp_recvmsg
2.53 ± 5% +0.6 3.08 ± 7% perf-profile.children.cycles-pp.sock_recvmsg
1.70 ± 2% +0.6 2.26 ± 7% perf-profile.children.cycles-pp.copy_to_brd
2.66 ± 6% +0.6 3.24 ± 7% perf-profile.children.cycles-pp.svc_tcp_recvfrom
3.56 ± 15% +0.7 4.24 ± 5% perf-profile.children.cycles-pp.rwsem_spin_on_owner
3.00 ± 4% +0.8 3.76 perf-profile.children.cycles-pp.brd_insert_page
8.45 ± 4% +0.8 9.30 perf-profile.children.cycles-pp.do_writepages
8.45 ± 3% +0.8 9.30 perf-profile.children.cycles-pp.__filemap_fdatawrite_range
8.45 ± 4% +0.9 9.30 perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
2.50 ± 6% +0.9 3.40 ± 6% perf-profile.children.cycles-pp.memcpy_orig
1.90 +0.9 2.81 ± 2% perf-profile.children.cycles-pp.rep_movs_alternative
4.69 ± 3% +1.3 5.99 ± 2% perf-profile.children.cycles-pp.ext4_io_submit
4.75 ± 3% +1.3 6.06 ± 2% perf-profile.children.cycles-pp.__submit_bio
4.75 ± 3% +1.3 6.06 ± 2% perf-profile.children.cycles-pp.__submit_bio_noacct
4.75 ± 3% +1.3 6.06 ± 2% perf-profile.children.cycles-pp.brd_submit_bio
7.04 ± 3% +1.8 8.82 perf-profile.children.cycles-pp.ext4_sync_file
7.04 ± 3% +1.8 8.82 perf-profile.children.cycles-pp.nfsd_commit
7.02 ± 4% +1.8 8.80 perf-profile.children.cycles-pp.ext4_do_writepages
7.06 ± 4% +1.8 8.84 perf-profile.children.cycles-pp.nfsd4_commit
7.02 ± 4% +1.8 8.80 perf-profile.children.cycles-pp.ext4_writepages
4.34 ± 4% +1.8 6.13 ± 3% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
10.10 ± 11% +2.5 12.63 ± 2% perf-profile.children.cycles-pp.do_iter_readv_writev
10.11 ± 11% +2.5 12.64 ± 2% perf-profile.children.cycles-pp.vfs_iter_write
10.10 ± 11% +2.5 12.63 ± 2% perf-profile.children.cycles-pp.ext4_buffered_write_iter
10.11 ± 11% +2.5 12.64 ± 2% perf-profile.children.cycles-pp.nfsd_vfs_write
10.14 ± 11% +2.5 12.68 ± 2% perf-profile.children.cycles-pp.nfsd4_write
33.94 +4.4 38.31 ± 2% perf-profile.children.cycles-pp.ret_from_fork
33.94 +4.4 38.31 ± 2% perf-profile.children.cycles-pp.ret_from_fork_asm
33.94 +4.4 38.31 ± 2% perf-profile.children.cycles-pp.kthread
17.51 ± 7% +4.4 21.91 perf-profile.children.cycles-pp.nfsd4_proc_compound
17.60 ± 7% +4.4 22.02 perf-profile.children.cycles-pp.nfsd_dispatch
17.65 ± 7% +4.4 22.08 perf-profile.children.cycles-pp.svc_process
17.65 ± 7% +4.4 22.08 perf-profile.children.cycles-pp.svc_process_common
20.62 ± 6% +5.1 25.69 perf-profile.children.cycles-pp.svc_handle_xprt
20.84 ± 6% +5.1 25.95 perf-profile.children.cycles-pp.svc_recv
20.85 ± 6% +5.1 25.96 perf-profile.children.cycles-pp.nfsd
1.17 ± 6% -1.0 0.15 ± 11% perf-profile.self.cycles-pp.nfs_write_end
1.36 ± 4% -0.5 0.86 ± 13% perf-profile.self.cycles-pp.poll_idle
0.42 ± 20% -0.3 0.12 ± 17% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.27 ± 13% -0.2 0.10 ± 13% perf-profile.self.cycles-pp.nfs_page_group_sync_on_bit_locked
0.26 ± 31% -0.2 0.08 ± 19% perf-profile.self.cycles-pp.nfs_request_remove_commit_list
0.18 ± 11% -0.1 0.04 ± 71% perf-profile.self.cycles-pp.nfs_page_clear_headlock
0.15 ± 6% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__filemap_fdatawait_range
0.17 ± 12% -0.1 0.04 ± 45% perf-profile.self.cycles-pp.nfs_page_set_headlock
0.15 ± 8% -0.1 0.02 ± 99% perf-profile.self.cycles-pp.nfs_unlock_request
0.16 ± 9% -0.1 0.04 ± 71% perf-profile.self.cycles-pp.nfs_inode_add_request
0.17 ± 32% -0.1 0.05 ± 46% perf-profile.self.cycles-pp.mutex_lock
0.31 ± 6% -0.1 0.20 ± 7% perf-profile.self.cycles-pp.__folio_end_writeback
0.21 ± 9% -0.1 0.11 ± 11% perf-profile.self.cycles-pp.folio_end_writeback
0.45 ± 7% -0.1 0.36 ± 4% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.09 ± 7% -0.1 0.03 ±101% perf-profile.self.cycles-pp.llist_add_batch
0.12 ± 12% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.__slab_free
0.27 ± 10% -0.1 0.22 ± 6% perf-profile.self.cycles-pp.filemap_get_folios_tag
0.29 ± 9% -0.1 0.24 ± 10% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.22 ± 8% -0.0 0.18 ± 11% perf-profile.self.cycles-pp.folios_put_refs
0.16 ± 8% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.lru_add_fn
0.24 ± 5% -0.0 0.19 ± 9% perf-profile.self.cycles-pp.__mod_node_page_state
0.14 ± 12% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.xas_find_marked
0.18 ± 5% -0.0 0.15 ± 12% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.09 ± 14% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.__schedule
0.12 ± 12% -0.0 0.09 ± 10% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.09 ± 8% -0.0 0.06 ± 47% perf-profile.self.cycles-pp.shuffle_freelist
0.08 ± 12% -0.0 0.04 ± 45% perf-profile.self.cycles-pp.__switch_to_asm
0.09 ± 12% +0.0 0.11 ± 9% perf-profile.self.cycles-pp.mpage_prepare_extent_to_map
0.17 ± 7% +0.0 0.21 ± 5% perf-profile.self.cycles-pp.fast_imageblit
0.26 ± 6% +0.0 0.31 ± 9% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.04 ± 45% +0.0 0.09 ± 20% perf-profile.self.cycles-pp.ext4_da_write_end
0.00 +0.1 0.07 ± 21% perf-profile.self.cycles-pp.folio_alloc_noprof
0.12 ± 15% +0.2 0.29 ± 72% perf-profile.self.cycles-pp.intel_idle_irq
5.14 ± 6% +0.4 5.50 perf-profile.self.cycles-pp.memcpy_toio
2.08 ± 6% +0.5 2.54 ± 6% perf-profile.self.cycles-pp._copy_to_iter
2.93 ± 3% +0.5 3.41 perf-profile.self.cycles-pp._raw_spin_lock
1.62 ± 2% +0.6 2.17 ± 7% perf-profile.self.cycles-pp.copy_to_brd
3.52 ± 15% +0.7 4.20 ± 5% perf-profile.self.cycles-pp.rwsem_spin_on_owner
2.48 ± 6% +0.9 3.36 ± 6% perf-profile.self.cycles-pp.memcpy_orig
1.88 +0.9 2.80 ± 2% perf-profile.self.cycles-pp.rep_movs_alternative
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 6/7] ext4: Convert to buffered_write_operations
2024-05-28 16:48 ` [PATCH 6/7] ext4: Convert to buffered_write_operations Matthew Wilcox (Oracle)
@ 2024-05-28 23:42 ` kernel test robot
From: kernel test robot @ 2024-05-28 23:42 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Christoph Hellwig
Cc: llvm, oe-kbuild-all, Matthew Wilcox (Oracle), linux-fsdevel, linux-ext4
Hi Matthew,
kernel test robot noticed the following build warnings:
[auto build test WARNING on linus/master]
[also build test WARNING on v6.10-rc1 next-20240528]
[cannot apply to tytso-ext4/dev jack-fs/for_next hch-configfs/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Matthew-Wilcox-Oracle/fs-Introduce-buffered_write_operations/20240529-005213
base: linus/master
patch link: https://lore.kernel.org/r/20240528164829.2105447-7-willy%40infradead.org
patch subject: [PATCH 6/7] ext4: Convert to buffered_write_operations
config: hexagon-defconfig (https://download.01.org/0day-ci/archive/20240529/202405290727.QWBqNxqa-lkp@intel.com/config)
compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project bafda89a0944d947fc4b3b5663185e07a397ac30)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240529/202405290727.QWBqNxqa-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202405290727.QWBqNxqa-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from fs/ext4/inline.c:7:
In file included from include/linux/iomap.h:7:
In file included from include/linux/blk_types.h:10:
In file included from include/linux/bvec.h:10:
In file included from include/linux/highmem.h:10:
In file included from include/linux/mm.h:2253:
include/linux/vmstat.h:514:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
514 | return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
| ~~~~~~~~~~~ ^ ~~~
In file included from fs/ext4/inline.c:7:
In file included from include/linux/iomap.h:7:
In file included from include/linux/blk_types.h:10:
In file included from include/linux/bvec.h:10:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:14:
In file included from arch/hexagon/include/asm/io.h:328:
include/asm-generic/io.h:548:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
548 | val = __raw_readb(PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:561:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
561 | val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
| ^
In file included from fs/ext4/inline.c:7:
In file included from include/linux/iomap.h:7:
In file included from include/linux/blk_types.h:10:
In file included from include/linux/bvec.h:10:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:14:
In file included from arch/hexagon/include/asm/io.h:328:
include/asm-generic/io.h:574:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
574 | val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
| ^
In file included from fs/ext4/inline.c:7:
In file included from include/linux/iomap.h:7:
In file included from include/linux/blk_types.h:10:
In file included from include/linux/bvec.h:10:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:14:
In file included from arch/hexagon/include/asm/io.h:328:
include/asm-generic/io.h:585:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
585 | __raw_writeb(value, PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:595:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
595 | __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:605:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
605 | __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
>> fs/ext4/inline.c:914:7: warning: variable 'folio' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
914 | if (ret == -ENOSPC &&
| ^~~~~~~~~~~~~~~~~
915 | ext4_should_retry_alloc(inode->i_sb, &retries))
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/ext4/inline.c:956:9: note: uninitialized use occurs here
956 | return folio;
| ^~~~~
fs/ext4/inline.c:914:3: note: remove the 'if' if its condition is always true
914 | if (ret == -ENOSPC &&
| ^~~~~~~~~~~~~~~~~~~~~
915 | ext4_should_retry_alloc(inode->i_sb, &retries))
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
916 | goto retry_journal;
>> fs/ext4/inline.c:914:7: warning: variable 'folio' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
914 | if (ret == -ENOSPC &&
| ^~~~~~~~~~~~~~
fs/ext4/inline.c:956:9: note: uninitialized use occurs here
956 | return folio;
| ^~~~~
fs/ext4/inline.c:914:7: note: remove the '&&' if its condition is always true
914 | if (ret == -ENOSPC &&
| ^~~~~~~~~~~~~~~~~
>> fs/ext4/inline.c:907:6: warning: variable 'folio' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
907 | if (ret && ret != -ENOSPC)
| ^~~~~~~~~~~~~~~~~~~~~
fs/ext4/inline.c:956:9: note: uninitialized use occurs here
956 | return folio;
| ^~~~~
fs/ext4/inline.c:907:2: note: remove the 'if' if its condition is always false
907 | if (ret && ret != -ENOSPC)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
908 | goto out_journal;
| ~~~~~~~~~~~~~~~~
fs/ext4/inline.c:901:6: warning: variable 'folio' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
901 | if (IS_ERR(handle)) {
| ^~~~~~~~~~~~~~
fs/ext4/inline.c:956:9: note: uninitialized use occurs here
956 | return folio;
| ^~~~~
fs/ext4/inline.c:901:2: note: remove the 'if' if its condition is always false
901 | if (IS_ERR(handle)) {
| ^~~~~~~~~~~~~~~~~~~~~
902 | ret = PTR_ERR(handle);
| ~~~~~~~~~~~~~~~~~~~~~~
903 | goto out;
| ~~~~~~~~~
904 | }
| ~
fs/ext4/inline.c:891:21: note: initialize the variable 'folio' to silence this warning
891 | struct folio *folio;
| ^
| = NULL
11 warnings generated.
vim +914 fs/ext4/inline.c
9c3569b50f12e47 Tao Ma 2012-12-10 877
9c3569b50f12e47 Tao Ma 2012-12-10 878 /*
9c3569b50f12e47 Tao Ma 2012-12-10 879 * Prepare the write for the inline data.
8d6ce136790268f Shijie Luo 2020-01-23 880 * If the data can be written into the inode, we just read
9c3569b50f12e47 Tao Ma 2012-12-10 881 * the page and make it uptodate, and start the journal.
9c3569b50f12e47 Tao Ma 2012-12-10 882 * Otherwise read the page, makes it dirty so that it can be
9c3569b50f12e47 Tao Ma 2012-12-10 883 * handle in writepages(the i_disksize update is left to the
9c3569b50f12e47 Tao Ma 2012-12-10 884 * normal ext4_da_write_end).
9c3569b50f12e47 Tao Ma 2012-12-10 885 */
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 886) struct folio *ext4_da_write_inline_data_begin(struct address_space *mapping,
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 887) struct inode *inode, loff_t pos, size_t len)
9c3569b50f12e47 Tao Ma 2012-12-10 888 {
09355d9d038a159 Ritesh Harjani 2022-01-17 889 int ret;
9c3569b50f12e47 Tao Ma 2012-12-10 890 handle_t *handle;
9a9d01f081ea29a Matthew Wilcox 2023-03-24 891 struct folio *folio;
9c3569b50f12e47 Tao Ma 2012-12-10 892 struct ext4_iloc iloc;
625ef8a3acd111d Lukas Czerner 2018-10-02 893 int retries = 0;
9c3569b50f12e47 Tao Ma 2012-12-10 894
9c3569b50f12e47 Tao Ma 2012-12-10 895 ret = ext4_get_inode_loc(inode, &iloc);
9c3569b50f12e47 Tao Ma 2012-12-10 896 if (ret)
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 897) return ERR_PTR(ret);
9c3569b50f12e47 Tao Ma 2012-12-10 898
bc0ca9df3b2abb1 Jan Kara 2014-01-06 899 retry_journal:
9924a92a8c21757 Theodore Ts'o 2013-02-08 900 handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
9c3569b50f12e47 Tao Ma 2012-12-10 901 if (IS_ERR(handle)) {
9c3569b50f12e47 Tao Ma 2012-12-10 902 ret = PTR_ERR(handle);
9c3569b50f12e47 Tao Ma 2012-12-10 903 goto out;
9c3569b50f12e47 Tao Ma 2012-12-10 904 }
9c3569b50f12e47 Tao Ma 2012-12-10 905
9c3569b50f12e47 Tao Ma 2012-12-10 906 ret = ext4_prepare_inline_data(handle, inode, pos + len);
9c3569b50f12e47 Tao Ma 2012-12-10 @907 if (ret && ret != -ENOSPC)
52e4477758eef45 Jan Kara 2014-01-06 908 goto out_journal;
9c3569b50f12e47 Tao Ma 2012-12-10 909
9c3569b50f12e47 Tao Ma 2012-12-10 910 if (ret == -ENOSPC) {
8bc1379b82b8e80 Theodore Ts'o 2018-06-16 911 ext4_journal_stop(handle);
9c3569b50f12e47 Tao Ma 2012-12-10 912 ret = ext4_da_convert_inline_data_to_extent(mapping,
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 913) inode);
bc0ca9df3b2abb1 Jan Kara 2014-01-06 @914 if (ret == -ENOSPC &&
bc0ca9df3b2abb1 Jan Kara 2014-01-06 915 ext4_should_retry_alloc(inode->i_sb, &retries))
bc0ca9df3b2abb1 Jan Kara 2014-01-06 916 goto retry_journal;
9c3569b50f12e47 Tao Ma 2012-12-10 917 goto out;
9c3569b50f12e47 Tao Ma 2012-12-10 918 }
9c3569b50f12e47 Tao Ma 2012-12-10 919
36d116e99da7e45 Matthew Wilcox (Oracle 2022-02-22 920) /*
36d116e99da7e45 Matthew Wilcox (Oracle 2022-02-22 921) * We cannot recurse into the filesystem as the transaction
36d116e99da7e45 Matthew Wilcox (Oracle 2022-02-22 922) * is already started.
36d116e99da7e45 Matthew Wilcox (Oracle 2022-02-22 923) */
9a9d01f081ea29a Matthew Wilcox 2023-03-24 924 folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
9a9d01f081ea29a Matthew Wilcox 2023-03-24 925 mapping_gfp_mask(mapping));
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 926) if (IS_ERR(folio))
52e4477758eef45 Jan Kara 2014-01-06 927 goto out_journal;
9c3569b50f12e47 Tao Ma 2012-12-10 928
9c3569b50f12e47 Tao Ma 2012-12-10 929 down_read(&EXT4_I(inode)->xattr_sem);
9c3569b50f12e47 Tao Ma 2012-12-10 930 if (!ext4_has_inline_data(inode)) {
9c3569b50f12e47 Tao Ma 2012-12-10 931 ret = 0;
9c3569b50f12e47 Tao Ma 2012-12-10 932 goto out_release_page;
9c3569b50f12e47 Tao Ma 2012-12-10 933 }
9c3569b50f12e47 Tao Ma 2012-12-10 934
9a9d01f081ea29a Matthew Wilcox 2023-03-24 935 if (!folio_test_uptodate(folio)) {
6b87fbe4155007c Matthew Wilcox 2023-03-24 936 ret = ext4_read_inline_folio(inode, folio);
9c3569b50f12e47 Tao Ma 2012-12-10 937 if (ret < 0)
9c3569b50f12e47 Tao Ma 2012-12-10 938 goto out_release_page;
9c3569b50f12e47 Tao Ma 2012-12-10 939 }
188c299e2a26cc3 Jan Kara 2021-08-16 940 ret = ext4_journal_get_write_access(handle, inode->i_sb, iloc.bh,
188c299e2a26cc3 Jan Kara 2021-08-16 941 EXT4_JTR_NONE);
362eca70b53389b Theodore Ts'o 2018-07-10 942 if (ret)
362eca70b53389b Theodore Ts'o 2018-07-10 943 goto out_release_page;
9c3569b50f12e47 Tao Ma 2012-12-10 944
9c3569b50f12e47 Tao Ma 2012-12-10 945 up_read(&EXT4_I(inode)->xattr_sem);
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 946) goto out;
9c3569b50f12e47 Tao Ma 2012-12-10 947 out_release_page:
9c3569b50f12e47 Tao Ma 2012-12-10 948 up_read(&EXT4_I(inode)->xattr_sem);
9a9d01f081ea29a Matthew Wilcox 2023-03-24 949 folio_unlock(folio);
9a9d01f081ea29a Matthew Wilcox 2023-03-24 950 folio_put(folio);
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 951) folio = ERR_PTR(ret);
52e4477758eef45 Jan Kara 2014-01-06 952 out_journal:
9c3569b50f12e47 Tao Ma 2012-12-10 953 ext4_journal_stop(handle);
52e4477758eef45 Jan Kara 2014-01-06 954 out:
9c3569b50f12e47 Tao Ma 2012-12-10 955 brelse(iloc.bh);
8ca000469995a1f Matthew Wilcox (Oracle 2024-05-28 956) return folio;
9c3569b50f12e47 Tao Ma 2012-12-10 957 }
9c3569b50f12e47 Tao Ma 2012-12-10 958
* [PATCH 6/7] ext4: Convert to buffered_write_operations
2024-05-28 16:48 ` [PATCH 3/7] buffer: Add buffer_write_begin, buffer_write_end and __buffer_write_end Matthew Wilcox (Oracle)
@ 2024-05-28 16:48 ` Matthew Wilcox (Oracle)
2024-05-28 23:42 ` kernel test robot
From: Matthew Wilcox (Oracle) @ 2024-05-28 16:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Matthew Wilcox (Oracle), linux-fsdevel, linux-ext4
Pass the appropriate buffered_write_operations to filemap_perform_write().
Saves a lot of page<->folio conversions.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/buffer.c | 2 +-
fs/ext4/ext4.h | 24 ++++-----
fs/ext4/file.c | 12 ++++-
fs/ext4/inline.c | 66 ++++++++++-------------
fs/ext4/inode.c | 134 ++++++++++++++++++++---------------------------
5 files changed, 108 insertions(+), 130 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 4064b21fe499..98f116e8abde 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2299,7 +2299,7 @@ int block_write_end(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata)
{
- return buffer_write_end(file, mapping, pos, len, copied,
+ return __buffer_write_end(file, mapping, pos, len, copied,
page_folio(page));
}
EXPORT_SYMBOL(block_write_end);
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 983dad8c07ec..b6f7509e3f55 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2971,8 +2971,6 @@ int ext4_walk_page_buffers(handle_t *handle,
struct buffer_head *bh));
int do_journal_get_write_access(handle_t *handle, struct inode *inode,
struct buffer_head *bh);
-#define FALL_BACK_TO_NONDELALLOC 1
-#define CONVERT_INLINE_DATA 2
typedef enum {
EXT4_IGET_NORMAL = 0,
@@ -3011,6 +3009,7 @@ extern int ext4_break_layouts(struct inode *);
extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
extern void ext4_set_inode_flags(struct inode *, bool init);
extern int ext4_alloc_da_blocks(struct inode *inode);
+int ext4_nonda_switch(struct super_block *sb);
extern void ext4_set_aops(struct inode *inode);
extern int ext4_writepage_trans_blocks(struct inode *);
extern int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode);
@@ -3026,6 +3025,10 @@ extern void ext4_da_update_reserve_space(struct inode *inode,
extern int ext4_issue_zeroout(struct inode *inode, ext4_lblk_t lblk,
ext4_fsblk_t pblk, ext4_lblk_t len);
+extern const struct buffered_write_operations ext4_bw_ops;
+extern const struct buffered_write_operations ext4_journalled_bw_ops;
+extern const struct buffered_write_operations ext4_da_bw_ops;
+
/* indirect.c */
extern int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
struct ext4_map_blocks *map, int flags);
@@ -3551,17 +3554,12 @@ extern int ext4_find_inline_data_nolock(struct inode *inode);
extern int ext4_destroy_inline_data(handle_t *handle, struct inode *inode);
int ext4_readpage_inline(struct inode *inode, struct folio *folio);
-extern int ext4_try_to_write_inline_data(struct address_space *mapping,
- struct inode *inode,
- loff_t pos, unsigned len,
- struct page **pagep);
-int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
- unsigned copied, struct folio *folio);
-extern int ext4_da_write_inline_data_begin(struct address_space *mapping,
- struct inode *inode,
- loff_t pos, unsigned len,
- struct page **pagep,
- void **fsdata);
+struct folio *ext4_try_to_write_inline_data(struct address_space *mapping,
+ struct inode *inode, loff_t pos, size_t len);
+size_t ext4_write_inline_data_end(struct inode *inode, loff_t pos, size_t len,
+ size_t copied, struct folio *folio);
+struct folio *ext4_da_write_inline_data_begin(struct address_space *mapping,
+ struct inode *inode, loff_t pos, size_t len);
extern int ext4_try_add_inline_entry(handle_t *handle,
struct ext4_filename *fname,
struct inode *dir, struct inode *inode);
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index c89e434db6b7..08c2772966a9 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -287,16 +287,26 @@ static ssize_t ext4_buffered_write_iter(struct kiocb *iocb,
{
ssize_t ret;
struct inode *inode = file_inode(iocb->ki_filp);
+ const struct buffered_write_operations *ops;
if (iocb->ki_flags & IOCB_NOWAIT)
return -EOPNOTSUPP;
+ if (ext4_should_journal_data(inode))
+ ops = &ext4_journalled_bw_ops;
+ else if (test_opt(inode->i_sb, DELALLOC) &&
+ !ext4_nonda_switch(inode->i_sb) &&
+ !ext4_verity_in_progress(inode))
+ ops = &ext4_da_bw_ops;
+ else
+ ops = &ext4_bw_ops;
+
inode_lock(inode);
ret = ext4_write_checks(iocb, from);
if (ret <= 0)
goto out;
- ret = generic_perform_write(iocb, from);
+ ret = filemap_perform_write(iocb, from, ops, NULL);
out:
inode_unlock(inode);
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index d5bd1e3a5d36..cb5cb2cc9c2b 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -538,8 +538,9 @@ int ext4_readpage_inline(struct inode *inode, struct folio *folio)
return ret >= 0 ? 0 : ret;
}
-static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
- struct inode *inode)
+/* Returns NULL on success, ERR_PTR on failure */
+static void *ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ struct inode *inode)
{
int ret, needed_blocks, no_expand;
handle_t *handle = NULL;
@@ -554,14 +555,14 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
* will trap here again.
*/
ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
- return 0;
+ return NULL;
}
needed_blocks = ext4_writepage_trans_blocks(inode);
ret = ext4_get_inode_loc(inode, &iloc);
if (ret)
- return ret;
+ return ERR_PTR(ret);
retry:
handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks);
@@ -648,7 +649,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
if (handle)
ext4_journal_stop(handle);
brelse(iloc.bh);
- return ret;
+ return ERR_PTR(ret);
}
/*
@@ -657,10 +658,8 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
* in the inode also. If not, create the page the handle, move the data
* to the page make it update and let the later codes create extent for it.
*/
-int ext4_try_to_write_inline_data(struct address_space *mapping,
- struct inode *inode,
- loff_t pos, unsigned len,
- struct page **pagep)
+struct folio *ext4_try_to_write_inline_data(struct address_space *mapping,
+ struct inode *inode, loff_t pos, size_t len)
{
int ret;
handle_t *handle;
@@ -672,7 +671,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
ret = ext4_get_inode_loc(inode, &iloc);
if (ret)
- return ret;
+ return ERR_PTR(ret);
/*
* The possible write could happen in the inode,
@@ -680,7 +679,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
*/
handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
if (IS_ERR(handle)) {
- ret = PTR_ERR(handle);
+ folio = ERR_CAST(handle);
handle = NULL;
goto out;
}
@@ -703,17 +702,14 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
mapping_gfp_mask(mapping));
- if (IS_ERR(folio)) {
- ret = PTR_ERR(folio);
+ if (IS_ERR(folio))
goto out;
- }
- *pagep = &folio->page;
down_read(&EXT4_I(inode)->xattr_sem);
if (!ext4_has_inline_data(inode)) {
- ret = 0;
folio_unlock(folio);
folio_put(folio);
+ folio = NULL;
goto out_up_read;
}
@@ -726,21 +722,22 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
}
}
- ret = 1;
handle = NULL;
out_up_read:
up_read(&EXT4_I(inode)->xattr_sem);
out:
- if (handle && (ret != 1))
+ if (ret < 0)
+ folio = ERR_PTR(ret);
+ if (handle && IS_ERR_OR_NULL(folio))
ext4_journal_stop(handle);
brelse(iloc.bh);
- return ret;
+ return folio;
convert:
return ext4_convert_inline_data_to_extent(mapping, inode);
}
-int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
- unsigned copied, struct folio *folio)
+size_t ext4_write_inline_data_end(struct inode *inode, loff_t pos, size_t len,
+ size_t copied, struct folio *folio)
{
handle_t *handle = ext4_journal_current_handle();
int no_expand;
@@ -831,8 +828,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
* need to start the journal since the file's metadata isn't changed now.
*/
static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
- struct inode *inode,
- void **fsdata)
+ struct inode *inode)
{
int ret = 0, inline_size;
struct folio *folio;
@@ -869,7 +865,6 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
folio_mark_dirty(folio);
folio_mark_uptodate(folio);
ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
- *fsdata = (void *)CONVERT_INLINE_DATA;
out:
up_read(&EXT4_I(inode)->xattr_sem);
@@ -888,11 +883,8 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
* handle in writepages(the i_disksize update is left to the
* normal ext4_da_write_end).
*/
-int ext4_da_write_inline_data_begin(struct address_space *mapping,
- struct inode *inode,
- loff_t pos, unsigned len,
- struct page **pagep,
- void **fsdata)
+struct folio *ext4_da_write_inline_data_begin(struct address_space *mapping,
+ struct inode *inode, loff_t pos, size_t len)
{
int ret;
handle_t *handle;
@@ -902,7 +894,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
ret = ext4_get_inode_loc(inode, &iloc);
if (ret)
- return ret;
+ return ERR_PTR(ret);
retry_journal:
handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
@@ -918,8 +910,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
if (ret == -ENOSPC) {
ext4_journal_stop(handle);
ret = ext4_da_convert_inline_data_to_extent(mapping,
- inode,
- fsdata);
+ inode);
if (ret == -ENOSPC &&
ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry_journal;
@@ -932,10 +923,8 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
*/
folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
mapping_gfp_mask(mapping));
- if (IS_ERR(folio)) {
- ret = PTR_ERR(folio);
+ if (IS_ERR(folio))
goto out_journal;
- }
down_read(&EXT4_I(inode)->xattr_sem);
if (!ext4_has_inline_data(inode)) {
@@ -954,18 +943,17 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
goto out_release_page;
up_read(&EXT4_I(inode)->xattr_sem);
- *pagep = &folio->page;
- brelse(iloc.bh);
- return 1;
+ goto out;
out_release_page:
up_read(&EXT4_I(inode)->xattr_sem);
folio_unlock(folio);
folio_put(folio);
+ folio = ERR_PTR(ret);
out_journal:
ext4_journal_stop(handle);
out:
brelse(iloc.bh);
- return ret;
+ return folio;
}
#ifdef INLINE_DIR_DEBUG
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4bae9ccf5fe0..e9526e55e86c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1014,7 +1014,7 @@ int do_journal_get_write_access(handle_t *handle, struct inode *inode,
#ifdef CONFIG_FS_ENCRYPTION
static int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
- get_block_t *get_block)
+ get_block_t *get_block, void **fsdata)
{
unsigned from = pos & (PAGE_SIZE - 1);
unsigned to = from + len;
@@ -1114,9 +1114,9 @@ static int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
* and the ext4_write_end(). So doing the jbd2_journal_start at the start of
* ext4_write_begin() is the right place.
*/
-static int ext4_write_begin(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len,
- struct page **pagep, void **fsdata)
+static struct folio *ext4_write_begin(struct file *file,
+ struct address_space *mapping, loff_t pos, size_t len,
+ void **fsdata)
{
struct inode *inode = mapping->host;
int ret, needed_blocks;
@@ -1127,7 +1127,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
unsigned from, to;
if (unlikely(ext4_forced_shutdown(inode->i_sb)))
- return -EIO;
+ return ERR_PTR(-EIO);
trace_ext4_write_begin(inode, pos, len);
/*
@@ -1140,12 +1140,9 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
to = from + len;
if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
- ret = ext4_try_to_write_inline_data(mapping, inode, pos, len,
- pagep);
- if (ret < 0)
- return ret;
- if (ret == 1)
- return 0;
+ folio = ext4_try_to_write_inline_data(mapping, inode, pos, len);
+ if (folio)
+ return folio;
}
/*
@@ -1159,7 +1156,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
mapping_gfp_mask(mapping));
if (IS_ERR(folio))
- return PTR_ERR(folio);
+ return folio;
/*
* The same as page allocation, we prealloc buffer heads before
* starting the handle.
@@ -1173,7 +1170,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks);
if (IS_ERR(handle)) {
folio_put(folio);
- return PTR_ERR(handle);
+ return ERR_CAST(handle);
}
folio_lock(folio);
@@ -1190,9 +1187,10 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
#ifdef CONFIG_FS_ENCRYPTION
if (ext4_should_dioread_nolock(inode))
ret = ext4_block_write_begin(folio, pos, len,
- ext4_get_block_unwritten);
+ ext4_get_block_unwritten, fsdata);
else
- ret = ext4_block_write_begin(folio, pos, len, ext4_get_block);
+ ret = ext4_block_write_begin(folio, pos, len, ext4_get_block,
+ fsdata);
#else
if (ext4_should_dioread_nolock(inode))
ret = __block_write_begin(&folio->page, pos, len,
@@ -1239,10 +1237,9 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry_journal;
folio_put(folio);
- return ret;
+ return ERR_PTR(ret);
}
- *pagep = &folio->page;
- return ret;
+ return folio;
}
/* For write_end() in data=journal mode */
@@ -1266,12 +1263,10 @@ static int write_end_fn(handle_t *handle, struct inode *inode,
* ext4 never places buffers on inode->i_mapping->i_private_list. metadata
* buffers are managed internally.
*/
-static int ext4_write_end(struct file *file,
- struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *page, void *fsdata)
+static size_t ext4_write_end(struct file *file, struct address_space *mapping,
+ loff_t pos, size_t len, size_t copied, struct folio *folio,
+ void **fsdata)
{
- struct folio *folio = page_folio(page);
handle_t *handle = ext4_journal_current_handle();
struct inode *inode = mapping->host;
loff_t old_size = inode->i_size;
@@ -1286,7 +1281,7 @@ static int ext4_write_end(struct file *file,
return ext4_write_inline_data_end(inode, pos, len, copied,
folio);
- copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
+ copied = __buffer_write_end(file, mapping, pos, len, copied, folio);
/*
* it's important to update i_size while still holding folio lock:
* page writeout could otherwise come in and zero beyond i_size.
@@ -1370,12 +1365,10 @@ static void ext4_journalled_zero_new_buffers(handle_t *handle,
} while (bh != head);
}
-static int ext4_journalled_write_end(struct file *file,
- struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *page, void *fsdata)
+static size_t ext4_journalled_write_end(struct file *file,
+ struct address_space *mapping, loff_t pos, size_t len,
+ size_t copied, struct folio *folio, void **fsdata)
{
- struct folio *folio = page_folio(page);
handle_t *handle = ext4_journal_current_handle();
struct inode *inode = mapping->host;
loff_t old_size = inode->i_size;
@@ -2816,7 +2809,7 @@ static int ext4_dax_writepages(struct address_space *mapping,
return ret;
}
-static int ext4_nonda_switch(struct super_block *sb)
+int ext4_nonda_switch(struct super_block *sb)
{
s64 free_clusters, dirty_clusters;
struct ext4_sb_info *sbi = EXT4_SB(sb);
@@ -2850,45 +2843,35 @@ static int ext4_nonda_switch(struct super_block *sb)
return 0;
}
-static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len,
- struct page **pagep, void **fsdata)
+static struct folio *ext4_da_write_begin(struct file *file,
+ struct address_space *mapping, loff_t pos, size_t len,
+ void **fsdata)
{
int ret, retries = 0;
struct folio *folio;
- pgoff_t index;
struct inode *inode = mapping->host;
if (unlikely(ext4_forced_shutdown(inode->i_sb)))
- return -EIO;
+ return ERR_PTR(-EIO);
- index = pos >> PAGE_SHIFT;
-
- if (ext4_nonda_switch(inode->i_sb) || ext4_verity_in_progress(inode)) {
- *fsdata = (void *)FALL_BACK_TO_NONDELALLOC;
- return ext4_write_begin(file, mapping, pos,
- len, pagep, fsdata);
- }
- *fsdata = (void *)0;
trace_ext4_da_write_begin(inode, pos, len);
if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
- ret = ext4_da_write_inline_data_begin(mapping, inode, pos, len,
- pagep, fsdata);
- if (ret < 0)
- return ret;
- if (ret == 1)
- return 0;
+ folio = ext4_da_write_inline_data_begin(mapping, inode, pos,
+ len);
+ if (folio)
+ return folio;
}
retry:
- folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+ folio = __filemap_get_folio(mapping, pos / PAGE_SIZE, FGP_WRITEBEGIN,
mapping_gfp_mask(mapping));
if (IS_ERR(folio))
- return PTR_ERR(folio);
+ return folio;
#ifdef CONFIG_FS_ENCRYPTION
- ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
+ ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep,
+ fsdata);
#else
ret = __block_write_begin(&folio->page, pos, len, ext4_da_get_block_prep);
#endif
@@ -2906,11 +2889,10 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
if (ret == -ENOSPC &&
ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry;
- return ret;
+ return ERR_PTR(ret);
}
- *pagep = &folio->page;
- return ret;
+ return folio;
}
/*
@@ -2936,9 +2918,8 @@ static int ext4_da_should_update_i_disksize(struct folio *folio,
return 1;
}
-static int ext4_da_do_write_end(struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct folio *folio)
+static size_t ext4_da_do_write_end(struct address_space *mapping, loff_t pos,
+ size_t len, size_t copied, struct folio *folio)
{
struct inode *inode = mapping->host;
loff_t old_size = inode->i_size;
@@ -2998,23 +2979,15 @@ static int ext4_da_do_write_end(struct address_space *mapping,
return copied;
}
-static int ext4_da_write_end(struct file *file,
- struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *page, void *fsdata)
+static size_t ext4_da_write_end(struct file *file,
+ struct address_space *mapping, loff_t pos, size_t len,
+ size_t copied, struct folio *folio, void **fsdata)
{
struct inode *inode = mapping->host;
- int write_mode = (int)(unsigned long)fsdata;
- struct folio *folio = page_folio(page);
-
- if (write_mode == FALL_BACK_TO_NONDELALLOC)
- return ext4_write_end(file, mapping, pos,
- len, copied, &folio->page, fsdata);
trace_ext4_da_write_end(inode, pos, len, copied);
- if (write_mode != CONVERT_INLINE_DATA &&
- ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
+ if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
ext4_has_inline_data(inode))
return ext4_write_inline_data_end(inode, pos, len, copied,
folio);
@@ -3521,8 +3494,6 @@ static const struct address_space_operations ext4_aops = {
.read_folio = ext4_read_folio,
.readahead = ext4_readahead,
.writepages = ext4_writepages,
- .write_begin = ext4_write_begin,
- .write_end = ext4_write_end,
.dirty_folio = ext4_dirty_folio,
.bmap = ext4_bmap,
.invalidate_folio = ext4_invalidate_folio,
@@ -3537,8 +3508,6 @@ static const struct address_space_operations ext4_journalled_aops = {
.read_folio = ext4_read_folio,
.readahead = ext4_readahead,
.writepages = ext4_writepages,
- .write_begin = ext4_write_begin,
- .write_end = ext4_journalled_write_end,
.dirty_folio = ext4_journalled_dirty_folio,
.bmap = ext4_bmap,
.invalidate_folio = ext4_journalled_invalidate_folio,
@@ -3553,8 +3522,6 @@ static const struct address_space_operations ext4_da_aops = {
.read_folio = ext4_read_folio,
.readahead = ext4_readahead,
.writepages = ext4_writepages,
- .write_begin = ext4_da_write_begin,
- .write_end = ext4_da_write_end,
.dirty_folio = ext4_dirty_folio,
.bmap = ext4_bmap,
.invalidate_folio = ext4_invalidate_folio,
@@ -3572,6 +3539,21 @@ static const struct address_space_operations ext4_dax_aops = {
.swap_activate = ext4_iomap_swap_activate,
};
+const struct buffered_write_operations ext4_bw_ops = {
+ .write_begin = ext4_write_begin,
+ .write_end = ext4_write_end,
+};
+
+const struct buffered_write_operations ext4_journalled_bw_ops = {
+ .write_begin = ext4_write_begin,
+ .write_end = ext4_journalled_write_end,
+};
+
+const struct buffered_write_operations ext4_da_bw_ops = {
+ .write_begin = ext4_da_write_begin,
+ .write_end = ext4_da_write_end,
+};
+
void ext4_set_aops(struct inode *inode)
{
switch (ext4_inode_journal_mode(inode)) {
--
2.43.0
^ permalink raw reply related [relevance 13%]
* [PATCH 3/7] buffer: Add buffer_write_begin, buffer_write_end and __buffer_write_end
@ 2024-05-28 16:48 5% ` Matthew Wilcox (Oracle)
2024-05-28 16:48 13% ` [PATCH 6/7] ext4: Convert to buffered_write_operations Matthew Wilcox (Oracle)
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-05-28 16:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Matthew Wilcox (Oracle), linux-fsdevel, linux-ext4
These functions are to be called from filesystem implementations of
write_begin and write_end. They correspond to block_write_begin,
generic_write_end and block_write_end. The old functions need to
be kept around as they're used as function pointers.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/buffer.c | 80 ++++++++++++++++++++++++++-----------
include/linux/buffer_head.h | 6 +++
2 files changed, 63 insertions(+), 23 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 58ac52f20bf6..4064b21fe499 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2217,39 +2217,55 @@ static void __block_commit_write(struct folio *folio, size_t from, size_t to)
}
/*
- * block_write_begin takes care of the basic task of block allocation and
+ * buffer_write_begin - Helper for filesystem write_begin implementations
+ * @mapping: Address space being written to.
+ * @pos: Position in bytes within the file.
+ * @len: Number of bytes being written.
+ * @get_block: How to get buffer_heads for this filesystem.
+ *
+ * Take care of the basic task of block allocation and
* bringing partial write blocks uptodate first.
*
* The filesystem needs to handle block truncation upon failure.
+ *
+ * Return: The folio to write to, or an ERR_PTR on failure.
*/
-int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
- struct page **pagep, get_block_t *get_block)
+struct folio *buffer_write_begin(struct address_space *mapping, loff_t pos,
+ size_t len, get_block_t *get_block)
{
- pgoff_t index = pos >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio = __filemap_get_folio(mapping, pos / PAGE_SIZE,
+ FGP_WRITEBEGIN, mapping_gfp_mask(mapping));
int status;
- page = grab_cache_page_write_begin(mapping, index);
- if (!page)
- return -ENOMEM;
+ if (IS_ERR(folio))
+ return folio;
- status = __block_write_begin(page, pos, len, get_block);
+ status = __block_write_begin_int(folio, pos, len, get_block, NULL);
if (unlikely(status)) {
- unlock_page(page);
- put_page(page);
- page = NULL;
+ folio_unlock(folio);
+ folio_put(folio);
+ folio = ERR_PTR(status);
}
- *pagep = page;
- return status;
+ return folio;
+}
+EXPORT_SYMBOL(buffer_write_begin);
+
+int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,
+ struct page **pagep, get_block_t *get_block)
+{
+ struct folio *folio = buffer_write_begin(mapping, pos, len, get_block);
+
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+ *pagep = &folio->page;
+ return 0;
}
EXPORT_SYMBOL(block_write_begin);
-int block_write_end(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *page, void *fsdata)
+size_t __buffer_write_end(struct file *file, struct address_space *mapping,
+ loff_t pos, size_t len, size_t copied, struct folio *folio)
{
- struct folio *folio = page_folio(page);
size_t start = pos - folio_pos(folio);
if (unlikely(copied < len)) {
@@ -2277,17 +2293,26 @@ int block_write_end(struct file *file, struct address_space *mapping,
return copied;
}
-EXPORT_SYMBOL(block_write_end);
+EXPORT_SYMBOL(__buffer_write_end);
-int generic_write_end(struct file *file, struct address_space *mapping,
+int block_write_end(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata)
+{
+ return buffer_write_end(file, mapping, pos, len, copied,
+ page_folio(page));
+}
+EXPORT_SYMBOL(block_write_end);
+
+size_t buffer_write_end(struct file *file, struct address_space *mapping,
+ loff_t pos, size_t len, size_t copied,
+ struct folio *folio)
{
struct inode *inode = mapping->host;
loff_t old_size = inode->i_size;
bool i_size_changed = false;
- copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
+ copied = __buffer_write_end(file, mapping, pos, len, copied, folio);
/*
* No need to use i_size_read() here, the i_size cannot change under us
@@ -2301,8 +2326,8 @@ int generic_write_end(struct file *file, struct address_space *mapping,
i_size_changed = true;
}
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
if (old_size < pos)
pagecache_isize_extended(inode, old_size, pos);
@@ -2316,6 +2341,15 @@ int generic_write_end(struct file *file, struct address_space *mapping,
mark_inode_dirty(inode);
return copied;
}
+EXPORT_SYMBOL(buffer_write_end);
+
+int generic_write_end(struct file *file, struct address_space *mapping,
+ loff_t pos, unsigned len, unsigned copied,
+ struct page *page, void *fsdata)
+{
+ return buffer_write_end(file, mapping, pos, len, copied,
+ page_folio(page));
+}
EXPORT_SYMBOL(generic_write_end);
/*
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 1b7f14e39ab8..44e4b2b18cc0 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -267,6 +267,12 @@ int block_write_end(struct file *, struct address_space *,
int generic_write_end(struct file *, struct address_space *,
loff_t, unsigned, unsigned,
struct page *, void *);
+struct folio *buffer_write_begin(struct address_space *mapping, loff_t pos,
+ size_t len, get_block_t *get_block);
+size_t buffer_write_end(struct file *, struct address_space *, loff_t pos,
+ size_t len, size_t copied, struct folio *);
+size_t __buffer_write_end(struct file *, struct address_space *, loff_t pos,
+ size_t len, size_t copied, struct folio *);
void folio_zero_new_buffers(struct folio *folio, size_t from, size_t to);
int cont_write_begin(struct file *, struct address_space *, loff_t pos,
unsigned len, struct page **, void **fsdata, get_block_t *,
--
2.43.0
* Re: page type is 3, passed migratetype is 1 (nr=512)
2024-05-27 13:14 5% ` Christoph Hellwig
@ 2024-05-28 16:47 0% ` Johannes Weiner
From: Johannes Weiner @ 2024-05-28 16:47 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andy Shevchenko, Baolin Wang, linux-mm
Hello,
On Mon, May 27, 2024 at 06:14:12AM -0700, Christoph Hellwig wrote:
> On Mon, May 27, 2024 at 01:58:25AM -0700, Christoph Hellwig wrote:
> > Hi all,
> >
> > when running xfstests on nfs against a local server I see warnings like
> > the ones above, which appear to have been added in commit
> > e0932b6c1f94 (mm: page_alloc: consolidate free page accounting").
>
> I've also reproduced this with xfstests on local xfs and no nfs in the
> loop:
>
> generic/176 214s ... [ 1204.507931] run fstests generic/176 at 2024-05-27 12:52:30
> [ 1204.969286] XFS (nvme0n1): Mounting V5 Filesystem cd936307-415f-48a3-b99d-a2d52ae1f273
> [ 1204.993621] XFS (nvme0n1): Ending clean mount
> [ 1205.387032] XFS (nvme1n1): Mounting V5 Filesystem ab3ee1a4-af62-4934-9a6a-6c2fde321850
> [ 1205.412322] XFS (nvme1n1): Ending clean mount
> [ 1205.440388] XFS (nvme1n1): Unmounting Filesystem ab3ee1a4-af62-4934-9a6a-6c2fde321850
> [ 1205.808063] XFS (nvme1n1): Mounting V5 Filesystem 7099b02d-9c58-4d1d-be1d-2cc472d12cd9
> [ 1205.827290] XFS (nvme1n1): Ending clean mount
> [ 1208.058931] ------------[ cut here ]------------
> [ 1208.059613] page type is 3, passed migratetype is 1 (nr=512)
> [ 1208.060402] WARNING: CPU: 0 PID: 509870 at mm/page_alloc.c:645 expand+0x1c5/0x1f0
> [ 1208.061352] Modules linked in: i2c_i801 crc32_pclmul i2c_smbus [last unloaded: scsi_debug]
> [ 1208.062344] CPU: 0 PID: 509870 Comm: xfs_io Not tainted 6.10.0-rc1+ #2437
> [ 1208.063150] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Thanks for the report.
Could you please send me your .config? I'll try to reproduce it
locally.
> [ 1208.064204] RIP: 0010:expand+0x1c5/0x1f0
> [ 1208.064625] Code: 05 16 70 bf 02 01 e8 ca fc ff ff 8b 54 24 34 44 89 e1 48 c7 c7 80 a2 28 83 48 89 c6 b8 01 00 3
> [ 1208.066555] RSP: 0018:ffffc90003b2b968 EFLAGS: 00010082
> [ 1208.067111] RAX: 0000000000000000 RBX: ffffffff83fa9480 RCX: 0000000000000000
> [ 1208.067872] RDX: 0000000000000005 RSI: 0000000000000027 RDI: 00000000ffffffff
> [ 1208.068629] RBP: 00000000001f2600 R08: 00000000fffeffff R09: 0000000000000001
> [ 1208.069336] R10: 0000000000000000 R11: ffffffff83676200 R12: 0000000000000009
> [ 1208.070038] R13: 0000000000000200 R14: 0000000000000001 R15: ffffea0007c98000
> [ 1208.070750] FS: 00007f72ca3d5780(0000) GS:ffff8881f9c00000(0000) knlGS:0000000000000000
> [ 1208.071552] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 1208.072121] CR2: 00007f72ca1fff38 CR3: 00000001aa0c6002 CR4: 0000000000770ef0
> [ 1208.072829] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 1208.073527] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7: 0000000000000400
> [ 1208.074225] PKRU: 55555554
> [ 1208.074507] Call Trace:
> [ 1208.074758] <TASK>
> [ 1208.074977] ? __warn+0x7b/0x120
> [ 1208.075308] ? expand+0x1c5/0x1f0
> [ 1208.075652] ? report_bug+0x191/0x1c0
> [ 1208.076043] ? handle_bug+0x3c/0x80
> [ 1208.076400] ? exc_invalid_op+0x17/0x70
> [ 1208.076782] ? asm_exc_invalid_op+0x1a/0x20
> [ 1208.077203] ? expand+0x1c5/0x1f0
> [ 1208.077543] ? expand+0x1c5/0x1f0
> [ 1208.077878] __rmqueue_pcplist+0x3a9/0x730
Ok so the allocator is taking a larger buddy off the freelist to
satisfy a smaller request, then puts the remainder back on the list.
There is no warning from the del_page_from_free_list(), so the buddy
type and the type of the list it was taken from are coherent.
The warning happens when it expands the remainder of the buddy and
finds the tail block to be of a different type.
Specifically, it takes a movable buddy (type 1) off the movable list,
but finds a tail block of it marked highatomic (type 3).
I don't see how we could have merged those during freeing, because the
highatomic buddy would have failed migratetype_is_mergeable().
Ah, but there DOES seem to be an issue with how we reserve
highatomics: reserving and unreserving happens one pageblock at a
time, but MAX_ORDER is usually bigger. If we rmqueue() an order-10
request, reserve_highatomic_pageblock() will only convert the first
order-9 block in it; the tail will remain the original type, which
will produce a buddy of mixed type blocks upon freeing.
This doesn't fully explain the warning here. We'd expect to see it the
other way round - passing an assumed type of 3 (HIGHATOMIC) for the
remainder that is actually 1 (MOVABLE). But the pageblock-based
reservations look fishy. I'll cook up a patch to make this
range-based. It might just fix it in a way I'm not seeing just yet.
> [ 1208.078285] get_page_from_freelist+0x7a0/0xf00
> [ 1208.078745] __alloc_pages_noprof+0x153/0x2e0
> [ 1208.079181] __folio_alloc_noprof+0x10/0xa0
> [ 1208.079603] __filemap_get_folio+0x16b/0x370
> [ 1208.080030] iomap_write_begin+0x496/0x680
> [ 1208.080441] iomap_file_buffered_write+0x17f/0x440
> [ 1208.080916] xfs_file_buffered_write+0x7e/0x2a0
> [ 1208.081374] vfs_write+0x262/0x440
> [ 1208.081717] __x64_sys_pwrite64+0x8f/0xc0
> [ 1208.082112] do_syscall_64+0x4f/0x120
> [ 1208.082487] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 1208.082982] RIP: 0033:0x7f72ca4ce2b7
> [ 1208.083350] Code: 08 89 3c 24 48 89 4c 24 18 e8 15 f4 f8 ff 4c 8b 54 24 18 48 8b 54 24 10 41 89 c0 48 8b 74 24 b
> [ 1208.085126] RSP: 002b:00007ffe56d1a930 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
> [ 1208.085867] RAX: ffffffffffffffda RBX: 0000000154400000 RCX: 00007f72ca4ce2b7
> [ 1208.086560] RDX: 0000000000400000 RSI: 00007f72c9401000 RDI: 0000000000000003
> [ 1208.087248] RBP: 0000000154400000 R08: 0000000000000000 R09: 00007ffe56d1a9d0
> [ 1208.087946] R10: 0000000154400000 R11: 0000000000000293 R12: 00000000ffffffff
> [ 1208.088639] R13: 00000000abc00000 R14: 0000000000000000 R15: 0000000000000551
> [ 1208.089340] </TASK>
> [ 1208.089565] ---[ end trace 0000000000000000 ]---
* [PATCH v5.1] fs: Allow fine-grained control of folio sizes
@ 2024-05-27 21:01 5% Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2024-05-27 21:01 UTC (permalink / raw)
To: akpm, willy, djwong, brauner, david, chandan.babu
Cc: hare, ritesh.list, john.g.garry, ziy, linux-fsdevel, linux-xfs,
linux-mm, linux-block, gost.dev, p.raghav, kernel, mcgrof
We need filesystems to be able to communicate acceptable folio sizes
to the pagecache for a variety of uses (e.g. large block sizes).
Support a range of folio sizes between order-0 and order-31.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
For this version, I fixed the TODO that the maximum folio size was not
being honoured. I made some other changes too like adding const, moving
the location of the constants, checking CONFIG_TRANSPARENT_HUGEPAGE, and
dropping some of the functions which aren't needed until later patches.
(They can be added in the commits that need them.) Also rebased against the
current Linus tree, so MAX_PAGECACHE_ORDER no longer needs to be moved.
include/linux/pagemap.h | 81 +++++++++++++++++++++++++++++++++++------
mm/filemap.c | 6 +--
mm/readahead.c | 4 +-
3 files changed, 73 insertions(+), 18 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1ed9274a0deb..c6aaceed0de6 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,13 +204,18 @@ enum mapping_flags {
AS_EXITING = 4, /* final truncate in progress */
/* writeback related tags are not used */
AS_NO_WRITEBACK_TAGS = 5,
- AS_LARGE_FOLIO_SUPPORT = 6,
- AS_RELEASE_ALWAYS, /* Call ->release_folio(), even if no private data */
- AS_STABLE_WRITES, /* must wait for writeback before modifying
+ AS_RELEASE_ALWAYS = 6, /* Call ->release_folio(), even if no private data */
+ AS_STABLE_WRITES = 7, /* must wait for writeback before modifying
folio contents */
- AS_UNMOVABLE, /* The mapping cannot be moved, ever */
+ AS_UNMOVABLE = 8, /* The mapping cannot be moved, ever */
+ AS_FOLIO_ORDER_MIN = 16,
+ AS_FOLIO_ORDER_MAX = 21, /* Bits 16-25 are used for FOLIO_ORDER */
};
+#define AS_FOLIO_ORDER_MIN_MASK 0x001f0000
+#define AS_FOLIO_ORDER_MAX_MASK 0x03e00000
+#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
+
/**
* mapping_set_error - record a writeback error in the address_space
* @mapping: the mapping in which an error should be set
@@ -359,9 +364,48 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
#define MAX_PAGECACHE_ORDER 8
#endif
+/**
+ * mapping_set_folio_order_range() - Set the orders supported by a file.
+ * @mapping: The address space of the file.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between @min-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size (min) and maximum size (max) of folio the VFS
+ * can use to cache the contents of the file. This should only be used
+ * if the filesystem needs special handling of folio sizes (ie there is
+ * something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_order_range(struct address_space *mapping,
+ unsigned int min, unsigned int max)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return;
+
+ if (min > MAX_PAGECACHE_ORDER)
+ min = MAX_PAGECACHE_ORDER;
+ if (max > MAX_PAGECACHE_ORDER)
+ max = MAX_PAGECACHE_ORDER;
+ if (max < min)
+ max = min;
+
+ mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+ (min << AS_FOLIO_ORDER_MIN) | (max << AS_FOLIO_ORDER_MAX);
+}
+
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+ unsigned int min)
+{
+ mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+}
+
/**
* mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address space of the file.
*
* The filesystem should call this function in its inode constructor to
* indicate that the VFS can use large folios to cache the contents of
@@ -372,7 +416,23 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
*/
static inline void mapping_set_large_folios(struct address_space *mapping)
{
- __set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline
+unsigned int mapping_max_folio_order(const struct address_space *mapping)
+{
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return 0;
+ return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline
+unsigned int mapping_min_folio_order(const struct address_space *mapping)
+{
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+ return 0;
+ return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}
/*
@@ -381,16 +441,13 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
*/
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
- return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ return mapping_max_folio_order(mapping) > 0;
}
/* Return the maximum folio size for this pagecache mapping, in bytes. */
-static inline size_t mapping_max_folio_size(struct address_space *mapping)
+static inline size_t mapping_max_folio_size(const struct address_space *mapping)
{
- if (mapping_large_folio_support(mapping))
- return PAGE_SIZE << MAX_PAGECACHE_ORDER;
- return PAGE_SIZE;
+ return PAGE_SIZE << mapping_max_folio_order(mapping);
}
static inline int filemap_nr_thps(struct address_space *mapping)
diff --git a/mm/filemap.c b/mm/filemap.c
index 382c3d06bfb1..0557020f130e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1933,10 +1933,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
fgp_flags |= FGP_LOCK;
- if (!mapping_large_folio_support(mapping))
- order = 0;
- if (order > MAX_PAGECACHE_ORDER)
- order = MAX_PAGECACHE_ORDER;
+ if (order > mapping_max_folio_order(mapping))
+ order = mapping_max_folio_order(mapping);
/* If we're not aligned, allocate a smaller folio */
if (index & ((1UL << order) - 1))
order = __ffs(index);
diff --git a/mm/readahead.c b/mm/readahead.c
index c1b23989d9ca..66058ae02f2e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -503,9 +503,9 @@ void page_cache_ra_order(struct readahead_control *ractl,
limit = min(limit, index + ra->size - 1);
- if (new_order < MAX_PAGECACHE_ORDER) {
+ if (new_order < mapping_max_folio_order(mapping)) {
new_order += 2;
- new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+ new_order = min(mapping_max_folio_order(mapping), new_order);
new_order = min_t(unsigned int, new_order, ilog2(ra->size));
}
--
2.43.0
* [PATCH 6.8 459/493] ext4: remove the redundant folio_wait_stable()
@ 2024-05-27 18:57 6% ` Greg Kroah-Hartman
From: Greg Kroah-Hartman @ 2024-05-27 18:57 UTC (permalink / raw)
To: stable
Cc: Greg Kroah-Hartman, patches, Zhang Yi, Jan Kara, Theodore Tso,
Sasha Levin
6.8-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Yi <yi.zhang@huawei.com>
[ Upstream commit df0b5afc62f3368d657a8fe4a8d393ac481474c2 ]
__filemap_get_folio() with the FGP_WRITEBEGIN parameter already waits
for a stable folio, so remove the redundant folio_wait_stable() in
ext4_da_write_begin(); it was left over from commit cc883236b792
("ext4: drop unnecessary journal handle in delalloc write"), which
removed the retry-getting-page logic.
Fixes: cc883236b792 ("ext4: drop unnecessary journal handle in delalloc write")
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20240419023005.2719050-1-yi.zhang@huaweicloud.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
fs/ext4/inode.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 2ccf3b5e3a7c4..31604907af50e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2887,9 +2887,6 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
if (IS_ERR(folio))
return PTR_ERR(folio);
- /* In case writeback began while the folio was unlocked */
- folio_wait_stable(folio);
-
#ifdef CONFIG_FS_ENCRYPTION
ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
#else
--
2.43.0
* [PATCH 6.9 397/427] ext4: remove the redundant folio_wait_stable()
@ 2024-05-27 18:57 6% ` Greg Kroah-Hartman
From: Greg Kroah-Hartman @ 2024-05-27 18:57 UTC (permalink / raw)
To: stable
Cc: Greg Kroah-Hartman, patches, Zhang Yi, Jan Kara, Theodore Tso,
Sasha Levin
6.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Yi <yi.zhang@huawei.com>
[ Upstream commit df0b5afc62f3368d657a8fe4a8d393ac481474c2 ]
__filemap_get_folio() with the FGP_WRITEBEGIN parameter already waits
for a stable folio, so remove the redundant folio_wait_stable() in
ext4_da_write_begin(); it was left over from commit cc883236b792
("ext4: drop unnecessary journal handle in delalloc write"), which
removed the retry-getting-page logic.
Fixes: cc883236b792 ("ext4: drop unnecessary journal handle in delalloc write")
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20240419023005.2719050-1-yi.zhang@huaweicloud.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
fs/ext4/inode.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 537803250ca9a..6de6bf57699be 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2887,9 +2887,6 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
if (IS_ERR(folio))
return PTR_ERR(folio);
- /* In case writeback began while the folio was unlocked */
- folio_wait_stable(folio);
-
#ifdef CONFIG_FS_ENCRYPTION
ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
#else
--
2.43.0
* [PATCH 2/2] nfs: add support for large folios
@ 2024-05-27 16:36 20% ` Christoph Hellwig
From: Christoph Hellwig @ 2024-05-27 16:36 UTC (permalink / raw)
To: Trond Myklebust, Anna Schumaker, Matthew Wilcox
Cc: linux-nfs, linux-fsdevel, linux-mm
NFS is already free of folio size assumptions, so just pass the chunk
size to __filemap_get_folio and set the large-folio address_space flag
for all regular files.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
fs/nfs/file.c | 4 +++-
fs/nfs/inode.c | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 6bd127e6683dce..7f1295475a90fd 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -339,6 +339,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, struct page **pagep,
void **fsdata)
{
+ fgf_t fgp = FGP_WRITEBEGIN;
struct folio *folio;
int once_thru = 0;
int ret;
@@ -346,8 +347,9 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n",
file, mapping->host->i_ino, len, (long long) pos);
+ fgp |= fgf_set_order(len);
start:
- folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT, FGP_WRITEBEGIN,
+ folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT, fgp,
mapping_gfp_mask(mapping));
if (IS_ERR(folio))
return PTR_ERR(folio);
diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
index acef52ecb1bb7e..6d185af4cb29d4 100644
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -491,6 +491,7 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
inode->i_fop = NFS_SB(sb)->nfs_client->rpc_ops->file_ops;
inode->i_data.a_ops = &nfs_file_aops;
nfs_inode_init_regular(nfsi);
+ mapping_set_large_folios(inode->i_mapping);
} else if (S_ISDIR(inode->i_mode)) {
inode->i_op = NFS_SB(sb)->nfs_client->rpc_ops->dir_inode_ops;
inode->i_fop = &nfs_dir_operations;
--
2.43.0
* Re: page type is 3, passed migratetype is 1 (nr=512)
2024-05-27 8:58 6% page type is 3, passed migratetype is 1 (nr=512) Christoph Hellwig
@ 2024-05-27 13:14 5% ` Christoph Hellwig
2024-05-28 16:47 0% ` Johannes Weiner
From: Christoph Hellwig @ 2024-05-27 13:14 UTC (permalink / raw)
To: Johannes Weiner; +Cc: Andy Shevchenko, Baolin Wang, linux-mm
On Mon, May 27, 2024 at 01:58:25AM -0700, Christoph Hellwig wrote:
> Hi all,
>
> when running xfstests on nfs against a local server I see warnings like
> the ones above, which appear to have been added in commit
> e0932b6c1f94 (mm: page_alloc: consolidate free page accounting").
I've also reproduced this with xfstests on local xfs and no nfs in the
loop:
generic/176 214s ... [ 1204.507931] run fstests generic/176 at 2024-05-27 12:52:30
[ 1204.969286] XFS (nvme0n1): Mounting V5 Filesystem cd936307-415f-48a3-b99d-a2d52ae1f273
[ 1204.993621] XFS (nvme0n1): Ending clean mount
[ 1205.387032] XFS (nvme1n1): Mounting V5 Filesystem ab3ee1a4-af62-4934-9a6a-6c2fde321850
[ 1205.412322] XFS (nvme1n1): Ending clean mount
[ 1205.440388] XFS (nvme1n1): Unmounting Filesystem ab3ee1a4-af62-4934-9a6a-6c2fde321850
[ 1205.808063] XFS (nvme1n1): Mounting V5 Filesystem 7099b02d-9c58-4d1d-be1d-2cc472d12cd9
[ 1205.827290] XFS (nvme1n1): Ending clean mount
[ 1208.058931] ------------[ cut here ]------------
[ 1208.059613] page type is 3, passed migratetype is 1 (nr=512)
[ 1208.060402] WARNING: CPU: 0 PID: 509870 at mm/page_alloc.c:645 expand+0x1c5/0x1f0
[ 1208.061352] Modules linked in: i2c_i801 crc32_pclmul i2c_smbus [last unloaded: scsi_debug]
[ 1208.062344] CPU: 0 PID: 509870 Comm: xfs_io Not tainted 6.10.0-rc1+ #2437
[ 1208.063150] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 1208.064204] RIP: 0010:expand+0x1c5/0x1f0
[ 1208.064625] Code: 05 16 70 bf 02 01 e8 ca fc ff ff 8b 54 24 34 44 89 e1 48 c7 c7 80 a2 28 83 48 89 c6 b8 01 00 3
[ 1208.066555] RSP: 0018:ffffc90003b2b968 EFLAGS: 00010082
[ 1208.067111] RAX: 0000000000000000 RBX: ffffffff83fa9480 RCX: 0000000000000000
[ 1208.067872] RDX: 0000000000000005 RSI: 0000000000000027 RDI: 00000000ffffffff
[ 1208.068629] RBP: 00000000001f2600 R08: 00000000fffeffff R09: 0000000000000001
[ 1208.069336] R10: 0000000000000000 R11: ffffffff83676200 R12: 0000000000000009
[ 1208.070038] R13: 0000000000000200 R14: 0000000000000001 R15: ffffea0007c98000
[ 1208.070750] FS: 00007f72ca3d5780(0000) GS:ffff8881f9c00000(0000) knlGS:0000000000000000
[ 1208.071552] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1208.072121] CR2: 00007f72ca1fff38 CR3: 00000001aa0c6002 CR4: 0000000000770ef0
[ 1208.072829] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1208.073527] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7: 0000000000000400
[ 1208.074225] PKRU: 55555554
[ 1208.074507] Call Trace:
[ 1208.074758] <TASK>
[ 1208.074977] ? __warn+0x7b/0x120
[ 1208.075308] ? expand+0x1c5/0x1f0
[ 1208.075652] ? report_bug+0x191/0x1c0
[ 1208.076043] ? handle_bug+0x3c/0x80
[ 1208.076400] ? exc_invalid_op+0x17/0x70
[ 1208.076782] ? asm_exc_invalid_op+0x1a/0x20
[ 1208.077203] ? expand+0x1c5/0x1f0
[ 1208.077543] ? expand+0x1c5/0x1f0
[ 1208.077878] __rmqueue_pcplist+0x3a9/0x730
[ 1208.078285] get_page_from_freelist+0x7a0/0xf00
[ 1208.078745] __alloc_pages_noprof+0x153/0x2e0
[ 1208.079181] __folio_alloc_noprof+0x10/0xa0
[ 1208.079603] __filemap_get_folio+0x16b/0x370
[ 1208.080030] iomap_write_begin+0x496/0x680
[ 1208.080441] iomap_file_buffered_write+0x17f/0x440
[ 1208.080916] xfs_file_buffered_write+0x7e/0x2a0
[ 1208.081374] vfs_write+0x262/0x440
[ 1208.081717] __x64_sys_pwrite64+0x8f/0xc0
[ 1208.082112] do_syscall_64+0x4f/0x120
[ 1208.082487] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1208.082982] RIP: 0033:0x7f72ca4ce2b7
[ 1208.083350] Code: 08 89 3c 24 48 89 4c 24 18 e8 15 f4 f8 ff 4c 8b 54 24 18 48 8b 54 24 10 41 89 c0 48 8b 74 24 b
[ 1208.085126] RSP: 002b:00007ffe56d1a930 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
[ 1208.085867] RAX: ffffffffffffffda RBX: 0000000154400000 RCX: 00007f72ca4ce2b7
[ 1208.086560] RDX: 0000000000400000 RSI: 00007f72c9401000 RDI: 0000000000000003
[ 1208.087248] RBP: 0000000154400000 R08: 0000000000000000 R09: 00007ffe56d1a9d0
[ 1208.087946] R10: 0000000154400000 R11: 0000000000000293 R12: 00000000ffffffff
[ 1208.088639] R13: 00000000abc00000 R14: 0000000000000000 R15: 0000000000000551
[ 1208.089340] </TASK>
[ 1208.089565] ---[ end trace 0000000000000000 ]---
* page type is 3, passed migratetype is 1 (nr=512)
@ 2024-05-27 8:58 6% Christoph Hellwig
2024-05-27 13:14 5% ` Christoph Hellwig
From: Christoph Hellwig @ 2024-05-27 8:58 UTC (permalink / raw)
To: Johannes Weiner; +Cc: Andy Shevchenko, Baolin Wang, linux-mm
Hi all,
when running xfstests on nfs against a local server I see warnings like
the ones above, which appear to have been added in commit
e0932b6c1f94 ("mm: page_alloc: consolidate free page accounting").
Here is a typical backtrace:
generic/154 109s ... [ 617.027401] run fstests generic/154 at 2024-05-27 08:53:22
[ 628.242905] ------------[ cut here ]------------
[ 628.243233] page type is 3, passed migratetype is 1 (nr=512)
[ 628.243595] WARNING: CPU: 1 PID: 3608 at mm/page_alloc.c:645 expand+0x1c5/0x1f0
[ 628.244017] Modules linked in: crc32_pclmul i2c_piix4
[ 628.244321] CPU: 1 PID: 3608 Comm: nfsd Not tainted 6.10.0-rc1+ #2435
[ 628.244711] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 628.245270] RIP: 0010:expand+0x1c5/0x1f0
[ 628.245514] Code: 05 16 70 bf 02 01 e8 ca fc ff ff 8b 54 24 34 44 89 e1 48 c7 c7 80 a2 28 83 48 89 c6 b8 01 00 00 00 d3 e0 83
[ 628.246610] RSP: 0018:ffffc9000151f6f8 EFLAGS: 00010082
[ 628.246920] RAX: 0000000000000000 RBX: ffffffff83fa9480 RCX: 0000000000000000
[ 628.247337] RDX: 0000000000000003 RSI: 0000000000000027 RDI: 00000000ffffffff
[ 628.247760] RBP: 0000000000174200 R08: 00000000fffeffff R09: 0000000000000001
[ 628.248175] R10: 0000000000000000 R11: ffffffff83676200 R12: 0000000000000009
[ 628.248603] R13: 0000000000000200 R14: 0000000000000001 R15: ffffea0005d08000
[ 628.249022] FS: 0000000000000000(0000) GS:ffff888237d00000(0000) knlGS:0000000000000000
[ 628.249499] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 628.249841] CR2: 00005643a0160004 CR3: 000000000365a001 CR4: 0000000000770ef0
[ 628.250265] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 628.250697] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7: 0000000000000400
[ 628.251121] PKRU: 55555554
[ 628.251285] Call Trace:
[ 628.251436] <TASK>
[ 628.251572] ? __warn+0x7b/0x120
[ 628.251769] ? expand+0x1c5/0x1f0
[ 628.251969] ? report_bug+0x191/0x1c0
[ 628.252202] ? handle_bug+0x3c/0x80
[ 628.252413] ? exc_invalid_op+0x17/0x70
[ 628.252646] ? asm_exc_invalid_op+0x1a/0x20
[ 628.252899] ? expand+0x1c5/0x1f0
[ 628.253100] ? expand+0x1c5/0x1f0
[ 628.253299] ? __free_one_page+0x5e4/0x750
[ 628.253549] get_page_from_freelist+0x305/0xf00
[ 628.253822] __alloc_pages_slowpath.constprop.0+0x42b/0xca0
[ 628.254148] __alloc_pages_noprof+0x2b9/0x2e0
[ 628.254408] __folio_alloc_noprof+0x10/0xa0
[ 628.254661] __filemap_get_folio+0x16b/0x370
[ 628.254915] iomap_write_begin+0x496/0x680
[ 628.255159] iomap_file_buffered_write+0x17f/0x440
[ 628.255441] ? __break_lease+0x469/0x710
[ 628.255682] xfs_file_buffered_write+0x7e/0x2a0
[ 628.255952] do_iter_readv_writev+0x107/0x200
[ 628.256213] vfs_iter_write+0x9e/0x240
[ 628.256436] nfsd_vfs_write+0x188/0x5a0
[ 628.256673] nfsd4_write+0x133/0x1c0
[ 628.256888] nfsd4_proc_compound+0x2e3/0x5f0
[ 628.257144] nfsd_dispatch+0xc8/0x210
[ 628.257363] svc_process_common+0x4cf/0x700
[ 628.257619] ? __pfx_nfsd_dispatch+0x10/0x10
[ 628.257873] svc_process+0x13e/0x190
[ 628.258086] svc_recv+0x838/0xa00
[ 628.258286] ? __pfx_nfsd+0x10/0x10
[ 628.258500] nfsd+0x82/0xf0
[ 628.258668] kthread+0xd9/0x110
[ 628.258900] ? __pfx_kthread+0x10/0x10
[ 628.259192] ret_from_fork+0x2c/0x50
[ 628.259477] ? __pfx_kthread+0x10/0x10
[ 628.259763] ret_from_fork_asm+0x1a/0x30
[ 628.260070] </TASK>
[ 628.260244] ---[ end trace 0000000000000000 ]---
^ permalink raw reply [relevance 6%]
* Re: RIP: + BUG: with 6.8.11 and BTRFS
@ 2024-05-26 9:11 4% ` Toralf Förster
0 siblings, 0 replies; 200+ results
From: Toralf Förster @ 2024-05-26 9:11 UTC (permalink / raw)
To: Linux Kernel, linux-btrfs
[-- Attachment #1: Type: text/plain, Size: 469 bytes --]
On 5/26/24 11:08, Toralf Förster wrote:
> Attached is the output of
>
> grep -A 200 -e BUG: -e RIP: messages > splats.txt
>
> from a Gentoo hardened Linux system that has been running on a bare-metal
> server for 3 years. The system contained 2x 3.84 TiB NVMe drives, / was a
> raid1, /data was configured as raid0.
>
> I upgraded yesterday from kernel 6.8.10 to 6.8.11.
>
> The system does not recover from a reboot at the moment.
>
And here's the attachment
--
Toralf
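The quoted grep invocation can be reproduced against any syslog-style
file: `-A 200` keeps 200 lines of context after each match, and the two
`-e` patterns catch both "BUG:" and "RIP:" splats. A self-contained
sketch (the sample log content below is made up for illustration; on a
real system `messages` would be the syslog file):

```shell
# Build a tiny stand-in log file with one splat plus surrounding noise.
printf '%s\n' 'noise before' 'BUG: Bad page state in process kcompactd0' \
	'Call Trace line' 'noise after' > messages

# Extract each BUG:/RIP: match plus up to 200 lines of trailing context,
# exactly as in the quoted command.
grep -A 200 -e 'BUG:' -e 'RIP:' messages > splats.txt
cat splats.txt
```

Because the context window is trailing-only, lines before the first match
(like "noise before" here) are dropped, while everything within 200 lines
after a match is kept, which is why the real attachment also contains the
unrelated cron log lines interleaved between splats.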
[-- Attachment #2: splats.txt --]
[-- Type: text/plain, Size: 968119 bytes --]
May 26 04:02:34 mr-fox kernel: BUG: Bad page state in process kcompactd0 pfn:15dd75
May 26 04:02:34 mr-fox kernel: page:000000003a37a5ec refcount:0 mapcount:0 mapping:00000000f0bb3562 index:0x4cd994a1 pfn:0x15dd75
May 26 04:02:34 mr-fox kernel: aops:0xffffffffa5c2a700 ino:1
May 26 04:02:34 mr-fox kernel: flags: 0x5fff0000000000c(referenced|uptodate|node=0|zone=2|lastcpupid=0x1fff)
May 26 04:02:34 mr-fox kernel: page_type: 0xffffffff()
May 26 04:02:34 mr-fox kernel: raw: 05fff0000000000c dead000000000100 dead000000000122 ffff988ac61fdd70
May 26 04:02:34 mr-fox kernel: raw: 000000004cd994a1 0000000000000000 00000000ffffffff 0000000000000000
May 26 04:02:34 mr-fox kernel: page dumped because: non-NULL mapping
May 26 04:02:34 mr-fox kernel: CPU: 12 PID: 180 Comm: kcompactd0 Tainted: G T 6.8.11 #13
May 26 04:02:34 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 04:02:34 mr-fox kernel: Call Trace:
May 26 04:02:34 mr-fox kernel: <TASK>
May 26 04:02:34 mr-fox kernel: dump_stack_lvl+0x53/0x70
May 26 04:02:34 mr-fox kernel: bad_page+0x67/0x100
May 26 04:02:34 mr-fox kernel: free_unref_page_prepare+0x28e/0x300
May 26 04:02:34 mr-fox kernel: free_unref_page+0x2f/0x170
May 26 04:02:34 mr-fox kernel: btrfs_release_extent_buffer_pages+0x4a/0x70
May 26 04:02:34 mr-fox kernel: release_extent_buffer+0x48/0xc0
May 26 04:02:34 mr-fox kernel: btree_release_folio+0x25/0x40
May 26 04:02:34 mr-fox kernel: btree_migrate_folio+0x31/0x70
May 26 04:02:34 mr-fox kernel: move_to_new_folio+0x5e/0x160
May 26 04:02:34 mr-fox kernel: migrate_pages_batch+0x84f/0xb90
May 26 04:02:34 mr-fox kernel: ? split_map_pages+0x190/0x190
May 26 04:02:34 mr-fox kernel: ? pick_next_task_fair+0x1c5/0x580
May 26 04:02:34 mr-fox kernel: migrate_pages+0x9d4/0xc90
May 26 04:02:34 mr-fox kernel: ? isolate_freepages_block+0x440/0x440
May 26 04:02:34 mr-fox kernel: ? split_map_pages+0x190/0x190
May 26 04:02:34 mr-fox kernel: ? __mod_memcg_lruvec_state+0xcd/0x190
May 26 04:02:34 mr-fox kernel: compact_zone+0xb6b/0xe20
May 26 04:02:34 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 04:02:34 mr-fox kernel: ? __schedule+0x3b4/0x17d0
May 26 04:02:34 mr-fox kernel: proactive_compact_node+0x8a/0x100
May 26 04:02:34 mr-fox kernel: kcompactd+0x1e7/0x410
May 26 04:02:34 mr-fox kernel: ? swake_up_locked+0x60/0x60
May 26 04:02:34 mr-fox kernel: ? kcompactd_do_work+0x2e0/0x2e0
May 26 04:02:34 mr-fox kernel: kthread+0xcb/0xf0
May 26 04:02:34 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 04:02:34 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 04:02:34 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 04:02:34 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 04:02:34 mr-fox kernel: </TASK>
May 26 04:02:34 mr-fox kernel: Disabling lock debugging due to kernel taint
May 26 04:03:00 mr-fox crond[30536]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:03:00 mr-fox CROND[30545]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:03:00 mr-fox crond[30537]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:03:00 mr-fox CROND[30550]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:03:00 mr-fox crond[30540]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:03:00 mr-fox crond[30544]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:03:00 mr-fox CROND[30556]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:03:00 mr-fox crond[30541]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:03:00 mr-fox CROND[30554]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:03:00 mr-fox CROND[30572]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:03:00 mr-fox CROND[30544]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:03:00 mr-fox CROND[30544]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:03:00 mr-fox CROND[30541]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:03:00 mr-fox CROND[30541]: pam_unix(crond:session): session closed for user root
May 26 04:03:01 mr-fox CROND[30540]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:03:01 mr-fox CROND[30540]: pam_unix(crond:session): session closed for user root
May 26 04:03:02 mr-fox CROND[13572]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:03:02 mr-fox CROND[13572]: pam_unix(crond:session): session closed for user root
May 26 04:03:03 mr-fox CROND[13573]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:03:03 mr-fox CROND[13573]: pam_unix(crond:session): session closed for user root
May 26 04:04:00 mr-fox crond[15741]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:04:00 mr-fox crond[15739]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:04:00 mr-fox crond[15742]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:04:00 mr-fox crond[15743]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:04:00 mr-fox CROND[15747]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:04:00 mr-fox CROND[15748]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:04:00 mr-fox crond[15740]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:04:00 mr-fox CROND[15752]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:04:00 mr-fox CROND[15750]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:04:00 mr-fox CROND[15754]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:04:00 mr-fox CROND[15743]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:04:00 mr-fox CROND[15743]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:04:00 mr-fox CROND[15742]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:04:00 mr-fox CROND[15742]: pam_unix(crond:session): session closed for user root
May 26 04:04:00 mr-fox CROND[15741]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:04:00 mr-fox CROND[15741]: pam_unix(crond:session): session closed for user root
May 26 04:04:02 mr-fox CROND[30536]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:04:02 mr-fox CROND[30536]: pam_unix(crond:session): session closed for user root
May 26 04:04:02 mr-fox CROND[30537]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:04:02 mr-fox CROND[30537]: pam_unix(crond:session): session closed for user root
May 26 04:05:00 mr-fox crond[14940]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:05:00 mr-fox crond[14937]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 04:05:00 mr-fox CROND[14950]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:05:00 mr-fox crond[14938]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:05:00 mr-fox crond[14944]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:05:00 mr-fox CROND[14952]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 04:05:00 mr-fox CROND[14957]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 04:05:00 mr-fox crond[14941]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:05:00 mr-fox CROND[14956]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:05:00 mr-fox CROND[14959]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:05:00 mr-fox crond[14949]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:05:00 mr-fox CROND[14966]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:05:01 mr-fox crond[14943]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:05:01 mr-fox CROND[14971]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 04:05:01 mr-fox crond[14942]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:05:01 mr-fox CROND[14979]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:05:01 mr-fox CROND[14942]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:05:01 mr-fox CROND[14942]: pam_unix(crond:session): session closed for user root
May 26 04:05:01 mr-fox CROND[14949]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:05:01 mr-fox CROND[14949]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:05:01 mr-fox CROND[15739]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:05:01 mr-fox CROND[15739]: pam_unix(crond:session): session closed for user root
May 26 04:05:01 mr-fox CROND[14943]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 04:05:01 mr-fox CROND[14943]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:05:01 mr-fox CROND[14941]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:05:01 mr-fox CROND[14941]: pam_unix(crond:session): session closed for user root
May 26 04:05:01 mr-fox CROND[14937]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 04:05:01 mr-fox CROND[14937]: pam_unix(crond:session): session closed for user torproject
May 26 04:05:02 mr-fox CROND[15740]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:05:02 mr-fox CROND[15740]: pam_unix(crond:session): session closed for user root
May 26 04:05:32 mr-fox CROND[14944]: (tinderbox) CMDEND (sleep 10; /opt/tb/bin/index.sh)
May 26 04:05:32 mr-fox CROND[14944]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:06:00 mr-fox crond[13989]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:06:00 mr-fox crond[13990]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:06:00 mr-fox crond[13992]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:06:00 mr-fox CROND[13999]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:06:00 mr-fox crond[13996]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:06:00 mr-fox CROND[14002]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:06:00 mr-fox crond[13994]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:06:00 mr-fox CROND[14001]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:06:00 mr-fox CROND[14003]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:06:00 mr-fox CROND[14006]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:06:00 mr-fox CROND[13996]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:06:00 mr-fox CROND[13996]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:06:00 mr-fox CROND[13994]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:06:00 mr-fox CROND[13994]: pam_unix(crond:session): session closed for user root
May 26 04:06:01 mr-fox CROND[13992]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:06:01 mr-fox CROND[13992]: pam_unix(crond:session): session closed for user root
May 26 04:06:02 mr-fox CROND[14938]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:06:02 mr-fox CROND[14938]: pam_unix(crond:session): session closed for user root
May 26 04:06:03 mr-fox CROND[14940]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:06:03 mr-fox CROND[14940]: pam_unix(crond:session): session closed for user root
May 26 04:07:00 mr-fox crond[10221]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:07:00 mr-fox crond[10225]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:07:00 mr-fox crond[10219]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:07:00 mr-fox crond[10222]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:07:00 mr-fox crond[10223]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:07:00 mr-fox CROND[10231]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:07:00 mr-fox CROND[10232]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:07:00 mr-fox CROND[10234]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:07:00 mr-fox CROND[10236]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:07:00 mr-fox CROND[10237]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:07:00 mr-fox CROND[10225]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:07:00 mr-fox CROND[10225]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:07:00 mr-fox CROND[10223]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:07:00 mr-fox CROND[10223]: pam_unix(crond:session): session closed for user root
May 26 04:07:00 mr-fox CROND[10222]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:07:00 mr-fox CROND[10222]: pam_unix(crond:session): session closed for user root
May 26 04:07:02 mr-fox CROND[13989]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:07:02 mr-fox CROND[13989]: pam_unix(crond:session): session closed for user root
May 26 04:07:02 mr-fox CROND[13990]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:07:02 mr-fox CROND[13990]: pam_unix(crond:session): session closed for user root
May 26 04:08:00 mr-fox crond[6387]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:08:00 mr-fox crond[6391]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:08:00 mr-fox crond[6392]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:08:00 mr-fox CROND[6397]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:08:00 mr-fox crond[6386]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:08:00 mr-fox crond[6389]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:08:00 mr-fox CROND[6405]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:08:00 mr-fox CROND[6407]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:08:00 mr-fox CROND[6409]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:08:00 mr-fox CROND[6406]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:08:00 mr-fox CROND[6392]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:08:00 mr-fox CROND[6392]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:08:00 mr-fox CROND[6391]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:08:00 mr-fox CROND[6391]: pam_unix(crond:session): session closed for user root
May 26 04:08:01 mr-fox CROND[10219]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:08:01 mr-fox CROND[10219]: pam_unix(crond:session): session closed for user root
May 26 04:08:02 mr-fox CROND[6389]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:08:02 mr-fox CROND[6389]: pam_unix(crond:session): session closed for user root
May 26 04:08:02 mr-fox CROND[10221]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:08:02 mr-fox CROND[10221]: pam_unix(crond:session): session closed for user root
May 26 04:09:00 mr-fox crond[23983]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:09:00 mr-fox crond[23982]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:09:00 mr-fox crond[23984]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:09:00 mr-fox CROND[23990]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:09:00 mr-fox CROND[23991]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:09:00 mr-fox CROND[23992]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:09:00 mr-fox crond[23987]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 04:09:00 mr-fox crond[23986]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:09:00 mr-fox CROND[23999]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:09:00 mr-fox CROND[23996]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 04:09:00 mr-fox CROND[23986]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 04:09:00 mr-fox CROND[23986]: pam_unix(crond:session): session closed for user root
May 26 04:09:00 mr-fox CROND[23987]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 04:09:00 mr-fox CROND[23987]: pam_unix(crond:session): session closed for user tinderbox
May 26 04:09:00 mr-fox CROND[23984]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 04:09:00 mr-fox CROND[23984]: pam_unix(crond:session): session closed for user root
May 26 04:09:04 mr-fox CROND[6386]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 04:09:04 mr-fox CROND[6386]: pam_unix(crond:session): session closed for user root
May 26 04:09:05 mr-fox CROND[6387]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:09:05 mr-fox CROND[6387]: pam_unix(crond:session): session closed for user root
May 26 04:10:00 mr-fox crond[6295]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 04:10:00 mr-fox crond[6298]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:10:00 mr-fox crond[6299]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:10:00 mr-fox CROND[6308]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 04:10:00 mr-fox crond[6296]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 04:10:00 mr-fox CROND[6310]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 04:10:00 mr-fox crond[6302]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
--
May 26 06:02:44 mr-fox kernel: RIP: 0010:xas_start+0x3d/0x120
May 26 06:02:44 mr-fox kernel: Code: 0f 84 81 00 00 00 48 81 fd 05 c0 ff ff 76 06 48 83 f8 02 74 5d 48 8b 03 48 8b 73 08 48 8b 40 08 48 89 c2 83 e2 03 48 83 fa 02 <75> 08 48 3d 00 10 00 00 77 20 48 85 f6 75 31 48 c7 43 18 00 00 00
May 26 06:02:44 mr-fox kernel: RSP: 0018:ffffa401077e3930 EFLAGS: 00000246
May 26 06:02:44 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:02:44 mr-fox kernel: RDX: 0000000000000002 RSI: 000000004cd994a1 RDI: ffffa401077e3970
May 26 06:02:44 mr-fox kernel: RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
May 26 06:02:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:02:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:02:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:02:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:02:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 000000012a228000 CR4: 0000000000f50ef0
May 26 06:02:44 mr-fox kernel: PKRU: 55555554
May 26 06:02:44 mr-fox kernel: Call Trace:
May 26 06:02:44 mr-fox kernel: <IRQ>
May 26 06:02:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:02:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:02:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:02:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:02:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:02:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:02:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:02:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:02:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:02:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:02:44 mr-fox kernel: </IRQ>
May 26 06:02:44 mr-fox kernel: <TASK>
May 26 06:02:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:02:44 mr-fox kernel: ? xas_start+0x3d/0x120
May 26 06:02:44 mr-fox kernel: xas_load+0xe/0x60
May 26 06:02:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:02:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:02:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:02:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:02:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:02:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:02:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:02:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:02:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:02:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:02:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:02:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:02:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:02:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:02:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:02:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:02:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:02:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:02:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:02:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:02:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:02:44 mr-fox kernel: </TASK>
May 26 06:02:47 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 15471 jiffies s: 491905 root: 0x2/.
May 26 06:02:47 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:02:47 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:02:47 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:02:47 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:02:47 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:02:47 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:02:47 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:02:47 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:02:47 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:02:47 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:02:47 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:02:47 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:02:47 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:02:47 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:02:47 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:02:47 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:02:47 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 000000012a228000 CR4: 0000000000f50ef0
May 26 06:02:47 mr-fox kernel: PKRU: 55555554
May 26 06:02:47 mr-fox kernel: Call Trace:
May 26 06:02:47 mr-fox kernel: <NMI>
May 26 06:02:47 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:02:47 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:02:47 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:02:47 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:02:47 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:02:47 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:02:47 mr-fox kernel: ? filemap_get_entry+0x6d/0x160
May 26 06:02:47 mr-fox kernel: ? filemap_get_entry+0x6d/0x160
May 26 06:02:47 mr-fox kernel: ? filemap_get_entry+0x6d/0x160
May 26 06:02:47 mr-fox kernel: </NMI>
May 26 06:02:47 mr-fox kernel: <TASK>
May 26 06:02:47 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:02:47 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:02:47 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:02:47 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:02:47 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:02:47 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:02:47 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:02:47 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:02:47 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:02:47 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:02:47 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:02:47 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:02:47 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:02:47 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:02:47 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:02:47 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:02:47 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:02:47 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:02:47 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:02:47 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:02:47 mr-fox kernel: </TASK>
May 26 06:03:00 mr-fox crond[30376]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:03:00 mr-fox crond[30378]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:03:00 mr-fox crond[30375]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:03:00 mr-fox CROND[30389]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:03:00 mr-fox CROND[30390]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:03:00 mr-fox CROND[30392]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:03:00 mr-fox crond[30380]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:03:00 mr-fox crond[30377]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:03:00 mr-fox CROND[30401]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:03:00 mr-fox CROND[30403]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:03:00 mr-fox CROND[30378]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:03:00 mr-fox CROND[30378]: pam_unix(crond:session): session closed for user root
May 26 06:03:00 mr-fox CROND[30380]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:03:00 mr-fox CROND[30380]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:03:01 mr-fox CROND[30377]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:03:01 mr-fox CROND[30377]: pam_unix(crond:session): session closed for user root
May 26 06:03:01 mr-fox CROND[2083]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:03:01 mr-fox CROND[2083]: pam_unix(crond:session): session closed for user root
May 26 06:04:00 mr-fox crond[24809]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:04:00 mr-fox crond[24811]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:04:00 mr-fox crond[24812]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:04:00 mr-fox crond[24813]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:04:00 mr-fox crond[24810]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:04:00 mr-fox CROND[24817]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:04:00 mr-fox CROND[24818]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:04:00 mr-fox CROND[24820]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:04:00 mr-fox CROND[24821]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:04:00 mr-fox CROND[24819]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:04:00 mr-fox CROND[24813]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:04:00 mr-fox CROND[24813]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:04:00 mr-fox CROND[24812]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:04:00 mr-fox CROND[24812]: pam_unix(crond:session): session closed for user root
May 26 06:04:00 mr-fox CROND[24811]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:04:00 mr-fox CROND[24811]: pam_unix(crond:session): session closed for user root
May 26 06:04:02 mr-fox CROND[30376]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:04:02 mr-fox CROND[30376]: pam_unix(crond:session): session closed for user root
May 26 06:05:00 mr-fox crond[26845]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:05:00 mr-fox crond[26846]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:05:00 mr-fox crond[26848]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:05:00 mr-fox crond[26852]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:05:00 mr-fox crond[26849]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:05:00 mr-fox CROND[26856]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:05:00 mr-fox crond[26851]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:05:00 mr-fox CROND[26857]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:05:00 mr-fox CROND[26859]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:05:00 mr-fox CROND[26860]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:05:00 mr-fox CROND[26861]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:05:00 mr-fox CROND[26862]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:05:00 mr-fox crond[26853]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:05:00 mr-fox crond[26850]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:05:00 mr-fox CROND[26865]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:05:00 mr-fox CROND[26866]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:05:00 mr-fox CROND[26853]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:05:00 mr-fox CROND[26853]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:05:00 mr-fox CROND[26850]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:05:00 mr-fox CROND[26850]: pam_unix(crond:session): session closed for user root
May 26 06:05:00 mr-fox CROND[26851]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:05:00 mr-fox CROND[26851]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:05:00 mr-fox CROND[26849]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:05:00 mr-fox CROND[26849]: pam_unix(crond:session): session closed for user root
May 26 06:05:00 mr-fox CROND[26845]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:05:00 mr-fox CROND[26845]: pam_unix(crond:session): session closed for user torproject
May 26 06:05:01 mr-fox CROND[24810]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:05:01 mr-fox CROND[24810]: pam_unix(crond:session): session closed for user root
May 26 06:05:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:05:44 mr-fox kernel: rcu: 	21-....: (60003 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=26215
May 26 06:05:44 mr-fox kernel: rcu: 	(t=60004 jiffies g=8794409 q=6228675 ncpus=32)
May 26 06:05:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:05:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:05:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:05:44 mr-fox kernel: RIP: 0010:xas_descend+0x13/0xd0
May 26 06:05:44 mr-fox kernel: Code: 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e <48> 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48
May 26 06:05:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000282
May 26 06:05:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 000000000000000c
May 26 06:05:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988b1fdbb478 RDI: ffffa401077e3970
May 26 06:05:44 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:05:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:05:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:05:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:05:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:05:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:05:44 mr-fox kernel: PKRU: 55555554
May 26 06:05:44 mr-fox kernel: Call Trace:
May 26 06:05:44 mr-fox kernel: <IRQ>
May 26 06:05:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:05:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:05:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:05:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:05:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:05:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:05:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:05:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:05:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:05:44 mr-fox kernel: </IRQ>
May 26 06:05:44 mr-fox kernel: <TASK>
May 26 06:05:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:05:44 mr-fox kernel: ? xas_descend+0x13/0xd0
May 26 06:05:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:05:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:05:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:05:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:05:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:05:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:05:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:05:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:05:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:05:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:05:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:05:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:05:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:05:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:05:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:05:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:05:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:05:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:05:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:05:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:05:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:05:44 mr-fox kernel: </TASK>
May 26 06:05:49 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 61039 jiffies s: 491905 root: 0x2/.
May 26 06:05:49 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:05:49 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:05:49 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:05:49 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:05:49 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:05:49 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:05:49 mr-fox kernel: RIP: 0010:tcp_ack+0x69e/0x14b0
May 26 06:05:49 mr-fox kernel: Code: c7 43 60 00 00 00 00 48 8b 74 24 38 48 89 df 48 c7 43 58 00 00 00 00 e8 30 c4 0d 00 41 8b 84 24 40 01 00 00 2b 83 d0 00 00 00 <41> 89 84 24 40 01 00 00 8b 8b b8 00 00 00 48 8b b3 c0 00 00 00 48
May 26 06:05:49 mr-fox kernel: RSP: 0018:ffffa401005f0960 EFLAGS: 00000246
May 26 06:05:49 mr-fox kernel: RAX: 0000000000000000 RBX: ffff98991a592300 RCX: 0000000000000000
May 26 06:05:49 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:05:49 mr-fox kernel: RBP: ffffa401005f09f0 R08: 0000000000000000 R09: 0000000000000000
May 26 06:05:49 mr-fox kernel: R10: 0000000000000000 R11: 000000001a179271 R12: ffff988d0c615c80
May 26 06:05:49 mr-fox kernel: R13: 0000000000000004 R14: 0000000000000000 R15: 0000000000000000
May 26 06:05:49 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:05:49 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:05:49 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:05:49 mr-fox kernel: PKRU: 55555554
May 26 06:05:49 mr-fox kernel: Call Trace:
May 26 06:05:49 mr-fox kernel: <NMI>
May 26 06:05:49 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:05:49 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:05:49 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:05:49 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:05:49 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:05:49 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:05:49 mr-fox kernel: ? tcp_ack+0x69e/0x14b0
May 26 06:05:49 mr-fox kernel: ? tcp_ack+0x69e/0x14b0
May 26 06:05:49 mr-fox kernel: ? tcp_ack+0x69e/0x14b0
May 26 06:05:49 mr-fox kernel: </NMI>
May 26 06:05:49 mr-fox kernel: <IRQ>
May 26 06:05:49 mr-fox kernel: tcp_rcv_established+0x146/0x6c0
May 26 06:05:49 mr-fox kernel: ? sk_filter_trim_cap+0x40/0x220
May 26 06:05:49 mr-fox kernel: tcp_v4_do_rcv+0x153/0x240
May 26 06:05:49 mr-fox kernel: tcp_v4_rcv+0xe00/0xea0
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: ip_protocol_deliver_rcu+0x32/0x180
May 26 06:05:49 mr-fox kernel: ip_local_deliver_finish+0x74/0xa0
May 26 06:05:49 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 06:05:49 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 06:05:49 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 06:05:49 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 06:05:49 mr-fox kernel: __netif_receive_skb_list_core+0x293/0x2d0
May 26 06:05:49 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 06:05:49 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 06:05:49 mr-fox kernel: igb_poll+0x605/0x1370
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:05:49 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:05:49 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:05:49 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:05:49 mr-fox kernel: </IRQ>
May 26 06:05:49 mr-fox kernel: <TASK>
May 26 06:05:49 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:05:49 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:05:49 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:05:49 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:05:49 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:05:49 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:05:49 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:05:49 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:05:49 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:05:49 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:05:49 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:05:49 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:05:49 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:05:49 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:05:49 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:05:49 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:05:49 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:05:49 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:05:49 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:05:49 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:05:49 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:05:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:05:49 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:05:49 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:05:49 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:05:49 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:05:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:05:49 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:05:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:05:49 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:05:49 mr-fox kernel: </TASK>
May 26 06:06:00 mr-fox crond[30958]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:06:00 mr-fox crond[30959]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:06:00 mr-fox crond[30960]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:06:00 mr-fox crond[30961]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:06:00 mr-fox CROND[30964]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:06:00 mr-fox CROND[30965]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:06:00 mr-fox crond[30956]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:06:00 mr-fox CROND[30968]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:06:00 mr-fox CROND[30969]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:06:00 mr-fox CROND[30970]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:06:00 mr-fox CROND[30961]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:06:00 mr-fox CROND[30961]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:06:00 mr-fox CROND[30960]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:06:00 mr-fox CROND[30960]: pam_unix(crond:session): session closed for user root
May 26 06:06:00 mr-fox CROND[30959]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:06:00 mr-fox CROND[30959]: pam_unix(crond:session): session closed for user root
May 26 06:06:00 mr-fox CROND[26848]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:06:00 mr-fox CROND[26848]: pam_unix(crond:session): session closed for user root
May 26 06:07:00 mr-fox CROND[30958]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:07:00 mr-fox CROND[30958]: pam_unix(crond:session): session closed for user root
May 26 06:07:00 mr-fox crond[7168]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:07:00 mr-fox crond[7170]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:07:00 mr-fox crond[7169]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:07:00 mr-fox crond[7171]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:07:00 mr-fox crond[7172]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:07:00 mr-fox CROND[7174]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:07:00 mr-fox CROND[7175]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:07:00 mr-fox CROND[7176]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:07:00 mr-fox CROND[7177]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:07:00 mr-fox CROND[7178]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:07:00 mr-fox CROND[7172]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:07:00 mr-fox CROND[7172]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:07:00 mr-fox CROND[7171]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:07:00 mr-fox CROND[7171]: pam_unix(crond:session): session closed for user root
May 26 06:07:00 mr-fox CROND[7170]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:07:00 mr-fox CROND[7170]: pam_unix(crond:session): session closed for user root
May 26 06:08:00 mr-fox crond[14400]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:08:00 mr-fox crond[14401]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:08:00 mr-fox crond[14402]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:08:00 mr-fox crond[14404]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:08:00 mr-fox CROND[14408]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:08:00 mr-fox CROND[14409]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:08:00 mr-fox crond[14405]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:08:00 mr-fox CROND[14411]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:08:00 mr-fox CROND[14413]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:08:00 mr-fox CROND[14414]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:08:00 mr-fox CROND[14405]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:08:00 mr-fox CROND[14405]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:08:00 mr-fox CROND[14404]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:08:00 mr-fox CROND[14404]: pam_unix(crond:session): session closed for user root
May 26 06:08:01 mr-fox CROND[14402]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:08:01 mr-fox CROND[14402]: pam_unix(crond:session): session closed for user root
May 26 06:08:01 mr-fox CROND[7169]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:08:01 mr-fox CROND[7169]: pam_unix(crond:session): session closed for user root
May 26 06:08:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:08:44 mr-fox kernel: rcu: 	21-....: (105007 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=44004
May 26 06:08:44 mr-fox kernel: rcu: 	(t=105008 jiffies g=8794409 q=8040467 ncpus=32)
May 26 06:08:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:08:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:08:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:08:44 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:08:44 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:08:44 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:08:44 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:08:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:08:44 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:08:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:08:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:08:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:08:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:08:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:08:44 mr-fox kernel: PKRU: 55555554
May 26 06:08:44 mr-fox kernel: Call Trace:
May 26 06:08:44 mr-fox kernel: <IRQ>
May 26 06:08:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:08:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:08:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:08:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:08:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:08:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:08:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:08:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:08:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:08:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:08:44 mr-fox kernel: </IRQ>
May 26 06:08:44 mr-fox kernel: <TASK>
May 26 06:08:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:08:44 mr-fox kernel: ? filemap_get_entry+0x6d/0x160
May 26 06:08:44 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:08:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:08:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:08:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:08:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:08:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:08:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:08:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:08:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:08:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:08:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:08:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:08:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:08:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:08:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:08:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:08:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:08:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:08:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:08:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:08:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:08:44 mr-fox kernel: </TASK>
May 26 06:08:49 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 106096 jiffies s: 491905 root: 0x2/.
May 26 06:08:49 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:08:49 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:08:49 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:08:49 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:08:49 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:08:49 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:08:49 mr-fox kernel: RIP: 0010:_raw_spin_lock_irqsave+0x13/0x40
May 26 06:08:49 mr-fox kernel: Code: 31 ff e9 3b 32 15 00 b8 01 00 00 00 31 d2 31 ff e9 2d 32 15 00 66 90 f3 0f 1e fa 53 9c 5b fa 31 c0 ba 01 00 00 00 f0 0f b1 17 <75> 0f 48 89 d8 5b 31 d2 31 f6 31 ff e9 07 32 15 00 89 c6 e8 55 03
May 26 06:08:49 mr-fox kernel: RSP: 0018:ffffa401005f0cf8 EFLAGS: 00000046
May 26 06:08:49 mr-fox kernel: RAX: 0000000000000000 RBX: 0000000000000282 RCX: 000000000000000c
May 26 06:08:49 mr-fox kernel: RDX: 0000000000000001 RSI: 00000000cec51000 RDI: ffffc400ffb51870
May 26 06:08:49 mr-fox kernel: RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
May 26 06:08:49 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000000cec51
May 26 06:08:49 mr-fox kernel: R13: ffffa401005f0d68 R14: ffffa401005f0d50 R15: ffffc400ffb51870
May 26 06:08:49 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:08:49 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:08:49 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:08:49 mr-fox kernel: PKRU: 55555554
May 26 06:08:49 mr-fox kernel: Call Trace:
May 26 06:08:49 mr-fox kernel: <NMI>
May 26 06:08:49 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:08:49 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:08:49 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:08:49 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:08:49 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:08:49 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:08:49 mr-fox kernel: ? _raw_spin_lock_irqsave+0x13/0x40
May 26 06:08:49 mr-fox kernel: ? _raw_spin_lock_irqsave+0x13/0x40
May 26 06:08:49 mr-fox kernel: ? _raw_spin_lock_irqsave+0x13/0x40
May 26 06:08:49 mr-fox kernel: </NMI>
May 26 06:08:49 mr-fox kernel: <IRQ>
May 26 06:08:49 mr-fox kernel: iommu_dma_free_iova+0xa1/0x220
May 26 06:08:49 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 06:08:49 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 06:08:49 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:08:49 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:08:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:08:49 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:08:49 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:08:49 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:08:49 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:08:49 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:08:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:08:49 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:08:49 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:08:49 mr-fox kernel: </IRQ>
May 26 06:08:49 mr-fox kernel: <TASK>
May 26 06:08:49 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:08:49 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 06:08:49 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 06:08:49 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000206
May 26 06:08:49 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:08:49 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:08:49 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:08:49 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:08:49 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:08:49 mr-fox kernel: xas_load+0x49/0x60
May 26 06:08:49 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:08:49 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:08:49 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:08:49 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:08:49 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:08:49 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:08:49 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:08:49 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:08:49 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:08:49 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:08:49 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:08:49 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:08:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:08:49 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:08:49 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:08:49 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:08:49 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:08:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:08:49 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:08:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:08:49 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:08:49 mr-fox kernel: </TASK>
May 26 06:09:00 mr-fox crond[20917]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:09:00 mr-fox crond[20920]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:09:00 mr-fox crond[20921]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:09:00 mr-fox CROND[20925]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:09:00 mr-fox CROND[20926]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:09:00 mr-fox crond[20922]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:09:00 mr-fox CROND[20928]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:09:00 mr-fox crond[20919]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:09:00 mr-fox CROND[20930]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:09:00 mr-fox CROND[20931]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:09:00 mr-fox CROND[20922]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:09:00 mr-fox CROND[20922]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:09:00 mr-fox CROND[20921]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:09:00 mr-fox CROND[20921]: pam_unix(crond:session): session closed for user root
May 26 06:09:00 mr-fox CROND[20920]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:09:00 mr-fox CROND[20920]: pam_unix(crond:session): session closed for user root
May 26 06:09:01 mr-fox CROND[14401]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:09:01 mr-fox CROND[14401]: pam_unix(crond:session): session closed for user root
May 26 06:10:00 mr-fox crond[27959]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:10:00 mr-fox crond[27958]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:10:00 mr-fox crond[27960]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:10:00 mr-fox crond[27964]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:10:00 mr-fox crond[27961]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:10:00 mr-fox CROND[27969]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:10:00 mr-fox crond[27963]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:10:00 mr-fox crond[27965]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:10:00 mr-fox crond[27966]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:10:00 mr-fox CROND[27971]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:10:00 mr-fox CROND[27972]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:10:00 mr-fox CROND[27973]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:10:00 mr-fox CROND[27975]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:10:00 mr-fox CROND[27976]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:10:00 mr-fox CROND[27977]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:10:00 mr-fox CROND[27974]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:10:00 mr-fox CROND[27966]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:10:00 mr-fox CROND[27966]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:10:00 mr-fox CROND[27963]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:10:00 mr-fox CROND[27963]: pam_unix(crond:session): session closed for user root
May 26 06:10:00 mr-fox CROND[27964]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:10:00 mr-fox CROND[27964]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:10:00 mr-fox CROND[27961]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:10:00 mr-fox CROND[27961]: pam_unix(crond:session): session closed for user root
May 26 06:10:00 mr-fox CROND[27958]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:10:00 mr-fox CROND[27958]: pam_unix(crond:session): session closed for user torproject
May 26 06:10:00 mr-fox CROND[20919]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:10:00 mr-fox CROND[20919]: pam_unix(crond:session): session closed for user root
May 26 06:11:00 mr-fox crond[3902]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:11:00 mr-fox crond[3904]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:11:00 mr-fox crond[3901]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:11:00 mr-fox CROND[3908]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:11:00 mr-fox CROND[3911]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:11:00 mr-fox crond[3905]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:11:00 mr-fox crond[3903]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:11:00 mr-fox CROND[3913]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:11:00 mr-fox CROND[3914]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:11:00 mr-fox CROND[3915]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:11:00 mr-fox CROND[3905]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:11:00 mr-fox CROND[3905]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:11:00 mr-fox CROND[3904]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:11:00 mr-fox CROND[3904]: pam_unix(crond:session): session closed for user root
May 26 06:11:01 mr-fox CROND[27960]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:11:01 mr-fox CROND[27960]: pam_unix(crond:session): session closed for user root
May 26 06:11:01 mr-fox CROND[3903]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:11:01 mr-fox CROND[3903]: pam_unix(crond:session): session closed for user root
May 26 06:11:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:11:44 mr-fox kernel: rcu: 	21-....: (150011 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=61509
May 26 06:11:44 mr-fox kernel: rcu: 	(t=150012 jiffies g=8794409 q=9907950 ncpus=32)
May 26 06:11:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:11:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:11:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:11:44 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 06:11:44 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 06:11:44 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000206
May 26 06:11:44 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:11:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:11:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:11:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:11:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:11:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:11:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:11:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:11:44 mr-fox kernel: PKRU: 55555554
May 26 06:11:44 mr-fox kernel: Call Trace:
May 26 06:11:44 mr-fox kernel: <IRQ>
May 26 06:11:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:11:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:11:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:11:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:11:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:11:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:11:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:11:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:11:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:11:44 mr-fox kernel: </IRQ>
May 26 06:11:44 mr-fox kernel: <TASK>
May 26 06:11:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:11:44 mr-fox kernel: ? xas_descend+0x2/0xd0
May 26 06:11:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:11:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:11:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:11:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:11:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:11:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:11:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:11:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:11:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:11:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:11:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:11:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:11:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:11:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:11:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:11:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:11:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:11:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:11:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:11:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:11:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:11:44 mr-fox kernel: </TASK>
May 26 06:11:49 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 151152 jiffies s: 491905 root: 0x2/.
May 26 06:11:49 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:11:49 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:11:49 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:11:49 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:11:49 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:11:49 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:11:49 mr-fox kernel: RIP: 0010:iommu_dma_free_iova+0xf6/0x220
May 26 06:11:49 mr-fox kernel: Code: 2f 01 00 00 41 8b 0f 85 c9 0f 84 42 01 00 00 8d 48 01 41 23 4f 0c 41 89 4f 08 48 8d 0c 80 48 c1 e1 03 49 8d 04 0f 4c 89 60 10 <48> 89 68 18 48 8b b3 90 00 00 00 48 89 70 30 49 8b 76 18 49 39 f5
May 26 06:11:49 mr-fox kernel: RSP: 0018:ffffa401005f0d08 EFLAGS: 00000016
May 26 06:11:49 mr-fox kernel: RAX: ffffc400ffb51cd0 RBX: ffff988ac1292400 RCX: 0000000000000460
May 26 06:11:49 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:11:49 mr-fox kernel: RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
May 26 06:11:49 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000000cfc53
May 26 06:11:49 mr-fox kernel: R13: ffffa401005f0d68 R14: ffffa401005f0d50 R15: ffffc400ffb51870
May 26 06:11:49 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:11:49 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:11:49 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:11:49 mr-fox kernel: PKRU: 55555554
May 26 06:11:49 mr-fox kernel: Call Trace:
May 26 06:11:49 mr-fox kernel: <NMI>
May 26 06:11:49 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:11:49 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:11:49 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:11:49 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:11:49 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:11:49 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:11:49 mr-fox kernel: ? iommu_dma_free_iova+0xf6/0x220
May 26 06:11:49 mr-fox kernel: ? iommu_dma_free_iova+0xf6/0x220
May 26 06:11:49 mr-fox kernel: ? iommu_dma_free_iova+0xf6/0x220
May 26 06:11:49 mr-fox kernel: </NMI>
May 26 06:11:49 mr-fox kernel: <IRQ>
May 26 06:11:49 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 06:11:49 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 06:11:49 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:11:49 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:11:49 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:11:49 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:11:49 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:11:49 mr-fox kernel: </IRQ>
May 26 06:11:49 mr-fox kernel: <TASK>
May 26 06:11:49 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:11:49 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:11:49 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:11:49 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:11:49 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:11:49 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:11:49 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:11:49 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:11:49 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:11:49 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:11:49 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:11:49 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:11:49 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:11:49 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:11:49 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:11:49 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:11:49 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:11:49 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:11:49 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:11:49 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:11:49 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:11:49 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:11:49 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:11:49 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:11:49 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:11:49 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:11:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:11:49 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:11:49 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:11:49 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:11:49 mr-fox kernel: </TASK>
May 26 06:12:00 mr-fox crond[10409]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:12:00 mr-fox crond[10408]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:12:00 mr-fox crond[10410]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:12:00 mr-fox crond[10411]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:12:00 mr-fox crond[10412]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:12:00 mr-fox CROND[10416]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:12:00 mr-fox CROND[10417]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:12:00 mr-fox CROND[10418]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:12:00 mr-fox CROND[10419]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:12:00 mr-fox CROND[10420]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:12:00 mr-fox CROND[10412]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:12:00 mr-fox CROND[10412]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:12:00 mr-fox CROND[10411]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:12:00 mr-fox CROND[10411]: pam_unix(crond:session): session closed for user root
May 26 06:12:00 mr-fox CROND[10410]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:12:00 mr-fox CROND[10410]: pam_unix(crond:session): session closed for user root
May 26 06:12:01 mr-fox CROND[3902]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:12:01 mr-fox CROND[3902]: pam_unix(crond:session): session closed for user root
May 26 06:13:00 mr-fox crond[19085]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:13:00 mr-fox crond[19086]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:13:00 mr-fox crond[19087]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:13:00 mr-fox crond[19089]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:13:00 mr-fox CROND[19093]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:13:00 mr-fox CROND[19095]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:13:00 mr-fox CROND[19094]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:13:00 mr-fox CROND[19096]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:13:00 mr-fox crond[19088]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:13:00 mr-fox CROND[19100]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:13:00 mr-fox CROND[19089]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:13:00 mr-fox CROND[19089]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:13:00 mr-fox CROND[19088]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:13:00 mr-fox CROND[19088]: pam_unix(crond:session): session closed for user root
May 26 06:13:00 mr-fox CROND[19087]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:13:00 mr-fox CROND[19087]: pam_unix(crond:session): session closed for user root
May 26 06:13:00 mr-fox CROND[10409]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:13:00 mr-fox CROND[10409]: pam_unix(crond:session): session closed for user root
May 26 06:14:00 mr-fox crond[25462]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:14:00 mr-fox crond[25460]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:14:00 mr-fox crond[25463]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:14:00 mr-fox crond[25465]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:14:00 mr-fox crond[25461]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:14:00 mr-fox CROND[25468]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:14:00 mr-fox CROND[25469]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:14:00 mr-fox CROND[25470]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:14:00 mr-fox CROND[25471]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:14:00 mr-fox CROND[25473]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:14:00 mr-fox CROND[25465]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:14:00 mr-fox CROND[25465]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:14:00 mr-fox CROND[25463]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:14:00 mr-fox CROND[25463]: pam_unix(crond:session): session closed for user root
May 26 06:14:00 mr-fox CROND[25462]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:14:00 mr-fox CROND[25462]: pam_unix(crond:session): session closed for user root
May 26 06:14:01 mr-fox CROND[19086]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:14:01 mr-fox CROND[19086]: pam_unix(crond:session): session closed for user root
May 26 06:14:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:14:44 mr-fox kernel: rcu: 	21-....: (195014 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=78756
May 26 06:14:44 mr-fox kernel: rcu: 	(t=195015 jiffies g=8794409 q=11131553 ncpus=32)
May 26 06:14:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:14:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:14:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:14:44 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:14:44 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:14:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:14:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:14:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:14:44 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:14:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:14:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:14:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:14:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:14:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:14:44 mr-fox kernel: PKRU: 55555554
May 26 06:14:44 mr-fox kernel: Call Trace:
May 26 06:14:44 mr-fox kernel: <IRQ>
May 26 06:14:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:14:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:14:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:14:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:14:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:14:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:14:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:14:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:14:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:14:44 mr-fox kernel: </IRQ>
May 26 06:14:44 mr-fox kernel: <TASK>
May 26 06:14:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:14:44 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 06:14:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:14:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:14:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:14:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:14:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:14:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:14:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:14:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:14:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:14:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:14:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:14:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:14:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:14:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:14:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:14:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:14:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:14:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:14:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:14:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:14:44 mr-fox kernel: </TASK>
May 26 06:14:50 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 196208 jiffies s: 491905 root: 0x2/.
May 26 06:14:50 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:14:50 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:14:50 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:14:50 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:14:50 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:14:50 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:14:50 mr-fox kernel: RIP: 0010:fetch_pte+0x78/0x170
May 26 06:14:50 mr-fox kernel: Code: 00 01 00 00 83 eb 01 44 8d 34 db 41 8d 4e 0c 83 f9 3f 0f 87 0e f9 2d 00 4c 89 e0 49 8b 95 08 01 00 00 48 d3 e8 25 ff 01 00 00 <48> 8d 04 c2 ba 01 00 00 00 48 d3 e2 48 89 55 00 85 db 0f 8e b2 00
May 26 06:14:50 mr-fox kernel: RSP: 0018:ffffa401005f0c38 EFLAGS: 00000206
May 26 06:14:50 mr-fox kernel: RAX: 0000000000000003 RBX: 0000000000000002 RCX: 000000000000001e
May 26 06:14:50 mr-fox kernel: RDX: ffff988ac41fa000 RSI: 00000000fed95000 RDI: ffff988ac2211088
May 26 06:14:50 mr-fox kernel: RBP: ffffa401005f0c80 R08: 0000000000000000 R09: 0000000000000000
May 26 06:14:50 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000fed95000
May 26 06:14:50 mr-fox kernel: R13: ffff988ac2211088 R14: 0000000000000012 R15: ffff988ac2211088
May 26 06:14:50 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:14:50 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:14:50 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:14:50 mr-fox kernel: PKRU: 55555554
May 26 06:14:50 mr-fox kernel: Call Trace:
May 26 06:14:50 mr-fox kernel: <NMI>
May 26 06:14:50 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:14:50 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:14:50 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:14:50 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:14:50 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:14:50 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:14:50 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 06:14:50 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 06:14:50 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 06:14:50 mr-fox kernel: </NMI>
May 26 06:14:50 mr-fox kernel: <IRQ>
May 26 06:14:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:50 mr-fox kernel: iommu_v1_unmap_pages+0x81/0x140
May 26 06:14:50 mr-fox kernel: amd_iommu_unmap_pages+0x40/0x130
May 26 06:14:50 mr-fox kernel: __iommu_unmap+0xbf/0x120
May 26 06:14:50 mr-fox kernel: __iommu_dma_unmap+0xb5/0x170
May 26 06:14:50 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 06:14:50 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:14:50 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:14:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:50 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:14:50 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:14:50 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:14:50 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:14:50 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:14:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:50 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:14:50 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:14:50 mr-fox kernel: </IRQ>
May 26 06:14:50 mr-fox kernel: <TASK>
May 26 06:14:50 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:14:50 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:14:50 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:14:50 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:14:50 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:14:50 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:14:50 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:14:50 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:14:50 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:14:50 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:14:50 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:14:50 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:14:50 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:14:50 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:14:50 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:14:50 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:14:50 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:14:50 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:14:50 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:14:50 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:14:50 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:14:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:14:50 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:14:50 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:14:50 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:14:50 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:14:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:14:50 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:14:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:14:50 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:14:50 mr-fox kernel: </TASK>
May 26 06:15:00 mr-fox crond[30290]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:15:00 mr-fox crond[30291]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:15:00 mr-fox crond[30292]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:15:00 mr-fox crond[30294]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:15:00 mr-fox CROND[30301]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:15:00 mr-fox crond[30296]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:15:00 mr-fox crond[30295]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:15:00 mr-fox crond[30293]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:15:00 mr-fox CROND[30302]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:15:00 mr-fox CROND[30304]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:15:00 mr-fox CROND[30303]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:15:00 mr-fox crond[30298]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:15:00 mr-fox CROND[30305]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:15:00 mr-fox CROND[30307]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:15:00 mr-fox CROND[30308]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:15:00 mr-fox CROND[30309]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:15:00 mr-fox CROND[30298]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:15:00 mr-fox CROND[30298]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:15:00 mr-fox CROND[30294]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:15:00 mr-fox CROND[30294]: pam_unix(crond:session): session closed for user root
May 26 06:15:00 mr-fox CROND[30295]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:15:00 mr-fox CROND[30295]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:15:00 mr-fox CROND[30293]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:15:00 mr-fox CROND[30293]: pam_unix(crond:session): session closed for user root
May 26 06:15:00 mr-fox CROND[30290]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:15:00 mr-fox CROND[30290]: pam_unix(crond:session): session closed for user torproject
May 26 06:15:01 mr-fox CROND[25461]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:15:01 mr-fox CROND[25461]: pam_unix(crond:session): session closed for user root
May 26 06:16:00 mr-fox crond[5459]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:16:00 mr-fox crond[5461]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:16:00 mr-fox crond[5462]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:16:00 mr-fox crond[5464]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:16:00 mr-fox crond[5463]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:16:00 mr-fox CROND[5468]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:16:00 mr-fox CROND[5469]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:16:00 mr-fox CROND[5470]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:16:00 mr-fox CROND[5471]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:16:00 mr-fox CROND[5472]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:16:00 mr-fox CROND[5464]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:16:00 mr-fox CROND[5464]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:16:00 mr-fox CROND[5463]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:16:00 mr-fox CROND[5463]: pam_unix(crond:session): session closed for user root
May 26 06:16:00 mr-fox CROND[5462]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:16:00 mr-fox CROND[5462]: pam_unix(crond:session): session closed for user root
May 26 06:16:00 mr-fox CROND[30292]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:16:00 mr-fox CROND[30292]: pam_unix(crond:session): session closed for user root
May 26 06:17:00 mr-fox crond[14291]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:17:00 mr-fox crond[14290]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:17:00 mr-fox crond[14293]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:17:00 mr-fox CROND[14298]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:17:00 mr-fox CROND[14299]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:17:00 mr-fox crond[14292]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:17:00 mr-fox crond[14294]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:17:00 mr-fox CROND[14301]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:17:00 mr-fox CROND[14303]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:17:00 mr-fox CROND[14305]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:17:00 mr-fox CROND[14294]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:17:00 mr-fox CROND[14293]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:17:00 mr-fox CROND[14294]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:17:00 mr-fox CROND[14293]: pam_unix(crond:session): session closed for user root
May 26 06:17:00 mr-fox CROND[14292]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:17:00 mr-fox CROND[5461]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:17:00 mr-fox CROND[5461]: pam_unix(crond:session): session closed for user root
May 26 06:17:00 mr-fox CROND[14292]: pam_unix(crond:session): session closed for user root
May 26 06:17:40 mr-fox kernel: fuzz-extrainfo invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
May 26 06:17:40 mr-fox kernel: CPU: 1 PID: 20863 Comm: fuzz-extrainfo Tainted: G B T 6.8.11 #13
May 26 06:17:40 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:17:40 mr-fox kernel: Call Trace:
May 26 06:17:40 mr-fox kernel: <TASK>
May 26 06:17:40 mr-fox kernel: dump_stack_lvl+0x53/0x70
May 26 06:17:40 mr-fox kernel: dump_header+0x40/0x1d0
May 26 06:17:40 mr-fox kernel: oom_kill_process+0x10b/0x250
May 26 06:17:40 mr-fox kernel: out_of_memory+0x259/0x590
May 26 06:17:40 mr-fox kernel: mem_cgroup_out_of_memory+0x116/0x170
May 26 06:17:40 mr-fox kernel: try_charge_memcg+0x76c/0x830
May 26 06:17:40 mr-fox kernel: __mem_cgroup_charge+0x3f/0xc0
May 26 06:17:40 mr-fox kernel: ? __alloc_pages+0x17d/0x300
May 26 06:17:40 mr-fox kernel: __filemap_add_folio+0x308/0x3b0
May 26 06:17:40 mr-fox kernel: ? scan_shadow_nodes+0x30/0x30
May 26 06:17:40 mr-fox kernel: filemap_add_folio+0x37/0xb0
May 26 06:17:40 mr-fox kernel: __filemap_get_folio+0x158/0x2c0
May 26 06:17:40 mr-fox kernel: filemap_fault+0x14c/0x9e0
May 26 06:17:40 mr-fox kernel: __do_fault+0x30/0x1a0
May 26 06:17:40 mr-fox kernel: __handle_mm_fault+0x1312/0x1a30
May 26 06:17:40 mr-fox kernel: handle_mm_fault+0x15f/0x370
May 26 06:17:40 mr-fox kernel: exc_page_fault+0x16f/0x6b0
May 26 06:17:40 mr-fox kernel: asm_exc_page_fault+0x26/0x30
May 26 06:17:40 mr-fox kernel: RIP: 0033:0x7fe55c7a09d8
May 26 06:17:40 mr-fox kernel: Code: Unable to access opcode bytes at 0x7fe55c7a09ae.
May 26 06:17:40 mr-fox kernel: RSP: 002b:00007ffe777732d8 EFLAGS: 00010206
May 26 06:17:40 mr-fox kernel: RAX: 00007fe55c7a09d8 RBX: ffffffffffffffff RCX: 0000000000000000
May 26 06:17:40 mr-fox kernel: RDX: 00007fe55c7abad0 RSI: 00007fe55c7ac000 RDI: 00007fe55c7ac008
May 26 06:17:40 mr-fox kernel: RBP: 00007fe55c95a9f0 R08: 0000000000000020 R09: 000056309c5c9820
May 26 06:17:40 mr-fox kernel: R10: 00000000e4f78060 R11: 00000000e4f78060 R12: 00007fe55c7ab3f8
May 26 06:17:40 mr-fox kernel: R13: 00007fe55c95a9f0 R14: 0000000000000007 R15: 00007ffe77773300
May 26 06:17:40 mr-fox kernel: </TASK>
May 26 06:17:40 mr-fox kernel: memory: usage 8388608kB, limit 8388608kB, failcnt 3338
May 26 06:17:40 mr-fox kernel: swap: usage 0kB, limit 0kB, failcnt 0
May 26 06:17:40 mr-fox kernel: Memory cgroup stats for /fuzzing/tor_extrainfo_20240526-042002_7a5d94bcf842:
May 26 06:17:40 mr-fox kernel: anon 8523776
May 26 06:17:40 mr-fox kernel: file 10391552
May 26 06:17:40 mr-fox kernel: kernel 8571019264
May 26 06:17:40 mr-fox kernel: kernel_stack 32768
May 26 06:17:40 mr-fox kernel: pagetables 151552
May 26 06:17:40 mr-fox kernel: sec_pagetables 0
May 26 06:17:40 mr-fox kernel: percpu 1088
May 26 06:17:40 mr-fox kernel: sock 0
May 26 06:17:40 mr-fox kernel: vmalloc 0
May 26 06:17:40 mr-fox kernel: shmem 10346496
May 26 06:17:40 mr-fox kernel: zswap 0
May 26 06:17:40 mr-fox kernel: zswapped 0
May 26 06:17:40 mr-fox kernel: file_mapped 61440
May 26 06:17:40 mr-fox kernel: file_dirty 0
May 26 06:17:40 mr-fox kernel: file_writeback 0
May 26 06:17:40 mr-fox kernel: swapcached 0
May 26 06:17:40 mr-fox kernel: anon_thp 0
May 26 06:17:40 mr-fox kernel: file_thp 0
May 26 06:17:40 mr-fox kernel: shmem_thp 0
May 26 06:17:40 mr-fox kernel: inactive_anon 12357632
May 26 06:17:40 mr-fox kernel: active_anon 6512640
May 26 06:17:40 mr-fox kernel: inactive_file 45056
May 26 06:17:40 mr-fox kernel: active_file 0
May 26 06:17:40 mr-fox kernel: unevictable 0
May 26 06:17:40 mr-fox kernel: slab_reclaimable 9231024
May 26 06:17:40 mr-fox kernel: slab_unreclaimable 8561575768
May 26 06:17:40 mr-fox kernel: slab 8570806792
May 26 06:17:40 mr-fox kernel: workingset_refault_anon 0
May 26 06:17:40 mr-fox kernel: workingset_refault_file 510
May 26 06:17:40 mr-fox kernel: workingset_activate_anon 0
May 26 06:17:40 mr-fox kernel: workingset_activate_file 75
May 26 06:17:40 mr-fox kernel: workingset_restore_anon 0
May 26 06:17:40 mr-fox kernel: workingset_restore_file 61
May 26 06:17:40 mr-fox kernel: workingset_nodereclaim 0
May 26 06:17:40 mr-fox kernel: pgscan 5782
May 26 06:17:40 mr-fox kernel: pgsteal 1380
May 26 06:17:40 mr-fox kernel: pgscan_kswapd 87
May 26 06:17:40 mr-fox kernel: pgscan_direct 5695
May 26 06:17:40 mr-fox kernel: pgscan_khugepaged 0
May 26 06:17:40 mr-fox kernel: pgsteal_kswapd 87
May 26 06:17:40 mr-fox kernel: pgsteal_direct 1293
May 26 06:17:40 mr-fox kernel: pgsteal_khugepaged 0
May 26 06:17:40 mr-fox kernel: pgfault 1604631247
May 26 06:17:40 mr-fox kernel: pgmajfault 37
May 26 06:17:40 mr-fox kernel: pgrefill 599
May 26 06:17:40 mr-fox kernel: pgactivate 260600
May 26 06:17:40 mr-fox kernel: pgdeactivate 520
May 26 06:17:40 mr-fox kernel: pglazyfree 0
May 26 06:17:40 mr-fox kernel: pglazyfreed 0
May 26 06:17:40 mr-fox kernel: zswpin 0
May 26 06:17:40 mr-fox kernel: zswpout 0
May 26 06:17:40 mr-fox kernel: zswpwb 0
May 26 06:17:40 mr-fox kernel: thp_fault_alloc 0
May 26 06:17:40 mr-fox kernel: thp_collapse_alloc 0
May 26 06:17:40 mr-fox kernel: thp_swpout 0
May 26 06:17:40 mr-fox kernel: thp_swpout_fallback 0
May 26 06:17:40 mr-fox kernel: Tasks state (memory values in pages):
May 26 06:17:40 mr-fox kernel: [ pid ] uid tgid total_vm rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
May 26 06:17:40 mr-fox kernel: [ 4257] 1001 4257 4725 2944 2368 576 0 81920 0 0 afl-fuzz
May 26 06:17:40 mr-fox kernel: [ 5677] 1001 5677 4246 2176 320 1344 512 69632 0 0 fuzz-extrainfo
May 26 06:17:40 mr-fox kernel: [ 20863] 1001 20863 4246 1102 334 576 192 69632 0 0 fuzz-extrainfo
May 26 06:17:40 mr-fox kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=tor_extrainfo_20240526-042002_7a5d94bcf842,mems_allowed=0,oom_memcg=/fuzzing/tor_extrainfo_20240526-042002_7a5d94bcf842,task_memcg=/fuzzing/tor_extrainfo_20240526-042002_7a5d94bcf842,task=afl-fuzz,pid=4257,uid=1001
May 26 06:17:40 mr-fox kernel: Memory cgroup out of memory: Killed process 4257 (afl-fuzz) total-vm:18900kB, anon-rss:9472kB, file-rss:2304kB, shmem-rss:0kB, UID:1001 pgtables:80kB oom_score_adj:0
May 26 06:17:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:17:44 mr-fox kernel: rcu: 	21-....: (240018 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=96117
May 26 06:17:44 mr-fox kernel: rcu: 	(t=240019 jiffies g=8794409 q=12292563 ncpus=32)
May 26 06:17:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:17:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:17:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:17:44 mr-fox kernel: RIP: 0010:xas_descend+0x3c/0xd0
May 26 06:17:44 mr-fox kernel: Code: 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 <48> 83 fa 02 75 08 48 3d fd 00 00 00 76 2f 41 88 5c 24 12 48 83 c4
May 26 06:17:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000202
May 26 06:17:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: 0000000000000001 RCX: 000000000000001e
May 26 06:17:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 06:17:44 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:17:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:17:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:17:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:17:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:17:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:17:44 mr-fox kernel: PKRU: 55555554
May 26 06:17:44 mr-fox kernel: Call Trace:
May 26 06:17:44 mr-fox kernel: <IRQ>
May 26 06:17:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:17:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:17:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:17:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:17:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:17:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:17:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:17:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:17:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:17:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:17:44 mr-fox kernel: </IRQ>
May 26 06:17:44 mr-fox kernel: <TASK>
May 26 06:17:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:17:44 mr-fox kernel: ? xas_descend+0x3c/0xd0
May 26 06:17:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:17:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:17:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:17:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:17:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:17:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:17:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:17:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:17:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:17:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:17:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:17:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:17:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:17:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:17:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:17:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:17:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:17:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:17:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:17:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:17:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:17:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:17:44 mr-fox kernel: </TASK>
May 26 06:17:50 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 241264 jiffies s: 491905 root: 0x2/.
May 26 06:17:50 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:17:50 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:17:50 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:17:50 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:17:50 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:17:50 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:17:50 mr-fox kernel: RIP: 0010:napi_consume_skb+0x27/0xc0
May 26 06:17:50 mr-fox kernel: Code: 00 66 90 f3 0f 1e fa 53 48 89 fb 85 f6 0f 84 8c 00 00 00 48 85 ff 74 7b 8b 87 d4 00 00 00 83 f8 01 75 53 48 89 df f6 43 7e 0c <75> 25 e8 62 87 ff ff 48 83 bb c0 00 00 00 00 74 0d be 01 00 00 00
May 26 06:17:50 mr-fox kernel: RSP: 0018:ffffa401005f0de8 EFLAGS: 00000202
May 26 06:17:50 mr-fox kernel: RAX: 0000000000000001 RBX: ffff989b4c351798 RCX: 0000000000000000
May 26 06:17:50 mr-fox kernel: RDX: 0000000000000001 RSI: 0000000000000040 RDI: ffff989b4c351798
May 26 06:17:50 mr-fox kernel: RBP: 00000000000043da R08: 0000000000000000 R09: 0000000000000000
May 26 06:17:50 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa40100119040
May 26 06:17:50 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000ffffff04 R15: ffffa40100119060
May 26 06:17:50 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:17:50 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:17:50 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:17:50 mr-fox kernel: PKRU: 55555554
May 26 06:17:50 mr-fox kernel: Call Trace:
May 26 06:17:50 mr-fox kernel: <NMI>
May 26 06:17:50 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:17:50 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:17:50 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:17:50 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:17:50 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:17:50 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:17:50 mr-fox kernel: ? napi_consume_skb+0x27/0xc0
May 26 06:17:50 mr-fox kernel: ? napi_consume_skb+0x27/0xc0
May 26 06:17:50 mr-fox kernel: ? napi_consume_skb+0x27/0xc0
May 26 06:17:50 mr-fox kernel: </NMI>
May 26 06:17:50 mr-fox kernel: <IRQ>
May 26 06:17:50 mr-fox kernel: igb_poll+0xea/0x1370
May 26 06:17:50 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:17:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:17:50 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:17:50 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:17:50 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:17:50 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:17:50 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:17:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:17:50 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:17:50 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:17:50 mr-fox kernel: </IRQ>
May 26 06:17:50 mr-fox kernel: <TASK>
May 26 06:17:50 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:17:50 mr-fox kernel: RIP: 0010:filemap_get_entry+0x60/0x160
May 26 06:17:50 mr-fox kernel: Code: 24 20 03 00 00 00 48 c7 44 24 28 00 00 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 <48> 8d 7c 24 08 e8 56 70 78 00 48 89 c3 48 3d 02 04 00 00 74 e2 48
May 26 06:17:50 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:17:50 mr-fox kernel: RAX: 0000000000000000 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:17:50 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:17:50 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:17:50 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:17:50 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:17:50 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:17:50 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:17:50 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:17:50 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:17:50 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:17:50 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:17:50 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:17:50 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:17:50 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:17:50 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:17:50 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:17:50 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:17:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:17:50 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:17:50 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:17:50 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:17:50 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:17:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:17:50 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:17:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:17:50 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:17:50 mr-fox kernel: </TASK>
May 26 06:18:00 mr-fox crond[21217]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:18:00 mr-fox crond[21218]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:18:00 mr-fox crond[21219]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:18:00 mr-fox crond[21221]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:18:00 mr-fox CROND[21224]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:18:00 mr-fox CROND[21223]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:18:00 mr-fox crond[21220]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:18:00 mr-fox CROND[21225]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:18:00 mr-fox CROND[21226]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:18:00 mr-fox CROND[21227]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:18:00 mr-fox CROND[21221]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:18:00 mr-fox CROND[21221]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:18:00 mr-fox CROND[21220]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:18:00 mr-fox CROND[21220]: pam_unix(crond:session): session closed for user root
May 26 06:18:00 mr-fox CROND[21219]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:18:00 mr-fox CROND[21219]: pam_unix(crond:session): session closed for user root
May 26 06:18:01 mr-fox CROND[14291]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:18:01 mr-fox CROND[14291]: pam_unix(crond:session): session closed for user root
May 26 06:19:00 mr-fox crond[22782]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:19:00 mr-fox crond[22783]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:19:00 mr-fox crond[22784]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:19:00 mr-fox crond[22781]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:19:00 mr-fox crond[22785]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:19:00 mr-fox CROND[22787]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:19:00 mr-fox CROND[22788]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:19:00 mr-fox CROND[22789]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:19:00 mr-fox CROND[22790]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:19:00 mr-fox CROND[22791]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:19:00 mr-fox CROND[22785]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:19:00 mr-fox CROND[22785]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:19:00 mr-fox CROND[22784]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:19:00 mr-fox CROND[22784]: pam_unix(crond:session): session closed for user root
May 26 06:19:00 mr-fox CROND[22783]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:19:00 mr-fox CROND[22783]: pam_unix(crond:session): session closed for user root
May 26 06:19:01 mr-fox CROND[21218]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:19:01 mr-fox CROND[21218]: pam_unix(crond:session): session closed for user root
May 26 06:20:00 mr-fox crond[24340]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:20:00 mr-fox crond[24342]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:20:00 mr-fox crond[24341]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:20:00 mr-fox crond[24344]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:20:00 mr-fox CROND[24349]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:20:00 mr-fox CROND[24351]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:20:00 mr-fox CROND[24350]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:20:00 mr-fox crond[24343]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:20:00 mr-fox CROND[24352]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:20:00 mr-fox crond[24346]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:20:00 mr-fox crond[24345]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:20:00 mr-fox crond[24347]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:20:00 mr-fox CROND[24353]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:20:00 mr-fox CROND[24354]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:20:00 mr-fox CROND[24355]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:20:00 mr-fox CROND[24356]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:20:00 mr-fox CROND[24347]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:20:00 mr-fox CROND[24347]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:20:00 mr-fox CROND[24344]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:20:00 mr-fox CROND[24344]: pam_unix(crond:session): session closed for user root
May 26 06:20:00 mr-fox CROND[24345]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:20:00 mr-fox CROND[24345]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:20:00 mr-fox CROND[24343]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:20:00 mr-fox CROND[24343]: pam_unix(crond:session): session closed for user root
May 26 06:20:00 mr-fox sudo[25073]: torproject : PWD=/home/torproject ; USER=root ; COMMAND=/opt/fuzz-utils/fuzz-cgroup.sh tor_extrainfo_20240526-042002_7a5d94bcf842
May 26 06:20:00 mr-fox sudo[25073]: pam_unix(sudo:session): session opened for user root(uid=0) by torproject(uid=1001)
May 26 06:20:00 mr-fox sudo[25073]: pam_unix(sudo:session): session closed for user root
May 26 06:20:00 mr-fox CROND[22782]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:20:00 mr-fox CROND[22782]: pam_unix(crond:session): session closed for user root
May 26 06:20:01 mr-fox sudo[25160]: torproject : PWD=/tmp/torproject/fuzzing/tor_diff-apply_20240526-062001_7a5d94bcf842 ; USER=root ; COMMAND=/opt/fuzz-utils/fuzz-cgroup.sh tor_diff-apply_20240526-062001_7a5d94bcf842 25157
May 26 06:20:02 mr-fox sudo[25160]: pam_unix(sudo:session): session opened for user root(uid=0) by torproject(uid=1001)
May 26 06:20:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:20:44 mr-fox kernel: rcu: 	21-....: (285022 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=113914
May 26 06:20:44 mr-fox kernel: rcu: 	(t=285023 jiffies g=8794409 q=12760513 ncpus=32)
May 26 06:20:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:20:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:20:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:20:44 mr-fox kernel: RIP: 0010:xas_load+0x49/0x60
May 26 06:20:44 mr-fox kernel: Code: 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff <80> 7d 00 00 75 bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
May 26 06:20:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 06:20:44 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:20:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:20:44 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:20:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:20:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:20:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:20:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:20:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:20:44 mr-fox kernel: PKRU: 55555554
May 26 06:20:44 mr-fox kernel: Call Trace:
May 26 06:20:44 mr-fox kernel: <IRQ>
May 26 06:20:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:20:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:20:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:20:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:20:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:20:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:20:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:20:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:20:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:20:44 mr-fox kernel: </IRQ>
May 26 06:20:44 mr-fox kernel: <TASK>
May 26 06:20:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:20:44 mr-fox kernel: ? xas_load+0x49/0x60
May 26 06:20:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:20:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:20:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:20:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:20:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:20:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:20:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:20:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:20:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:20:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:20:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:20:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:20:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:20:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:20:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:20:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:20:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:20:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:20:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:20:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:20:44 mr-fox kernel: </TASK>
May 26 06:20:50 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 286320 jiffies s: 491905 root: 0x2/.
May 26 06:20:50 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:20:50 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:20:50 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:20:50 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:20:50 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:20:50 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:20:50 mr-fox kernel: RIP: 0010:nf_conntrack_in+0x4/0x540
May 26 06:20:50 mr-fox kernel: Code: 89 df e8 3f f6 ff ff 83 f8 01 0f 85 0b fc ff ff 48 8b 7b 68 48 83 e7 f8 e9 11 ff ff ff e8 b4 1a 18 00 0f 1f 40 00 f3 0f 1e fa <41> 57 41 56 41 55 49 89 fd 41 54 55 53 48 89 f3 48 83 ec 50 65 48
May 26 06:20:50 mr-fox kernel: RSP: 0018:ffffa401005f0b68 EFLAGS: 00000286
May 26 06:20:50 mr-fox kernel: RAX: ffffffffa5827c50 RBX: 0000000000000001 RCX: 0000000000000000
May 26 06:20:50 mr-fox kernel: RDX: ffffa401005f0c10 RSI: ffffa401005f0c10 RDI: ffff9891f62bee00
May 26 06:20:50 mr-fox kernel: RBP: ffff9891f62bee00 R08: ffffa401005f0c90 R09: 0000000000000000
May 26 06:20:50 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa401005f0c10
May 26 06:20:50 mr-fox kernel: R13: 0000000000000001 R14: ffff988ac7fb1420 R15: ffff9891f62bee00
May 26 06:20:50 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:20:50 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:20:50 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:20:50 mr-fox kernel: PKRU: 55555554
May 26 06:20:50 mr-fox kernel: Call Trace:
May 26 06:20:50 mr-fox kernel: <NMI>
May 26 06:20:50 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:20:50 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:20:50 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:20:50 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:20:50 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:20:50 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:20:50 mr-fox kernel: ? nf_ct_netns_do_get+0x220/0x220
May 26 06:20:50 mr-fox kernel: ? nf_conntrack_in+0x4/0x540
May 26 06:20:50 mr-fox kernel: ? nf_conntrack_in+0x4/0x540
May 26 06:20:50 mr-fox kernel: ? nf_conntrack_in+0x4/0x540
May 26 06:20:50 mr-fox kernel: </NMI>
May 26 06:20:50 mr-fox kernel: <IRQ>
May 26 06:20:50 mr-fox kernel: nf_hook_slow+0x3c/0x100
May 26 06:20:50 mr-fox kernel: nf_hook_slow_list+0x90/0x140
May 26 06:20:50 mr-fox kernel: ip_sublist_rcv+0x71/0x1c0
May 26 06:20:50 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 06:20:50 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 06:20:50 mr-fox kernel: __netif_receive_skb_list_core+0x293/0x2d0
May 26 06:20:50 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 06:20:50 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 06:20:50 mr-fox kernel: igb_poll+0x605/0x1370
May 26 06:20:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:50 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 06:20:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:50 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:20:50 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:20:50 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:20:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:50 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:20:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:50 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:20:50 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:20:50 mr-fox kernel: </IRQ>
May 26 06:20:50 mr-fox kernel: <TASK>
May 26 06:20:50 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:20:50 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 06:20:50 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 06:20:50 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 06:20:50 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:20:50 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:20:50 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:20:50 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:20:50 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:20:50 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:20:50 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:20:50 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:20:50 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:20:50 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:20:50 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:20:50 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:20:50 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:20:50 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:20:50 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:20:50 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:20:50 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:20:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:20:50 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:20:50 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:20:50 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:20:50 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:20:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:20:50 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:20:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:20:50 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:20:50 mr-fox kernel: </TASK>
May 26 06:21:00 mr-fox crond[7772]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:21:00 mr-fox crond[7774]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:21:00 mr-fox crond[7770]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:21:00 mr-fox crond[7773]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:21:00 mr-fox CROND[7778]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:21:00 mr-fox CROND[7780]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:21:00 mr-fox CROND[7781]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:21:00 mr-fox CROND[7782]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:21:00 mr-fox crond[7775]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:21:00 mr-fox CROND[7783]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:21:00 mr-fox CROND[7775]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:21:00 mr-fox CROND[7775]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:21:00 mr-fox CROND[7774]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:21:00 mr-fox CROND[7774]: pam_unix(crond:session): session closed for user root
May 26 06:21:00 mr-fox CROND[24342]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:21:00 mr-fox CROND[24342]: pam_unix(crond:session): session closed for user root
May 26 06:21:00 mr-fox CROND[7773]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:21:00 mr-fox CROND[7773]: pam_unix(crond:session): session closed for user root
May 26 06:22:00 mr-fox crond[12237]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:22:00 mr-fox crond[12238]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:22:00 mr-fox crond[12239]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:22:00 mr-fox crond[12240]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:22:00 mr-fox crond[12241]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:22:00 mr-fox CROND[12245]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:22:00 mr-fox CROND[12246]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:22:00 mr-fox CROND[12247]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:22:00 mr-fox CROND[12248]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:22:00 mr-fox CROND[12249]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:22:00 mr-fox CROND[12241]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:22:00 mr-fox CROND[12241]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:22:00 mr-fox CROND[12240]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:22:00 mr-fox CROND[12240]: pam_unix(crond:session): session closed for user root
May 26 06:22:01 mr-fox CROND[12239]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:22:01 mr-fox CROND[12239]: pam_unix(crond:session): session closed for user root
May 26 06:22:01 mr-fox CROND[7772]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:22:01 mr-fox CROND[7772]: pam_unix(crond:session): session closed for user root
May 26 06:23:00 mr-fox crond[12351]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:23:00 mr-fox crond[12349]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:23:00 mr-fox crond[12350]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:23:00 mr-fox crond[12352]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:23:00 mr-fox CROND[12358]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:23:00 mr-fox CROND[12357]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:23:00 mr-fox CROND[12359]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:23:00 mr-fox crond[12354]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:23:00 mr-fox CROND[12360]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:23:00 mr-fox CROND[12362]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:23:00 mr-fox CROND[12354]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:23:00 mr-fox CROND[12354]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:23:00 mr-fox CROND[12352]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:23:00 mr-fox CROND[12352]: pam_unix(crond:session): session closed for user root
May 26 06:23:00 mr-fox CROND[12351]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:23:00 mr-fox CROND[12351]: pam_unix(crond:session): session closed for user root
May 26 06:23:01 mr-fox CROND[12238]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:23:01 mr-fox CROND[12238]: pam_unix(crond:session): session closed for user root
May 26 06:23:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:23:44 mr-fox kernel: rcu: 	21-....: (330025 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=131149
May 26 06:23:44 mr-fox kernel: rcu: 	(t=330026 jiffies g=8794409 q=14267849 ncpus=32)
May 26 06:23:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:23:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:23:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:23:44 mr-fox kernel: RIP: 0010:xas_descend+0x40/0xd0
May 26 06:23:44 mr-fox kernel: Code: 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 <75> 08 48 3d fd 00 00 00 76 2f 41 88 5c 24 12 48 83 c4 08 5b 5d 41
May 26 06:23:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000246
May 26 06:23:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 06:23:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:23:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:23:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:23:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:23:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:23:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:23:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:23:44 mr-fox kernel: PKRU: 55555554
May 26 06:23:44 mr-fox kernel: Call Trace:
May 26 06:23:44 mr-fox kernel: <IRQ>
May 26 06:23:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:23:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:23:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:23:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:23:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:23:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:23:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:23:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:23:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:23:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:23:44 mr-fox kernel: </IRQ>
May 26 06:23:44 mr-fox kernel: <TASK>
May 26 06:23:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:23:44 mr-fox kernel: ? xas_descend+0x40/0xd0
May 26 06:23:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:23:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:23:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:23:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:23:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:23:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:23:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:23:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:23:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:23:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:23:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:23:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:23:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:23:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:23:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:23:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:23:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:23:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:23:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:23:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:23:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:23:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:23:44 mr-fox kernel: </TASK>
May 26 06:23:50 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 331375 jiffies s: 491905 root: 0x2/.
May 26 06:23:50 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:23:50 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:23:50 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:23:50 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:23:50 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:23:50 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:23:50 mr-fox kernel: RIP: 0010:__build_skb_around+0x9e/0x120
May 26 06:23:50 mr-fox kernel: Code: 65 8b 15 41 c0 88 5a 66 41 89 94 24 84 00 00 00 48 01 d8 48 c7 00 00 00 00 00 48 c7 40 08 00 00 00 00 48 c7 40 10 00 00 00 00 <48> c7 40 18 00 00 00 00 c7 40 20 01 00 00 00 5b 5d 41 5c 31 c0 31
May 26 06:23:50 mr-fox kernel: RSP: 0018:ffffa401005f0da0 EFLAGS: 00000286
May 26 06:23:50 mr-fox kernel: RAX: ffff989b517b1ec0 RBX: ffff989b517b1800 RCX: 00000000ffffffff
May 26 06:23:50 mr-fox kernel: RDX: 0000000000000015 RSI: ffff989b517b1800 RDI: ffff988b1b383600
May 26 06:23:50 mr-fox kernel: RBP: 0000000000000900 R08: 0000000000000000 R09: 0000000000000000
May 26 06:23:50 mr-fox kernel: R10: ffff988ac1a460c0 R11: 0000000000000000 R12: ffff988b1b383600
May 26 06:23:50 mr-fox kernel: R13: ffffa401001b9d68 R14: ffff988ac873fb40 R15: 0000000000000007
May 26 06:23:50 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:23:50 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:23:50 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:23:50 mr-fox kernel: PKRU: 55555554
May 26 06:23:50 mr-fox kernel: Call Trace:
May 26 06:23:50 mr-fox kernel: <NMI>
May 26 06:23:50 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:23:50 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:23:50 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:23:50 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:23:50 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:23:50 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:23:50 mr-fox kernel: ? __build_skb_around+0x9e/0x120
May 26 06:23:50 mr-fox kernel: ? __build_skb_around+0x9e/0x120
May 26 06:23:50 mr-fox kernel: ? __build_skb_around+0x9e/0x120
May 26 06:23:50 mr-fox kernel: </NMI>
May 26 06:23:50 mr-fox kernel: <IRQ>
May 26 06:23:50 mr-fox kernel: __napi_build_skb+0x53/0x70
May 26 06:23:50 mr-fox kernel: napi_build_skb+0x10/0xc0
May 26 06:23:50 mr-fox kernel: igb_poll+0xa12/0x1370
May 26 06:23:50 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:23:50 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:23:50 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:23:50 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:23:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:23:50 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:23:50 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:23:50 mr-fox kernel: </IRQ>
May 26 06:23:50 mr-fox kernel: <TASK>
May 26 06:23:50 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:23:50 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:23:50 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:23:50 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:23:50 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:23:50 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:23:50 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:23:50 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:23:50 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:23:50 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:23:50 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:23:50 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:23:50 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:23:50 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:23:50 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:23:50 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:23:50 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:23:50 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:23:50 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:23:50 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:23:50 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:23:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:23:50 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:23:50 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:23:50 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:23:50 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:23:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:23:50 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:23:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:23:50 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:23:50 mr-fox kernel: </TASK>
May 26 06:24:00 mr-fox crond[14213]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:24:00 mr-fox crond[14214]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:24:00 mr-fox crond[14217]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:24:00 mr-fox crond[14216]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:24:00 mr-fox crond[14218]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:24:00 mr-fox CROND[14221]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:24:00 mr-fox CROND[14222]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:24:00 mr-fox CROND[14223]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:24:00 mr-fox CROND[14225]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:24:00 mr-fox CROND[14224]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:24:00 mr-fox CROND[14218]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:24:00 mr-fox CROND[14218]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:24:00 mr-fox CROND[14217]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:24:00 mr-fox CROND[14217]: pam_unix(crond:session): session closed for user root
May 26 06:24:00 mr-fox CROND[14216]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:24:00 mr-fox CROND[14216]: pam_unix(crond:session): session closed for user root
May 26 06:24:00 mr-fox CROND[12350]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:24:00 mr-fox CROND[12350]: pam_unix(crond:session): session closed for user root
May 26 06:25:00 mr-fox crond[5448]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:25:00 mr-fox crond[5452]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:25:00 mr-fox crond[5449]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:25:00 mr-fox crond[5451]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:25:00 mr-fox crond[5454]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:25:00 mr-fox CROND[5462]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:25:00 mr-fox crond[5456]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:25:00 mr-fox CROND[5463]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:25:00 mr-fox CROND[5464]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:25:00 mr-fox CROND[5465]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:25:00 mr-fox CROND[5466]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:25:00 mr-fox CROND[5467]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:25:00 mr-fox crond[5453]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:25:00 mr-fox CROND[5470]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:25:00 mr-fox crond[5455]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:25:00 mr-fox CROND[5473]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:25:00 mr-fox CROND[5448]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:25:00 mr-fox CROND[5448]: pam_unix(crond:session): session closed for user torproject
May 26 06:25:00 mr-fox CROND[5456]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:25:00 mr-fox CROND[5456]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:25:00 mr-fox CROND[5453]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:25:00 mr-fox CROND[5453]: pam_unix(crond:session): session closed for user root
May 26 06:25:01 mr-fox CROND[5454]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:25:01 mr-fox CROND[5454]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:25:01 mr-fox CROND[14214]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:25:01 mr-fox CROND[14214]: pam_unix(crond:session): session closed for user root
May 26 06:25:01 mr-fox CROND[5452]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:25:01 mr-fox CROND[5452]: pam_unix(crond:session): session closed for user root
May 26 06:26:00 mr-fox crond[5093]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:26:00 mr-fox crond[5094]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:26:00 mr-fox crond[5092]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:26:00 mr-fox crond[5095]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:26:00 mr-fox crond[5096]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:26:00 mr-fox CROND[5100]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:26:00 mr-fox CROND[5102]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:26:00 mr-fox CROND[5101]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:26:00 mr-fox CROND[5104]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:26:00 mr-fox CROND[5103]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:26:00 mr-fox CROND[5096]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:26:00 mr-fox CROND[5096]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:26:00 mr-fox CROND[5095]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:26:00 mr-fox CROND[5095]: pam_unix(crond:session): session closed for user root
May 26 06:26:00 mr-fox CROND[5094]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:26:00 mr-fox CROND[5094]: pam_unix(crond:session): session closed for user root
May 26 06:26:01 mr-fox CROND[5451]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:26:01 mr-fox CROND[5451]: pam_unix(crond:session): session closed for user root
May 26 06:26:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:26:44 mr-fox kernel: rcu: 	21-....: (375029 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=148473
May 26 06:26:44 mr-fox kernel: rcu: 	(t=375030 jiffies g=8794409 q=15437943 ncpus=32)
May 26 06:26:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:26:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:26:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:26:44 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 06:26:44 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 06:26:44 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000206
May 26 06:26:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:26:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988b1fdbb478 RDI: ffffa401077e3970
May 26 06:26:44 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:26:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:26:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:26:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:26:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:26:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:26:44 mr-fox kernel: PKRU: 55555554
May 26 06:26:44 mr-fox kernel: Call Trace:
May 26 06:26:44 mr-fox kernel: <IRQ>
May 26 06:26:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:26:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:26:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:26:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:26:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:26:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:26:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:26:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:26:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:26:44 mr-fox kernel: </IRQ>
May 26 06:26:44 mr-fox kernel: <TASK>
May 26 06:26:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:26:44 mr-fox kernel: ? xas_descend+0x2/0xd0
May 26 06:26:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:26:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:26:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:26:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:26:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:26:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:26:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:26:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:26:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:26:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:26:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:26:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:26:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:26:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:26:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:26:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:26:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:26:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:26:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:26:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:26:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:26:44 mr-fox kernel: </TASK>
May 26 06:26:50 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 376432 jiffies s: 491905 root: 0x2/.
May 26 06:26:50 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:26:50 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:26:50 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:26:50 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:26:50 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:26:50 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:26:50 mr-fox kernel: RIP: 0010:iommu_v1_unmap_pages+0x84/0x140
May 26 06:26:50 mr-fox kernel: Code: 00 00 00 49 bc 8f e3 38 8e e3 38 8e e3 48 d3 e3 48 85 db 0f 84 86 00 00 00 48 89 e2 4c 89 f6 4c 89 ff e8 6f fd ff ff 49 89 c0 <48> 85 c0 74 70 48 8b 34 24 49 8d 78 08 49 c7 00 00 00 00 00 f3 48
May 26 06:26:50 mr-fox kernel: RSP: 0018:ffffa401005f0c80 EFLAGS: 00000246
May 26 06:26:50 mr-fox kernel: RAX: ffff988ac7128610 RBX: 0000000000001000 RCX: 0000000000000000
May 26 06:26:50 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:26:50 mr-fox kernel: RBP: 000000000000000c R08: ffff988ac7128610 R09: 0000000000000000
May 26 06:26:50 mr-fox kernel: R10: ffff988d24f78240 R11: 0000000000000000 R12: e38e38e38e38e38f
May 26 06:26:50 mr-fox kernel: R13: 0000000000000000 R14: 00000000cf6c2000 R15: ffff988ac2211088
May 26 06:26:50 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:26:50 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:26:50 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:26:50 mr-fox kernel: PKRU: 55555554
May 26 06:26:50 mr-fox kernel: Call Trace:
May 26 06:26:50 mr-fox kernel: <NMI>
May 26 06:26:50 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:26:50 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:26:50 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:26:50 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:26:50 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:26:50 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:26:50 mr-fox kernel: ? iommu_v1_unmap_pages+0x84/0x140
May 26 06:26:50 mr-fox kernel: ? iommu_v1_unmap_pages+0x84/0x140
May 26 06:26:50 mr-fox kernel: ? iommu_v1_unmap_pages+0x84/0x140
May 26 06:26:50 mr-fox kernel: </NMI>
May 26 06:26:50 mr-fox kernel: <IRQ>
May 26 06:26:50 mr-fox kernel: amd_iommu_unmap_pages+0x40/0x130
May 26 06:26:50 mr-fox kernel: __iommu_unmap+0xbf/0x120
May 26 06:26:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:50 mr-fox kernel: __iommu_dma_unmap+0xb5/0x170
May 26 06:26:50 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 06:26:50 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:26:50 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:26:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:50 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:26:50 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:26:50 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:26:50 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:26:50 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:26:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:50 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:26:50 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:26:50 mr-fox kernel: </IRQ>
May 26 06:26:50 mr-fox kernel: <TASK>
May 26 06:26:50 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:26:50 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:26:50 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:26:50 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:26:50 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:26:50 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:26:50 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:26:50 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:26:50 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:26:50 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:26:50 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:26:50 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:26:50 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:26:50 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:26:50 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:26:50 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:26:50 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:26:50 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:26:50 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:26:50 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:26:50 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:26:50 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:26:50 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:26:50 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:26:50 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:26:50 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:26:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:26:50 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:26:50 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:26:50 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:26:50 mr-fox kernel: </TASK>
May 26 06:27:00 mr-fox crond[7075]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:27:00 mr-fox crond[7077]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:27:00 mr-fox crond[7076]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:27:00 mr-fox crond[7079]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:27:00 mr-fox CROND[7082]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:27:00 mr-fox crond[7080]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:27:00 mr-fox CROND[7083]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:27:00 mr-fox CROND[7084]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:27:00 mr-fox CROND[7085]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:27:00 mr-fox CROND[7087]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:27:00 mr-fox CROND[7080]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:27:00 mr-fox CROND[7080]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:27:00 mr-fox CROND[7079]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:27:00 mr-fox CROND[7079]: pam_unix(crond:session): session closed for user root
May 26 06:27:00 mr-fox CROND[5093]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:27:00 mr-fox CROND[5093]: pam_unix(crond:session): session closed for user root
May 26 06:27:00 mr-fox CROND[7077]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:27:00 mr-fox CROND[7077]: pam_unix(crond:session): session closed for user root
May 26 06:28:00 mr-fox crond[10049]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:28:00 mr-fox crond[10048]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:28:00 mr-fox crond[10051]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:28:00 mr-fox crond[10053]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:28:00 mr-fox crond[10052]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:28:00 mr-fox CROND[10057]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:28:00 mr-fox CROND[10060]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:28:00 mr-fox CROND[10061]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:28:00 mr-fox CROND[10064]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:28:00 mr-fox CROND[10063]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:28:00 mr-fox CROND[10053]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:28:00 mr-fox CROND[10053]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:28:00 mr-fox CROND[10052]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:28:00 mr-fox CROND[10052]: pam_unix(crond:session): session closed for user root
May 26 06:28:00 mr-fox CROND[10051]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:28:00 mr-fox CROND[10051]: pam_unix(crond:session): session closed for user root
May 26 06:28:01 mr-fox CROND[7076]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:28:01 mr-fox CROND[7076]: pam_unix(crond:session): session closed for user root
May 26 06:29:00 mr-fox crond[10339]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:29:00 mr-fox crond[10338]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:29:00 mr-fox crond[10340]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:29:00 mr-fox crond[10341]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:29:00 mr-fox crond[10342]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:29:00 mr-fox CROND[10347]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:29:00 mr-fox CROND[10346]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:29:00 mr-fox CROND[10348]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:29:00 mr-fox CROND[10349]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:29:00 mr-fox CROND[10350]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:29:00 mr-fox CROND[10342]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:29:00 mr-fox CROND[10342]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:29:00 mr-fox CROND[10341]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:29:00 mr-fox CROND[10341]: pam_unix(crond:session): session closed for user root
May 26 06:29:00 mr-fox CROND[10340]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:29:00 mr-fox CROND[10340]: pam_unix(crond:session): session closed for user root
May 26 06:29:01 mr-fox CROND[10049]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:29:01 mr-fox CROND[10049]: pam_unix(crond:session): session closed for user root
May 26 06:29:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:29:44 mr-fox kernel: rcu: 	21-....: (420033 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=165880
May 26 06:29:44 mr-fox kernel: rcu: 	(t=420034 jiffies g=8794409 q=16531831 ncpus=32)
May 26 06:29:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:29:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:29:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:29:44 mr-fox kernel: RIP: 0010:xas_load+0x3c/0x60
May 26 06:29:44 mr-fox kernel: Code: e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe <72> e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 75 bf eb d1 66
May 26 06:29:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:29:44 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:29:44 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:29:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:29:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:29:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:29:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:29:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:29:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:29:44 mr-fox kernel: PKRU: 55555554
May 26 06:29:44 mr-fox kernel: Call Trace:
May 26 06:29:44 mr-fox kernel: <IRQ>
May 26 06:29:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:29:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:29:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 06:29:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:29:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:29:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:29:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:29:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:29:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:29:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:29:44 mr-fox kernel: </IRQ>
May 26 06:29:44 mr-fox kernel: <TASK>
May 26 06:29:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:29:44 mr-fox kernel: ? xas_load+0x3c/0x60
May 26 06:29:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:29:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:29:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:29:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:29:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:29:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:29:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:29:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:29:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:29:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:29:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:29:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:29:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:29:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:29:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:29:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:29:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:29:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:29:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:29:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:29:44 mr-fox kernel: </TASK>
May 26 06:29:51 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 421488 jiffies s: 491905 root: 0x2/.
May 26 06:29:51 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:29:51 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:29:51 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:29:51 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:29:51 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:29:51 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:29:51 mr-fox kernel: RIP: 0010:alloc_iova_fast+0xbd/0x410
May 26 06:29:51 mr-fox kernel: Code: d2 75 1c 49 8b 41 10 48 83 38 00 0f 84 e8 01 00 00 49 89 49 10 48 89 c1 49 89 41 08 4c 8b 10 41 8d 6a ff 48 63 c5 48 83 f8 7f <72> 1d e9 69 02 00 00 85 ed 0f 84 80 00 00 00 83 ed 01 48 63 c5 48
May 26 06:29:51 mr-fox kernel: RSP: 0018:ffffa401005f09b0 EFLAGS: 00000093
May 26 06:29:51 mr-fox kernel: RAX: 000000000000005e RBX: ffff988ac1292408 RCX: ffff989089626000
May 26 06:29:51 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:29:51 mr-fox kernel: RBP: 000000000000005e R08: 00000000000fffff R09: ffff98a96ed7d8c0
May 26 06:29:51 mr-fox kernel: R10: 000000000000005f R11: ffff988afab0d570 R12: 0000000000000000
May 26 06:29:51 mr-fox kernel: R13: 000000000000000c R14: 0000000000000286 R15: ffff988ac1292408
May 26 06:29:51 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:29:51 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:29:51 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:29:51 mr-fox kernel: PKRU: 55555554
May 26 06:29:51 mr-fox kernel: Call Trace:
May 26 06:29:51 mr-fox kernel: <NMI>
May 26 06:29:51 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:29:51 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:29:51 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:29:51 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:29:51 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:29:51 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:29:51 mr-fox kernel: ? alloc_iova_fast+0xbd/0x410
May 26 06:29:51 mr-fox kernel: ? alloc_iova_fast+0xbd/0x410
May 26 06:29:51 mr-fox kernel: ? alloc_iova_fast+0xbd/0x410
May 26 06:29:51 mr-fox kernel: </NMI>
May 26 06:29:51 mr-fox kernel: <IRQ>
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: iommu_dma_alloc_iova+0xed/0x170
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: __iommu_dma_map+0x60/0xf0
May 26 06:29:51 mr-fox kernel: iommu_dma_map_page+0xc2/0x230
May 26 06:29:51 mr-fox kernel: igb_xmit_frame_ring+0x91b/0xc00
May 26 06:29:51 mr-fox kernel: ? netif_skb_features+0x93/0x2c0
May 26 06:29:51 mr-fox kernel: dev_hard_start_xmit+0xa0/0xf0
May 26 06:29:51 mr-fox kernel: sch_direct_xmit+0x8d/0x290
May 26 06:29:51 mr-fox kernel: __dev_queue_xmit+0x49a/0x9a0
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: ip_finish_output2+0x258/0x500
May 26 06:29:51 mr-fox kernel: __ip_queue_xmit+0x16b/0x480
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 06:29:51 mr-fox kernel: __tcp_retransmit_skb+0x1a9/0x800
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: ? __mod_timer+0x115/0x3b0
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: ? retransmits_timed_out.part.0+0x8d/0x170
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 06:29:51 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 06:29:51 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:29:51 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:29:51 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:29:51 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:29:51 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:29:51 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:29:51 mr-fox kernel: </IRQ>
May 26 06:29:51 mr-fox kernel: <TASK>
May 26 06:29:51 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:29:51 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:29:51 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:29:51 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:29:51 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:29:51 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:29:51 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:29:51 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:29:51 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:29:51 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:29:51 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:29:51 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:29:51 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:29:51 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:29:51 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:29:51 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:29:51 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:29:51 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:29:51 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:29:51 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:29:51 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:29:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:29:51 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:29:51 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:29:51 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:29:51 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:29:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:29:51 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:29:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:29:51 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:29:51 mr-fox kernel: </TASK>
May 26 06:30:00 mr-fox crond[12109]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:30:00 mr-fox crond[12110]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:30:00 mr-fox crond[12112]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:30:00 mr-fox crond[12113]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:30:00 mr-fox crond[12114]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:30:00 mr-fox crond[12117]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:30:00 mr-fox crond[12116]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:30:00 mr-fox CROND[12121]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:30:00 mr-fox CROND[12119]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:30:00 mr-fox CROND[12122]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:30:00 mr-fox crond[12115]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:30:00 mr-fox CROND[12123]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:30:00 mr-fox CROND[12124]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:30:00 mr-fox CROND[12125]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:30:00 mr-fox CROND[12126]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:30:00 mr-fox CROND[12127]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:30:00 mr-fox CROND[12109]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:30:00 mr-fox CROND[12109]: pam_unix(crond:session): session closed for user torproject
May 26 06:30:00 mr-fox CROND[12117]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:30:00 mr-fox CROND[12117]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:30:00 mr-fox CROND[12114]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:30:00 mr-fox CROND[12114]: pam_unix(crond:session): session closed for user root
May 26 06:30:00 mr-fox CROND[12115]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:30:00 mr-fox CROND[12115]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:30:00 mr-fox CROND[12113]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:30:00 mr-fox CROND[12113]: pam_unix(crond:session): session closed for user root
May 26 06:30:00 mr-fox CROND[10339]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:30:00 mr-fox CROND[10339]: pam_unix(crond:session): session closed for user root
May 26 06:31:00 mr-fox crond[14743]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:31:00 mr-fox crond[14745]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:31:00 mr-fox crond[14744]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:31:00 mr-fox crond[14746]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:31:00 mr-fox CROND[14749]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:31:00 mr-fox CROND[14750]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:31:00 mr-fox CROND[14751]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:31:00 mr-fox CROND[14752]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:31:00 mr-fox crond[14747]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:31:00 mr-fox CROND[14754]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:31:00 mr-fox CROND[14747]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:31:00 mr-fox CROND[14747]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:31:00 mr-fox CROND[14746]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:31:00 mr-fox CROND[14746]: pam_unix(crond:session): session closed for user root
May 26 06:31:00 mr-fox CROND[12112]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:31:00 mr-fox CROND[12112]: pam_unix(crond:session): session closed for user root
May 26 06:31:00 mr-fox CROND[14745]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:31:00 mr-fox CROND[14745]: pam_unix(crond:session): session closed for user root
May 26 06:32:00 mr-fox crond[18459]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:32:00 mr-fox crond[18460]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:32:00 mr-fox crond[18461]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:32:00 mr-fox crond[18462]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:32:00 mr-fox CROND[18467]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:32:00 mr-fox CROND[18466]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:32:00 mr-fox CROND[18468]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:32:00 mr-fox CROND[18469]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:32:00 mr-fox crond[18458]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:32:00 mr-fox CROND[18472]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:32:00 mr-fox CROND[18462]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:32:00 mr-fox CROND[18462]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:32:00 mr-fox CROND[18461]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:32:00 mr-fox CROND[18461]: pam_unix(crond:session): session closed for user root
May 26 06:32:00 mr-fox CROND[18460]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:32:00 mr-fox CROND[18460]: pam_unix(crond:session): session closed for user root
May 26 06:32:01 mr-fox CROND[14744]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:32:01 mr-fox CROND[14744]: pam_unix(crond:session): session closed for user root
May 26 06:32:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:32:44 mr-fox kernel: rcu: 	21-....: (465037 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=183192
May 26 06:32:44 mr-fox kernel: rcu: 	(t=465038 jiffies g=8794409 q=17647318 ncpus=32)
May 26 06:32:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:32:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:32:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:32:44 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:32:44 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:32:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:32:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:32:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:32:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:32:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:32:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:32:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:32:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:32:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:32:44 mr-fox kernel: PKRU: 55555554
May 26 06:32:44 mr-fox kernel: Call Trace:
May 26 06:32:44 mr-fox kernel: <IRQ>
May 26 06:32:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:32:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:32:44 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 06:32:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:32:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:32:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:32:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:32:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:32:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:32:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:32:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:32:44 mr-fox kernel: </IRQ>
May 26 06:32:44 mr-fox kernel: <TASK>
May 26 06:32:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:32:44 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 06:32:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:32:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:32:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:32:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:32:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:32:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:32:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:32:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:32:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:32:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:32:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:32:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:32:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:32:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:32:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:32:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:32:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:32:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:32:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:32:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:32:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:32:44 mr-fox kernel: </TASK>
May 26 06:32:51 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 466544 jiffies s: 491905 root: 0x2/.
May 26 06:32:51 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:32:51 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:32:51 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:32:51 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:32:51 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:32:51 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:32:51 mr-fox kernel: RIP: 0010:__const_udelay+0xc/0x40
May 26 06:32:51 mr-fox kernel: Code: 1e fa 48 8b 05 ad 26 4d 00 e9 b0 56 05 00 cc 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 65 48 8b 0d 4c c8 6b 5a <48> 85 c9 48 0f 44 0d a1 01 6b 00 48 8d 04 bd 00 00 00 00 48 89 ca
May 26 06:32:51 mr-fox kernel: RSP: 0018:ffffa401005f0c38 EFLAGS: 00000002
May 26 06:32:51 mr-fox kernel: RAX: 00000000008c370e RBX: 0000000000000002 RCX: 0000000000cf1ca0
May 26 06:32:51 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000000010c7
May 26 06:32:51 mr-fox kernel: RBP: ffffa401005f0c90 R08: 0000000000000000 R09: 0000000000000000
May 26 06:32:51 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000008c370f
May 26 06:32:51 mr-fox kernel: R13: 0000000000000046 R14: ffff988ac01b3014 R15: ffff988ac01b3000
May 26 06:32:51 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:32:51 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:32:51 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:32:51 mr-fox kernel: PKRU: 55555554
May 26 06:32:51 mr-fox kernel: Call Trace:
May 26 06:32:51 mr-fox kernel: <NMI>
May 26 06:32:51 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:32:51 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:32:51 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:32:51 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:32:51 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:32:51 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:32:51 mr-fox kernel: ? __const_udelay+0xc/0x40
May 26 06:32:51 mr-fox kernel: ? __const_udelay+0xc/0x40
May 26 06:32:51 mr-fox kernel: ? __const_udelay+0xc/0x40
May 26 06:32:51 mr-fox kernel: </NMI>
May 26 06:32:51 mr-fox kernel: <IRQ>
May 26 06:32:51 mr-fox kernel: iommu_completion_wait.isra.0+0xea/0x130
May 26 06:32:51 mr-fox kernel: amd_iommu_domain_flush_pages+0x5d/0x1a0
May 26 06:32:51 mr-fox kernel: amd_iommu_flush_iotlb_all+0x32/0x50
May 26 06:32:51 mr-fox kernel: fq_flush_iotlb+0x20/0x40
May 26 06:32:51 mr-fox kernel: iommu_dma_free_iova+0x207/0x220
May 26 06:32:51 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 06:32:51 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 06:32:51 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:32:51 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:32:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:32:51 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:32:51 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:32:51 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:32:51 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:32:51 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:32:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:32:51 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:32:51 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:32:51 mr-fox kernel: </IRQ>
May 26 06:32:51 mr-fox kernel: <TASK>
May 26 06:32:51 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:32:51 mr-fox kernel: RIP: 0010:xas_load+0x20/0x60
May 26 06:32:51 mr-fox kernel: Code: ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 <77> 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48
May 26 06:32:51 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000282
May 26 06:32:51 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:32:51 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:32:51 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:32:51 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:32:51 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:32:51 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:32:51 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:32:51 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:32:51 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:32:51 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:32:51 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:32:51 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:32:51 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:32:51 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:32:51 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:32:51 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:32:51 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:32:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:32:51 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:32:51 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:32:51 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:32:51 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:32:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:32:51 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:32:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:32:51 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:32:51 mr-fox kernel: </TASK>
May 26 06:33:00 mr-fox crond[21170]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:33:00 mr-fox crond[21167]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:33:00 mr-fox crond[21169]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:33:00 mr-fox crond[21171]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:33:00 mr-fox crond[21172]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:33:00 mr-fox CROND[21175]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:33:00 mr-fox CROND[21174]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:33:00 mr-fox CROND[21176]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:33:00 mr-fox CROND[21177]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:33:00 mr-fox CROND[21178]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:33:00 mr-fox CROND[21172]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:33:00 mr-fox CROND[21172]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:33:00 mr-fox CROND[21171]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:33:00 mr-fox CROND[21171]: pam_unix(crond:session): session closed for user root
May 26 06:33:00 mr-fox CROND[21170]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:33:00 mr-fox CROND[21170]: pam_unix(crond:session): session closed for user root
May 26 06:33:01 mr-fox CROND[18459]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:33:01 mr-fox CROND[18459]: pam_unix(crond:session): session closed for user root
May 26 06:34:00 mr-fox crond[22271]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:34:00 mr-fox crond[22272]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:34:00 mr-fox crond[22273]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:34:00 mr-fox crond[22275]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:34:00 mr-fox crond[22276]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:34:00 mr-fox CROND[22282]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:34:00 mr-fox CROND[22279]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:34:00 mr-fox CROND[22281]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:34:00 mr-fox CROND[22280]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:34:00 mr-fox CROND[22284]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:34:00 mr-fox CROND[22276]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:34:00 mr-fox CROND[22276]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:34:00 mr-fox CROND[22275]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:34:00 mr-fox CROND[22275]: pam_unix(crond:session): session closed for user root
May 26 06:34:00 mr-fox CROND[22273]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:34:00 mr-fox CROND[22273]: pam_unix(crond:session): session closed for user root
May 26 06:34:00 mr-fox CROND[21169]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:34:00 mr-fox CROND[21169]: pam_unix(crond:session): session closed for user root
May 26 06:35:00 mr-fox crond[25960]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:35:00 mr-fox crond[25959]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:35:00 mr-fox crond[25961]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:35:00 mr-fox crond[25962]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:35:00 mr-fox crond[25958]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:35:00 mr-fox crond[25957]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:35:00 mr-fox CROND[25966]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:35:00 mr-fox CROND[25967]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:35:00 mr-fox CROND[25968]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:35:00 mr-fox crond[25963]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:35:00 mr-fox crond[25964]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:35:00 mr-fox CROND[25969]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:35:00 mr-fox CROND[25972]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:35:00 mr-fox CROND[25970]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:35:00 mr-fox CROND[25973]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:35:00 mr-fox CROND[25975]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:35:00 mr-fox CROND[25957]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:35:00 mr-fox CROND[25957]: pam_unix(crond:session): session closed for user torproject
May 26 06:35:00 mr-fox CROND[25964]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:35:00 mr-fox CROND[25964]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:35:00 mr-fox CROND[25961]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:35:00 mr-fox CROND[25961]: pam_unix(crond:session): session closed for user root
May 26 06:35:00 mr-fox CROND[25962]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:35:00 mr-fox CROND[25962]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:35:00 mr-fox CROND[22272]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:35:00 mr-fox CROND[22272]: pam_unix(crond:session): session closed for user root
May 26 06:35:00 mr-fox CROND[25960]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:35:00 mr-fox CROND[25960]: pam_unix(crond:session): session closed for user root
May 26 06:35:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:35:44 mr-fox kernel: rcu: 	21-....: (510040 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=200450
May 26 06:35:44 mr-fox kernel: rcu: 	(t=510041 jiffies g=8794409 q=18753758 ncpus=32)
May 26 06:35:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:35:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:35:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:35:44 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:35:44 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:35:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:35:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:35:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:35:44 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:35:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:35:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:35:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:35:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:35:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:35:44 mr-fox kernel: PKRU: 55555554
May 26 06:35:44 mr-fox kernel: Call Trace:
May 26 06:35:44 mr-fox kernel: <IRQ>
May 26 06:35:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:35:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:35:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:35:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:35:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:35:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:35:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:35:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:35:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:35:44 mr-fox kernel: </IRQ>
May 26 06:35:44 mr-fox kernel: <TASK>
May 26 06:35:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:35:44 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 06:35:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:35:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:35:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:35:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:35:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:35:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:35:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:35:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:35:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:35:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:35:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:35:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:35:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:35:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:35:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:35:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:35:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:35:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:35:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:35:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:35:44 mr-fox kernel: </TASK>
May 26 06:35:51 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 511599 jiffies s: 491905 root: 0x2/.
May 26 06:35:51 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:35:51 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:35:51 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:35:51 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:35:51 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:35:51 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:35:51 mr-fox kernel: RIP: 0010:iommu_dma_map_page+0x93/0x230
May 26 06:35:51 mr-fox kernel: Code: 85 c0 44 0f 44 d0 45 89 d4 4c 89 f7 41 bf ff ff ff ff e8 50 e8 ff ff 48 8b 50 40 49 8b 86 08 02 00 00 48 85 c0 74 0f 4c 8b 38 <b8> ff ff ff ff 4d 85 ff 4c 0f 44 f8 49 81 7e 60 20 e7 43 a6 0f 84
May 26 06:35:51 mr-fox kernel: RSP: 0018:ffffa401005f0ab0 EFLAGS: 00000286
May 26 06:35:51 mr-fox kernel: RAX: ffff988ac1a46088 RBX: 00000004a131a6be RCX: 0000000000000042
May 26 06:35:51 mr-fox kernel: RDX: ffff988ac1292400 RSI: 000000001284c680 RDI: 0000000000000000
May 26 06:35:51 mr-fox kernel: RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
May 26 06:35:51 mr-fox kernel: R10: ffff988c735087d8 R11: ffffa401001219e0 R12: 0000000000000005
May 26 06:35:51 mr-fox kernel: R13: 0000000000000000 R14: ffff988ac1a460c0 R15: ffffffffffffffff
May 26 06:35:51 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:35:51 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:35:51 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:35:51 mr-fox kernel: PKRU: 55555554
May 26 06:35:51 mr-fox kernel: Call Trace:
May 26 06:35:51 mr-fox kernel: <NMI>
May 26 06:35:51 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:35:51 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:35:51 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:35:51 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:35:51 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:35:51 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:35:51 mr-fox kernel: ? iommu_dma_map_page+0x93/0x230
May 26 06:35:51 mr-fox kernel: ? iommu_dma_map_page+0x93/0x230
May 26 06:35:51 mr-fox kernel: ? iommu_dma_map_page+0x93/0x230
May 26 06:35:51 mr-fox kernel: </NMI>
May 26 06:35:51 mr-fox kernel: <IRQ>
May 26 06:35:51 mr-fox kernel: igb_xmit_frame_ring+0x2d7/0xc00
May 26 06:35:51 mr-fox kernel: ? netif_skb_features+0x93/0x2c0
May 26 06:35:51 mr-fox kernel: dev_hard_start_xmit+0xa0/0xf0
May 26 06:35:51 mr-fox kernel: sch_direct_xmit+0x8d/0x290
May 26 06:35:51 mr-fox kernel: __dev_queue_xmit+0x49a/0x9a0
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: ip_finish_output2+0x258/0x500
May 26 06:35:51 mr-fox kernel: __ip_queue_xmit+0x16b/0x480
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 06:35:51 mr-fox kernel: __tcp_retransmit_skb+0x1a9/0x800
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: ? __mod_timer+0x115/0x3b0
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: ? retransmits_timed_out.part.0+0x8d/0x170
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 06:35:51 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 06:35:51 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:35:51 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:35:51 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:35:51 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:35:51 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:35:51 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:35:51 mr-fox kernel: </IRQ>
May 26 06:35:51 mr-fox kernel: <TASK>
May 26 06:35:51 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:35:51 mr-fox kernel: RIP: 0010:xas_load+0xe/0x60
May 26 06:35:51 mr-fox kernel: Code: 00 00 eb 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff <48> 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d
May 26 06:35:51 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:35:51 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:35:51 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:35:51 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:35:51 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:35:51 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:35:51 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:35:51 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:35:51 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:35:51 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:35:51 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:35:51 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:35:51 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:35:51 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:35:51 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:35:51 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:35:51 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:35:51 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:35:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:35:51 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:35:51 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:35:51 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:35:51 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:35:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:35:51 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:35:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:35:51 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:35:51 mr-fox kernel: </TASK>
May 26 06:36:00 mr-fox crond[27414]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:36:00 mr-fox crond[27413]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:36:00 mr-fox crond[27412]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:36:00 mr-fox crond[27417]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:36:00 mr-fox CROND[27420]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:36:00 mr-fox CROND[27419]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:36:00 mr-fox CROND[27421]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:36:00 mr-fox crond[27415]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:36:00 mr-fox CROND[27422]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:36:00 mr-fox CROND[27424]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:36:00 mr-fox CROND[27417]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:36:00 mr-fox CROND[27417]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:36:00 mr-fox CROND[27415]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:36:00 mr-fox CROND[27415]: pam_unix(crond:session): session closed for user root
May 26 06:36:01 mr-fox CROND[27414]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:36:01 mr-fox CROND[27414]: pam_unix(crond:session): session closed for user root
May 26 06:36:01 mr-fox CROND[25959]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:36:01 mr-fox CROND[25959]: pam_unix(crond:session): session closed for user root
May 26 06:37:00 mr-fox crond[28903]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:37:00 mr-fox crond[28906]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:37:00 mr-fox crond[28904]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:37:00 mr-fox crond[28907]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:37:00 mr-fox CROND[28918]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:37:00 mr-fox CROND[28914]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:37:00 mr-fox CROND[28920]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:37:00 mr-fox CROND[28921]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:37:00 mr-fox crond[28908]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:37:00 mr-fox CROND[28925]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:37:00 mr-fox CROND[28908]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:37:00 mr-fox CROND[28908]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:37:00 mr-fox CROND[28907]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:37:00 mr-fox CROND[28907]: pam_unix(crond:session): session closed for user root
May 26 06:37:00 mr-fox CROND[28906]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:37:00 mr-fox CROND[28906]: pam_unix(crond:session): session closed for user root
May 26 06:37:01 mr-fox CROND[27413]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:37:01 mr-fox CROND[27413]: pam_unix(crond:session): session closed for user root
May 26 06:38:00 mr-fox crond[29652]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:38:00 mr-fox crond[29654]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:38:00 mr-fox CROND[29660]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:38:00 mr-fox crond[29655]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:38:00 mr-fox CROND[29661]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:38:00 mr-fox crond[29656]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:38:00 mr-fox CROND[29665]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:38:00 mr-fox crond[29657]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:38:00 mr-fox CROND[29666]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:38:00 mr-fox CROND[29667]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:38:00 mr-fox CROND[29657]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:38:00 mr-fox CROND[29657]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:38:00 mr-fox CROND[29656]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:38:00 mr-fox CROND[29656]: pam_unix(crond:session): session closed for user root
May 26 06:38:00 mr-fox CROND[29655]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:38:00 mr-fox CROND[29655]: pam_unix(crond:session): session closed for user root
May 26 06:38:00 mr-fox CROND[28904]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:38:00 mr-fox CROND[28904]: pam_unix(crond:session): session closed for user root
May 26 06:38:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:38:44 mr-fox kernel: rcu: 	21-....: (555044 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=217679
May 26 06:38:44 mr-fox kernel: rcu: 	(t=555045 jiffies g=8794409 q=19873872 ncpus=32)
May 26 06:38:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:38:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:38:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:38:44 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 06:38:44 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 06:38:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000282
May 26 06:38:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:38:44 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:38:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:38:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:38:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:38:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:38:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:38:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:38:44 mr-fox kernel: PKRU: 55555554
May 26 06:38:44 mr-fox kernel: Call Trace:
May 26 06:38:44 mr-fox kernel: <IRQ>
May 26 06:38:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:38:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:38:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:38:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:38:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:38:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:38:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:38:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:38:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:38:44 mr-fox kernel: </IRQ>
May 26 06:38:44 mr-fox kernel: <TASK>
May 26 06:38:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:38:44 mr-fox kernel: ? xas_load+0x35/0x60
May 26 06:38:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:38:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:38:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:38:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:38:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:38:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:38:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:38:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:38:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:38:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:38:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:38:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:38:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:38:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:38:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:38:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:38:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:38:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:38:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:38:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:38:44 mr-fox kernel: </TASK>
May 26 06:38:51 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 556655 jiffies s: 491905 root: 0x2/.
May 26 06:38:51 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:38:51 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:38:51 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:38:51 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:38:51 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:38:51 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:38:51 mr-fox kernel: RIP: 0010:skb_release_head_state+0x39/0x80
May 26 06:38:51 mr-fox kernel: Code: 75 3d 48 8b 43 60 48 85 c0 74 08 48 89 df e8 6e a1 21 00 48 8b 7b 68 48 83 ff 07 76 16 48 83 e7 f8 b8 ff ff ff ff f0 0f c1 07 <83> f8 01 74 20 85 c0 7e 22 5b 31 c0 31 f6 31 ff e9 0d ca 36 00 40
May 26 06:38:51 mr-fox kernel: RSP: 0018:ffffa401005f0dd8 EFLAGS: 00000213
May 26 06:38:51 mr-fox kernel: RAX: 0000000000000002 RBX: ffff988b81979600 RCX: 0000000000000000
May 26 06:38:51 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9893aa567300
May 26 06:38:51 mr-fox kernel: RBP: 0000000000007828 R08: 0000000000000000 R09: 0000000000000000
May 26 06:38:51 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa401001197c0
May 26 06:38:51 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000ffffff7c R15: ffffa401001197d0
May 26 06:38:51 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:38:51 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:38:51 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:38:51 mr-fox kernel: PKRU: 55555554
May 26 06:38:51 mr-fox kernel: Call Trace:
May 26 06:38:51 mr-fox kernel: <NMI>
May 26 06:38:51 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:38:51 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:38:51 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:38:51 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:38:51 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:38:51 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:38:51 mr-fox kernel: ? skb_release_head_state+0x39/0x80
May 26 06:38:51 mr-fox kernel: ? skb_release_head_state+0x39/0x80
May 26 06:38:51 mr-fox kernel: ? skb_release_head_state+0x39/0x80
May 26 06:38:51 mr-fox kernel: </NMI>
May 26 06:38:51 mr-fox kernel: <IRQ>
May 26 06:38:51 mr-fox kernel: napi_consume_skb+0x2e/0xc0
May 26 06:38:51 mr-fox kernel: igb_poll+0xea/0x1370
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:38:51 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:38:51 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:38:51 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:38:51 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:38:51 mr-fox kernel: </IRQ>
May 26 06:38:51 mr-fox kernel: <TASK>
May 26 06:38:51 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:38:51 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 06:38:51 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 06:38:51 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:38:51 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 06:38:51 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:38:51 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:38:51 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:38:51 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:38:51 mr-fox kernel: xas_load+0x49/0x60
May 26 06:38:51 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:38:51 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:38:51 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:38:51 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:38:51 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:38:51 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:38:51 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:38:51 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:38:51 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:38:51 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:38:51 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:38:51 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:38:51 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:38:51 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:38:51 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:38:51 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:38:51 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:38:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:38:51 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:38:51 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:38:51 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:38:51 mr-fox kernel: </TASK>
May 26 06:39:00 mr-fox crond[32009]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:39:00 mr-fox crond[32011]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:39:00 mr-fox crond[32010]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:39:00 mr-fox crond[32012]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:39:00 mr-fox crond[32007]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:39:00 mr-fox CROND[32016]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:39:00 mr-fox CROND[32015]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:39:00 mr-fox CROND[32017]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:39:00 mr-fox CROND[32018]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:39:00 mr-fox CROND[32019]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:39:00 mr-fox CROND[32012]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:39:00 mr-fox CROND[32012]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:39:00 mr-fox CROND[32011]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:39:00 mr-fox CROND[32011]: pam_unix(crond:session): session closed for user root
May 26 06:39:01 mr-fox CROND[29654]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:39:01 mr-fox CROND[29654]: pam_unix(crond:session): session closed for user root
May 26 06:39:01 mr-fox CROND[32010]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:39:01 mr-fox CROND[32010]: pam_unix(crond:session): session closed for user root
May 26 06:40:00 mr-fox crond[31870]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:40:00 mr-fox crond[31869]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:40:00 mr-fox crond[31872]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:40:00 mr-fox crond[31871]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:40:00 mr-fox crond[31875]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:40:00 mr-fox CROND[31880]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:40:00 mr-fox CROND[31881]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:40:00 mr-fox crond[31874]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:40:00 mr-fox CROND[31882]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:40:00 mr-fox crond[31877]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:40:00 mr-fox CROND[31884]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:40:00 mr-fox crond[31876]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:40:00 mr-fox CROND[31885]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:40:00 mr-fox CROND[31886]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:40:00 mr-fox CROND[31887]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:40:00 mr-fox CROND[31888]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:40:00 mr-fox CROND[31869]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:40:00 mr-fox CROND[31869]: pam_unix(crond:session): session closed for user torproject
May 26 06:40:00 mr-fox CROND[31877]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:40:00 mr-fox CROND[31877]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:40:00 mr-fox CROND[31874]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:40:00 mr-fox CROND[31874]: pam_unix(crond:session): session closed for user root
May 26 06:40:00 mr-fox CROND[31875]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:40:00 mr-fox CROND[31875]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:40:00 mr-fox CROND[31872]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:40:00 mr-fox CROND[31872]: pam_unix(crond:session): session closed for user root
May 26 06:40:01 mr-fox CROND[32009]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:40:01 mr-fox CROND[32009]: pam_unix(crond:session): session closed for user root
May 26 06:41:00 mr-fox crond[2750]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:41:00 mr-fox crond[2752]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:41:00 mr-fox crond[2751]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:41:00 mr-fox crond[2749]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:41:00 mr-fox crond[2753]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:41:00 mr-fox CROND[2757]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:41:00 mr-fox CROND[2756]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:41:00 mr-fox CROND[2758]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:41:00 mr-fox CROND[2759]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:41:00 mr-fox CROND[2762]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:41:00 mr-fox CROND[2753]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:41:00 mr-fox CROND[2753]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:41:00 mr-fox CROND[2752]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:41:00 mr-fox CROND[2752]: pam_unix(crond:session): session closed for user root
May 26 06:41:00 mr-fox CROND[2751]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:41:00 mr-fox CROND[2751]: pam_unix(crond:session): session closed for user root
May 26 06:41:00 mr-fox CROND[31871]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:41:00 mr-fox CROND[31871]: pam_unix(crond:session): session closed for user root
May 26 06:41:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:41:44 mr-fox kernel: rcu: 	21-....: (600048 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=234922
May 26 06:41:44 mr-fox kernel: rcu: 	(t=600049 jiffies g=8794409 q=21060947 ncpus=32)
May 26 06:41:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:41:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:41:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:41:44 mr-fox kernel: RIP: 0010:xas_descend+0xc/0xd0
May 26 06:41:44 mr-fox kernel: Code: e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 <48> 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3
May 26 06:41:44 mr-fox kernel: RSP: 0018:ffffa401077e3928 EFLAGS: 00000206
May 26 06:41:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:41:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 06:41:44 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:41:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:41:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:41:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:41:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:41:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:41:44 mr-fox kernel: PKRU: 55555554
May 26 06:41:44 mr-fox kernel: Call Trace:
May 26 06:41:44 mr-fox kernel: <IRQ>
May 26 06:41:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:41:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:41:44 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 06:41:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:41:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:41:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:41:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:41:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:41:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:41:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:41:44 mr-fox kernel: </IRQ>
May 26 06:41:44 mr-fox kernel: <TASK>
May 26 06:41:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:41:44 mr-fox kernel: ? xas_descend+0xc/0xd0
May 26 06:41:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:41:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:41:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:41:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:41:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:41:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:41:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:41:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:41:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:41:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:41:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:41:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:41:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:41:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:41:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:41:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:41:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:41:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:41:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:41:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:41:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:41:44 mr-fox kernel: </TASK>
May 26 06:41:52 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 601711 jiffies s: 491905 root: 0x2/.
May 26 06:41:52 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:41:52 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:41:52 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:41:52 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:41:52 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:41:52 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:41:52 mr-fox kernel: RIP: 0010:fetch_pte+0x145/0x170
May 26 06:41:52 mr-fox kernel: Code: 74 25 48 8b 10 f6 c2 01 75 81 31 c0 48 83 c4 10 5b 5d 41 5c 41 5d 41 5e 41 5f 31 d2 31 c9 31 f6 31 ff e9 29 18 45 00 48 8b 38 <81> e7 00 0e 00 00 48 81 ff 00 0e 00 00 75 d3 48 83 c4 10 48 89 ee
May 26 06:41:52 mr-fox kernel: RSP: 0018:ffffa401005f0d58 EFLAGS: 00000246
May 26 06:41:52 mr-fox kernel: RAX: ffff988ceccfa9f0 RBX: 0000000000000000 RCX: 0000000000000003
May 26 06:41:52 mr-fox kernel: RDX: 0000000000001000 RSI: 0000000000000001 RDI: 300000011af3a001
May 26 06:41:52 mr-fox kernel: RBP: ffffa401005f0da0 R08: 0000000000000000 R09: 0000000000000000
May 26 06:41:52 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000ce93e8ea
May 26 06:41:52 mr-fox kernel: R13: 000ffffffffff000 R14: 0000000000000003 R15: 0000000000000000
May 26 06:41:52 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:41:52 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:41:52 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:41:52 mr-fox kernel: PKRU: 55555554
May 26 06:41:52 mr-fox kernel: Call Trace:
May 26 06:41:52 mr-fox kernel: <NMI>
May 26 06:41:52 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:41:52 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:41:52 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:41:52 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:41:52 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:41:52 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:41:52 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 06:41:52 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 06:41:52 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 06:41:52 mr-fox kernel: </NMI>
May 26 06:41:52 mr-fox kernel: <IRQ>
May 26 06:41:52 mr-fox kernel: iommu_v1_iova_to_phys+0x2b/0xa0
May 26 06:41:52 mr-fox kernel: iommu_dma_unmap_page+0x2d/0xa0
May 26 06:41:52 mr-fox kernel: igb_poll+0x106/0x1370
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:41:52 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:41:52 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:41:52 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:41:52 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:41:52 mr-fox kernel: </IRQ>
May 26 06:41:52 mr-fox kernel: <TASK>
May 26 06:41:52 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:41:52 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:41:52 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:41:52 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:41:52 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:41:52 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:41:52 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:41:52 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:41:52 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:41:52 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:41:52 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:41:52 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:41:52 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:41:52 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:41:52 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:41:52 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:41:52 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:41:52 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:41:52 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:41:52 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:41:52 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:41:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:41:52 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:41:52 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:41:52 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:41:52 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:41:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:41:52 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:41:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:41:52 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:41:52 mr-fox kernel: </TASK>
May 26 06:42:00 mr-fox crond[4517]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:42:00 mr-fox crond[4519]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:42:00 mr-fox crond[4518]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:42:00 mr-fox crond[4521]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:42:00 mr-fox crond[4520]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:42:00 mr-fox CROND[4525]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:42:00 mr-fox CROND[4524]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:42:00 mr-fox CROND[4526]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:42:00 mr-fox CROND[4527]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:42:00 mr-fox CROND[4529]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:42:00 mr-fox CROND[4521]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:42:00 mr-fox CROND[4521]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:42:00 mr-fox CROND[4520]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:42:00 mr-fox CROND[4520]: pam_unix(crond:session): session closed for user root
May 26 06:42:00 mr-fox CROND[4519]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:42:00 mr-fox CROND[4519]: pam_unix(crond:session): session closed for user root
May 26 06:42:01 mr-fox CROND[2750]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:42:01 mr-fox CROND[2750]: pam_unix(crond:session): session closed for user root
May 26 06:43:00 mr-fox crond[3635]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:43:00 mr-fox crond[3637]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:43:00 mr-fox crond[3639]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:43:00 mr-fox crond[3640]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:43:00 mr-fox CROND[3643]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:43:00 mr-fox CROND[3644]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:43:00 mr-fox CROND[3646]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:43:00 mr-fox crond[3638]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:43:00 mr-fox CROND[3649]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:43:00 mr-fox CROND[3647]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:43:00 mr-fox CROND[3640]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:43:00 mr-fox CROND[3640]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:43:00 mr-fox CROND[3639]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:43:00 mr-fox CROND[3639]: pam_unix(crond:session): session closed for user root
May 26 06:43:00 mr-fox CROND[3638]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:43:00 mr-fox CROND[3638]: pam_unix(crond:session): session closed for user root
May 26 06:43:01 mr-fox CROND[4518]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:43:01 mr-fox CROND[4518]: pam_unix(crond:session): session closed for user root
May 26 06:44:00 mr-fox crond[4346]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:44:00 mr-fox crond[4344]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:44:00 mr-fox crond[4347]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:44:00 mr-fox crond[4345]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:44:00 mr-fox crond[4348]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:44:00 mr-fox CROND[4354]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:44:00 mr-fox CROND[4353]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:44:00 mr-fox CROND[4352]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:44:00 mr-fox CROND[4356]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:44:00 mr-fox CROND[4355]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:44:00 mr-fox CROND[4348]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:44:00 mr-fox CROND[4348]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:44:00 mr-fox CROND[4347]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:44:00 mr-fox CROND[4347]: pam_unix(crond:session): session closed for user root
May 26 06:44:00 mr-fox CROND[4346]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:44:00 mr-fox CROND[4346]: pam_unix(crond:session): session closed for user root
May 26 06:44:00 mr-fox CROND[3637]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:44:00 mr-fox CROND[3637]: pam_unix(crond:session): session closed for user root
May 26 06:44:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:44:44 mr-fox kernel: rcu: 	21-....: (645052 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=252150
May 26 06:44:44 mr-fox kernel: rcu: 	(t=645053 jiffies g=8794409 q=22264427 ncpus=32)
May 26 06:44:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:44:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:44:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:44:44 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 06:44:44 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 06:44:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:44:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: 000000000000000c RCX: 0000000000000018
May 26 06:44:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 06:44:44 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:44:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:44:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:44:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:44:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:44:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:44:44 mr-fox kernel: PKRU: 55555554
May 26 06:44:44 mr-fox kernel: Call Trace:
May 26 06:44:44 mr-fox kernel: <IRQ>
May 26 06:44:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:44:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:44:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 06:44:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:44:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:44:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:44:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:44:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:44:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:44:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:44:44 mr-fox kernel: </IRQ>
May 26 06:44:44 mr-fox kernel: <TASK>
May 26 06:44:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:44:44 mr-fox kernel: ? xas_descend+0x26/0xd0
May 26 06:44:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:44:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:44:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:44:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:44:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:44:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:44:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:44:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:44:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:44:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:44:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:44:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:44:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:44:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:44:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:44:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:44:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:44:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:44:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:44:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:44:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:44:44 mr-fox kernel: </TASK>
May 26 06:44:52 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 646768 jiffies s: 491905 root: 0x2/.
May 26 06:44:52 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:44:52 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:44:52 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:44:52 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:44:52 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:44:52 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:44:52 mr-fox kernel: RIP: 0010:__dev_queue_xmit+0x419/0x9a0
May 26 06:44:52 mr-fox kernel: Code: a8 04 0f 84 ac 01 00 00 49 8b 84 24 d8 00 00 00 a8 0c 0f 85 9c 01 00 00 4d 8d ac 24 44 01 00 00 4c 89 ef e8 99 9e 1f 00 85 c0 <75> 21 f0 49 0f ba ac 24 d8 00 00 00 02 0f 82 77 01 00 00 4c 89 ef
May 26 06:44:52 mr-fox kernel: RSP: 0018:ffffa401005f0ad0 EFLAGS: 00000202
May 26 06:44:52 mr-fox kernel: RAX: 0000000000000001 RBX: ffff988c73508ed8 RCX: 0000000000000000
May 26 06:44:52 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:44:52 mr-fox kernel: RBP: ffff988ac76ec3c0 R08: 0000000000000000 R09: ffff988af428b200
May 26 06:44:52 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac5507400
May 26 06:44:52 mr-fox kernel: R13: ffff988ac5507544 R14: 0000000000000010 R15: ffff988ac4bb4000
May 26 06:44:52 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:44:52 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:44:52 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:44:52 mr-fox kernel: PKRU: 55555554
May 26 06:44:52 mr-fox kernel: Call Trace:
May 26 06:44:52 mr-fox kernel: <NMI>
May 26 06:44:52 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:44:52 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:44:52 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:44:52 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:44:52 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:44:52 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:44:52 mr-fox kernel: ? __dev_queue_xmit+0x419/0x9a0
May 26 06:44:52 mr-fox kernel: ? __dev_queue_xmit+0x419/0x9a0
May 26 06:44:52 mr-fox kernel: ? __dev_queue_xmit+0x419/0x9a0
May 26 06:44:52 mr-fox kernel: </NMI>
May 26 06:44:52 mr-fox kernel: <IRQ>
May 26 06:44:52 mr-fox kernel: ? ip6t_do_table+0x30b/0x590
May 26 06:44:52 mr-fox kernel: ip6_finish_output2+0x2c0/0x610
May 26 06:44:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:52 mr-fox kernel: ? ip6_output+0xa7/0x290
May 26 06:44:52 mr-fox kernel: ip6_xmit+0x3fc/0x600
May 26 06:44:52 mr-fox kernel: ? ip6_output+0x290/0x290
May 26 06:44:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:52 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 06:44:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:52 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 06:44:52 mr-fox kernel: inet6_csk_xmit+0xe9/0x160
May 26 06:44:52 mr-fox kernel: __tcp_transmit_skb+0x5d0/0xd30
May 26 06:44:52 mr-fox kernel: tcp_write_xmit+0x4c9/0x13b0
May 26 06:44:52 mr-fox kernel: ? ktime_get+0x42/0xb0
May 26 06:44:52 mr-fox kernel: tcp_send_loss_probe+0x16f/0x260
May 26 06:44:52 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 06:44:52 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:44:52 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:44:52 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:44:52 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:44:52 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:44:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:52 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:44:52 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:44:52 mr-fox kernel: </IRQ>
May 26 06:44:52 mr-fox kernel: <TASK>
May 26 06:44:52 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:44:52 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:44:52 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:44:52 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:44:52 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:44:52 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:44:52 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:44:52 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:44:52 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:44:52 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:44:52 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:44:52 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:44:52 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:44:52 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:44:52 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:44:52 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:44:52 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:44:52 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:44:52 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:44:52 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:44:52 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:44:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:44:52 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:44:52 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:44:52 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:44:52 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:44:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:44:52 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:44:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:44:52 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:44:52 mr-fox kernel: </TASK>
May 26 06:45:00 mr-fox crond[7947]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:45:00 mr-fox crond[7950]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:45:00 mr-fox crond[7952]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:45:00 mr-fox crond[7951]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:45:00 mr-fox crond[7949]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:45:00 mr-fox CROND[7955]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:45:00 mr-fox crond[7953]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:45:00 mr-fox CROND[7956]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:45:00 mr-fox crond[7954]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:45:00 mr-fox crond[7948]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:45:00 mr-fox CROND[7957]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:45:00 mr-fox CROND[7958]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:45:00 mr-fox CROND[7959]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:45:00 mr-fox CROND[7960]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:45:00 mr-fox CROND[7962]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:45:00 mr-fox CROND[7961]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:45:00 mr-fox CROND[7947]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:45:00 mr-fox CROND[7947]: pam_unix(crond:session): session closed for user torproject
May 26 06:45:00 mr-fox CROND[7954]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:45:00 mr-fox CROND[7954]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:45:00 mr-fox CROND[7951]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:45:00 mr-fox CROND[7951]: pam_unix(crond:session): session closed for user root
May 26 06:45:00 mr-fox CROND[7952]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:45:00 mr-fox CROND[7952]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:45:00 mr-fox CROND[7950]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:45:00 mr-fox CROND[7950]: pam_unix(crond:session): session closed for user root
May 26 06:45:00 mr-fox CROND[4345]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:45:00 mr-fox CROND[4345]: pam_unix(crond:session): session closed for user root
May 26 06:46:00 mr-fox crond[31751]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:46:00 mr-fox crond[31754]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:46:00 mr-fox crond[31752]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:46:00 mr-fox CROND[31759]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:46:00 mr-fox crond[31753]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:46:00 mr-fox CROND[31760]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:46:00 mr-fox crond[31755]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:46:00 mr-fox CROND[31761]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:46:00 mr-fox CROND[31764]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:46:00 mr-fox CROND[31765]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:46:00 mr-fox CROND[31755]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:46:00 mr-fox CROND[31755]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:46:00 mr-fox CROND[31754]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:46:00 mr-fox CROND[31754]: pam_unix(crond:session): session closed for user root
May 26 06:46:00 mr-fox CROND[31753]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:46:00 mr-fox CROND[31753]: pam_unix(crond:session): session closed for user root
May 26 06:46:01 mr-fox CROND[7949]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:46:01 mr-fox CROND[7949]: pam_unix(crond:session): session closed for user root
May 26 06:47:00 mr-fox crond[32257]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:47:00 mr-fox crond[32258]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:47:00 mr-fox crond[32259]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:47:00 mr-fox crond[32260]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:47:00 mr-fox CROND[32265]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:47:00 mr-fox CROND[32266]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:47:00 mr-fox crond[32261]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:47:00 mr-fox CROND[32268]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:47:00 mr-fox CROND[32269]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:47:00 mr-fox CROND[32270]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:47:00 mr-fox CROND[32261]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:47:00 mr-fox CROND[32261]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:47:00 mr-fox CROND[32260]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:47:00 mr-fox CROND[32260]: pam_unix(crond:session): session closed for user root
May 26 06:47:00 mr-fox CROND[32259]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:47:00 mr-fox CROND[32259]: pam_unix(crond:session): session closed for user root
May 26 06:47:01 mr-fox CROND[31752]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:47:01 mr-fox CROND[31752]: pam_unix(crond:session): session closed for user root
May 26 06:47:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:47:44 mr-fox kernel: rcu: 	21-....: (690056 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=269452
May 26 06:47:44 mr-fox kernel: rcu: 	(t=690057 jiffies g=8794409 q=23557866 ncpus=32)
May 26 06:47:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:47:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:47:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:47:44 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 06:47:44 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 06:47:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:47:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 06:47:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:47:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:47:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:47:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:47:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:47:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:47:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:47:44 mr-fox kernel: PKRU: 55555554
May 26 06:47:44 mr-fox kernel: Call Trace:
May 26 06:47:44 mr-fox kernel: <IRQ>
May 26 06:47:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:47:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:47:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:47:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:47:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:47:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:47:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:47:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:47:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:47:44 mr-fox kernel: </IRQ>
May 26 06:47:44 mr-fox kernel: <TASK>
May 26 06:47:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:47:44 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 06:47:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:47:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:47:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:47:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:47:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:47:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:47:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:47:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:47:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:47:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:47:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:47:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:47:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:47:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:47:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:47:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:47:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:47:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:47:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:47:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:47:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:47:44 mr-fox kernel: </TASK>
May 26 06:47:52 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 691823 jiffies s: 491905 root: 0x2/.
May 26 06:47:52 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:47:52 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:47:52 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:47:52 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:47:52 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:47:52 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:47:52 mr-fox kernel: RIP: 0010:clear_page_erms+0xd/0x20
May 26 06:47:52 mr-fox kernel: Code: 47 20 48 89 47 28 48 89 47 30 48 89 47 38 48 8d 7f 40 75 d9 90 e9 be 8a 1a 00 0f 1f 00 f3 0f 1e fa b9 00 10 00 00 31 c0 f3 aa <e9> a9 8a 1a 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 f3 0f 1e
May 26 06:47:52 mr-fox kernel: RSP: 0018:ffffa401005f0d58 EFLAGS: 00000246
May 26 06:47:52 mr-fox kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
May 26 06:47:52 mr-fox kernel: RDX: ffffcf6320570200 RSI: ffffcf6320570400 RDI: ffff98a1d5c09000
May 26 06:47:52 mr-fox kernel: RBP: ffffcf6320570200 R08: 0000000000000001 R09: 0000000000000008
May 26 06:47:52 mr-fox kernel: R10: 0000000000000004 R11: 0000000000000000 R12: 0000000000000003
May 26 06:47:52 mr-fox kernel: R13: ffffcf6320570400 R14: 0000000000000001 R15: 0000000000000008
May 26 06:47:52 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:47:52 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:47:52 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:47:52 mr-fox kernel: PKRU: 55555554
May 26 06:47:52 mr-fox kernel: Call Trace:
May 26 06:47:52 mr-fox kernel: <NMI>
May 26 06:47:52 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:47:52 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:47:52 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:47:52 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:47:52 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:47:52 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:47:52 mr-fox kernel: ? clear_page_erms+0xd/0x20
May 26 06:47:52 mr-fox kernel: ? clear_page_erms+0xd/0x20
May 26 06:47:52 mr-fox kernel: ? clear_page_erms+0xd/0x20
May 26 06:47:52 mr-fox kernel: </NMI>
May 26 06:47:52 mr-fox kernel: <IRQ>
May 26 06:47:52 mr-fox kernel: free_unref_page_prepare+0x114/0x300
May 26 06:47:52 mr-fox kernel: free_unref_page+0x2f/0x170
May 26 06:47:52 mr-fox kernel: skb_release_data.isra.0+0xfb/0x1e0
May 26 06:47:52 mr-fox kernel: __kfree_skb+0x24/0x30
May 26 06:47:52 mr-fox kernel: tcp_write_queue_purge+0xed/0x310
May 26 06:47:52 mr-fox kernel: tcp_retransmit_timer+0x28e/0xa60
May 26 06:47:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:52 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 06:47:52 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:47:52 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:47:52 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:47:52 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:47:52 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:47:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:52 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:47:52 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:47:52 mr-fox kernel: </IRQ>
May 26 06:47:52 mr-fox kernel: <TASK>
May 26 06:47:52 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:47:52 mr-fox kernel: RIP: 0010:xas_start+0x0/0x120
May 26 06:47:52 mr-fox kernel: Code: d3 e0 48 23 47 08 48 d3 e3 48 01 c3 48 89 5d 08 48 83 c4 08 5b 5d 41 5c 31 c0 31 d2 31 c9 31 f6 31 ff e9 2e c8 1a 00 0f 1f 00 <55> 53 48 89 fb 48 83 ec 08 48 8b 6f 18 48 89 e8 83 e0 03 0f 84 81
May 26 06:47:52 mr-fox kernel: RSP: 0018:ffffa401077e3948 EFLAGS: 00000246
May 26 06:47:52 mr-fox kernel: RAX: 0000000000000000 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:47:52 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffa401077e3970
May 26 06:47:52 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:47:52 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:47:52 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:47:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:52 mr-fox kernel: xas_load+0xe/0x60
May 26 06:47:52 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:47:52 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:47:52 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:47:52 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:47:52 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:47:52 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:47:52 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:47:52 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:47:52 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:47:52 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:47:52 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:47:52 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:47:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:47:52 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:47:52 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:47:52 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:47:52 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:47:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:47:52 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:47:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:47:52 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:47:52 mr-fox kernel: </TASK>
May 26 06:48:00 mr-fox crond[4685]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:48:00 mr-fox crond[4687]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:48:00 mr-fox crond[4688]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:48:00 mr-fox crond[4689]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:48:00 mr-fox CROND[4691]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:48:00 mr-fox crond[4684]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:48:00 mr-fox CROND[4694]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:48:00 mr-fox CROND[4695]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:48:00 mr-fox CROND[4692]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:48:00 mr-fox CROND[4696]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:48:00 mr-fox CROND[4689]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:48:00 mr-fox CROND[4689]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:48:00 mr-fox CROND[4688]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:48:00 mr-fox CROND[4688]: pam_unix(crond:session): session closed for user root
May 26 06:48:00 mr-fox CROND[4687]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:48:00 mr-fox CROND[4687]: pam_unix(crond:session): session closed for user root
May 26 06:48:00 mr-fox CROND[32258]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:48:00 mr-fox CROND[32258]: pam_unix(crond:session): session closed for user root
May 26 06:49:00 mr-fox crond[8783]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:49:00 mr-fox crond[8782]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:49:00 mr-fox crond[8780]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:49:00 mr-fox crond[8784]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:49:00 mr-fox CROND[8788]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:49:00 mr-fox crond[8785]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:49:00 mr-fox CROND[8791]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:49:00 mr-fox CROND[8790]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:49:00 mr-fox CROND[8792]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:49:00 mr-fox CROND[8793]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:49:00 mr-fox CROND[8785]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:49:00 mr-fox CROND[8785]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:49:00 mr-fox CROND[8784]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:49:00 mr-fox CROND[8784]: pam_unix(crond:session): session closed for user root
May 26 06:49:00 mr-fox CROND[8783]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:49:00 mr-fox CROND[8783]: pam_unix(crond:session): session closed for user root
May 26 06:49:00 mr-fox CROND[4685]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:49:00 mr-fox CROND[4685]: pam_unix(crond:session): session closed for user root
May 26 06:50:00 mr-fox crond[11624]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:50:00 mr-fox crond[11626]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:50:00 mr-fox crond[11625]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:50:00 mr-fox crond[11627]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:50:00 mr-fox CROND[11635]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:50:00 mr-fox crond[11629]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:50:00 mr-fox CROND[11636]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:50:00 mr-fox CROND[11637]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:50:00 mr-fox crond[11631]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:50:00 mr-fox crond[11630]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:50:00 mr-fox crond[11632]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:50:00 mr-fox CROND[11638]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:50:00 mr-fox CROND[11640]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:50:00 mr-fox CROND[11641]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:50:00 mr-fox CROND[11643]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:50:00 mr-fox CROND[11642]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:50:00 mr-fox CROND[11624]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:50:00 mr-fox CROND[11624]: pam_unix(crond:session): session closed for user torproject
May 26 06:50:00 mr-fox CROND[11632]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:50:00 mr-fox CROND[11632]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:50:00 mr-fox CROND[11629]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:50:00 mr-fox CROND[11629]: pam_unix(crond:session): session closed for user root
May 26 06:50:01 mr-fox CROND[11630]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:50:01 mr-fox CROND[11630]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:50:01 mr-fox CROND[11627]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:50:01 mr-fox CROND[11627]: pam_unix(crond:session): session closed for user root
May 26 06:50:01 mr-fox CROND[8782]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:50:01 mr-fox CROND[8782]: pam_unix(crond:session): session closed for user root
May 26 06:50:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:50:44 mr-fox kernel: rcu: 	21-....: (735060 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=286664
May 26 06:50:44 mr-fox kernel: rcu: 	(t=735061 jiffies g=8794409 q=24738565 ncpus=32)
May 26 06:50:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:50:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:50:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:50:44 mr-fox kernel: RIP: 0010:xas_descend+0x40/0xd0
May 26 06:50:44 mr-fox kernel: Code: 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 <75> 08 48 3d fd 00 00 00 76 2f 41 88 5c 24 12 48 83 c4 08 5b 5d 41
May 26 06:50:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000246
May 26 06:50:44 mr-fox kernel: RAX: ffff988d667606da RBX: 0000000000000019 RCX: 000000000000000c
May 26 06:50:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988b1fdbb478 RDI: ffffa401077e3970
May 26 06:50:44 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:50:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:50:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:50:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:50:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:50:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:50:44 mr-fox kernel: PKRU: 55555554
May 26 06:50:44 mr-fox kernel: Call Trace:
May 26 06:50:44 mr-fox kernel: <IRQ>
May 26 06:50:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:50:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:50:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:44 mr-fox kernel: ? tcp_delack_timer+0xb5/0xf0
May 26 06:50:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:50:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:50:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:50:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:50:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:50:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:50:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:50:44 mr-fox kernel: </IRQ>
May 26 06:50:44 mr-fox kernel: <TASK>
May 26 06:50:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:50:44 mr-fox kernel: ? xas_descend+0x40/0xd0
May 26 06:50:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:50:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:50:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:50:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:50:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:50:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:50:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:50:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:50:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:50:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:50:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:50:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:50:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:50:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:50:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:50:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:50:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:50:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:50:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:50:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:50:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:50:44 mr-fox kernel: </TASK>
May 26 06:50:52 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 736880 jiffies s: 491905 root: 0x2/.
May 26 06:50:52 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:50:52 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:50:52 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:50:52 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:50:52 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:50:52 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:50:52 mr-fox kernel: RIP: 0010:napi_consume_skb+0x1b/0xc0
May 26 06:50:52 mr-fox kernel: Code: 20 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 53 48 89 fb 85 f6 0f 84 8c 00 00 00 48 85 ff 74 7b 8b 87 d4 00 00 00 <83> f8 01 75 53 48 89 df f6 43 7e 0c 75 25 e8 62 87 ff ff 48 83 bb
May 26 06:50:52 mr-fox kernel: RSP: 0018:ffffa401005f0de8 EFLAGS: 00000282
May 26 06:50:52 mr-fox kernel: RAX: 0000000000000001 RBX: ffff988ad987c458 RCX: 0000000000000000
May 26 06:50:52 mr-fox kernel: RDX: 0000000000000001 RSI: 0000000000000040 RDI: ffff988ad987c458
May 26 06:50:52 mr-fox kernel: RBP: 0000000000009f0a R08: 0000000000000000 R09: 0000000000000000
May 26 06:50:52 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa40100119f90
May 26 06:50:52 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000fffffff9 R15: ffffa40100119fb0
May 26 06:50:52 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:50:52 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:50:52 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:50:52 mr-fox kernel: PKRU: 55555554
May 26 06:50:52 mr-fox kernel: Call Trace:
May 26 06:50:52 mr-fox kernel: <NMI>
May 26 06:50:52 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:50:52 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:50:52 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:50:52 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:50:52 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:50:52 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:50:52 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 06:50:52 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 06:50:52 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 06:50:52 mr-fox kernel: </NMI>
May 26 06:50:52 mr-fox kernel: <IRQ>
May 26 06:50:52 mr-fox kernel: igb_poll+0xea/0x1370
May 26 06:50:52 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 06:50:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:52 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 06:50:52 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:50:52 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:50:52 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:50:52 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:50:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:52 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:50:52 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:50:52 mr-fox kernel: </IRQ>
May 26 06:50:52 mr-fox kernel: <TASK>
May 26 06:50:52 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:50:52 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:50:52 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:50:52 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:50:52 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:50:52 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:50:52 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:50:52 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:50:52 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:50:52 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:50:52 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:50:52 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:50:52 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:50:52 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:50:52 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:50:52 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:50:52 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:50:52 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:50:52 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:50:52 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:50:52 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:50:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:50:52 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:50:52 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:50:52 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:50:52 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:50:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:50:52 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:50:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:50:52 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:50:52 mr-fox kernel: </TASK>
May 26 06:51:00 mr-fox crond[12429]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:51:00 mr-fox crond[12430]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:51:00 mr-fox crond[12431]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:51:00 mr-fox crond[12433]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:51:00 mr-fox CROND[12436]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:51:00 mr-fox CROND[12438]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:51:00 mr-fox CROND[12437]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:51:00 mr-fox CROND[12439]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:51:00 mr-fox crond[12432]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:51:00 mr-fox CROND[12441]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:51:00 mr-fox CROND[12433]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:51:00 mr-fox CROND[12433]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:51:00 mr-fox CROND[12432]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:51:00 mr-fox CROND[12432]: pam_unix(crond:session): session closed for user root
May 26 06:51:00 mr-fox CROND[12431]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:51:00 mr-fox CROND[12431]: pam_unix(crond:session): session closed for user root
May 26 06:51:01 mr-fox CROND[11626]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:51:01 mr-fox CROND[11626]: pam_unix(crond:session): session closed for user root
May 26 06:52:00 mr-fox crond[14931]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:52:00 mr-fox crond[14932]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:52:00 mr-fox crond[14933]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:52:00 mr-fox crond[14935]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:52:00 mr-fox crond[14930]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:52:00 mr-fox CROND[14938]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:52:00 mr-fox CROND[14940]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:52:00 mr-fox CROND[14939]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:52:00 mr-fox CROND[14941]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:52:00 mr-fox CROND[14942]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:52:00 mr-fox CROND[14935]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:52:00 mr-fox CROND[14935]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:52:00 mr-fox CROND[14933]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:52:00 mr-fox CROND[14933]: pam_unix(crond:session): session closed for user root
May 26 06:52:00 mr-fox CROND[14932]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:52:00 mr-fox CROND[14932]: pam_unix(crond:session): session closed for user root
May 26 06:52:00 mr-fox CROND[12430]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:52:00 mr-fox CROND[12430]: pam_unix(crond:session): session closed for user root
May 26 06:53:00 mr-fox crond[18691]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:53:00 mr-fox crond[18689]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:53:00 mr-fox crond[18692]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:53:00 mr-fox crond[18693]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:53:00 mr-fox crond[18694]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:53:00 mr-fox CROND[18697]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:53:00 mr-fox CROND[18698]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:53:00 mr-fox CROND[18700]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:53:00 mr-fox CROND[18701]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:53:00 mr-fox CROND[18702]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:53:00 mr-fox CROND[18694]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:53:00 mr-fox CROND[18694]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:53:00 mr-fox CROND[18693]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:53:00 mr-fox CROND[18693]: pam_unix(crond:session): session closed for user root
May 26 06:53:01 mr-fox CROND[18692]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:53:01 mr-fox CROND[18692]: pam_unix(crond:session): session closed for user root
May 26 06:53:01 mr-fox CROND[14931]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:53:01 mr-fox CROND[14931]: pam_unix(crond:session): session closed for user root
May 26 06:53:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:53:44 mr-fox kernel: rcu: 	21-....: (780064 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=303960
May 26 06:53:44 mr-fox kernel: rcu: 	(t=780065 jiffies g=8794409 q=25927955 ncpus=32)
May 26 06:53:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:53:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:53:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:53:44 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 06:53:44 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 06:53:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 06:53:44 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:53:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:53:44 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 06:53:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:53:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:53:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:53:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:53:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:53:44 mr-fox kernel: PKRU: 55555554
May 26 06:53:44 mr-fox kernel: Call Trace:
May 26 06:53:44 mr-fox kernel: <IRQ>
May 26 06:53:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:53:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:53:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 06:53:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:53:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:53:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:53:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:53:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:53:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:53:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:53:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:53:44 mr-fox kernel: </IRQ>
May 26 06:53:44 mr-fox kernel: <TASK>
May 26 06:53:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:53:44 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 06:53:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:53:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:53:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:53:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:53:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:53:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:53:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:53:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:53:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:53:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:53:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:53:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:53:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:53:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:53:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:53:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:53:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:53:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:53:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:53:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:53:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:53:44 mr-fox kernel: </TASK>
May 26 06:53:52 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 781936 jiffies s: 491905 root: 0x2/.
May 26 06:53:52 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:53:52 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:53:52 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:53:52 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:53:52 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:53:52 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:53:52 mr-fox kernel: RIP: 0010:__nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:53:52 mr-fox kernel: Code: 00 49 8b 07 49 8b 4f 08 48 33 43 10 48 33 4b 18 48 09 c8 75 09 8b 43 20 41 39 47 10 74 7f 48 8b 1b f6 c3 01 75 5b 0f b6 4b 37 <4c> 89 ed 48 8d 04 cd 00 00 00 00 48 29 c8 48 c1 e0 03 48 29 c5 48
May 26 06:53:52 mr-fox kernel: RSP: 0018:ffffa401005f0aa8 EFLAGS: 00000246
May 26 06:53:52 mr-fox kernel: RAX: ffff988acb39f330 RBX: ffff988ac653e048 RCX: 0000000000000001
May 26 06:53:52 mr-fox kernel: RDX: 00000000cf99b3a3 RSI: ffff988acb200000 RDI: ffffffffa6ad7f40
May 26 06:53:52 mr-fox kernel: RBP: ffffa401005f0b08 R08: 0000000000000000 R09: 0000000000000000
May 26 06:53:52 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000033e66
May 26 06:53:52 mr-fox kernel: R13: fffffffffffffff0 R14: ffffffffa6ad7f40 R15: ffffa401005f0b08
May 26 06:53:52 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:53:52 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:53:52 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:53:52 mr-fox kernel: PKRU: 55555554
May 26 06:53:52 mr-fox kernel: Call Trace:
May 26 06:53:52 mr-fox kernel: <NMI>
May 26 06:53:52 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:53:52 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:53:52 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:53:52 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:53:52 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:53:52 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:53:52 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:53:52 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:53:52 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:53:52 mr-fox kernel: </NMI>
May 26 06:53:52 mr-fox kernel: <IRQ>
May 26 06:53:52 mr-fox kernel: nf_conntrack_in+0xdc/0x540
May 26 06:53:52 mr-fox kernel: nf_hook_slow+0x3c/0x100
May 26 06:53:52 mr-fox kernel: nf_hook_slow_list+0x90/0x140
May 26 06:53:52 mr-fox kernel: ip_sublist_rcv+0x71/0x1c0
May 26 06:53:52 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 06:53:52 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 06:53:52 mr-fox kernel: __netif_receive_skb_list_core+0x2a5/0x2d0
May 26 06:53:52 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 06:53:52 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 06:53:52 mr-fox kernel: igb_poll+0x605/0x1370
May 26 06:53:52 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 06:53:52 mr-fox kernel: net_rx_action+0x202/0x590
May 26 06:53:52 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 06:53:52 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:53:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:53:52 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:53:52 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:53:52 mr-fox kernel: </IRQ>
May 26 06:53:52 mr-fox kernel: <TASK>
May 26 06:53:52 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:53:52 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 06:53:52 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 06:53:52 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000282
May 26 06:53:52 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 06:53:52 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:53:52 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:53:52 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:53:52 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:53:52 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:53:52 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:53:52 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:53:52 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:53:52 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:53:52 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:53:52 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:53:52 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:53:52 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:53:52 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:53:52 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:53:52 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:53:52 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:53:52 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:53:52 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:53:52 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:53:52 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:53:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:53:52 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:53:52 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:53:52 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:53:52 mr-fox kernel: </TASK>
May 26 06:54:00 mr-fox crond[19580]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:54:00 mr-fox crond[19582]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:54:00 mr-fox crond[19581]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:54:00 mr-fox crond[19583]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:54:00 mr-fox crond[19585]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:54:00 mr-fox CROND[19589]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:54:00 mr-fox CROND[19587]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:54:00 mr-fox CROND[19590]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:54:00 mr-fox CROND[19591]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:54:00 mr-fox CROND[19588]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:54:00 mr-fox CROND[19585]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:54:00 mr-fox CROND[19585]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:54:00 mr-fox CROND[19583]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:54:00 mr-fox CROND[19583]: pam_unix(crond:session): session closed for user root
May 26 06:54:00 mr-fox CROND[19582]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:54:00 mr-fox CROND[19582]: pam_unix(crond:session): session closed for user root
May 26 06:54:01 mr-fox CROND[18691]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:54:01 mr-fox CROND[18691]: pam_unix(crond:session): session closed for user root
May 26 06:55:00 mr-fox crond[22083]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:55:00 mr-fox crond[22084]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:55:00 mr-fox crond[22082]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 06:55:00 mr-fox crond[22087]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:55:00 mr-fox crond[22086]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:55:00 mr-fox CROND[22093]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:55:00 mr-fox crond[22085]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:55:00 mr-fox CROND[22096]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:55:00 mr-fox CROND[22097]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:55:00 mr-fox CROND[22098]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:55:00 mr-fox crond[22089]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:55:00 mr-fox CROND[22099]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:55:00 mr-fox crond[22090]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:55:00 mr-fox CROND[22101]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:55:00 mr-fox CROND[22102]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 06:55:00 mr-fox CROND[22103]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:55:00 mr-fox CROND[22082]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 06:55:00 mr-fox CROND[22082]: pam_unix(crond:session): session closed for user torproject
May 26 06:55:00 mr-fox CROND[22090]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:55:00 mr-fox CROND[22090]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:55:00 mr-fox CROND[22086]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:55:00 mr-fox CROND[22086]: pam_unix(crond:session): session closed for user root
May 26 06:55:00 mr-fox CROND[22087]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 06:55:00 mr-fox CROND[22087]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:55:00 mr-fox CROND[22085]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:55:00 mr-fox CROND[22085]: pam_unix(crond:session): session closed for user root
May 26 06:55:00 mr-fox CROND[19581]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:55:00 mr-fox CROND[19581]: pam_unix(crond:session): session closed for user root
May 26 06:56:00 mr-fox crond[24422]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:56:00 mr-fox crond[24424]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:56:00 mr-fox crond[24425]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:56:00 mr-fox CROND[24430]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:56:00 mr-fox crond[24426]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:56:00 mr-fox CROND[24431]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:56:00 mr-fox CROND[24432]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:56:00 mr-fox crond[24427]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:56:00 mr-fox CROND[24435]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:56:00 mr-fox CROND[24436]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:56:00 mr-fox CROND[24427]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:56:00 mr-fox CROND[24427]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:56:00 mr-fox CROND[24426]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:56:00 mr-fox CROND[24426]: pam_unix(crond:session): session closed for user root
May 26 06:56:00 mr-fox CROND[24425]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:56:00 mr-fox CROND[24425]: pam_unix(crond:session): session closed for user root
May 26 06:56:00 mr-fox CROND[22084]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:56:00 mr-fox CROND[22084]: pam_unix(crond:session): session closed for user root
May 26 06:56:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:56:44 mr-fox kernel: rcu: 	21-....: (825068 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=321402
May 26 06:56:44 mr-fox kernel: rcu: 	(t=825069 jiffies g=8794409 q=27245568 ncpus=32)
May 26 06:56:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:56:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:56:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:56:44 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 06:56:44 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 06:56:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:56:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: 0000000000000001 RCX: 000000000000001e
May 26 06:56:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 06:56:44 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:56:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:56:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:56:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:56:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:56:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:56:44 mr-fox kernel: PKRU: 55555554
May 26 06:56:44 mr-fox kernel: Call Trace:
May 26 06:56:44 mr-fox kernel: <IRQ>
May 26 06:56:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:56:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:56:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 06:56:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:56:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:56:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:56:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:56:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:56:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:56:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:56:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:56:44 mr-fox kernel: </IRQ>
May 26 06:56:44 mr-fox kernel: <TASK>
May 26 06:56:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:56:44 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 06:56:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:56:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:56:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:56:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:56:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:56:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:56:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:56:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:56:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:56:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:56:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:56:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:56:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:56:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:56:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:56:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:56:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:56:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:56:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:56:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:56:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:56:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:56:44 mr-fox kernel: </TASK>
May 26 06:56:53 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 826992 jiffies s: 491905 root: 0x2/.
May 26 06:56:53 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:56:53 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:56:53 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:56:53 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:56:53 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:56:53 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:56:53 mr-fox kernel: RIP: 0010:_raw_spin_lock+0xf/0x30
May 26 06:56:53 mr-fox kernel: Code: 31 ff e9 af 2f 15 00 e9 0f 03 00 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 31 c0 ba 01 00 00 00 f0 0f b1 17 <75> 0d 31 c0 31 d2 31 f6 31 ff e9 7d 2f 15 00 89 c6 e9 cb 00 00 00
May 26 06:56:53 mr-fox kernel: RSP: 0018:ffffa401005f0e88 EFLAGS: 00000246
May 26 06:56:53 mr-fox kernel: RAX: 0000000000000000 RBX: ffff9892c7273248 RCX: 0000000000000000
May 26 06:56:53 mr-fox kernel: RDX: 0000000000000001 RSI: ffffffffa587d6d0 RDI: ffff9892c7272ed8
May 26 06:56:53 mr-fox kernel: RBP: ffff9892c7272e40 R08: 0000000000000000 R09: 0000000000000000
May 26 06:56:53 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa401005f0ee8
May 26 06:56:53 mr-fox kernel: R13: ffff98a96ed5bbb0 R14: ffffa401005f0ef0 R15: ffff98a96ed5bb40
May 26 06:56:53 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:56:53 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:56:53 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:56:53 mr-fox kernel: PKRU: 55555554
May 26 06:56:53 mr-fox kernel: Call Trace:
May 26 06:56:53 mr-fox kernel: <NMI>
May 26 06:56:53 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:56:53 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:56:53 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:56:53 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:56:53 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:56:53 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:56:53 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:56:53 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 06:56:53 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 06:56:53 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 06:56:53 mr-fox kernel: </NMI>
May 26 06:56:53 mr-fox kernel: <IRQ>
May 26 06:56:53 mr-fox kernel: tcp_write_timer+0x1e/0xd0
May 26 06:56:53 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 06:56:53 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:56:53 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:56:53 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:56:53 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:56:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:56:53 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:56:53 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:56:53 mr-fox kernel: </IRQ>
May 26 06:56:53 mr-fox kernel: <TASK>
May 26 06:56:53 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:56:53 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 06:56:53 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 06:56:53 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:56:53 mr-fox kernel: RAX: ffff988d55f18ffa RBX: 0000000000000021 RCX: 0000000000000000
May 26 06:56:53 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 06:56:53 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:56:53 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:56:53 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:56:53 mr-fox kernel: xas_load+0x49/0x60
May 26 06:56:53 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:56:53 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:56:53 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:56:53 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:56:53 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:56:53 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:56:53 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:56:53 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:56:53 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:56:53 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:56:53 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:56:53 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:56:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:56:53 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:56:53 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:56:53 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:56:53 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:56:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:56:53 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:56:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:56:53 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:56:53 mr-fox kernel: </TASK>
May 26 06:57:00 mr-fox crond[28177]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:57:00 mr-fox crond[28178]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:57:00 mr-fox crond[28176]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:57:00 mr-fox crond[28179]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:57:00 mr-fox crond[28180]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:57:00 mr-fox CROND[28182]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:57:00 mr-fox CROND[28183]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:57:00 mr-fox CROND[28185]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:57:00 mr-fox CROND[28184]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:57:00 mr-fox CROND[28186]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:57:01 mr-fox CROND[28180]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:57:01 mr-fox CROND[28180]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:57:01 mr-fox CROND[28179]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:57:01 mr-fox CROND[28179]: pam_unix(crond:session): session closed for user root
May 26 06:57:01 mr-fox CROND[28178]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:57:01 mr-fox CROND[28178]: pam_unix(crond:session): session closed for user root
May 26 06:57:01 mr-fox CROND[24424]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:57:01 mr-fox CROND[24424]: pam_unix(crond:session): session closed for user root
May 26 06:58:00 mr-fox crond[29085]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:58:00 mr-fox crond[29084]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:58:00 mr-fox crond[29086]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:58:00 mr-fox crond[29083]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:58:00 mr-fox CROND[29092]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:58:00 mr-fox CROND[29091]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:58:00 mr-fox CROND[29093]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:58:00 mr-fox CROND[29094]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:58:00 mr-fox crond[29088]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:58:00 mr-fox CROND[29098]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:58:00 mr-fox CROND[29088]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:58:00 mr-fox CROND[29088]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:58:00 mr-fox CROND[29086]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:58:00 mr-fox CROND[29086]: pam_unix(crond:session): session closed for user root
May 26 06:58:00 mr-fox CROND[29085]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:58:00 mr-fox CROND[29085]: pam_unix(crond:session): session closed for user root
May 26 06:58:01 mr-fox CROND[28177]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:58:01 mr-fox CROND[28177]: pam_unix(crond:session): session closed for user root
May 26 06:59:00 mr-fox crond[32011]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:59:00 mr-fox crond[32010]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:59:00 mr-fox crond[32014]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:59:00 mr-fox crond[32012]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 06:59:00 mr-fox crond[32015]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 06:59:00 mr-fox CROND[32018]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:59:00 mr-fox CROND[32020]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 06:59:00 mr-fox CROND[32021]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:59:00 mr-fox CROND[32023]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 06:59:00 mr-fox CROND[32024]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 06:59:00 mr-fox CROND[32015]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 06:59:00 mr-fox CROND[32015]: pam_unix(crond:session): session closed for user tinderbox
May 26 06:59:00 mr-fox CROND[32014]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 06:59:00 mr-fox CROND[32014]: pam_unix(crond:session): session closed for user root
May 26 06:59:00 mr-fox CROND[32012]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 06:59:00 mr-fox CROND[32012]: pam_unix(crond:session): session closed for user root
May 26 06:59:00 mr-fox CROND[29084]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 06:59:00 mr-fox CROND[29084]: pam_unix(crond:session): session closed for user root
May 26 06:59:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 06:59:44 mr-fox kernel: rcu: 	21-....: (870072 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=338803
May 26 06:59:44 mr-fox kernel: rcu: 	(t=870073 jiffies g=8794409 q=28418353 ncpus=32)
May 26 06:59:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:59:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:59:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:59:44 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 06:59:44 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 06:59:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 06:59:44 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 06:59:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 06:59:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 06:59:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 06:59:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:59:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:59:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:59:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:59:44 mr-fox kernel: PKRU: 55555554
May 26 06:59:44 mr-fox kernel: Call Trace:
May 26 06:59:44 mr-fox kernel: <IRQ>
May 26 06:59:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 06:59:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 06:59:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 06:59:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 06:59:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 06:59:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 06:59:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:59:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 06:59:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 06:59:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 06:59:44 mr-fox kernel: </IRQ>
May 26 06:59:44 mr-fox kernel: <TASK>
May 26 06:59:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:59:44 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 06:59:44 mr-fox kernel: xas_load+0x49/0x60
May 26 06:59:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 06:59:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:59:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:59:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:59:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:59:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:59:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:59:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:59:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:59:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:59:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:59:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:59:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:59:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:59:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:59:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:59:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:59:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:59:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:59:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:59:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:59:44 mr-fox kernel: </TASK>
May 26 06:59:53 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 872048 jiffies s: 491905 root: 0x2/.
May 26 06:59:53 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 06:59:53 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 06:59:53 mr-fox kernel: NMI backtrace for cpu 21
May 26 06:59:53 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 06:59:53 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 06:59:53 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 06:59:53 mr-fox kernel: RIP: 0010:__nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:59:53 mr-fox kernel: Code: 00 49 8b 07 49 8b 4f 08 48 33 43 10 48 33 4b 18 48 09 c8 75 09 8b 43 20 41 39 47 10 74 7f 48 8b 1b f6 c3 01 75 5b 0f b6 4b 37 <4c> 89 ed 48 8d 04 cd 00 00 00 00 48 29 c8 48 c1 e0 03 48 29 c5 48
May 26 06:59:53 mr-fox kernel: RSP: 0018:ffffa401005f0c28 EFLAGS: 00000246
May 26 06:59:53 mr-fox kernel: RAX: ffff988acb271210 RBX: ffff988d66b3fe10 RCX: 0000000000000000
May 26 06:59:53 mr-fox kernel: RDX: 00000000389089f1 RSI: ffff988acb200000 RDI: ffffffffa6ad7f40
May 26 06:59:53 mr-fox kernel: RBP: ffffa401005f0c88 R08: 0000000000000000 R09: 0000000000000000
May 26 06:59:53 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 000000000000e242
May 26 06:59:53 mr-fox kernel: R13: fffffffffffffff0 R14: ffffffffa6ad7f40 R15: ffffa401005f0c88
May 26 06:59:53 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 06:59:53 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 06:59:53 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 06:59:53 mr-fox kernel: PKRU: 55555554
May 26 06:59:53 mr-fox kernel: Call Trace:
May 26 06:59:53 mr-fox kernel: <NMI>
May 26 06:59:53 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 06:59:53 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 06:59:53 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 06:59:53 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 06:59:53 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 06:59:53 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 06:59:53 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:59:53 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:59:53 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 06:59:53 mr-fox kernel: </NMI>
May 26 06:59:53 mr-fox kernel: <IRQ>
May 26 06:59:53 mr-fox kernel: nf_conntrack_in+0xdc/0x540
May 26 06:59:53 mr-fox kernel: nf_hook_slow+0x3c/0x100
May 26 06:59:53 mr-fox kernel: __ip_local_out+0xc6/0x100
May 26 06:59:53 mr-fox kernel: ? ip_output+0x100/0x100
May 26 06:59:53 mr-fox kernel: ip_local_out+0x16/0x70
May 26 06:59:53 mr-fox kernel: __ip_queue_xmit+0x16b/0x480
May 26 06:59:53 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 06:59:53 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 06:59:53 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 06:59:53 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 06:59:53 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 06:59:53 mr-fox kernel: __run_timers+0x20a/0x240
May 26 06:59:53 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 06:59:53 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 06:59:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:59:53 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 06:59:53 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 06:59:53 mr-fox kernel: </IRQ>
May 26 06:59:53 mr-fox kernel: <TASK>
May 26 06:59:53 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 06:59:53 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 06:59:53 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 06:59:53 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 06:59:53 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 06:59:53 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 06:59:53 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 06:59:53 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 06:59:53 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 06:59:53 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 06:59:53 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 06:59:53 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 06:59:53 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 06:59:53 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 06:59:53 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 06:59:53 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 06:59:53 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 06:59:53 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 06:59:53 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 06:59:53 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 06:59:53 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 06:59:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 06:59:53 mr-fox kernel: process_one_work+0x16a/0x280
May 26 06:59:53 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 06:59:53 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 06:59:53 mr-fox kernel: kthread+0xcb/0xf0
May 26 06:59:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:59:53 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 06:59:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 06:59:53 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 06:59:53 mr-fox kernel: </TASK>
May 26 07:00:00 mr-fox crond[2524]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:00:00 mr-fox crond[2525]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox crond[2527]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox crond[2526]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox crond[2531]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox CROND[2538]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:00:00 mr-fox crond[2528]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox CROND[2540]: (root) CMD (/etc/conf.d/ipv4-rules.sh update; /etc/conf.d/ipv6-rules.sh update)
May 26 07:00:00 mr-fox CROND[2541]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:00:00 mr-fox CROND[2539]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:00:00 mr-fox crond[2530]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:00:00 mr-fox crond[2533]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:00:00 mr-fox crond[2532]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:00:00 mr-fox crond[2534]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:00:00 mr-fox CROND[2543]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:00:00 mr-fox crond[2535]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:00:00 mr-fox crond[2537]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:00:00 mr-fox CROND[2544]: (root) CMD (/opt/torutils/update_tor.sh)
May 26 07:00:00 mr-fox CROND[2545]: (tinderbox) CMD (sudo /opt/tb/bin/collect_data.sh 2>/dev/null)
May 26 07:00:00 mr-fox CROND[2553]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:00:00 mr-fox CROND[2548]: (tinderbox) CMD (sudo /opt/tb/bin/house_keeping.sh >/dev/null)
May 26 07:00:00 mr-fox CROND[2547]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:00:00 mr-fox CROND[2546]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:00:00 mr-fox CROND[2554]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:00:00 mr-fox CROND[2524]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:00:00 mr-fox CROND[2524]: pam_unix(crond:session): session closed for user torproject
May 26 07:00:00 mr-fox CROND[2537]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:00:00 mr-fox CROND[2537]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:00:00 mr-fox CROND[2531]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:00:00 mr-fox CROND[2531]: pam_unix(crond:session): session closed for user root
May 26 07:00:00 mr-fox sudo[2561]: tinderbox : PWD=/home/tinderbox ; USER=root ; COMMAND=/opt/tb/bin/house_keeping.sh
May 26 07:00:00 mr-fox sudo[2560]: tinderbox : PWD=/home/tinderbox ; USER=root ; COMMAND=/opt/tb/bin/collect_data.sh
May 26 07:00:00 mr-fox sudo[2560]: pam_unix(sudo:session): session opened for user root(uid=0) by tinderbox(uid=1003)
May 26 07:00:00 mr-fox sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by tinderbox(uid=1003)
May 26 07:00:00 mr-fox CROND[2533]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:00:00 mr-fox CROND[2533]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:00:00 mr-fox CROND[2530]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:00:00 mr-fox CROND[2530]: pam_unix(crond:session): session closed for user root
May 26 07:00:01 mr-fox CROND[32011]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:00:01 mr-fox CROND[32011]: pam_unix(crond:session): session closed for user root
May 26 07:00:02 mr-fox CROND[2528]: (root) CMDEND (/opt/torutils/update_tor.sh)
May 26 07:00:02 mr-fox CROND[2528]: pam_unix(crond:session): session closed for user root
May 26 07:00:03 mr-fox CROND[2527]: (root) CMDEND (/etc/conf.d/ipv4-rules.sh update; /etc/conf.d/ipv6-rules.sh update)
May 26 07:00:03 mr-fox CROND[2527]: pam_unix(crond:session): session closed for user root
May 26 07:01:00 mr-fox CROND[2345]: (root) CMD (run-parts /etc/cron.hourly)
May 26 07:01:00 mr-fox crond[2341]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:01:00 mr-fox crond[2340]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:01:00 mr-fox crond[2344]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:01:00 mr-fox crond[2343]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:01:00 mr-fox crond[2346]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:01:00 mr-fox CROND[2350]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:01:00 mr-fox CROND[2352]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:01:00 mr-fox CROND[2353]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:01:00 mr-fox CROND[2355]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:01:00 mr-fox CROND[2351]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:01:00 mr-fox CROND[2339]: (root) CMDEND (run-parts /etc/cron.hourly)
May 26 07:01:00 mr-fox CROND[2346]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:01:00 mr-fox CROND[2346]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:01:00 mr-fox CROND[2344]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:01:00 mr-fox CROND[2344]: pam_unix(crond:session): session closed for user root
May 26 07:01:00 mr-fox CROND[2343]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:01:00 mr-fox CROND[2343]: pam_unix(crond:session): session closed for user root
May 26 07:01:01 mr-fox CROND[2526]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:01:01 mr-fox CROND[2526]: pam_unix(crond:session): session closed for user root
May 26 07:02:00 mr-fox crond[4484]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:02:00 mr-fox crond[4486]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:02:00 mr-fox crond[4487]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:02:00 mr-fox crond[4488]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:02:00 mr-fox CROND[4492]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:02:00 mr-fox crond[4483]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:02:00 mr-fox CROND[4493]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:02:00 mr-fox CROND[4494]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:02:00 mr-fox CROND[4495]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:02:00 mr-fox CROND[4496]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:02:00 mr-fox CROND[4488]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:02:00 mr-fox CROND[4488]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:02:00 mr-fox CROND[4487]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:02:00 mr-fox CROND[4487]: pam_unix(crond:session): session closed for user root
May 26 07:02:00 mr-fox CROND[4486]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:02:00 mr-fox CROND[4486]: pam_unix(crond:session): session closed for user root
May 26 07:02:00 mr-fox CROND[2341]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:02:00 mr-fox CROND[2341]: pam_unix(crond:session): session closed for user root
May 26 07:02:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:02:44 mr-fox kernel: rcu: 	21-....: (915076 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=356210
May 26 07:02:44 mr-fox kernel: rcu: 	(t=915077 jiffies g=8794409 q=29608275 ncpus=32)
May 26 07:02:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:02:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:02:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:02:44 mr-fox kernel: RIP: 0010:xas_start+0x2b/0x120
May 26 07:02:44 mr-fox kernel: Code: 53 48 89 fb 48 83 ec 08 48 8b 6f 18 48 89 e8 83 e0 03 0f 84 81 00 00 00 48 81 fd 05 c0 ff ff 76 06 48 83 f8 02 74 5d 48 8b 03 <48> 8b 73 08 48 8b 40 08 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d
May 26 07:02:44 mr-fox kernel: RSP: 0018:ffffa401077e3930 EFLAGS: 00000213
May 26 07:02:44 mr-fox kernel: RAX: ffff988ac61fdda0 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:02:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffa401077e3970
May 26 07:02:44 mr-fox kernel: RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
May 26 07:02:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:02:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:02:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:02:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:02:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:02:44 mr-fox kernel: PKRU: 55555554
May 26 07:02:44 mr-fox kernel: Call Trace:
May 26 07:02:44 mr-fox kernel: <IRQ>
May 26 07:02:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:02:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:02:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 07:02:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:02:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:02:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:02:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:02:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:02:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:02:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:02:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:02:44 mr-fox kernel: </IRQ>
May 26 07:02:44 mr-fox kernel: <TASK>
May 26 07:02:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:02:44 mr-fox kernel: ? xas_start+0x2b/0x120
May 26 07:02:44 mr-fox kernel: xas_load+0xe/0x60
May 26 07:02:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:02:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:02:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:02:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:02:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:02:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:02:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:02:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:02:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:02:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:02:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:02:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:02:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:02:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:02:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:02:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:02:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:02:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:02:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:02:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:02:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:02:44 mr-fox kernel: </TASK>
May 26 07:02:53 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 917103 jiffies s: 491905 root: 0x2/.
May 26 07:02:53 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:02:53 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:02:53 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:02:53 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:02:53 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:02:53 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:02:53 mr-fox kernel: RIP: 0010:iommu_dma_free_iova+0xb3/0x220
May 26 07:02:53 mr-fox kernel: Code: 93 88 00 00 00 65 48 03 15 0a f1 95 5a 49 89 d7 4c 89 ff e8 bf 75 2f 00 4c 89 fe 48 89 df 48 89 04 24 e8 80 f4 ff ff 41 8b 07 <85> c0 0f 84 60 01 00 00 41 8b 47 08 8d 48 01 41 23 4f 0c 41 3b 4f
May 26 07:02:53 mr-fox kernel: RSP: 0018:ffffa401005f0d08 EFLAGS: 00000046
May 26 07:02:53 mr-fox kernel: RAX: 0000000000000001 RBX: ffff988ac1292400 RCX: 000000000000000c
May 26 07:02:53 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:02:53 mr-fox kernel: RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
May 26 07:02:53 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000000c9614
May 26 07:02:53 mr-fox kernel: R13: ffffa401005f0d68 R14: ffffa401005f0d50 R15: ffffc400ffb51870
May 26 07:02:53 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:02:53 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:02:53 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:02:53 mr-fox kernel: PKRU: 55555554
May 26 07:02:53 mr-fox kernel: Call Trace:
May 26 07:02:53 mr-fox kernel: <NMI>
May 26 07:02:53 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:02:53 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:02:53 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:02:53 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:02:53 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:02:53 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:02:53 mr-fox kernel: ? iommu_dma_free_iova+0xb3/0x220
May 26 07:02:53 mr-fox kernel: ? iommu_dma_free_iova+0xb3/0x220
May 26 07:02:53 mr-fox kernel: ? iommu_dma_free_iova+0xb3/0x220
May 26 07:02:53 mr-fox kernel: </NMI>
May 26 07:02:53 mr-fox kernel: <IRQ>
May 26 07:02:53 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 07:02:53 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 07:02:53 mr-fox kernel: igb_poll+0x106/0x1370
May 26 07:02:53 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:02:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:02:53 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:02:53 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:02:53 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:02:53 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:02:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:02:53 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:02:53 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:02:53 mr-fox kernel: </IRQ>
May 26 07:02:53 mr-fox kernel: <TASK>
May 26 07:02:53 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:02:53 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 07:02:53 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 07:02:53 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 07:02:53 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:02:53 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:02:53 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:02:53 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:02:53 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:02:53 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:02:53 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:02:53 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:02:53 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:02:53 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:02:53 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:02:53 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:02:53 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:02:53 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:02:53 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:02:53 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:02:53 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:02:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:02:53 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:02:53 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:02:53 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:02:53 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:02:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:02:53 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:02:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:02:53 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:02:53 mr-fox kernel: </TASK>
May 26 07:03:00 mr-fox crond[6492]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:03:00 mr-fox crond[6493]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:03:00 mr-fox crond[6495]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:03:00 mr-fox crond[6496]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:03:00 mr-fox CROND[6499]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:03:00 mr-fox CROND[6501]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:03:00 mr-fox crond[6494]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:03:00 mr-fox CROND[6502]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:03:00 mr-fox CROND[6503]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:03:00 mr-fox CROND[6504]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:03:00 mr-fox CROND[6496]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:03:00 mr-fox CROND[6496]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:03:00 mr-fox CROND[6495]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:03:00 mr-fox CROND[6495]: pam_unix(crond:session): session closed for user root
May 26 07:03:00 mr-fox CROND[6494]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:03:00 mr-fox CROND[6494]: pam_unix(crond:session): session closed for user root
May 26 07:03:00 mr-fox CROND[4484]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:03:00 mr-fox CROND[4484]: pam_unix(crond:session): session closed for user root
May 26 07:04:00 mr-fox crond[10549]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:04:00 mr-fox crond[10550]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:04:00 mr-fox crond[10551]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:04:00 mr-fox crond[10548]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:04:00 mr-fox crond[10552]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:04:00 mr-fox CROND[10556]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:04:00 mr-fox CROND[10557]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:04:00 mr-fox CROND[10558]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:04:00 mr-fox CROND[10559]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:04:00 mr-fox CROND[10560]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:04:00 mr-fox CROND[10552]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:04:00 mr-fox CROND[10552]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:04:00 mr-fox CROND[10551]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:04:00 mr-fox CROND[10551]: pam_unix(crond:session): session closed for user root
May 26 07:04:01 mr-fox CROND[10550]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:04:01 mr-fox CROND[10550]: pam_unix(crond:session): session closed for user root
May 26 07:04:01 mr-fox CROND[6493]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:04:01 mr-fox CROND[6493]: pam_unix(crond:session): session closed for user root
May 26 07:05:00 mr-fox crond[13200]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:05:00 mr-fox crond[13199]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:05:00 mr-fox crond[13201]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:05:00 mr-fox crond[13205]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:05:00 mr-fox crond[13204]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:05:00 mr-fox crond[13206]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:05:00 mr-fox crond[13203]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:05:00 mr-fox CROND[13209]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:05:00 mr-fox CROND[13210]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:05:00 mr-fox crond[13207]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:05:00 mr-fox CROND[13211]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:05:00 mr-fox CROND[13212]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:05:00 mr-fox CROND[13213]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:05:00 mr-fox CROND[13214]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:05:00 mr-fox CROND[13215]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:05:00 mr-fox CROND[13216]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:05:00 mr-fox CROND[13199]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:05:00 mr-fox CROND[13199]: pam_unix(crond:session): session closed for user torproject
May 26 07:05:00 mr-fox CROND[13207]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:05:00 mr-fox CROND[13207]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:05:00 mr-fox CROND[13204]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:05:00 mr-fox CROND[13204]: pam_unix(crond:session): session closed for user root
May 26 07:05:00 mr-fox CROND[13205]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:05:00 mr-fox CROND[13205]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:05:00 mr-fox CROND[13203]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:05:00 mr-fox CROND[13203]: pam_unix(crond:session): session closed for user root
May 26 07:05:01 mr-fox CROND[10549]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:05:01 mr-fox CROND[10549]: pam_unix(crond:session): session closed for user root
May 26 07:05:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:05:44 mr-fox kernel: rcu: 	21-....: (960080 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=374122
May 26 07:05:44 mr-fox kernel: rcu: 	(t=960081 jiffies g=8794409 q=30803666 ncpus=32)
May 26 07:05:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:05:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:05:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:05:44 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 07:05:44 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 07:05:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 07:05:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:05:44 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:05:44 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:05:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:05:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:05:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:05:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:05:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:05:44 mr-fox kernel: PKRU: 55555554
May 26 07:05:44 mr-fox kernel: Call Trace:
May 26 07:05:44 mr-fox kernel: <IRQ>
May 26 07:05:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:05:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:05:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:05:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:05:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:05:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:05:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:05:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:05:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:05:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:05:44 mr-fox kernel: </IRQ>
May 26 07:05:44 mr-fox kernel: <TASK>
May 26 07:05:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:05:44 mr-fox kernel: ? xas_load+0x35/0x60
May 26 07:05:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:05:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:05:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:05:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:05:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:05:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:05:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:05:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:05:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:05:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:05:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:05:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:05:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:05:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:05:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:05:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:05:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:05:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:05:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:05:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:05:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:05:44 mr-fox kernel: </TASK>
May 26 07:05:53 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 962159 jiffies s: 491905 root: 0x2/.
May 26 07:05:53 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:05:53 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:05:53 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:05:53 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:05:53 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:05:53 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:05:53 mr-fox kernel: RIP: 0010:skb_clone+0x21/0xd0
May 26 07:05:53 mr-fox kernel: Code: 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 53 48 89 fb 48 83 ec 08 48 85 ff 74 1c 8b 87 b8 00 00 00 48 03 87 c0 00 00 00 0f b6 10 <f6> c2 01 74 07 48 83 78 28 00 75 71 0f b6 43 7e 89 c2 83 e2 0c 80
May 26 07:05:53 mr-fox kernel: RSP: 0018:ffffa401005f0d50 EFLAGS: 00000282
May 26 07:05:53 mr-fox kernel: RAX: ffff98998f234e80 RBX: ffff988dc55d41c0 RCX: 0000000000000801
May 26 07:05:53 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000820 RDI: ffff988dc55d41c0
May 26 07:05:53 mr-fox kernel: RBP: ffff988dc55d41c0 R08: 0000000088737df5 R09: 0000000000000000
May 26 07:05:53 mr-fox kernel: R10: 0000000000000001 R11: 0000000000000000 R12: ffff988dc55d4218
May 26 07:05:53 mr-fox kernel: R13: 0000000088737df5 R14: 00002154c5d8553e R15: 00000000000005a8
May 26 07:05:53 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:05:53 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:05:53 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:05:53 mr-fox kernel: PKRU: 55555554
May 26 07:05:53 mr-fox kernel: Call Trace:
May 26 07:05:53 mr-fox kernel: <NMI>
May 26 07:05:53 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:05:53 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:05:53 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:05:53 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:05:53 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:05:53 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:05:53 mr-fox kernel: ? skb_clone+0x21/0xd0
May 26 07:05:53 mr-fox kernel: ? skb_clone+0x21/0xd0
May 26 07:05:53 mr-fox kernel: ? skb_clone+0x21/0xd0
May 26 07:05:53 mr-fox kernel: </NMI>
May 26 07:05:53 mr-fox kernel: <IRQ>
May 26 07:05:53 mr-fox kernel: __tcp_transmit_skb+0x68c/0xd30
May 26 07:05:53 mr-fox kernel: tcp_write_xmit+0x4c9/0x13b0
May 26 07:05:53 mr-fox kernel: ? ktime_get+0x42/0xb0
May 26 07:05:53 mr-fox kernel: tcp_send_loss_probe+0x16f/0x260
May 26 07:05:53 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:05:53 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:05:53 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:05:53 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:05:53 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:05:53 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:05:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:05:53 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:05:53 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:05:53 mr-fox kernel: </IRQ>
May 26 07:05:53 mr-fox kernel: <TASK>
May 26 07:05:53 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:05:53 mr-fox kernel: RIP: 0010:xas_load+0x22/0x60
May 26 07:05:53 mr-fox kernel: Code: be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f <5b> 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68
May 26 07:05:53 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:05:53 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:05:53 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:05:53 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:05:53 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:05:53 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:05:53 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:05:53 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:05:53 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:05:53 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:05:53 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:05:53 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:05:53 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:05:53 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:05:53 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:05:53 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:05:53 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:05:53 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:05:53 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:05:53 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:05:53 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:05:53 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:05:53 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:05:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:05:53 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:05:53 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:05:53 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:05:53 mr-fox kernel: </TASK>
May 26 07:06:00 mr-fox crond[15611]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:06:00 mr-fox crond[15610]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:06:00 mr-fox crond[15612]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:06:00 mr-fox crond[15613]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:06:00 mr-fox crond[15614]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:06:00 mr-fox CROND[15617]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:06:00 mr-fox CROND[15618]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:06:00 mr-fox CROND[15620]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:06:00 mr-fox CROND[15622]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:06:00 mr-fox CROND[15623]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:06:00 mr-fox CROND[15614]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:06:00 mr-fox CROND[15614]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:06:00 mr-fox CROND[15613]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:06:00 mr-fox CROND[15613]: pam_unix(crond:session): session closed for user root
May 26 07:06:00 mr-fox CROND[15612]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:06:00 mr-fox CROND[15612]: pam_unix(crond:session): session closed for user root
May 26 07:06:00 mr-fox CROND[13201]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:06:00 mr-fox CROND[13201]: pam_unix(crond:session): session closed for user root
May 26 07:07:00 mr-fox crond[19516]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:07:00 mr-fox crond[19519]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:07:00 mr-fox crond[19520]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:07:00 mr-fox crond[19521]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:07:00 mr-fox crond[19518]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:07:00 mr-fox CROND[19525]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:07:00 mr-fox CROND[19526]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:07:00 mr-fox CROND[19527]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:07:00 mr-fox CROND[19528]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:07:00 mr-fox CROND[19529]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:07:00 mr-fox CROND[19521]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:07:00 mr-fox CROND[19521]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:07:00 mr-fox CROND[19520]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:07:00 mr-fox CROND[19520]: pam_unix(crond:session): session closed for user root
May 26 07:07:00 mr-fox CROND[19519]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:07:00 mr-fox CROND[19519]: pam_unix(crond:session): session closed for user root
May 26 07:07:00 mr-fox CROND[15611]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:07:00 mr-fox CROND[15611]: pam_unix(crond:session): session closed for user root
May 26 07:08:00 mr-fox crond[13553]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:08:00 mr-fox crond[13554]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:08:00 mr-fox crond[13555]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:08:00 mr-fox crond[13556]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:08:00 mr-fox CROND[13561]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:08:00 mr-fox crond[13558]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:08:00 mr-fox CROND[13562]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:08:00 mr-fox CROND[13565]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:08:00 mr-fox CROND[13566]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:08:00 mr-fox CROND[13568]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:08:01 mr-fox CROND[13558]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:08:01 mr-fox CROND[13558]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:08:01 mr-fox CROND[13556]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:08:01 mr-fox CROND[13556]: pam_unix(crond:session): session closed for user root
May 26 07:08:01 mr-fox CROND[13555]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:08:01 mr-fox CROND[13555]: pam_unix(crond:session): session closed for user root
May 26 07:08:01 mr-fox CROND[19518]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:08:01 mr-fox CROND[19518]: pam_unix(crond:session): session closed for user root
May 26 07:08:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:08:44 mr-fox kernel: rcu: 	21-....: (1005084 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=391805
May 26 07:08:44 mr-fox kernel: rcu: 	(t=1005085 jiffies g=8794409 q=32038023 ncpus=32)
May 26 07:08:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:08:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:08:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:08:44 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 07:08:44 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 07:08:44 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 07:08:44 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 07:08:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:08:44 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:08:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:08:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:08:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:08:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:08:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:08:44 mr-fox kernel: PKRU: 55555554
May 26 07:08:44 mr-fox kernel: Call Trace:
May 26 07:08:44 mr-fox kernel: <IRQ>
May 26 07:08:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:08:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:08:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:08:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:08:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:08:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:08:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:08:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:08:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:08:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:08:44 mr-fox kernel: </IRQ>
May 26 07:08:44 mr-fox kernel: <TASK>
May 26 07:08:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:08:44 mr-fox kernel: ? filemap_get_entry+0x6d/0x160
May 26 07:08:44 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 07:08:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:08:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:08:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:08:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:08:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:08:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:08:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:08:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:08:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:08:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:08:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:08:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:08:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:08:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:08:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:08:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:08:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:08:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:08:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:08:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:08:44 mr-fox kernel: </TASK>
May 26 07:08:54 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1007216 jiffies s: 491905 root: 0x2/.
May 26 07:08:54 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:08:54 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:08:54 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:08:54 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:08:54 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:08:54 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:08:54 mr-fox kernel: RIP: 0010:kmem_cache_free_bulk+0x2fa/0x310
May 26 07:08:54 mr-fox kernel: Code: fe ff ff 4c 8b 4c 24 08 44 8b 44 24 14 4c 89 d1 48 89 da 48 8b 74 24 48 4c 89 f7 4c 89 5c 24 40 e8 6b b2 ff ff 4c 8b 5c 24 40 <eb> 85 0f 0b e8 cd 1a 76 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90
May 26 07:08:54 mr-fox kernel: RSP: 0018:ffffa401005f0d10 EFLAGS: 00000246
May 26 07:08:54 mr-fox kernel: RAX: 0000000000000000 RBX: ffff988b2447cb00 RCX: 0000000000000000
May 26 07:08:54 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:08:54 mr-fox kernel: RBP: ffffa401005f0dc0 R08: 0000000000000000 R09: 0000000000000000
May 26 07:08:54 mr-fox kernel: R10: 0000000000000000 R11: 000000000000000e R12: ffff988b2447cb00
May 26 07:08:54 mr-fox kernel: R13: ffff988ac0137b00 R14: ffff988ac0137b00 R15: ffff98a96ed60970
May 26 07:08:54 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:08:54 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:08:54 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:08:54 mr-fox kernel: PKRU: 55555554
May 26 07:08:54 mr-fox kernel: Call Trace:
May 26 07:08:54 mr-fox kernel: <NMI>
May 26 07:08:54 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:08:54 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:08:54 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:08:54 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:08:54 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:08:54 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:08:54 mr-fox kernel: ? kmem_cache_free_bulk+0x2fa/0x310
May 26 07:08:54 mr-fox kernel: ? kmem_cache_free_bulk+0x2fa/0x310
May 26 07:08:54 mr-fox kernel: ? kmem_cache_free_bulk+0x2fa/0x310
May 26 07:08:54 mr-fox kernel: </NMI>
May 26 07:08:54 mr-fox kernel: <IRQ>
May 26 07:08:54 mr-fox kernel: ? napi_skb_cache_put+0x80/0xc0
May 26 07:08:54 mr-fox kernel: napi_skb_cache_put+0x80/0xc0
May 26 07:08:54 mr-fox kernel: igb_poll+0xea/0x1370
May 26 07:08:54 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:08:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:08:54 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:08:54 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:08:54 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:08:54 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:08:54 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:08:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:08:54 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:08:54 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:08:54 mr-fox kernel: </IRQ>
May 26 07:08:54 mr-fox kernel: <TASK>
May 26 07:08:54 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:08:54 mr-fox kernel: RIP: 0010:xas_descend+0x1a/0xd0
May 26 07:08:54 mr-fox kernel: Code: 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f <0f> 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5
May 26 07:08:54 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000293
May 26 07:08:54 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 000000004cd994a1 RCX: 000000000000000c
May 26 07:08:54 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988b1fdbb478 RDI: ffffa401077e3970
May 26 07:08:54 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 07:08:54 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:08:54 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:08:54 mr-fox kernel: xas_load+0x49/0x60
May 26 07:08:54 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:08:54 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:08:54 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:08:54 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:08:54 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:08:54 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:08:54 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:08:54 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:08:54 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:08:54 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:08:54 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:08:54 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:08:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:08:54 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:08:54 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:08:54 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:08:54 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:08:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:08:54 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:08:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:08:54 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:08:54 mr-fox kernel: </TASK>
May 26 07:09:00 mr-fox crond[15081]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:09:00 mr-fox crond[15082]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:09:00 mr-fox crond[15084]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:09:00 mr-fox crond[15085]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:09:00 mr-fox crond[15086]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:09:00 mr-fox CROND[15089]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:09:00 mr-fox CROND[15090]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:09:00 mr-fox CROND[15091]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:09:00 mr-fox CROND[15093]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:09:00 mr-fox CROND[15092]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:09:00 mr-fox CROND[15086]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:09:00 mr-fox CROND[15086]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:09:00 mr-fox CROND[15085]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:09:00 mr-fox CROND[15085]: pam_unix(crond:session): session closed for user root
May 26 07:09:00 mr-fox CROND[15084]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:09:00 mr-fox CROND[15084]: pam_unix(crond:session): session closed for user root
May 26 07:09:01 mr-fox CROND[13554]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:09:01 mr-fox CROND[13554]: pam_unix(crond:session): session closed for user root
May 26 07:10:00 mr-fox crond[18098]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:10:00 mr-fox crond[18104]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:10:00 mr-fox crond[18101]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:10:00 mr-fox crond[18100]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:10:00 mr-fox crond[18102]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:10:00 mr-fox crond[18103]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:10:00 mr-fox crond[18105]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:10:00 mr-fox crond[18106]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:10:00 mr-fox CROND[18112]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:10:00 mr-fox CROND[18113]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:10:00 mr-fox CROND[18115]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:10:00 mr-fox CROND[18114]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:10:00 mr-fox CROND[18116]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:10:00 mr-fox CROND[18117]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:10:00 mr-fox CROND[18118]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:10:00 mr-fox CROND[18119]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:10:00 mr-fox CROND[18098]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:10:00 mr-fox CROND[18098]: pam_unix(crond:session): session closed for user torproject
May 26 07:10:00 mr-fox CROND[18106]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:10:00 mr-fox CROND[18106]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:10:00 mr-fox CROND[18103]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:10:00 mr-fox CROND[18103]: pam_unix(crond:session): session closed for user root
May 26 07:10:00 mr-fox CROND[18104]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:10:00 mr-fox CROND[18104]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:10:00 mr-fox CROND[18102]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:10:00 mr-fox CROND[18102]: pam_unix(crond:session): session closed for user root
May 26 07:10:00 mr-fox CROND[15082]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:10:00 mr-fox CROND[15082]: pam_unix(crond:session): session closed for user root
May 26 07:11:00 mr-fox crond[21417]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:11:00 mr-fox crond[21416]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:11:00 mr-fox crond[21418]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:11:00 mr-fox crond[21419]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:11:00 mr-fox CROND[21425]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:11:00 mr-fox CROND[21426]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:11:00 mr-fox CROND[21428]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:11:00 mr-fox CROND[21427]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:11:00 mr-fox crond[21421]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:11:00 mr-fox CROND[21431]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:11:00 mr-fox CROND[21421]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:11:00 mr-fox CROND[21421]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:11:00 mr-fox CROND[21419]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:11:00 mr-fox CROND[21419]: pam_unix(crond:session): session closed for user root
May 26 07:11:00 mr-fox CROND[21418]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:11:00 mr-fox CROND[21418]: pam_unix(crond:session): session closed for user root
May 26 07:11:01 mr-fox CROND[18101]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:11:01 mr-fox CROND[18101]: pam_unix(crond:session): session closed for user root
May 26 07:11:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:11:44 mr-fox kernel: rcu: 	21-....: (1050088 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=409781
May 26 07:11:44 mr-fox kernel: rcu: 	(t=1050089 jiffies g=8794409 q=33148757 ncpus=32)
May 26 07:11:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:11:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:11:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:11:44 mr-fox kernel: RIP: 0010:xas_load+0xe/0x60
May 26 07:11:44 mr-fox kernel: Code: 00 00 eb 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff <48> 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d
May 26 07:11:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:11:44 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:11:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:11:44 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:11:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:11:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:11:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:11:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:11:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:11:44 mr-fox kernel: PKRU: 55555554
May 26 07:11:44 mr-fox kernel: Call Trace:
May 26 07:11:44 mr-fox kernel: <IRQ>
May 26 07:11:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:11:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:11:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:11:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:11:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:11:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:11:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:11:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:11:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:11:44 mr-fox kernel: </IRQ>
May 26 07:11:44 mr-fox kernel: <TASK>
May 26 07:11:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:11:44 mr-fox kernel: ? xas_load+0xe/0x60
May 26 07:11:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:11:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:11:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:11:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:11:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:11:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:11:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:11:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:11:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:11:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:11:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:11:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:11:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:11:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:11:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:11:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:11:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:11:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:11:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:11:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:11:44 mr-fox kernel: </TASK>
May 26 07:11:54 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1052272 jiffies s: 491905 root: 0x2/.
May 26 07:11:54 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:11:54 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:11:54 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:11:54 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:11:54 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:11:54 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:11:54 mr-fox kernel: RIP: 0010:iommu_v1_map_pages+0x629/0x8f0
May 26 07:11:54 mr-fox kernel: Code: b4 24 00 01 00 00 eb 1e 8d 42 fe 83 f8 04 77 0d 48 8d b4 24 b0 00 00 00 e8 24 f3 ff ff 49 83 c4 08 4d 39 ec 74 53 49 8b 14 24 <31> f6 48 89 d0 f0 49 0f b1 34 24 0f 85 9a 01 00 00 f6 c2 01 74 dd
May 26 07:11:54 mr-fox kernel: RSP: 0018:ffffa401005f08b0 EFLAGS: 00000286
May 26 07:11:54 mr-fox kernel: RAX: ffff988acd41f000 RBX: 6000000000000001 RCX: 000000000000000c
May 26 07:11:54 mr-fox kernel: RDX: 0000000000000000 RSI: 000ffffffffff000 RDI: 0000000000000000
May 26 07:11:54 mr-fox kernel: RBP: 000ffffffffff000 R08: ffff988acd41fa30 R09: 0000000000000001
May 26 07:11:54 mr-fox kernel: R10: 0000000000001000 R11: ffffa401005f09f0 R12: ffff988acd41fa30
May 26 07:11:54 mr-fox kernel: R13: ffff988acd41fa38 R14: 0000000000000820 R15: ffff988acd41fa30
May 26 07:11:54 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:11:54 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:11:54 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:11:54 mr-fox kernel: PKRU: 55555554
May 26 07:11:54 mr-fox kernel: Call Trace:
May 26 07:11:54 mr-fox kernel: <NMI>
May 26 07:11:54 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:11:54 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:11:54 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:11:54 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:11:54 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:11:54 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:11:54 mr-fox kernel: ? iommu_v1_map_pages+0x629/0x8f0
May 26 07:11:54 mr-fox kernel: ? iommu_v1_map_pages+0x629/0x8f0
May 26 07:11:54 mr-fox kernel: ? iommu_v1_map_pages+0x629/0x8f0
May 26 07:11:54 mr-fox kernel: </NMI>
May 26 07:11:54 mr-fox kernel: <IRQ>
May 26 07:11:54 mr-fox kernel: __iommu_map+0x106/0x1b0
May 26 07:11:54 mr-fox kernel: iommu_map+0x27/0xa0
May 26 07:11:54 mr-fox kernel: __iommu_dma_map+0x87/0xf0
May 26 07:11:54 mr-fox kernel: iommu_dma_map_page+0xc2/0x230
May 26 07:11:54 mr-fox kernel: igb_xmit_frame_ring+0x91b/0xc00
May 26 07:11:54 mr-fox kernel: ? netif_skb_features+0x93/0x2c0
May 26 07:11:54 mr-fox kernel: dev_hard_start_xmit+0xa0/0xf0
May 26 07:11:54 mr-fox kernel: sch_direct_xmit+0x8d/0x290
May 26 07:11:54 mr-fox kernel: __dev_queue_xmit+0x49a/0x9a0
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: ip_finish_output2+0x258/0x500
May 26 07:11:54 mr-fox kernel: __ip_queue_xmit+0x16b/0x480
May 26 07:11:54 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 07:11:54 mr-fox kernel: __tcp_retransmit_skb+0x1a9/0x800
May 26 07:11:54 mr-fox kernel: ? lock_timer_base+0x2f/0xc0
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 07:11:54 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:11:54 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:11:54 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:11:54 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:11:54 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:11:54 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:11:54 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:11:54 mr-fox kernel: </IRQ>
May 26 07:11:54 mr-fox kernel: <TASK>
May 26 07:11:54 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:11:54 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 07:11:54 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 07:11:54 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 07:11:54 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:11:54 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:11:54 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:11:54 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:11:54 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:11:54 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:11:54 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:11:54 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:11:54 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:11:54 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:11:54 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:11:54 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:11:54 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:11:54 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:11:54 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:11:54 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:11:54 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:11:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:11:54 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:11:54 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:11:54 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:11:54 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:11:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:11:54 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:11:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:11:54 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:11:54 mr-fox kernel: </TASK>
May 26 07:12:00 mr-fox crond[25328]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:12:00 mr-fox crond[25329]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:12:00 mr-fox crond[25327]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:12:00 mr-fox crond[25330]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:12:00 mr-fox CROND[25333]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:12:00 mr-fox CROND[25334]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:12:00 mr-fox crond[25325]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:12:00 mr-fox CROND[25335]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:12:00 mr-fox CROND[25336]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:12:00 mr-fox CROND[25337]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:12:00 mr-fox CROND[25330]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:12:00 mr-fox CROND[25330]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:12:00 mr-fox CROND[25329]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:12:00 mr-fox CROND[25329]: pam_unix(crond:session): session closed for user root
May 26 07:12:00 mr-fox CROND[25328]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:12:00 mr-fox CROND[25328]: pam_unix(crond:session): session closed for user root
May 26 07:12:01 mr-fox CROND[21417]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:12:01 mr-fox CROND[21417]: pam_unix(crond:session): session closed for user root
May 26 07:13:00 mr-fox crond[27159]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:13:00 mr-fox crond[27161]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:13:00 mr-fox crond[27160]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:13:00 mr-fox crond[27162]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:13:00 mr-fox CROND[27165]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:13:00 mr-fox CROND[27166]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:13:00 mr-fox CROND[27168]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:13:00 mr-fox CROND[27167]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:13:00 mr-fox crond[27163]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:13:00 mr-fox CROND[27173]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:13:00 mr-fox CROND[27163]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:13:00 mr-fox CROND[27163]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:13:00 mr-fox CROND[27162]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:13:00 mr-fox CROND[27162]: pam_unix(crond:session): session closed for user root
May 26 07:13:00 mr-fox CROND[27161]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:13:00 mr-fox CROND[27161]: pam_unix(crond:session): session closed for user root
May 26 07:13:00 mr-fox CROND[25327]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:13:00 mr-fox CROND[25327]: pam_unix(crond:session): session closed for user root
May 26 07:14:00 mr-fox crond[29486]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:14:00 mr-fox crond[29487]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:14:00 mr-fox crond[29489]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:14:00 mr-fox crond[29490]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:14:00 mr-fox crond[29488]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:14:00 mr-fox CROND[29494]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:14:00 mr-fox CROND[29496]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:14:00 mr-fox CROND[29495]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:14:00 mr-fox CROND[29497]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:14:00 mr-fox CROND[29498]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:14:00 mr-fox CROND[29490]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:14:00 mr-fox CROND[29490]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:14:00 mr-fox CROND[29489]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:14:00 mr-fox CROND[29489]: pam_unix(crond:session): session closed for user root
May 26 07:14:00 mr-fox CROND[29488]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:14:00 mr-fox CROND[29488]: pam_unix(crond:session): session closed for user root
May 26 07:14:00 mr-fox CROND[27160]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:14:00 mr-fox CROND[27160]: pam_unix(crond:session): session closed for user root
May 26 07:14:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:14:44 mr-fox kernel: rcu: 	21-....: (1095092 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=427902
May 26 07:14:44 mr-fox kernel: rcu: 	(t=1095093 jiffies g=8794409 q=34188666 ncpus=32)
May 26 07:14:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:14:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:14:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:14:44 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 07:14:44 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 07:14:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:14:44 mr-fox kernel: RAX: ffff988acf90e6ca RBX: 0000000000000036 RCX: 0000000000000012
May 26 07:14:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 07:14:44 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:14:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:14:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:14:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:14:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:14:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:14:44 mr-fox kernel: PKRU: 55555554
May 26 07:14:44 mr-fox kernel: Call Trace:
May 26 07:14:44 mr-fox kernel: <IRQ>
May 26 07:14:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:14:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:14:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:14:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:14:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:14:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:14:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:14:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:14:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:14:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:14:44 mr-fox kernel: </IRQ>
May 26 07:14:44 mr-fox kernel: <TASK>
May 26 07:14:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:14:44 mr-fox kernel: ? xas_descend+0x26/0xd0
May 26 07:14:44 mr-fox kernel: xas_load+0x49/0x60
May 26 07:14:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:14:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:14:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:14:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:14:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:14:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:14:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:14:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:14:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:14:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:14:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:14:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:14:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:14:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:14:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:14:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:14:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:14:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:14:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:14:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:14:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:14:44 mr-fox kernel: </TASK>
May 26 07:14:54 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1097328 jiffies s: 491905 root: 0x2/.
May 26 07:14:54 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:14:54 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:14:54 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:14:54 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:14:54 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:14:54 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:14:54 mr-fox kernel: RIP: 0010:clear_page_erms+0xb/0x20
May 26 07:14:54 mr-fox kernel: Code: 48 89 47 20 48 89 47 28 48 89 47 30 48 89 47 38 48 8d 7f 40 75 d9 90 e9 be 8a 1a 00 0f 1f 00 f3 0f 1e fa b9 00 10 00 00 31 c0 <f3> aa e9 a9 8a 1a 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 f3
May 26 07:14:54 mr-fox kernel: RSP: 0018:ffffa401005f0d58 EFLAGS: 00000246
May 26 07:14:54 mr-fox kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000920
May 26 07:14:54 mr-fox kernel: RDX: ffffcf62f4cf5740 RSI: ffffcf62f4cf5800 RDI: ffff9896f3d5d6e0
May 26 07:14:54 mr-fox kernel: RBP: ffffcf62f4cf5600 R08: 0000000000000001 R09: 0000000000000008
May 26 07:14:54 mr-fox kernel: R10: 0000000000000003 R11: 0000000000000000 R12: 0000000000000003
May 26 07:14:54 mr-fox kernel: R13: ffffcf62f4cf5800 R14: 0000000000000001 R15: 0000000000000008
May 26 07:14:54 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:14:54 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:14:54 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:14:54 mr-fox kernel: PKRU: 55555554
May 26 07:14:54 mr-fox kernel: Call Trace:
May 26 07:14:54 mr-fox kernel: <NMI>
May 26 07:14:54 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:14:54 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:14:54 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:14:54 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:14:54 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:14:54 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:14:54 mr-fox kernel: ? clear_page_erms+0xb/0x20
May 26 07:14:54 mr-fox kernel: ? clear_page_erms+0xb/0x20
May 26 07:14:54 mr-fox kernel: ? clear_page_erms+0xb/0x20
May 26 07:14:54 mr-fox kernel: </NMI>
May 26 07:14:54 mr-fox kernel: <IRQ>
May 26 07:14:54 mr-fox kernel: free_unref_page_prepare+0x114/0x300
May 26 07:14:54 mr-fox kernel: free_unref_page+0x2f/0x170
May 26 07:14:54 mr-fox kernel: skb_release_data.isra.0+0xfb/0x1e0
May 26 07:14:54 mr-fox kernel: __kfree_skb+0x24/0x30
May 26 07:14:54 mr-fox kernel: tcp_write_queue_purge+0xed/0x310
May 26 07:14:54 mr-fox kernel: tcp_retransmit_timer+0x28e/0xa60
May 26 07:14:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:14:54 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:14:54 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:14:54 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:14:54 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:14:54 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:14:54 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:14:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:14:54 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:14:54 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:14:54 mr-fox kernel: </IRQ>
May 26 07:14:54 mr-fox kernel: <TASK>
May 26 07:14:54 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:14:54 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:14:54 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:14:54 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:14:54 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:14:54 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:14:54 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:14:54 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:14:54 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:14:54 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:14:54 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:14:54 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:14:54 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:14:54 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:14:54 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:14:54 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:14:54 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:14:54 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:14:54 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:14:54 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:14:54 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:14:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:14:54 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:14:54 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:14:54 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:14:54 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:14:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:14:54 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:14:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:14:54 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:14:54 mr-fox kernel: </TASK>
May 26 07:15:00 mr-fox crond[32184]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:15:00 mr-fox crond[32183]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:15:00 mr-fox crond[32180]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:15:00 mr-fox crond[32181]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:15:00 mr-fox crond[32185]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:15:00 mr-fox crond[32186]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:15:00 mr-fox CROND[32191]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:15:00 mr-fox CROND[32193]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:15:00 mr-fox CROND[32194]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:15:00 mr-fox CROND[32192]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:15:00 mr-fox crond[32189]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:15:00 mr-fox crond[32187]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:15:00 mr-fox CROND[32195]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:15:00 mr-fox CROND[32196]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:15:00 mr-fox CROND[32197]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:15:00 mr-fox CROND[32199]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:15:00 mr-fox CROND[32180]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:15:00 mr-fox CROND[32180]: pam_unix(crond:session): session closed for user torproject
May 26 07:15:00 mr-fox CROND[32189]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:15:00 mr-fox CROND[32189]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:15:00 mr-fox CROND[32185]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:15:00 mr-fox CROND[32185]: pam_unix(crond:session): session closed for user root
May 26 07:15:01 mr-fox CROND[32186]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:15:01 mr-fox CROND[32186]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:15:01 mr-fox CROND[32184]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:15:01 mr-fox CROND[32184]: pam_unix(crond:session): session closed for user root
May 26 07:15:01 mr-fox CROND[29487]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:15:01 mr-fox CROND[29487]: pam_unix(crond:session): session closed for user root
May 26 07:16:00 mr-fox crond[3442]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:16:00 mr-fox crond[3446]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:16:00 mr-fox crond[3443]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:16:00 mr-fox crond[3447]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:16:00 mr-fox crond[3444]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:16:00 mr-fox CROND[3450]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:16:00 mr-fox CROND[3451]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:16:00 mr-fox CROND[3452]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:16:00 mr-fox CROND[3456]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:16:00 mr-fox CROND[3457]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:16:00 mr-fox CROND[3447]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:16:00 mr-fox CROND[3447]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:16:00 mr-fox CROND[3446]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:16:00 mr-fox CROND[3446]: pam_unix(crond:session): session closed for user root
May 26 07:16:00 mr-fox CROND[3444]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:16:00 mr-fox CROND[3444]: pam_unix(crond:session): session closed for user root
May 26 07:16:01 mr-fox CROND[32183]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:16:01 mr-fox CROND[32183]: pam_unix(crond:session): session closed for user root
May 26 07:17:00 mr-fox crond[7744]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:17:00 mr-fox crond[7743]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:17:00 mr-fox crond[7747]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:17:00 mr-fox crond[7746]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:17:00 mr-fox crond[7745]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:17:00 mr-fox CROND[7751]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:17:00 mr-fox CROND[7752]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:17:00 mr-fox CROND[7753]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:17:00 mr-fox CROND[7754]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:17:00 mr-fox CROND[7755]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:17:00 mr-fox CROND[7747]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:17:00 mr-fox CROND[7747]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:17:00 mr-fox CROND[7746]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:17:00 mr-fox CROND[7746]: pam_unix(crond:session): session closed for user root
May 26 07:17:00 mr-fox CROND[7745]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:17:00 mr-fox CROND[7745]: pam_unix(crond:session): session closed for user root
May 26 07:17:00 mr-fox CROND[3443]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:17:00 mr-fox CROND[3443]: pam_unix(crond:session): session closed for user root
May 26 07:17:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:17:44 mr-fox kernel: rcu: 	21-....: (1140096 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=445939
May 26 07:17:44 mr-fox kernel: rcu: 	(t=1140097 jiffies g=8794409 q=35247467 ncpus=32)
May 26 07:17:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:17:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:17:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:17:44 mr-fox kernel: RIP: 0010:xas_load+0x11/0x60
May 26 07:17:44 mr-fox kernel: Code: 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 <83> e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31
May 26 07:17:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:17:44 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:17:44 mr-fox kernel: RDX: ffff988ac0754ffa RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:17:44 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:17:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:17:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:17:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:17:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:17:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:17:44 mr-fox kernel: PKRU: 55555554
May 26 07:17:44 mr-fox kernel: Call Trace:
May 26 07:17:44 mr-fox kernel: <IRQ>
May 26 07:17:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:17:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:17:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:17:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:17:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:17:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:17:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:17:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:17:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:17:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:17:44 mr-fox kernel: </IRQ>
May 26 07:17:44 mr-fox kernel: <TASK>
May 26 07:17:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:17:44 mr-fox kernel: ? xas_load+0x11/0x60
May 26 07:17:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:17:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:17:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:17:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:17:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:17:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:17:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:17:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:17:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:17:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:17:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:17:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:17:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:17:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:17:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:17:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:17:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:17:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:17:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:17:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:17:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:17:44 mr-fox kernel: </TASK>
May 26 07:17:54 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1142384 jiffies s: 491905 root: 0x2/.
May 26 07:17:54 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:17:54 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:17:54 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:17:54 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:17:54 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:17:54 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:17:54 mr-fox kernel: RIP: 0010:tcp4_gro_complete+0x2f/0x90
May 26 07:17:54 mr-fox kernel: Code: 57 70 48 8b 8f c0 00 00 00 31 c0 44 0f b7 87 b0 00 00 00 29 f2 83 c2 06 c1 e2 08 42 03 44 01 10 42 13 44 01 0c 11 d0 83 d0 00 <89> c2 66 31 c0 c1 e2 10 01 d0 15 ff ff 00 00 f7 d0 0f b7 97 ae 00
May 26 07:17:54 mr-fox kernel: RSP: 0018:ffffa401005f0d20 EFLAGS: 00000286
May 26 07:17:54 mr-fox kernel: RAX: 00000000d6b90b74 RBX: ffff988a7360d300 RCX: ffff989c858ef000
May 26 07:17:54 mr-fox kernel: RDX: 0000000000084400 RSI: 0000000000000014 RDI: ffff988a7360d300
May 26 07:17:54 mr-fox kernel: RBP: ffffffffa64a74c0 R08: 00000000000000ce R09: 000000000000dc05
May 26 07:17:54 mr-fox kernel: R10: 00000000ffff23fa R11: 0000000000000000 R12: ffff988ac873f850
May 26 07:17:54 mr-fox kernel: R13: 0000000000000081 R14: ffff988ac873f850 R15: 0000000000000002
May 26 07:17:54 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:17:54 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:17:54 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:17:54 mr-fox kernel: PKRU: 55555554
May 26 07:17:54 mr-fox kernel: Call Trace:
May 26 07:17:54 mr-fox kernel: <NMI>
May 26 07:17:54 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:17:54 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:17:54 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:17:54 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:17:54 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:17:54 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:17:54 mr-fox kernel: ? tcp4_gro_complete+0x2f/0x90
May 26 07:17:54 mr-fox kernel: ? tcp4_gro_complete+0x2f/0x90
May 26 07:17:54 mr-fox kernel: ? tcp4_gro_complete+0x2f/0x90
May 26 07:17:54 mr-fox kernel: </NMI>
May 26 07:17:54 mr-fox kernel: <IRQ>
May 26 07:17:54 mr-fox kernel: napi_gro_complete.constprop.0+0x17c/0x190
May 26 07:17:54 mr-fox kernel: dev_gro_receive+0x43a/0x7b0
May 26 07:17:54 mr-fox kernel: napi_gro_receive+0x67/0x1b0
May 26 07:17:54 mr-fox kernel: igb_poll+0x605/0x1370
May 26 07:17:54 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:17:54 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:17:54 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:17:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:17:54 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:17:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:17:54 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:17:54 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:17:54 mr-fox kernel: </IRQ>
May 26 07:17:54 mr-fox kernel: <TASK>
May 26 07:17:54 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:17:54 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 07:17:54 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 07:17:54 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:17:54 mr-fox kernel: RAX: ffff988ac07566ca RBX: 0000000000000001 RCX: 000000000000001e
May 26 07:17:54 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 07:17:54 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:17:54 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:17:54 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:17:54 mr-fox kernel: xas_load+0x49/0x60
May 26 07:17:54 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:17:54 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:17:54 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:17:54 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:17:54 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:17:54 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:17:54 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:17:54 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:17:54 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:17:54 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:17:54 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:17:54 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:17:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:17:54 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:17:54 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:17:54 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:17:54 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:17:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:17:54 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:17:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:17:54 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:17:54 mr-fox kernel: </TASK>
May 26 07:18:00 mr-fox crond[10804]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:18:00 mr-fox crond[10803]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:18:00 mr-fox crond[10802]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:18:00 mr-fox crond[10801]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:18:00 mr-fox crond[10805]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:18:00 mr-fox CROND[10806]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:18:00 mr-fox CROND[10808]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:18:00 mr-fox CROND[10809]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:18:00 mr-fox CROND[10807]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:18:00 mr-fox CROND[10810]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:18:00 mr-fox CROND[10805]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:18:00 mr-fox CROND[10805]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:18:00 mr-fox CROND[10804]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:18:00 mr-fox CROND[10804]: pam_unix(crond:session): session closed for user root
May 26 07:18:01 mr-fox CROND[10803]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:18:01 mr-fox CROND[10803]: pam_unix(crond:session): session closed for user root
May 26 07:18:01 mr-fox CROND[7744]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:18:01 mr-fox CROND[7744]: pam_unix(crond:session): session closed for user root
May 26 07:19:00 mr-fox crond[15386]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:19:00 mr-fox crond[15388]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:19:00 mr-fox crond[15387]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:19:00 mr-fox crond[15390]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:19:00 mr-fox crond[15389]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:19:00 mr-fox CROND[15394]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:19:00 mr-fox CROND[15395]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:19:00 mr-fox CROND[15397]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:19:00 mr-fox CROND[15398]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:19:00 mr-fox CROND[15396]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:19:00 mr-fox CROND[15390]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:19:00 mr-fox CROND[15390]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:19:00 mr-fox CROND[15389]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:19:00 mr-fox CROND[15389]: pam_unix(crond:session): session closed for user root
May 26 07:19:00 mr-fox CROND[15388]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:19:00 mr-fox CROND[15388]: pam_unix(crond:session): session closed for user root
May 26 07:19:01 mr-fox CROND[10802]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:19:01 mr-fox CROND[10802]: pam_unix(crond:session): session closed for user root
May 26 07:20:00 mr-fox crond[20108]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:20:00 mr-fox crond[20110]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:20:00 mr-fox crond[20111]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:20:00 mr-fox crond[20112]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:20:00 mr-fox crond[20109]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:20:00 mr-fox CROND[20118]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:20:00 mr-fox crond[20114]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:20:00 mr-fox CROND[20119]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:20:00 mr-fox crond[20113]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:20:00 mr-fox CROND[20121]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:20:00 mr-fox CROND[20122]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:20:00 mr-fox CROND[20123]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:20:00 mr-fox CROND[20124]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:20:00 mr-fox crond[20116]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:20:00 mr-fox CROND[20125]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:20:00 mr-fox CROND[20127]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:20:00 mr-fox CROND[20108]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:20:00 mr-fox CROND[20108]: pam_unix(crond:session): session closed for user torproject
May 26 07:20:00 mr-fox CROND[20116]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:20:00 mr-fox CROND[20116]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:20:00 mr-fox CROND[20112]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:20:00 mr-fox CROND[20112]: pam_unix(crond:session): session closed for user root
May 26 07:20:00 mr-fox CROND[20113]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:20:00 mr-fox CROND[20113]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:20:00 mr-fox CROND[20111]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:20:00 mr-fox CROND[20111]: pam_unix(crond:session): session closed for user root
May 26 07:20:00 mr-fox CROND[15387]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:20:00 mr-fox CROND[15387]: pam_unix(crond:session): session closed for user root
May 26 07:20:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:20:44 mr-fox kernel: rcu: 	21-....: (1185100 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=464292
May 26 07:20:44 mr-fox kernel: rcu: 	(t=1185101 jiffies g=8794409 q=36298079 ncpus=32)
May 26 07:20:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:20:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:20:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:20:44 mr-fox kernel: RIP: 0010:tcp_mt+0x121/0x260
May 26 07:20:44 mr-fox kernel: Code: 66 41 39 54 24 06 0f 93 c2 21 ca 40 38 d7 0f 84 49 ff ff ff 0f b6 50 0d 41 22 54 24 09 41 3a 54 24 0a 89 f2 0f 95 c1 c0 ea 02 <83> e2 01 38 d1 0f 85 28 ff ff ff 45 0f b6 6c 24 08 ba 01 00 00 00
May 26 07:20:44 mr-fox kernel: RSP: 0018:ffffa401005f0ac8 EFLAGS: 00000212
May 26 07:20:44 mr-fox kernel: RAX: ffff988f7e3160e2 RBX: ffffa401005f0b80 RCX: 0000000000000001
May 26 07:20:44 mr-fox kernel: RDX: 0000000000000001 RSI: 0000000000000004 RDI: 0000000000000000
May 26 07:20:44 mr-fox kernel: RBP: ffff988b4b85ed00 R08: ffff988f7e3160ce R09: ffff988ac4bb4110
May 26 07:20:44 mr-fox kernel: R10: ffff988af2b74040 R11: 0000000000000000 R12: ffff988af2b74288
May 26 07:20:44 mr-fox kernel: R13: ffffffffa5cd0238 R14: ffff988af2b741f8 R15: ffff988f7e3160ce
May 26 07:20:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:20:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:20:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:20:44 mr-fox kernel: PKRU: 55555554
May 26 07:20:44 mr-fox kernel: Call Trace:
May 26 07:20:44 mr-fox kernel: <IRQ>
May 26 07:20:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:20:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:20:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:20:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:20:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:20:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:20:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:20:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:20:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x31/0x80
May 26 07:20:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:20:44 mr-fox kernel: ? tcp_mt+0x121/0x260
May 26 07:20:44 mr-fox kernel: ? __mod_timer+0x115/0x3b0
May 26 07:20:44 mr-fox kernel: ipt_do_table+0x290/0x500
May 26 07:20:44 mr-fox kernel: nf_hook_slow+0x3c/0x100
May 26 07:20:44 mr-fox kernel: ip_local_deliver+0x9b/0x100
May 26 07:20:44 mr-fox kernel: ? ip_protocol_deliver_rcu+0x180/0x180
May 26 07:20:44 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 07:20:44 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 07:20:44 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 07:20:44 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 07:20:44 mr-fox kernel: __netif_receive_skb_list_core+0x293/0x2d0
May 26 07:20:44 mr-fox kernel: ? iommu_dma_map_page+0xc2/0x230
May 26 07:20:44 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 07:20:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:44 mr-fox kernel: napi_gro_complete.constprop.0+0x13b/0x190
May 26 07:20:44 mr-fox kernel: napi_gro_flush+0xc3/0x170
May 26 07:20:44 mr-fox kernel: __napi_poll+0xd8/0x1a0
May 26 07:20:44 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:20:44 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:20:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:44 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:20:44 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:20:44 mr-fox kernel: </IRQ>
May 26 07:20:44 mr-fox kernel: <TASK>
May 26 07:20:44 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:20:44 mr-fox kernel: RIP: 0010:xas_load+0x49/0x60
May 26 07:20:44 mr-fox kernel: Code: 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff <80> 7d 00 00 75 bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
May 26 07:20:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:20:44 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:20:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:20:44 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:20:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:20:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:20:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:20:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:20:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:20:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:20:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:20:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:20:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:20:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:20:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:20:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:20:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:20:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:20:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:20:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:20:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:20:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:20:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:20:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:20:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:20:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:20:44 mr-fox kernel: </TASK>
May 26 07:20:54 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1187440 jiffies s: 491905 root: 0x2/.
May 26 07:20:54 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:20:54 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:20:54 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:20:54 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:20:54 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:20:54 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:20:54 mr-fox kernel: RIP: 0010:ip6_xmit+0x27e/0x600
May 26 07:20:54 mr-fox kernel: Code: 0f 85 e0 02 00 00 e8 21 1d 01 00 3b 43 70 0f 82 e7 00 00 00 48 8b 44 24 20 48 85 c0 74 0c 48 8b 80 98 03 00 00 65 48 ff 40 28 <48> 8b 4c 24 28 48 8b 81 a0 01 00 00 65 48 ff 40 28 48 85 db 0f 84
May 26 07:20:54 mr-fox kernel: RSP: 0018:ffffa401005f0c30 EFLAGS: 00000202
May 26 07:20:54 mr-fox kernel: RAX: 00002b5790e21310 RBX: ffff98a8b979e000 RCX: 0000000000000040
May 26 07:20:54 mr-fox kernel: RDX: 0000000000000000 RSI: 000c64027a24022a RDI: 0000000000000000
May 26 07:20:54 mr-fox kernel: RBP: ffffa401005f0d20 R08: 0000000000000040 R09: 0000000000000000
May 26 07:20:54 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000020 R12: 0000000000002000
May 26 07:20:54 mr-fox kernel: R13: ffff988cf4b8b200 R14: ffffa401005f0d38 R15: ffff988aeb41bf00
May 26 07:20:54 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:20:54 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:20:54 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:20:54 mr-fox kernel: PKRU: 55555554
May 26 07:20:54 mr-fox kernel: Call Trace:
May 26 07:20:54 mr-fox kernel: <NMI>
May 26 07:20:54 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:20:54 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:20:54 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:20:54 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:20:54 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:20:54 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:20:54 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 07:20:54 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 07:20:54 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 07:20:54 mr-fox kernel: </NMI>
May 26 07:20:54 mr-fox kernel: <IRQ>
May 26 07:20:54 mr-fox kernel: ? ip6_output+0x290/0x290
May 26 07:20:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:54 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 07:20:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:54 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 07:20:54 mr-fox kernel: inet6_csk_xmit+0xe9/0x160
May 26 07:20:54 mr-fox kernel: __tcp_transmit_skb+0x5d0/0xd30
May 26 07:20:54 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 07:20:54 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 07:20:54 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 07:20:54 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:20:54 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:20:54 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:20:54 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:20:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:54 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:20:54 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:20:54 mr-fox kernel: </IRQ>
May 26 07:20:54 mr-fox kernel: <TASK>
May 26 07:20:54 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:20:54 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 07:20:54 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 07:20:54 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000206
May 26 07:20:54 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:20:54 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 07:20:54 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:20:54 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:20:54 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:20:54 mr-fox kernel: xas_load+0x49/0x60
May 26 07:20:54 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:20:54 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:20:54 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:20:54 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:20:54 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:20:54 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:20:54 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:20:54 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:20:54 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:20:54 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:20:54 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:20:54 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:20:54 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:20:54 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:20:54 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:20:54 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:20:54 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:20:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:20:54 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:20:54 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:20:54 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:20:54 mr-fox kernel: </TASK>
May 26 07:21:00 mr-fox crond[25560]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:21:00 mr-fox crond[25558]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:21:00 mr-fox crond[25562]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:21:00 mr-fox crond[25563]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:21:00 mr-fox crond[25559]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:21:00 mr-fox CROND[25565]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:21:00 mr-fox CROND[25566]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:21:00 mr-fox CROND[25567]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:21:00 mr-fox CROND[25568]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:21:00 mr-fox CROND[25569]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:21:00 mr-fox CROND[25563]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:21:00 mr-fox CROND[25563]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:21:00 mr-fox CROND[25562]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:21:00 mr-fox CROND[25562]: pam_unix(crond:session): session closed for user root
May 26 07:21:00 mr-fox CROND[25560]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:21:00 mr-fox CROND[25560]: pam_unix(crond:session): session closed for user root
May 26 07:21:01 mr-fox CROND[20110]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:21:01 mr-fox CROND[20110]: pam_unix(crond:session): session closed for user root
May 26 07:22:00 mr-fox crond[25248]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:22:00 mr-fox crond[25247]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:22:00 mr-fox crond[25250]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:22:00 mr-fox crond[25249]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:22:00 mr-fox CROND[25255]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:22:00 mr-fox CROND[25256]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:22:00 mr-fox CROND[25257]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:22:00 mr-fox CROND[25259]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:22:00 mr-fox crond[25251]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:22:00 mr-fox CROND[25261]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:22:00 mr-fox CROND[25251]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:22:00 mr-fox CROND[25251]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:22:00 mr-fox CROND[25250]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:22:00 mr-fox CROND[25250]: pam_unix(crond:session): session closed for user root
May 26 07:22:00 mr-fox CROND[25249]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:22:00 mr-fox CROND[25249]: pam_unix(crond:session): session closed for user root
May 26 07:22:01 mr-fox CROND[25559]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:22:01 mr-fox CROND[25559]: pam_unix(crond:session): session closed for user root
May 26 07:23:00 mr-fox crond[26989]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:23:00 mr-fox crond[26991]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:23:00 mr-fox crond[26994]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:23:00 mr-fox crond[26993]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:23:00 mr-fox crond[26992]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:23:00 mr-fox CROND[26998]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:23:00 mr-fox CROND[26999]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:23:00 mr-fox CROND[27000]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:23:00 mr-fox CROND[27001]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:23:00 mr-fox CROND[27002]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:23:00 mr-fox CROND[26994]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:23:00 mr-fox CROND[26994]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:23:00 mr-fox CROND[26993]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:23:00 mr-fox CROND[26993]: pam_unix(crond:session): session closed for user root
May 26 07:23:00 mr-fox CROND[26992]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:23:00 mr-fox CROND[26992]: pam_unix(crond:session): session closed for user root
May 26 07:23:00 mr-fox CROND[25248]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:23:00 mr-fox CROND[25248]: pam_unix(crond:session): session closed for user root
May 26 07:23:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:23:44 mr-fox kernel: rcu: 	21-....: (1230104 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=482666
May 26 07:23:44 mr-fox kernel: rcu: 	(t=1230105 jiffies g=8794409 q=37360230 ncpus=32)
May 26 07:23:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:23:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:23:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:23:44 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 07:23:44 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 07:23:44 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:23:44 mr-fox kernel: RAX: ffff988ac07566ca RBX: 000000000000000c RCX: 0000000000000018
May 26 07:23:44 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 07:23:44 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:23:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:23:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:23:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:23:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:23:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:23:44 mr-fox kernel: PKRU: 55555554
May 26 07:23:44 mr-fox kernel: Call Trace:
May 26 07:23:44 mr-fox kernel: <IRQ>
May 26 07:23:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:23:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:23:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 07:23:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:23:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:23:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:23:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:23:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:23:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:23:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:23:44 mr-fox kernel: </IRQ>
May 26 07:23:44 mr-fox kernel: <TASK>
May 26 07:23:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:23:44 mr-fox kernel: ? xas_descend+0x26/0xd0
May 26 07:23:44 mr-fox kernel: xas_load+0x49/0x60
May 26 07:23:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:23:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:23:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:23:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:23:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:23:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:23:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:23:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:23:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:23:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:23:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:23:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:23:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:23:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:23:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:23:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:23:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:23:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:23:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:23:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:23:44 mr-fox kernel: </TASK>
May 26 07:23:55 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1232496 jiffies s: 491905 root: 0x2/.
May 26 07:23:55 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:23:55 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:23:55 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:23:55 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:23:55 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:23:55 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:23:55 mr-fox kernel: RIP: 0010:iommu_map+0x29/0xa0
May 26 07:23:55 mr-fox kernel: Code: 90 f3 0f 1e fa 41 56 41 55 41 54 55 53 4c 8b 77 08 41 f7 c1 07 00 04 00 75 5e 48 89 fd 49 89 f4 49 89 cd e8 29 fe ff ff 89 c3 <85> c0 75 1d 49 8b 46 20 48 85 c0 74 14 4c 89 ea 4c 89 e6 48 89 ef
May 26 07:23:55 mr-fox kernel: RSP: 0018:ffffa401005f08b0 EFLAGS: 00000246
May 26 07:23:55 mr-fox kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
May 26 07:23:55 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:23:55 mr-fox kernel: RBP: ffff988ac2211010 R08: 0000000000000000 R09: 0000000000000000
May 26 07:23:55 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cf742000
May 26 07:23:55 mr-fox kernel: R13: 0000000000001000 R14: ffffffffa5ca5540 R15: ffff988ac1292400
May 26 07:23:55 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:23:55 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:23:55 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:23:55 mr-fox kernel: PKRU: 55555554
May 26 07:23:55 mr-fox kernel: Call Trace:
May 26 07:23:55 mr-fox kernel: <NMI>
May 26 07:23:55 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:23:55 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:23:55 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:23:55 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:23:55 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:23:55 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:23:55 mr-fox kernel: ? iommu_map+0x29/0xa0
May 26 07:23:55 mr-fox kernel: ? iommu_map+0x29/0xa0
May 26 07:23:55 mr-fox kernel: ? iommu_map+0x29/0xa0
May 26 07:23:55 mr-fox kernel: </NMI>
May 26 07:23:55 mr-fox kernel: <IRQ>
May 26 07:23:55 mr-fox kernel: __iommu_dma_map+0x87/0xf0
May 26 07:23:55 mr-fox kernel: iommu_dma_map_page+0xc2/0x230
May 26 07:23:55 mr-fox kernel: igb_xmit_frame_ring+0x91b/0xc00
May 26 07:23:55 mr-fox kernel: ? netif_skb_features+0x93/0x2c0
May 26 07:23:55 mr-fox kernel: dev_hard_start_xmit+0xa0/0xf0
May 26 07:23:55 mr-fox kernel: sch_direct_xmit+0x8d/0x290
May 26 07:23:55 mr-fox kernel: __dev_queue_xmit+0x49a/0x9a0
May 26 07:23:55 mr-fox kernel: ? ip6t_do_table+0x30b/0x590
May 26 07:23:55 mr-fox kernel: ip6_finish_output2+0x2c0/0x610
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? ip6_output+0xa7/0x290
May 26 07:23:55 mr-fox kernel: ip6_xmit+0x3fc/0x600
May 26 07:23:55 mr-fox kernel: ? ip6_output+0x290/0x290
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 07:23:55 mr-fox kernel: inet6_csk_xmit+0xe9/0x160
May 26 07:23:55 mr-fox kernel: __tcp_transmit_skb+0x5d0/0xd30
May 26 07:23:55 mr-fox kernel: __tcp_retransmit_skb+0x1a9/0x800
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? __mod_timer+0x115/0x3b0
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? retransmits_timed_out.part.0+0x8d/0x170
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 07:23:55 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:23:55 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:23:55 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:23:55 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:23:55 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:23:55 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:23:55 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:23:55 mr-fox kernel: </IRQ>
May 26 07:23:55 mr-fox kernel: <TASK>
May 26 07:23:55 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:23:55 mr-fox kernel: RIP: 0010:xas_load+0x49/0x60
May 26 07:23:55 mr-fox kernel: Code: 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff <80> 7d 00 00 75 bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
May 26 07:23:55 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:23:55 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:23:55 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:23:55 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:23:55 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:23:55 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:23:55 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:23:55 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:23:55 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:23:55 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:23:55 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:23:55 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:23:55 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:23:55 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:23:55 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:23:55 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:23:55 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:23:55 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:23:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:23:55 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:23:55 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:23:55 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:23:55 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:23:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:23:55 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:23:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:23:55 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:23:55 mr-fox kernel: </TASK>
May 26 07:24:00 mr-fox crond[29060]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:24:00 mr-fox crond[29059]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:24:00 mr-fox crond[29061]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:24:00 mr-fox crond[29062]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:24:00 mr-fox CROND[29066]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:24:00 mr-fox CROND[29067]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:24:00 mr-fox crond[29063]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:24:00 mr-fox CROND[29069]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:24:00 mr-fox CROND[29068]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:24:00 mr-fox CROND[29071]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:24:00 mr-fox CROND[29063]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:24:00 mr-fox CROND[29063]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:24:00 mr-fox CROND[29062]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:24:00 mr-fox CROND[29062]: pam_unix(crond:session): session closed for user root
May 26 07:24:00 mr-fox CROND[29061]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:24:00 mr-fox CROND[29061]: pam_unix(crond:session): session closed for user root
May 26 07:24:00 mr-fox CROND[26991]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:24:00 mr-fox CROND[26991]: pam_unix(crond:session): session closed for user root
May 26 07:25:00 mr-fox crond[30521]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:25:00 mr-fox crond[30522]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:25:00 mr-fox crond[30523]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:25:00 mr-fox crond[30524]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:25:00 mr-fox CROND[30533]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:25:00 mr-fox CROND[30534]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:25:00 mr-fox CROND[30535]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:25:00 mr-fox CROND[30536]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:25:00 mr-fox crond[30525]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:25:00 mr-fox crond[30527]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:25:00 mr-fox crond[30529]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:25:00 mr-fox crond[30528]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:25:00 mr-fox CROND[30540]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:25:00 mr-fox CROND[30541]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:25:00 mr-fox CROND[30542]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:25:00 mr-fox CROND[30543]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:25:00 mr-fox CROND[30521]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:25:00 mr-fox CROND[30521]: pam_unix(crond:session): session closed for user torproject
May 26 07:25:00 mr-fox CROND[30529]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:25:00 mr-fox CROND[30529]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:25:00 mr-fox CROND[30525]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:25:00 mr-fox CROND[30525]: pam_unix(crond:session): session closed for user root
May 26 07:25:00 mr-fox CROND[30527]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:25:00 mr-fox CROND[30527]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:25:00 mr-fox CROND[30524]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:25:00 mr-fox CROND[30524]: pam_unix(crond:session): session closed for user root
May 26 07:25:01 mr-fox CROND[29060]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:25:01 mr-fox CROND[29060]: pam_unix(crond:session): session closed for user root
May 26 07:26:00 mr-fox crond[31551]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:26:00 mr-fox crond[31550]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:26:00 mr-fox crond[31549]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:26:00 mr-fox crond[31552]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:26:00 mr-fox crond[31553]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:26:00 mr-fox CROND[31557]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:26:00 mr-fox CROND[31558]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:26:00 mr-fox CROND[31559]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:26:00 mr-fox CROND[31560]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:26:00 mr-fox CROND[31562]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:26:00 mr-fox CROND[31553]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:26:00 mr-fox CROND[31553]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:26:00 mr-fox CROND[31552]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:26:00 mr-fox CROND[31552]: pam_unix(crond:session): session closed for user root
May 26 07:26:00 mr-fox CROND[31551]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:26:00 mr-fox CROND[31551]: pam_unix(crond:session): session closed for user root
May 26 07:26:01 mr-fox CROND[30523]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:26:01 mr-fox CROND[30523]: pam_unix(crond:session): session closed for user root
May 26 07:26:44 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:26:44 mr-fox kernel: rcu: 	21-....: (1275108 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=501198
May 26 07:26:44 mr-fox kernel: rcu: 	(t=1275109 jiffies g=8794409 q=38408927 ncpus=32)
May 26 07:26:44 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:26:44 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:26:44 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:26:44 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:26:44 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:26:44 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:26:44 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:26:44 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:26:44 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:26:44 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:26:44 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:26:44 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:26:44 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:26:44 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:26:44 mr-fox kernel: PKRU: 55555554
May 26 07:26:44 mr-fox kernel: Call Trace:
May 26 07:26:44 mr-fox kernel: <IRQ>
May 26 07:26:44 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:26:44 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:26:44 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 07:26:44 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:26:44 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:26:44 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:26:44 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:26:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:44 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:26:44 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:26:44 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:26:44 mr-fox kernel: </IRQ>
May 26 07:26:44 mr-fox kernel: <TASK>
May 26 07:26:44 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:26:44 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 07:26:44 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:26:44 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:26:44 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:26:44 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:26:44 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:26:44 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:26:44 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:26:44 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:26:44 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:26:44 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:26:44 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:26:44 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:26:44 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:44 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:26:44 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:26:44 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:26:44 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:26:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:26:44 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:26:44 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:26:44 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:26:44 mr-fox kernel: </TASK>
May 26 07:26:55 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1277552 jiffies s: 491905 root: 0x2/.
May 26 07:26:55 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:26:55 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:26:55 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:26:55 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:26:55 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:26:55 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:26:55 mr-fox kernel: RIP: 0010:_raw_spin_lock_irqsave+0x7/0x40
May 26 07:26:55 mr-fox kernel: Code: 00 00 f0 0f b1 17 74 0b 31 c0 31 d2 31 ff e9 3b 32 15 00 b8 01 00 00 00 31 d2 31 ff e9 2d 32 15 00 66 90 f3 0f 1e fa 53 9c 5b <fa> 31 c0 ba 01 00 00 00 f0 0f b1 17 75 0f 48 89 d8 5b 31 d2 31 f6
May 26 07:26:55 mr-fox kernel: RSP: 0018:ffffa401005f0c88 EFLAGS: 00000086
May 26 07:26:55 mr-fox kernel: RAX: 0000000000000000 RBX: 0000000000000086 RCX: 0000000000000000
May 26 07:26:55 mr-fox kernel: RDX: 0000000000000001 RSI: 00000000000cf1e7 RDI: ffff98a96ed7d8c0
May 26 07:26:55 mr-fox kernel: RBP: ffff988ac4405000 R08: 0000000000000000 R09: 0000000000000000
May 26 07:26:55 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac1292408
May 26 07:26:55 mr-fox kernel: R13: 00000000008f7ba1 R14: 00000000000cf1e7 R15: ffffc400ffb51870
May 26 07:26:55 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:26:55 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:26:55 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:26:55 mr-fox kernel: PKRU: 55555554
May 26 07:26:55 mr-fox kernel: Call Trace:
May 26 07:26:55 mr-fox kernel: <NMI>
May 26 07:26:55 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:26:55 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:26:55 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:26:55 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:26:55 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:26:55 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:26:55 mr-fox kernel: ? _raw_spin_lock_irqsave+0x7/0x40
May 26 07:26:55 mr-fox kernel: ? _raw_spin_lock_irqsave+0x7/0x40
May 26 07:26:55 mr-fox kernel: ? _raw_spin_lock_irqsave+0x7/0x40
May 26 07:26:55 mr-fox kernel: </NMI>
May 26 07:26:55 mr-fox kernel: <IRQ>
May 26 07:26:55 mr-fox kernel: free_iova_fast+0x60/0x1b0
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: fq_ring_free_locked+0x43/0x90
May 26 07:26:55 mr-fox kernel: iommu_dma_free_iova+0x212/0x220
May 26 07:26:55 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 07:26:55 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 07:26:55 mr-fox kernel: igb_poll+0x106/0x1370
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:26:55 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:26:55 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:26:55 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:26:55 mr-fox kernel: </IRQ>
May 26 07:26:55 mr-fox kernel: <TASK>
May 26 07:26:55 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:26:55 mr-fox kernel: RIP: 0010:xas_load+0x3c/0x60
May 26 07:26:55 mr-fox kernel: Code: e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe <72> e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 75 bf eb d1 66
May 26 07:26:55 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:26:55 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:26:55 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:26:55 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 07:26:55 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:26:55 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:26:55 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:26:55 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:26:55 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:26:55 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:26:55 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:26:55 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:26:55 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:26:55 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:26:55 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:26:55 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:26:55 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:26:55 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:26:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:26:55 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:26:55 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:26:55 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:26:55 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:26:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:26:55 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:26:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:26:55 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:26:55 mr-fox kernel: </TASK>
May 26 07:27:00 mr-fox crond[31959]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:27:00 mr-fox crond[31958]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:27:00 mr-fox crond[31960]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:27:00 mr-fox crond[31961]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:27:00 mr-fox CROND[31965]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:27:00 mr-fox CROND[31966]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:27:00 mr-fox CROND[31967]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:27:00 mr-fox crond[31963]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:27:00 mr-fox CROND[31968]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:27:00 mr-fox CROND[31970]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:27:00 mr-fox CROND[31963]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:27:00 mr-fox CROND[31963]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:27:00 mr-fox CROND[31961]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:27:00 mr-fox CROND[31961]: pam_unix(crond:session): session closed for user root
May 26 07:27:00 mr-fox CROND[31960]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:27:00 mr-fox CROND[31960]: pam_unix(crond:session): session closed for user root
May 26 07:27:00 mr-fox CROND[31550]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:27:00 mr-fox CROND[31550]: pam_unix(crond:session): session closed for user root
May 26 07:28:00 mr-fox crond[2630]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:28:00 mr-fox crond[2627]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:28:00 mr-fox crond[2631]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:28:00 mr-fox crond[2628]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:28:00 mr-fox crond[2632]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:28:00 mr-fox CROND[2636]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:28:00 mr-fox CROND[2638]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:28:00 mr-fox CROND[2637]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:28:00 mr-fox CROND[2639]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:28:00 mr-fox CROND[2640]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:28:00 mr-fox CROND[2632]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:28:00 mr-fox CROND[2632]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:28:00 mr-fox CROND[2631]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:28:00 mr-fox CROND[2631]: pam_unix(crond:session): session closed for user root
May 26 07:28:00 mr-fox CROND[2630]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:28:00 mr-fox CROND[2630]: pam_unix(crond:session): session closed for user root
May 26 07:28:00 mr-fox CROND[31959]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:28:00 mr-fox CROND[31959]: pam_unix(crond:session): session closed for user root
May 26 07:29:00 mr-fox crond[4110]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:29:00 mr-fox crond[4112]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:29:00 mr-fox crond[4109]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:29:00 mr-fox crond[4113]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:29:00 mr-fox crond[4111]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:29:00 mr-fox CROND[4117]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:29:00 mr-fox CROND[4118]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:29:00 mr-fox CROND[4119]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:29:00 mr-fox CROND[4120]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:29:00 mr-fox CROND[4121]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:29:00 mr-fox CROND[4113]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:29:00 mr-fox CROND[4113]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:29:00 mr-fox CROND[4112]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:29:00 mr-fox CROND[4112]: pam_unix(crond:session): session closed for user root
May 26 07:29:01 mr-fox CROND[4111]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:29:01 mr-fox CROND[4111]: pam_unix(crond:session): session closed for user root
May 26 07:29:01 mr-fox CROND[2628]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:29:01 mr-fox CROND[2628]: pam_unix(crond:session): session closed for user root
May 26 07:29:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:29:45 mr-fox kernel: rcu: 	21-....: (1320112 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=519936
May 26 07:29:45 mr-fox kernel: rcu: 	(t=1320113 jiffies g=8794409 q=39445448 ncpus=32)
May 26 07:29:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:29:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:29:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:29:45 mr-fox kernel: RIP: 0010:xas_load+0x20/0x60
May 26 07:29:45 mr-fox kernel: Code: ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 <77> 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48
May 26 07:29:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 07:29:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:29:45 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:29:45 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:29:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:29:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:29:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:29:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:29:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:29:45 mr-fox kernel: PKRU: 55555554
May 26 07:29:45 mr-fox kernel: Call Trace:
May 26 07:29:45 mr-fox kernel: <IRQ>
May 26 07:29:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:29:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:29:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:29:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:29:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:29:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:29:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:29:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:29:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:29:45 mr-fox kernel: </IRQ>
May 26 07:29:45 mr-fox kernel: <TASK>
May 26 07:29:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:29:45 mr-fox kernel: ? xas_load+0x20/0x60
May 26 07:29:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:29:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:29:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:29:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:29:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:29:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:29:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:29:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:29:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:29:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:29:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:29:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:29:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:29:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:29:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:29:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:29:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:29:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:29:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:29:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:29:45 mr-fox kernel: </TASK>
May 26 07:29:55 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1322607 jiffies s: 491905 root: 0x2/.
May 26 07:29:55 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:29:55 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:29:55 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:29:55 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:29:55 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:29:55 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:29:55 mr-fox kernel: RIP: 0010:fq_ring_free_locked+0x33/0x90
May 26 07:29:55 mr-fox kernel: Code: 8b af 98 00 00 00 8b 06 85 c0 74 6c 8b 6e 04 48 89 f3 39 6e 08 74 4c 4c 8d 67 08 eb 32 48 8d 7c 03 20 83 c5 01 e8 3d 43 b2 ff <49> 8b 56 18 49 8b 76 10 4c 89 e7 e8 4d 3a 00 00 8b 43 04 8b 53 0c
May 26 07:29:55 mr-fox kernel: RSP: 0018:ffffa401005f0cd8 EFLAGS: 00000046
May 26 07:29:55 mr-fox kernel: RAX: 0000000000000000 RBX: ffffc400ffb51870 RCX: 0000000000000000
May 26 07:29:55 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:29:55 mr-fox kernel: RBP: 000000000000006b R08: 0000000000000000 R09: 0000000000000000
May 26 07:29:55 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac1292408
May 26 07:29:55 mr-fox kernel: R13: 0000000000900279 R14: ffffc400ffb52900 R15: ffffc400ffb51870
May 26 07:29:55 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:29:55 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:29:55 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:29:55 mr-fox kernel: PKRU: 55555554
May 26 07:29:55 mr-fox kernel: Call Trace:
May 26 07:29:55 mr-fox kernel: <NMI>
May 26 07:29:55 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:29:55 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:29:55 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:29:55 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:29:55 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:29:55 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:29:55 mr-fox kernel: ? fq_ring_free_locked+0x33/0x90
May 26 07:29:55 mr-fox kernel: ? fq_ring_free_locked+0x33/0x90
May 26 07:29:55 mr-fox kernel: ? fq_ring_free_locked+0x33/0x90
May 26 07:29:55 mr-fox kernel: </NMI>
May 26 07:29:55 mr-fox kernel: <IRQ>
May 26 07:29:55 mr-fox kernel: iommu_dma_free_iova+0x212/0x220
May 26 07:29:55 mr-fox kernel: __iommu_dma_unmap+0xe0/0x170
May 26 07:29:55 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 07:29:55 mr-fox kernel: igb_poll+0x106/0x1370
May 26 07:29:55 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:29:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:55 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:29:55 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:29:55 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:29:55 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:29:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:55 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:29:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:55 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:29:55 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:29:55 mr-fox kernel: </IRQ>
May 26 07:29:55 mr-fox kernel: <TASK>
May 26 07:29:55 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:29:55 mr-fox kernel: RIP: 0010:xas_load+0x11/0x60
May 26 07:29:55 mr-fox kernel: Code: 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 <83> e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31
May 26 07:29:55 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:29:55 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:29:55 mr-fox kernel: RDX: ffff988ac0754ffa RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:29:55 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:29:55 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:29:55 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:29:55 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:29:55 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:29:55 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:29:55 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:29:55 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:29:55 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:29:55 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:29:55 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:29:55 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:29:55 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:29:55 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:29:55 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:29:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:29:55 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:29:55 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:29:55 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:29:55 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:29:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:29:55 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:29:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:29:55 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:29:55 mr-fox kernel: </TASK>
May 26 07:30:00 mr-fox crond[4677]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:30:00 mr-fox crond[4676]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:30:00 mr-fox crond[4675]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:30:00 mr-fox crond[4679]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:30:00 mr-fox crond[4680]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:30:00 mr-fox crond[4678]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:30:00 mr-fox CROND[4685]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:30:00 mr-fox CROND[4683]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:30:00 mr-fox crond[4681]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:30:00 mr-fox CROND[4686]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:30:00 mr-fox CROND[4687]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:30:00 mr-fox crond[4682]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:30:00 mr-fox CROND[4688]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:30:00 mr-fox CROND[4690]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:30:00 mr-fox CROND[4689]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:30:00 mr-fox CROND[4691]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:30:00 mr-fox CROND[4675]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:30:00 mr-fox CROND[4682]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:30:00 mr-fox CROND[4675]: pam_unix(crond:session): session closed for user torproject
May 26 07:30:00 mr-fox CROND[4682]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:30:00 mr-fox CROND[4679]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:30:00 mr-fox CROND[4679]: pam_unix(crond:session): session closed for user root
May 26 07:30:00 mr-fox CROND[4680]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:30:00 mr-fox CROND[4680]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:30:00 mr-fox CROND[4678]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:30:00 mr-fox CROND[4678]: pam_unix(crond:session): session closed for user root
May 26 07:30:01 mr-fox CROND[4110]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:30:01 mr-fox CROND[4110]: pam_unix(crond:session): session closed for user root
May 26 07:31:00 mr-fox crond[7753]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:31:00 mr-fox crond[7750]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:31:00 mr-fox crond[7754]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:31:00 mr-fox crond[7751]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:31:00 mr-fox crond[7755]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:31:00 mr-fox CROND[7760]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:31:00 mr-fox CROND[7761]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:31:00 mr-fox CROND[7762]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:31:00 mr-fox CROND[7763]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:31:00 mr-fox CROND[7764]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:31:00 mr-fox CROND[7755]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:31:00 mr-fox CROND[7755]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:31:00 mr-fox CROND[7754]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:31:00 mr-fox CROND[7754]: pam_unix(crond:session): session closed for user root
May 26 07:31:00 mr-fox CROND[7753]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:31:00 mr-fox CROND[7753]: pam_unix(crond:session): session closed for user root
May 26 07:31:00 mr-fox CROND[4677]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:31:00 mr-fox CROND[4677]: pam_unix(crond:session): session closed for user root
May 26 07:32:00 mr-fox crond[11526]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:32:00 mr-fox crond[11525]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:32:00 mr-fox crond[11524]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:32:00 mr-fox crond[11528]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:32:00 mr-fox CROND[11532]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:32:00 mr-fox crond[11529]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:32:00 mr-fox CROND[11533]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:32:00 mr-fox CROND[11535]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:32:00 mr-fox CROND[11536]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:32:00 mr-fox CROND[11537]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:32:00 mr-fox CROND[11529]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:32:00 mr-fox CROND[11529]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:32:00 mr-fox CROND[11528]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:32:00 mr-fox CROND[11528]: pam_unix(crond:session): session closed for user root
May 26 07:32:00 mr-fox CROND[11526]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:32:00 mr-fox CROND[11526]: pam_unix(crond:session): session closed for user root
May 26 07:32:01 mr-fox CROND[7751]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:32:01 mr-fox CROND[7751]: pam_unix(crond:session): session closed for user root
May 26 07:32:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:32:45 mr-fox kernel: rcu: \x0921-....: (1365116 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=538444
May 26 07:32:45 mr-fox kernel: rcu: \x09(t=1365117 jiffies g=8794409 q=40490189 ncpus=32)
May 26 07:32:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:32:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:32:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:32:45 mr-fox kernel: RIP: 0010:xas_start+0x7b/0x120
May 26 07:32:45 mr-fox kernel: Code: 00 00 00 48 83 c4 08 5b 5d 31 d2 31 c9 31 f6 31 ff e9 c4 c7 1a 00 0f b6 48 fe 80 f9 3f 0f 87 a3 00 00 00 48 d3 ee 48 83 fe 3f <76> cf 48 c7 43 18 01 00 00 00 48 83 c4 08 31 c0 5b 5d 31 d2 31 c9
May 26 07:32:45 mr-fox kernel: RSP: 0018:ffffa401077e3930 EFLAGS: 00000293
May 26 07:32:45 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 000000000000001e
May 26 07:32:45 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffffa401077e3970
May 26 07:32:45 mr-fox kernel: RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
May 26 07:32:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:32:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:32:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:32:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:32:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:32:45 mr-fox kernel: PKRU: 55555554
May 26 07:32:45 mr-fox kernel: Call Trace:
May 26 07:32:45 mr-fox kernel: <IRQ>
May 26 07:32:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:32:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:32:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:32:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:32:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:32:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:32:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:32:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:32:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:32:45 mr-fox kernel: </IRQ>
May 26 07:32:45 mr-fox kernel: <TASK>
May 26 07:32:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:32:45 mr-fox kernel: ? xas_start+0x7b/0x120
May 26 07:32:45 mr-fox kernel: xas_load+0xe/0x60
May 26 07:32:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:32:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:32:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:32:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:32:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:32:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:32:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:32:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:32:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:32:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:32:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:32:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:32:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:32:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:32:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:32:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:32:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:32:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:32:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:32:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:32:45 mr-fox kernel: </TASK>
May 26 07:32:55 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1367663 jiffies s: 491905 root: 0x2/.
May 26 07:32:55 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:32:55 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:32:55 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:32:55 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:32:55 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:32:55 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:32:55 mr-fox kernel: RIP: 0010:__tcp_transmit_skb+0x289/0xd30
May 26 07:32:55 mr-fox kernel: Code: 0f b6 85 80 00 00 00 4c 8b bd c8 00 00 00 83 e2 01 83 e0 ef c1 e2 04 09 d0 88 85 80 00 00 00 0f b7 83 de 02 00 00 66 41 89 07 <0f> b7 43 0c 66 41 89 47 02 8b 45 28 0f c8 41 89 47 04 44 89 e8 0f
May 26 07:32:55 mr-fox kernel: RSP: 0018:ffffa401005f0d10 EFLAGS: 00000202
May 26 07:32:55 mr-fox kernel: RAX: 000000000000e324 RBX: ffff98949c718000 RCX: ffffffffa578da10
May 26 07:32:55 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff98949c718144
May 26 07:32:55 mr-fox kernel: RBP: ffff98992457cb58 R08: 00000000e8ecf2a7 R09: 0000000000000000
May 26 07:32:55 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000020
May 26 07:32:55 mr-fox kernel: R13: 00000000e8ecf2a7 R14: 000022b819abe335 R15: ffff988b4cced2a0
May 26 07:32:55 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:32:55 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:32:55 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:32:55 mr-fox kernel: PKRU: 55555554
May 26 07:32:55 mr-fox kernel: Call Trace:
May 26 07:32:55 mr-fox kernel: <NMI>
May 26 07:32:55 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:32:55 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:32:55 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:32:55 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:32:55 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:32:55 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:32:55 mr-fox kernel: ? sock_efree+0x70/0x70
May 26 07:32:55 mr-fox kernel: ? __tcp_transmit_skb+0x289/0xd30
May 26 07:32:55 mr-fox kernel: ? __tcp_transmit_skb+0x289/0xd30
May 26 07:32:55 mr-fox kernel: ? __tcp_transmit_skb+0x289/0xd30
May 26 07:32:55 mr-fox kernel: </NMI>
May 26 07:32:55 mr-fox kernel: <IRQ>
May 26 07:32:55 mr-fox kernel: __tcp_retransmit_skb+0x1a9/0x800
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: ? __mod_timer+0x115/0x3b0
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: ? retransmits_timed_out.part.0+0x8d/0x170
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 07:32:55 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:32:55 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:32:55 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:32:55 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:32:55 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:32:55 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:32:55 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:32:55 mr-fox kernel: </IRQ>
May 26 07:32:55 mr-fox kernel: <TASK>
May 26 07:32:55 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:32:55 mr-fox kernel: RIP: 0010:xas_load+0x49/0x60
May 26 07:32:55 mr-fox kernel: Code: 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff <80> 7d 00 00 75 bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
May 26 07:32:55 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:32:55 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:32:55 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:32:55 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:32:55 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:32:55 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:32:55 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:32:55 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:32:55 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:32:55 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:32:55 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:32:55 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:32:55 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:32:55 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:32:55 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:32:55 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:32:55 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:32:55 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:32:55 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:32:55 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:32:55 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:32:55 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:32:55 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:32:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:32:55 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:32:55 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:32:55 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:32:55 mr-fox kernel: </TASK>
May 26 07:33:00 mr-fox crond[11048]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:33:00 mr-fox crond[11047]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:33:00 mr-fox crond[11049]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:33:00 mr-fox crond[11052]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:33:00 mr-fox crond[11051]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:33:00 mr-fox CROND[11055]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:33:00 mr-fox CROND[11056]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:33:00 mr-fox CROND[11054]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:33:00 mr-fox CROND[11058]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:33:00 mr-fox CROND[11057]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:33:00 mr-fox CROND[11052]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:33:00 mr-fox CROND[11052]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:33:00 mr-fox CROND[11051]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:33:00 mr-fox CROND[11051]: pam_unix(crond:session): session closed for user root
May 26 07:33:00 mr-fox CROND[11049]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:33:00 mr-fox CROND[11049]: pam_unix(crond:session): session closed for user root
May 26 07:33:01 mr-fox CROND[11525]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:33:01 mr-fox CROND[11525]: pam_unix(crond:session): session closed for user root
May 26 07:34:00 mr-fox crond[14356]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:34:00 mr-fox crond[14355]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:34:00 mr-fox crond[14357]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:34:00 mr-fox crond[14358]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:34:00 mr-fox CROND[14362]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:34:00 mr-fox CROND[14363]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:34:00 mr-fox CROND[14364]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:34:00 mr-fox CROND[14365]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:34:00 mr-fox crond[14359]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:34:00 mr-fox CROND[14368]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:34:00 mr-fox CROND[14359]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:34:00 mr-fox CROND[14359]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:34:00 mr-fox CROND[14358]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:34:00 mr-fox CROND[14358]: pam_unix(crond:session): session closed for user root
May 26 07:34:00 mr-fox CROND[14357]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:34:00 mr-fox CROND[14357]: pam_unix(crond:session): session closed for user root
May 26 07:34:00 mr-fox CROND[11048]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:34:00 mr-fox CROND[11048]: pam_unix(crond:session): session closed for user root
May 26 07:35:00 mr-fox crond[15677]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:35:00 mr-fox crond[15676]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:35:00 mr-fox crond[15678]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:35:00 mr-fox crond[15679]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:35:00 mr-fox crond[15681]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:35:00 mr-fox crond[15675]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:35:00 mr-fox CROND[15684]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:35:00 mr-fox CROND[15685]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:35:00 mr-fox CROND[15686]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:35:00 mr-fox crond[15680]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:35:00 mr-fox CROND[15687]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:35:00 mr-fox crond[15682]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:35:00 mr-fox CROND[15688]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:35:00 mr-fox CROND[15689]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:35:00 mr-fox CROND[15690]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:35:00 mr-fox CROND[15692]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:35:00 mr-fox CROND[15675]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:35:00 mr-fox CROND[15675]: pam_unix(crond:session): session closed for user torproject
May 26 07:35:00 mr-fox CROND[15682]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:35:00 mr-fox CROND[15682]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:35:00 mr-fox CROND[15679]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:35:00 mr-fox CROND[15679]: pam_unix(crond:session): session closed for user root
May 26 07:35:00 mr-fox CROND[15680]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:35:00 mr-fox CROND[15680]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:35:00 mr-fox CROND[15678]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:35:00 mr-fox CROND[15678]: pam_unix(crond:session): session closed for user root
May 26 07:35:00 mr-fox CROND[14356]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:35:00 mr-fox CROND[14356]: pam_unix(crond:session): session closed for user root
May 26 07:35:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:35:45 mr-fox kernel: rcu: 	21-....: (1410120 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=557083
May 26 07:35:45 mr-fox kernel: rcu: 	(t=1410121 jiffies g=8794409 q=41521604 ncpus=32)
May 26 07:35:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:35:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:35:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:35:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:35:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:35:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:35:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:35:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:35:45 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:35:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:35:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:35:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:35:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:35:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:35:45 mr-fox kernel: PKRU: 55555554
May 26 07:35:45 mr-fox kernel: Call Trace:
May 26 07:35:45 mr-fox kernel: <IRQ>
May 26 07:35:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:35:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:35:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:35:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:35:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:35:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:35:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:35:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:35:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:35:45 mr-fox kernel: </IRQ>
May 26 07:35:45 mr-fox kernel: <TASK>
May 26 07:35:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:35:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 07:35:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:35:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:35:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:35:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:35:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:35:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:35:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:35:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:35:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:35:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:35:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:35:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:35:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:35:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:35:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:35:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:35:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:35:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:35:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:35:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:35:45 mr-fox kernel: </TASK>
May 26 07:35:56 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1412720 jiffies s: 491905 root: 0x2/.
May 26 07:35:56 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:35:56 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:35:56 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:35:56 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:35:56 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:35:56 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:35:56 mr-fox kernel: RIP: 0010:raw_local_deliver+0x7e/0x1d0
May 26 07:35:56 mr-fox kernel: Code: 83 fa 01 76 10 48 83 e2 fe 44 8b b2 94 00 00 00 45 85 f6 75 07 44 8b b5 90 00 00 00 48 83 c0 08 31 c9 48 8b 1c c5 80 9f ad a6 <48> 85 db 74 1e 48 83 eb 68 74 18 49 01 f5 4c 3b 63 30 74 41 48 8b
May 26 07:35:56 mr-fox kernel: RSP: 0018:ffffa401005f0b50 EFLAGS: 00000246
May 26 07:35:56 mr-fox kernel: RAX: 0000000000000082 RBX: 0000000000000000 RCX: 0000000000000000
May 26 07:35:56 mr-fox kernel: RDX: ffff988ac3005b00 RSI: 00000000000000ce RDI: ffff988bce2d7800
May 26 07:35:56 mr-fox kernel: RBP: ffff988bce2d7800 R08: 0000000000000000 R09: 0000000000000000
May 26 07:35:56 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffffffa6ad7f40
May 26 07:35:56 mr-fox kernel: R13: ffff989ef929e000 R14: 0000000000000002 R15: ffff988ac3005b00
May 26 07:35:56 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:35:56 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:35:56 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:35:56 mr-fox kernel: PKRU: 55555554
May 26 07:35:56 mr-fox kernel: Call Trace:
May 26 07:35:56 mr-fox kernel: <NMI>
May 26 07:35:56 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:35:56 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:35:56 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:35:56 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:35:56 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:35:56 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:35:56 mr-fox kernel: ? raw_local_deliver+0x7e/0x1d0
May 26 07:35:56 mr-fox kernel: ? raw_local_deliver+0x7e/0x1d0
May 26 07:35:56 mr-fox kernel: ? raw_local_deliver+0x7e/0x1d0
May 26 07:35:56 mr-fox kernel: </NMI>
May 26 07:35:56 mr-fox kernel: <IRQ>
May 26 07:35:56 mr-fox kernel: ? nf_hook_slow+0x3c/0x100
May 26 07:35:56 mr-fox kernel: ip_protocol_deliver_rcu+0x4b/0x180
May 26 07:35:56 mr-fox kernel: ip_local_deliver_finish+0x74/0xa0
May 26 07:35:56 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 07:35:56 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 07:35:56 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 07:35:56 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 07:35:56 mr-fox kernel: __netif_receive_skb_list_core+0x2a5/0x2d0
May 26 07:35:56 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 07:35:56 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 07:35:56 mr-fox kernel: igb_poll+0x605/0x1370
May 26 07:35:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:56 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 07:35:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:56 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:35:56 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:35:56 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:35:56 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:35:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:56 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:35:56 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:35:56 mr-fox kernel: </IRQ>
May 26 07:35:56 mr-fox kernel: <TASK>
May 26 07:35:56 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:35:56 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6a/0x160
May 26 07:35:56 mr-fox kernel: Code: 28 00 00 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 <48> 89 c3 48 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0
May 26 07:35:56 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 07:35:56 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 07:35:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:35:56 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:35:56 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:35:56 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:35:56 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 07:35:56 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:35:56 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:35:56 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:35:56 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:35:56 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:35:56 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:35:56 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:35:56 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:35:56 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:35:56 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:35:56 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:35:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:35:56 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:35:56 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:35:56 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:35:56 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:35:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:35:56 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:35:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:35:56 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:35:56 mr-fox kernel: </TASK>
May 26 07:36:00 mr-fox crond[18040]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:36:00 mr-fox crond[18037]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:36:00 mr-fox crond[18041]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:36:00 mr-fox crond[18042]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:36:00 mr-fox CROND[18045]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:36:00 mr-fox crond[18039]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:36:00 mr-fox CROND[18047]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:36:00 mr-fox CROND[18046]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:36:00 mr-fox CROND[18048]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:36:00 mr-fox CROND[18050]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:36:00 mr-fox CROND[18042]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:36:00 mr-fox CROND[18042]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:36:00 mr-fox CROND[18041]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:36:00 mr-fox CROND[18041]: pam_unix(crond:session): session closed for user root
May 26 07:36:01 mr-fox CROND[18040]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:36:01 mr-fox CROND[18040]: pam_unix(crond:session): session closed for user root
May 26 07:36:01 mr-fox CROND[15677]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:36:01 mr-fox CROND[15677]: pam_unix(crond:session): session closed for user root
May 26 07:37:00 mr-fox crond[19724]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:37:00 mr-fox crond[19725]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:37:00 mr-fox crond[19723]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:37:00 mr-fox crond[19726]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:37:00 mr-fox CROND[19733]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:37:00 mr-fox CROND[19734]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:37:00 mr-fox CROND[19731]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:37:00 mr-fox CROND[19732]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:37:00 mr-fox crond[19728]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:37:00 mr-fox CROND[19737]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:37:00 mr-fox CROND[19728]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:37:00 mr-fox CROND[19728]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:37:00 mr-fox CROND[19726]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:37:00 mr-fox CROND[19726]: pam_unix(crond:session): session closed for user root
May 26 07:37:00 mr-fox CROND[19725]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:37:00 mr-fox CROND[19725]: pam_unix(crond:session): session closed for user root
May 26 07:37:01 mr-fox CROND[18039]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:37:01 mr-fox CROND[18039]: pam_unix(crond:session): session closed for user root
May 26 07:38:00 mr-fox crond[21717]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:38:00 mr-fox crond[21718]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:38:00 mr-fox crond[21720]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:38:00 mr-fox CROND[21724]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:38:00 mr-fox crond[21721]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:38:00 mr-fox CROND[21725]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:38:00 mr-fox crond[21719]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:38:00 mr-fox CROND[21727]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:38:00 mr-fox CROND[21728]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:38:00 mr-fox CROND[21729]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:38:00 mr-fox CROND[21721]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:38:00 mr-fox CROND[21721]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:38:00 mr-fox CROND[21720]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:38:00 mr-fox CROND[21720]: pam_unix(crond:session): session closed for user root
May 26 07:38:00 mr-fox CROND[21719]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:38:00 mr-fox CROND[21719]: pam_unix(crond:session): session closed for user root
May 26 07:38:00 mr-fox CROND[19724]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:38:00 mr-fox CROND[19724]: pam_unix(crond:session): session closed for user root
May 26 07:38:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:38:45 mr-fox kernel: rcu: \x0921-....: (1455124 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=576097
May 26 07:38:45 mr-fox kernel: rcu: \x09(t=1455125 jiffies g=8794409 q=42553763 ncpus=32)
May 26 07:38:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:38:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:38:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:38:45 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 07:38:45 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 07:38:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:38:45 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 07:38:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 07:38:45 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:38:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:38:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:38:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:38:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:38:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:38:45 mr-fox kernel: PKRU: 55555554
May 26 07:38:45 mr-fox kernel: Call Trace:
May 26 07:38:45 mr-fox kernel: <IRQ>
May 26 07:38:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:38:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:38:45 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 07:38:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:38:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:38:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:38:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:38:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:38:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:38:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:38:45 mr-fox kernel: </IRQ>
May 26 07:38:45 mr-fox kernel: <TASK>
May 26 07:38:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:38:45 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 07:38:45 mr-fox kernel: xas_load+0x49/0x60
May 26 07:38:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:38:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:38:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:38:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:38:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:38:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:38:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:38:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:38:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:38:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:38:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:38:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:38:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:38:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:38:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:38:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:38:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:38:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:38:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:38:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:38:45 mr-fox kernel: </TASK>
May 26 07:38:56 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1457775 jiffies s: 491905 root: 0x2/.
May 26 07:38:56 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:38:56 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:38:56 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:38:56 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:38:56 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:38:56 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:38:56 mr-fox kernel: RIP: 0010:__inet_lookup_established+0x60/0x140
May 26 07:38:56 mr-fox kernel: Code: e1 10 45 09 c1 44 89 cd e8 fd fd ff ff 41 89 c7 41 89 c4 48 8b 03 44 23 7b 10 4a 8d 0c f8 48 8b 19 f6 c3 01 75 0e 44 39 63 a0 <74> 33 48 8b 1b f6 c3 01 74 f2 48 d1 eb 49 39 df 75 e2 31 c0 48 83
May 26 07:38:56 mr-fox kernel: RSP: 0018:ffffa401005f0a98 EFLAGS: 00000246
May 26 07:38:56 mr-fox kernel: RAX: ffff988ac2400000 RBX: ffff988df40aca68 RCX: ffff988ac242ce10
May 26 07:38:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:38:56 mr-fox kernel: RBP: 00000000e6a02923 R08: 0000000000000000 R09: 00000000e6a02923
May 26 07:38:56 mr-fox kernel: R10: ffff98926e6be8ce R11: 000000000d5e1541 R12: 00000000841059c2
May 26 07:38:56 mr-fox kernel: R13: 0d5e15410e20d85f R14: ffffffffa6ad7f40 R15: 00000000000059c2
May 26 07:38:56 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:38:56 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:38:56 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:38:56 mr-fox kernel: PKRU: 55555554
May 26 07:38:56 mr-fox kernel: Call Trace:
May 26 07:38:56 mr-fox kernel: <NMI>
May 26 07:38:56 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:38:56 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:38:56 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:38:56 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:38:56 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:38:56 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:38:56 mr-fox kernel: ? __inet_lookup_established+0x60/0x140
May 26 07:38:56 mr-fox kernel: ? __inet_lookup_established+0x60/0x140
May 26 07:38:56 mr-fox kernel: ? __inet_lookup_established+0x60/0x140
May 26 07:38:56 mr-fox kernel: </NMI>
May 26 07:38:56 mr-fox kernel: <IRQ>
May 26 07:38:56 mr-fox kernel: tcp_v4_rcv+0x3c8/0xea0
May 26 07:38:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:56 mr-fox kernel: ip_protocol_deliver_rcu+0x32/0x180
May 26 07:38:56 mr-fox kernel: ip_local_deliver_finish+0x74/0xa0
May 26 07:38:56 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 07:38:56 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 07:38:56 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 07:38:56 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 07:38:56 mr-fox kernel: __netif_receive_skb_list_core+0x2a5/0x2d0
May 26 07:38:56 mr-fox kernel: ? tcp_gro_receive+0x1d7/0x330
May 26 07:38:56 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 07:38:56 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 07:38:56 mr-fox kernel: igb_poll+0x605/0x1370
May 26 07:38:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:56 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 07:38:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:56 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:38:56 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:38:56 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:38:56 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:38:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:56 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:38:56 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:38:56 mr-fox kernel: </IRQ>
May 26 07:38:56 mr-fox kernel: <TASK>
May 26 07:38:56 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:38:56 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 07:38:56 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 07:38:56 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000246
May 26 07:38:56 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:38:56 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 07:38:56 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:38:56 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:38:56 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:38:56 mr-fox kernel: xas_load+0x49/0x60
May 26 07:38:56 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:38:56 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:38:56 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:38:56 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:38:56 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:38:56 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:38:56 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:38:56 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:38:56 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:38:56 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:38:56 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:38:56 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:38:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:38:56 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:38:56 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:38:56 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:38:56 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:38:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:38:56 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:38:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:38:56 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:38:56 mr-fox kernel: </TASK>
May 26 07:39:00 mr-fox crond[23323]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:39:00 mr-fox crond[23324]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:39:00 mr-fox crond[23326]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:39:00 mr-fox crond[23325]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:39:00 mr-fox crond[23327]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:39:00 mr-fox CROND[23330]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:39:00 mr-fox CROND[23331]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:39:00 mr-fox CROND[23332]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:39:00 mr-fox CROND[23333]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:39:00 mr-fox CROND[23334]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:39:00 mr-fox CROND[23327]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:39:00 mr-fox CROND[23327]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:39:00 mr-fox CROND[23326]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:39:00 mr-fox CROND[23326]: pam_unix(crond:session): session closed for user root
May 26 07:39:00 mr-fox CROND[23325]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:39:00 mr-fox CROND[23325]: pam_unix(crond:session): session closed for user root
May 26 07:39:01 mr-fox CROND[21718]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:39:01 mr-fox CROND[21718]: pam_unix(crond:session): session closed for user root
May 26 07:40:00 mr-fox crond[24748]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:40:00 mr-fox crond[24750]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:40:00 mr-fox crond[24749]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:40:00 mr-fox crond[24747]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:40:00 mr-fox crond[24746]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:40:00 mr-fox crond[24751]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:40:00 mr-fox CROND[24757]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:40:00 mr-fox crond[24753]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:40:00 mr-fox CROND[24759]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:40:00 mr-fox CROND[24760]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:40:00 mr-fox crond[24754]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:40:00 mr-fox CROND[24758]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:40:00 mr-fox CROND[24762]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:40:00 mr-fox CROND[24763]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:40:00 mr-fox CROND[24764]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:40:00 mr-fox CROND[24765]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:40:00 mr-fox CROND[24746]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:40:00 mr-fox CROND[24746]: pam_unix(crond:session): session closed for user torproject
May 26 07:40:00 mr-fox CROND[24754]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:40:00 mr-fox CROND[24754]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:40:00 mr-fox CROND[24750]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:40:00 mr-fox CROND[24750]: pam_unix(crond:session): session closed for user root
May 26 07:40:00 mr-fox CROND[24751]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:40:00 mr-fox CROND[24751]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:40:00 mr-fox CROND[24749]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:40:00 mr-fox CROND[24749]: pam_unix(crond:session): session closed for user root
May 26 07:40:01 mr-fox CROND[23324]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:40:01 mr-fox CROND[23324]: pam_unix(crond:session): session closed for user root
May 26 07:41:00 mr-fox crond[26598]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:41:00 mr-fox crond[26600]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:41:00 mr-fox crond[26599]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:41:00 mr-fox crond[26602]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:41:00 mr-fox crond[26601]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:41:00 mr-fox CROND[26607]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:41:00 mr-fox CROND[26606]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:41:00 mr-fox CROND[26608]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:41:00 mr-fox CROND[26609]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:41:00 mr-fox CROND[26610]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:41:00 mr-fox CROND[26602]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:41:00 mr-fox CROND[26602]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:41:00 mr-fox CROND[26601]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:41:00 mr-fox CROND[26601]: pam_unix(crond:session): session closed for user root
May 26 07:41:00 mr-fox CROND[26600]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:41:00 mr-fox CROND[26600]: pam_unix(crond:session): session closed for user root
May 26 07:41:00 mr-fox CROND[24748]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:41:00 mr-fox CROND[24748]: pam_unix(crond:session): session closed for user root
May 26 07:41:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:41:45 mr-fox kernel: rcu: 	21-....: (1500128 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=595308
May 26 07:41:45 mr-fox kernel: rcu: 	(t=1500129 jiffies g=8794409 q=43595347 ncpus=32)
May 26 07:41:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:41:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:41:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:41:45 mr-fox kernel: RIP: 0010:xas_descend+0x1a/0xd0
May 26 07:41:45 mr-fox kernel: Code: 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f <0f> 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5
May 26 07:41:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000293
May 26 07:41:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: 000000004cd994a1 RCX: 0000000000000000
May 26 07:41:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 07:41:45 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:41:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:41:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:41:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:41:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:41:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:41:45 mr-fox kernel: PKRU: 55555554
May 26 07:41:45 mr-fox kernel: Call Trace:
May 26 07:41:45 mr-fox kernel: <IRQ>
May 26 07:41:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:41:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:41:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 07:41:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:41:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:41:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:41:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:41:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:41:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:41:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:41:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:41:45 mr-fox kernel: </IRQ>
May 26 07:41:45 mr-fox kernel: <TASK>
May 26 07:41:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:41:45 mr-fox kernel: ? xas_descend+0x1a/0xd0
May 26 07:41:45 mr-fox kernel: xas_load+0x49/0x60
May 26 07:41:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:41:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:41:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:41:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:41:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:41:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:41:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:41:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:41:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:41:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:41:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:41:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:41:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:41:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:41:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:41:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:41:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:41:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:41:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:41:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:41:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:41:45 mr-fox kernel: </TASK>
May 26 07:41:56 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1502831 jiffies s: 491905 root: 0x2/.
May 26 07:41:56 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:41:56 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:41:56 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:41:56 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:41:56 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:41:56 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:41:56 mr-fox kernel: RIP: 0010:rb_next+0x1c/0x60
May 26 07:41:56 mr-fox kernel: Code: e7 48 8b 14 24 4c 8b 65 10 e9 17 ff ff ff f3 0f 1e fa 48 8b 0f 48 39 cf 74 39 48 8b 57 08 48 85 d2 74 23 48 89 d0 48 8b 52 10 <48> 85 d2 75 f4 31 d2 31 c9 31 ff e9 0f 61 1b 00 48 3b 78 08 75 15
May 26 07:41:56 mr-fox kernel: RSP: 0018:ffffa401005f0e18 EFLAGS: 00000286
May 26 07:41:56 mr-fox kernel: RAX: ffff988d57d564c0 RBX: ffff989969200e00 RCX: ffff988d17cee141
May 26 07:41:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff989969200e00
May 26 07:41:56 mr-fox kernel: RBP: ffff988b47c8f100 R08: 0000000000000000 R09: 0000000000080300
May 26 07:41:56 mr-fox kernel: R10: 0000000000000026 R11: 0000000000000000 R12: 0000000000000000
May 26 07:41:56 mr-fox kernel: R13: 0000000000000004 R14: ffffffffa6ad7f40 R15: ffff98938f4e4a00
May 26 07:41:56 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:41:56 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:41:56 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:41:56 mr-fox kernel: PKRU: 55555554
May 26 07:41:56 mr-fox kernel: Call Trace:
May 26 07:41:56 mr-fox kernel: <NMI>
May 26 07:41:56 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:41:56 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:41:56 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:41:56 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:41:56 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:41:56 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:41:56 mr-fox kernel: ? rb_next+0x1c/0x60
May 26 07:41:56 mr-fox kernel: ? rb_next+0x1c/0x60
May 26 07:41:56 mr-fox kernel: ? rb_next+0x1c/0x60
May 26 07:41:56 mr-fox kernel: </NMI>
May 26 07:41:56 mr-fox kernel: <IRQ>
May 26 07:41:56 mr-fox kernel: tcp_enter_loss+0x8b/0x370
May 26 07:41:56 mr-fox kernel: tcp_retransmit_timer+0x434/0xa60
May 26 07:41:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:41:56 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 07:41:56 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 07:41:56 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:41:56 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:41:56 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:41:56 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:41:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:41:56 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:41:56 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:41:56 mr-fox kernel: </IRQ>
May 26 07:41:56 mr-fox kernel: <TASK>
May 26 07:41:56 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:41:56 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:41:56 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:41:56 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:41:56 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:41:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:41:56 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:41:56 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:41:56 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:41:56 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:41:56 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:41:56 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:41:56 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:41:56 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:41:56 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:41:56 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:41:56 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:41:56 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:41:56 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:41:56 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:41:56 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:41:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:41:56 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:41:56 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:41:56 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:41:56 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:41:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:41:56 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:41:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:41:56 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:41:56 mr-fox kernel: </TASK>
May 26 07:42:00 mr-fox crond[29761]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:42:00 mr-fox crond[29762]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:42:00 mr-fox crond[29760]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:42:00 mr-fox CROND[29767]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:42:00 mr-fox crond[29763]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:42:00 mr-fox crond[29764]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:42:00 mr-fox CROND[29768]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:42:00 mr-fox CROND[29769]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:42:00 mr-fox CROND[29770]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:42:00 mr-fox CROND[29771]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:42:00 mr-fox CROND[29764]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:42:00 mr-fox CROND[29764]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:42:00 mr-fox CROND[29763]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:42:00 mr-fox CROND[29763]: pam_unix(crond:session): session closed for user root
May 26 07:42:00 mr-fox CROND[29762]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:42:00 mr-fox CROND[29762]: pam_unix(crond:session): session closed for user root
May 26 07:42:00 mr-fox CROND[26599]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:42:00 mr-fox CROND[26599]: pam_unix(crond:session): session closed for user root
May 26 07:43:00 mr-fox crond[31688]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:43:00 mr-fox crond[31689]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:43:00 mr-fox crond[31686]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:43:00 mr-fox crond[31690]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:43:00 mr-fox crond[31687]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:43:00 mr-fox CROND[31695]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:43:00 mr-fox CROND[31696]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:43:00 mr-fox CROND[31697]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:43:00 mr-fox CROND[31694]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:43:00 mr-fox CROND[31698]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:43:00 mr-fox CROND[31690]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:43:00 mr-fox CROND[31690]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:43:00 mr-fox CROND[31689]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:43:00 mr-fox CROND[31689]: pam_unix(crond:session): session closed for user root
May 26 07:43:00 mr-fox CROND[31688]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:43:00 mr-fox CROND[31688]: pam_unix(crond:session): session closed for user root
May 26 07:43:01 mr-fox CROND[29761]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:43:01 mr-fox CROND[29761]: pam_unix(crond:session): session closed for user root
May 26 07:44:00 mr-fox crond[31641]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:44:00 mr-fox crond[31643]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:44:00 mr-fox crond[31642]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:44:00 mr-fox crond[31645]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:44:00 mr-fox CROND[31649]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:44:00 mr-fox CROND[31651]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:44:00 mr-fox crond[31644]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:44:00 mr-fox CROND[31652]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:44:00 mr-fox CROND[31653]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:44:00 mr-fox CROND[31654]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:44:00 mr-fox CROND[31645]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:44:00 mr-fox CROND[31645]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:44:00 mr-fox CROND[31644]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:44:00 mr-fox CROND[31644]: pam_unix(crond:session): session closed for user root
May 26 07:44:00 mr-fox CROND[31643]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:44:00 mr-fox CROND[31643]: pam_unix(crond:session): session closed for user root
May 26 07:44:01 mr-fox CROND[31687]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:44:01 mr-fox CROND[31687]: pam_unix(crond:session): session closed for user root
May 26 07:44:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:44:45 mr-fox kernel: rcu: 	21-....: (1545132 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=614498
May 26 07:44:45 mr-fox kernel: rcu: 	(t=1545133 jiffies g=8794409 q=44664468 ncpus=32)
May 26 07:44:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:44:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:44:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:44:45 mr-fox kernel: RIP: 0010:xas_load+0x11/0x60
May 26 07:44:45 mr-fox kernel: Code: 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 <83> e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31
May 26 07:44:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:44:45 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:44:45 mr-fox kernel: RDX: ffff988ac0754ffa RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:44:45 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 07:44:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:44:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:44:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:44:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:44:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:44:45 mr-fox kernel: PKRU: 55555554
May 26 07:44:45 mr-fox kernel: Call Trace:
May 26 07:44:45 mr-fox kernel: <IRQ>
May 26 07:44:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:44:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:44:45 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 07:44:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:44:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:44:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:44:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:44:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:44:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:44:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:44:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:44:45 mr-fox kernel: </IRQ>
May 26 07:44:45 mr-fox kernel: <TASK>
May 26 07:44:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:44:45 mr-fox kernel: ? xas_load+0x11/0x60
May 26 07:44:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:44:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:44:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:44:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:44:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:44:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:44:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:44:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:44:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:44:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:44:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:44:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:44:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:44:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:44:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:44:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:44:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:44:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:44:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:44:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:44:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:44:45 mr-fox kernel: </TASK>
May 26 07:44:56 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1547888 jiffies s: 491905 root: 0x2/.
May 26 07:44:56 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:44:56 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:44:56 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:44:56 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:44:56 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:44:56 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:44:56 mr-fox kernel: RIP: 0010:igb_poll+0x153/0x1370
May 26 07:44:56 mr-fox kernel: Code: c3 38 4d 39 e7 75 e2 49 83 c4 10 41 83 c6 01 0f 84 be 0a 00 00 41 0f 18 0c 24 83 6c 24 18 01 0f 85 5c ff ff ff 41 0f b6 45 4e <49> 8b 55 08 4c 8b 5c 24 10 4c 8d 3c 80 49 c1 e7 06 4c 03 7a 18 85
May 26 07:44:56 mr-fox kernel: RSP: 0018:ffffa401005f0df8 EFLAGS: 00000246
May 26 07:44:56 mr-fox kernel: RAX: 0000000000000002 RBX: ffffa40107bf0298 RCX: 0000000000000000
May 26 07:44:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:44:56 mr-fox kernel: RBP: 000000000000db2c R08: 0000000000000000 R09: 0000000000000000
May 26 07:44:56 mr-fox kernel: R10: ffff98a279748240 R11: 0000000000000000 R12: ffffa40100119550
May 26 07:44:56 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000ffffff55 R15: 0000000000000000
May 26 07:44:56 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:44:56 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:44:56 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:44:56 mr-fox kernel: PKRU: 55555554
May 26 07:44:56 mr-fox kernel: Call Trace:
May 26 07:44:56 mr-fox kernel: <NMI>
May 26 07:44:56 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:44:56 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:44:56 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:44:56 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:44:56 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:44:56 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:44:56 mr-fox kernel: ? igb_poll+0x153/0x1370
May 26 07:44:56 mr-fox kernel: ? igb_poll+0x153/0x1370
May 26 07:44:56 mr-fox kernel: ? igb_poll+0x153/0x1370
May 26 07:44:56 mr-fox kernel: </NMI>
May 26 07:44:56 mr-fox kernel: <IRQ>
May 26 07:44:56 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:44:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:44:56 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:44:56 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:44:56 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:44:56 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:44:56 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:44:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:44:56 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:44:56 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:44:56 mr-fox kernel: </IRQ>
May 26 07:44:56 mr-fox kernel: <TASK>
May 26 07:44:56 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:44:56 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:44:56 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:44:56 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 07:44:56 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:44:56 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:44:56 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:44:56 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:44:56 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:44:56 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:44:56 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:44:56 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:44:56 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:44:56 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:44:56 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:44:56 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:44:56 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:44:56 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:44:56 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:44:56 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:44:56 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:44:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:44:56 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:44:56 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:44:56 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:44:56 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:44:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:44:56 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:44:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:44:56 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:44:56 mr-fox kernel: </TASK>
May 26 07:45:00 mr-fox crond[3101]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:45:00 mr-fox crond[3097]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:45:00 mr-fox crond[3099]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:45:00 mr-fox crond[3098]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:45:00 mr-fox crond[3102]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:45:00 mr-fox CROND[3106]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:45:00 mr-fox CROND[3107]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:45:00 mr-fox CROND[3110]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:45:00 mr-fox crond[3105]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:45:00 mr-fox crond[3104]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:45:00 mr-fox CROND[3115]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:45:00 mr-fox crond[3103]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:45:00 mr-fox CROND[3117]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:45:00 mr-fox CROND[3118]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:45:00 mr-fox CROND[3119]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:45:00 mr-fox CROND[3120]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:45:00 mr-fox CROND[3097]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:45:00 mr-fox CROND[3097]: pam_unix(crond:session): session closed for user torproject
May 26 07:45:00 mr-fox CROND[3105]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:45:00 mr-fox CROND[3105]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:45:00 mr-fox CROND[3102]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:45:00 mr-fox CROND[3102]: pam_unix(crond:session): session closed for user root
May 26 07:45:00 mr-fox CROND[3103]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:45:00 mr-fox CROND[3103]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:45:00 mr-fox CROND[3101]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:45:00 mr-fox CROND[3101]: pam_unix(crond:session): session closed for user root
May 26 07:45:00 mr-fox CROND[31642]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:45:00 mr-fox CROND[31642]: pam_unix(crond:session): session closed for user root
May 26 07:46:00 mr-fox crond[25209]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:46:00 mr-fox crond[25211]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:46:00 mr-fox crond[25212]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:46:00 mr-fox crond[25218]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:46:00 mr-fox CROND[25220]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:46:00 mr-fox crond[25210]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:46:00 mr-fox CROND[25221]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:46:00 mr-fox CROND[25222]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:46:00 mr-fox CROND[25223]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:46:00 mr-fox CROND[25225]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:46:00 mr-fox CROND[25218]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:46:00 mr-fox CROND[25218]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:46:00 mr-fox CROND[25212]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:46:00 mr-fox CROND[25212]: pam_unix(crond:session): session closed for user root
May 26 07:46:00 mr-fox CROND[3099]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:46:00 mr-fox CROND[3099]: pam_unix(crond:session): session closed for user root
May 26 07:46:00 mr-fox CROND[25211]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:46:00 mr-fox CROND[25211]: pam_unix(crond:session): session closed for user root
May 26 07:47:00 mr-fox crond[27690]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:47:00 mr-fox crond[27693]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:47:00 mr-fox crond[27692]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:47:00 mr-fox crond[27695]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:47:00 mr-fox crond[27691]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:47:00 mr-fox CROND[27698]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:47:00 mr-fox CROND[27699]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:47:00 mr-fox CROND[27701]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:47:00 mr-fox CROND[27702]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:47:00 mr-fox CROND[27703]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:47:00 mr-fox CROND[27695]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:47:00 mr-fox CROND[27695]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:47:00 mr-fox CROND[27693]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:47:00 mr-fox CROND[27693]: pam_unix(crond:session): session closed for user root
May 26 07:47:01 mr-fox CROND[27692]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:47:01 mr-fox CROND[27692]: pam_unix(crond:session): session closed for user root
May 26 07:47:01 mr-fox CROND[25210]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:47:01 mr-fox CROND[25210]: pam_unix(crond:session): session closed for user root
May 26 07:47:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:47:45 mr-fox kernel: rcu: 	21-....: (1590136 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=633716
May 26 07:47:45 mr-fox kernel: rcu: 	(t=1590137 jiffies g=8794409 q=46116498 ncpus=32)
May 26 07:47:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:47:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:47:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:47:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:47:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:47:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:47:45 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:47:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:47:45 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:47:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:47:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:47:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:47:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:47:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:47:45 mr-fox kernel: PKRU: 55555554
May 26 07:47:45 mr-fox kernel: Call Trace:
May 26 07:47:45 mr-fox kernel: <IRQ>
May 26 07:47:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:47:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:47:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:47:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:47:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:47:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:47:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:47:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:47:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:47:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:47:45 mr-fox kernel: </IRQ>
May 26 07:47:45 mr-fox kernel: <TASK>
May 26 07:47:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:47:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 07:47:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:47:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:47:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:47:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:47:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:47:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:47:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:47:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:47:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:47:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:47:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:47:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:47:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:47:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:47:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:47:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:47:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:47:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:47:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:47:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:47:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:47:45 mr-fox kernel: </TASK>
May 26 07:47:56 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1592944 jiffies s: 491905 root: 0x2/.
May 26 07:47:56 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:47:56 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:47:56 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:47:56 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:47:56 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:47:56 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:47:56 mr-fox kernel: RIP: 0010:ep_poll_callback+0x20/0x280
May 26 07:47:56 mr-fox kernel: Code: ff e9 3f 97 80 00 0f 1f 40 00 f3 0f 1e fa 41 57 41 56 41 55 41 54 49 89 fc 55 48 89 cd 53 48 83 ec 18 48 8b 5f f8 4c 8b 7b 48 <4d> 8d 77 60 4c 89 f7 e8 d4 65 6b 00 48 89 44 24 08 8b 05 49 8c 1a
May 26 07:47:56 mr-fox kernel: RSP: 0018:ffffa401005f0930 EFLAGS: 00000086
May 26 07:47:56 mr-fox kernel: RAX: ffffffffa52fa9d0 RBX: ffff988dc9583d80 RCX: 00000000000000c3
May 26 07:47:56 mr-fox kernel: RDX: 0000000000000010 RSI: 0000000000000001 RDI: ffff988dc9582f10
May 26 07:47:56 mr-fox kernel: RBP: 00000000000000c3 R08: ffff988dc3fe98c8 R09: 0000000000000000
May 26 07:47:56 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988dc9582f10
May 26 07:47:56 mr-fox kernel: R13: 0000000000000001 R14: 0000000000000010 R15: ffff988ad0a32480
May 26 07:47:56 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:47:56 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:47:56 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:47:56 mr-fox kernel: PKRU: 55555554
May 26 07:47:56 mr-fox kernel: Call Trace:
May 26 07:47:56 mr-fox kernel: <NMI>
May 26 07:47:56 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:47:56 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:47:56 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:47:56 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:47:56 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:47:56 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:47:56 mr-fox kernel: ? ep_autoremove_wake_function+0x50/0x50
May 26 07:47:56 mr-fox kernel: ? ep_poll_callback+0x20/0x280
May 26 07:47:56 mr-fox kernel: ? ep_poll_callback+0x20/0x280
May 26 07:47:56 mr-fox kernel: ? ep_poll_callback+0x20/0x280
May 26 07:47:56 mr-fox kernel: </NMI>
May 26 07:47:56 mr-fox kernel: <IRQ>
May 26 07:47:56 mr-fox kernel: __wake_up_common+0x73/0xa0
May 26 07:47:56 mr-fox kernel: __wake_up_sync_key+0x36/0x60
May 26 07:47:56 mr-fox kernel: sock_def_readable+0x36/0x80
May 26 07:47:56 mr-fox kernel: tcp_data_queue+0x8ce/0x1090
May 26 07:47:56 mr-fox kernel: tcp_rcv_established+0x1f2/0x6c0
May 26 07:47:56 mr-fox kernel: ? sk_filter_trim_cap+0x40/0x220
May 26 07:47:56 mr-fox kernel: tcp_v4_do_rcv+0x153/0x240
May 26 07:47:56 mr-fox kernel: tcp_v4_rcv+0xe00/0xea0
May 26 07:47:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:47:56 mr-fox kernel: ip_protocol_deliver_rcu+0x32/0x180
May 26 07:47:56 mr-fox kernel: ip_local_deliver_finish+0x74/0xa0
May 26 07:47:56 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 07:47:56 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 07:47:56 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 07:47:56 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 07:47:56 mr-fox kernel: __netif_receive_skb_list_core+0x293/0x2d0
May 26 07:47:56 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 07:47:56 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 07:47:56 mr-fox kernel: igb_poll+0x605/0x1370
May 26 07:47:56 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:47:56 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:47:56 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:47:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:47:56 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:47:56 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:47:56 mr-fox kernel: </IRQ>
May 26 07:47:56 mr-fox kernel: <TASK>
May 26 07:47:56 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:47:56 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 07:47:56 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 07:47:56 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:47:56 mr-fox kernel: RAX: ffff988ac07566ca RBX: 000000000000000c RCX: 0000000000000018
May 26 07:47:56 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 07:47:56 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:47:56 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:47:56 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:47:56 mr-fox kernel: xas_load+0x49/0x60
May 26 07:47:56 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:47:56 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:47:56 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:47:56 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:47:56 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:47:56 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:47:56 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:47:56 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:47:56 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:47:56 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:47:56 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:47:56 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:47:56 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:47:56 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:47:56 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:47:56 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:47:56 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:47:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:47:56 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:47:56 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:47:56 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:47:56 mr-fox kernel: </TASK>
May 26 07:48:00 mr-fox crond[19993]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:48:00 mr-fox crond[19994]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:48:00 mr-fox crond[19991]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:48:00 mr-fox crond[19996]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:48:00 mr-fox CROND[20001]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:48:00 mr-fox CROND[20000]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:48:00 mr-fox CROND[19999]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:48:00 mr-fox crond[19995]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:48:00 mr-fox CROND[20002]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:48:00 mr-fox CROND[20003]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:48:00 mr-fox CROND[19996]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:48:00 mr-fox CROND[19996]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:48:00 mr-fox CROND[19995]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:48:00 mr-fox CROND[19995]: pam_unix(crond:session): session closed for user root
May 26 07:48:00 mr-fox CROND[19994]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:48:00 mr-fox CROND[19994]: pam_unix(crond:session): session closed for user root
May 26 07:48:01 mr-fox CROND[27691]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:48:01 mr-fox CROND[27691]: pam_unix(crond:session): session closed for user root
May 26 07:49:00 mr-fox crond[23097]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:49:00 mr-fox crond[23099]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:49:00 mr-fox crond[23101]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:49:00 mr-fox crond[23102]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:49:00 mr-fox CROND[23106]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:49:00 mr-fox crond[23100]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:49:00 mr-fox CROND[23107]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:49:00 mr-fox CROND[23109]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:49:00 mr-fox CROND[23108]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:49:00 mr-fox CROND[23111]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:49:00 mr-fox CROND[23102]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:49:00 mr-fox CROND[23102]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:49:00 mr-fox CROND[23101]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:49:00 mr-fox CROND[23101]: pam_unix(crond:session): session closed for user root
May 26 07:49:00 mr-fox CROND[23100]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:49:00 mr-fox CROND[23100]: pam_unix(crond:session): session closed for user root
May 26 07:49:00 mr-fox CROND[19993]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:49:00 mr-fox CROND[19993]: pam_unix(crond:session): session closed for user root
May 26 07:50:00 mr-fox crond[26184]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:50:00 mr-fox crond[26183]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:50:00 mr-fox crond[26187]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:50:00 mr-fox crond[26186]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:50:00 mr-fox crond[26188]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:50:00 mr-fox CROND[26195]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:50:00 mr-fox CROND[26196]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:50:00 mr-fox CROND[26197]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:50:00 mr-fox CROND[26198]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:50:00 mr-fox crond[26190]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:50:00 mr-fox crond[26189]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:50:00 mr-fox crond[26191]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:50:00 mr-fox CROND[26199]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:50:00 mr-fox CROND[26202]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:50:00 mr-fox CROND[26203]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:50:00 mr-fox CROND[26204]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:50:00 mr-fox CROND[26183]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:50:00 mr-fox CROND[26183]: pam_unix(crond:session): session closed for user torproject
May 26 07:50:00 mr-fox CROND[26191]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:50:00 mr-fox CROND[26191]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:50:00 mr-fox CROND[26188]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:50:00 mr-fox CROND[26188]: pam_unix(crond:session): session closed for user root
May 26 07:50:01 mr-fox CROND[26189]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:50:01 mr-fox CROND[26189]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:50:01 mr-fox CROND[23099]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:50:01 mr-fox CROND[23099]: pam_unix(crond:session): session closed for user root
May 26 07:50:01 mr-fox CROND[26187]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:50:01 mr-fox CROND[26187]: pam_unix(crond:session): session closed for user root
May 26 07:50:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:50:45 mr-fox kernel: rcu: \x0921-....: (1635139 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=652834
May 26 07:50:45 mr-fox kernel: rcu: \x09(t=1635140 jiffies g=8794409 q=47155035 ncpus=32)
May 26 07:50:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:50:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:50:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:50:45 mr-fox kernel: RIP: 0010:xas_descend+0x2/0xd0
May 26 07:50:45 mr-fox kernel: Code: 18 0f b6 4c 24 10 4c 8b 04 24 e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 <41> 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80
May 26 07:50:45 mr-fox kernel: RSP: 0018:ffffa401077e3940 EFLAGS: 00000206
May 26 07:50:45 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:50:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 07:50:45 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:50:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:50:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:50:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:50:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:50:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:50:45 mr-fox kernel: PKRU: 55555554
May 26 07:50:45 mr-fox kernel: Call Trace:
May 26 07:50:45 mr-fox kernel: <IRQ>
May 26 07:50:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:50:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:50:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:50:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:50:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:50:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:50:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:50:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:50:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:50:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:50:45 mr-fox kernel: </IRQ>
May 26 07:50:45 mr-fox kernel: <TASK>
May 26 07:50:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:50:45 mr-fox kernel: ? xas_descend+0x2/0xd0
May 26 07:50:45 mr-fox kernel: xas_load+0x49/0x60
May 26 07:50:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:50:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:50:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:50:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:50:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:50:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:50:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:50:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:50:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:50:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:50:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:50:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:50:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:50:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:50:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:50:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:50:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:50:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:50:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:50:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:50:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:50:45 mr-fox kernel: </TASK>
May 26 07:50:57 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1637999 jiffies s: 491905 root: 0x2/.
May 26 07:50:57 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:50:57 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:50:57 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:50:57 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:50:57 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:50:57 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:50:57 mr-fox kernel: RIP: 0010:__nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 07:50:57 mr-fox kernel: Code: 00 49 8b 07 49 8b 4f 08 48 33 43 10 48 33 4b 18 48 09 c8 75 09 8b 43 20 41 39 47 10 74 7f 48 8b 1b f6 c3 01 75 5b 0f b6 4b 37 <4c> 89 ed 48 8d 04 cd 00 00 00 00 48 29 c8 48 c1 e0 03 48 29 c5 48
May 26 07:50:57 mr-fox kernel: RSP: 0018:ffffa401005f0c28 EFLAGS: 00000246
May 26 07:50:57 mr-fox kernel: RAX: ffff988acb3dcc20 RBX: ffff988d66be0810 RCX: 0000000000000000
May 26 07:50:57 mr-fox kernel: RDX: 00000000ee612205 RSI: ffff988acb200000 RDI: ffffffffa6ad7f40
May 26 07:50:57 mr-fox kernel: RBP: ffffa401005f0c88 R08: 0000000000000000 R09: 0000000000000000
May 26 07:50:57 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 000000000003b984
May 26 07:50:57 mr-fox kernel: R13: fffffffffffffff0 R14: ffffffffa6ad7f40 R15: ffffa401005f0c88
May 26 07:50:57 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:50:57 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:50:57 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:50:57 mr-fox kernel: PKRU: 55555554
May 26 07:50:57 mr-fox kernel: Call Trace:
May 26 07:50:57 mr-fox kernel: <NMI>
May 26 07:50:57 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:50:57 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:50:57 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:50:57 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:50:57 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:50:57 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:50:57 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 07:50:57 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 07:50:57 mr-fox kernel: ? __nf_conntrack_find_get.isra.0+0x8d/0x280
May 26 07:50:57 mr-fox kernel: </NMI>
May 26 07:50:57 mr-fox kernel: <IRQ>
May 26 07:50:57 mr-fox kernel: nf_conntrack_in+0xdc/0x540
May 26 07:50:57 mr-fox kernel: nf_hook_slow+0x3c/0x100
May 26 07:50:57 mr-fox kernel: __ip_local_out+0xc6/0x100
May 26 07:50:57 mr-fox kernel: ? ip_output+0x100/0x100
May 26 07:50:57 mr-fox kernel: ip_local_out+0x16/0x70
May 26 07:50:57 mr-fox kernel: __ip_queue_xmit+0x16b/0x480
May 26 07:50:57 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 07:50:57 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 07:50:57 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 07:50:57 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 07:50:57 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:50:57 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:50:57 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:50:57 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:50:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:50:57 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:50:57 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:50:57 mr-fox kernel: </IRQ>
May 26 07:50:57 mr-fox kernel: <TASK>
May 26 07:50:57 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:50:57 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:50:57 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:50:57 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:50:57 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:50:57 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:50:57 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:50:57 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:50:57 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:50:57 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:50:57 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:50:57 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:50:57 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:50:57 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:50:57 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:50:57 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:50:57 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:50:57 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:50:57 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:50:57 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:50:57 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:50:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:50:57 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:50:57 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:50:57 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:50:57 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:50:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:50:57 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:50:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:50:57 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:50:57 mr-fox kernel: </TASK>
May 26 07:51:00 mr-fox crond[26497]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:51:00 mr-fox crond[26496]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:51:00 mr-fox crond[26498]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:51:00 mr-fox crond[26500]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:51:00 mr-fox crond[26501]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:51:00 mr-fox CROND[26503]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:51:00 mr-fox CROND[26504]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:51:00 mr-fox CROND[26505]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:51:00 mr-fox CROND[26506]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:51:00 mr-fox CROND[26507]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:51:00 mr-fox CROND[26501]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:51:00 mr-fox CROND[26501]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:51:00 mr-fox CROND[26500]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:51:00 mr-fox CROND[26500]: pam_unix(crond:session): session closed for user root
May 26 07:51:00 mr-fox CROND[26498]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:51:00 mr-fox CROND[26498]: pam_unix(crond:session): session closed for user root
May 26 07:51:01 mr-fox CROND[26186]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:51:01 mr-fox CROND[26186]: pam_unix(crond:session): session closed for user root
May 26 07:52:00 mr-fox crond[30638]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:52:00 mr-fox crond[30637]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:52:00 mr-fox crond[30639]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:52:00 mr-fox crond[30640]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:52:00 mr-fox CROND[30645]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:52:00 mr-fox CROND[30646]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:52:00 mr-fox CROND[30648]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:52:00 mr-fox CROND[30647]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:52:00 mr-fox crond[30641]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:52:00 mr-fox CROND[30650]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:52:00 mr-fox CROND[30641]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:52:00 mr-fox CROND[30641]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:52:00 mr-fox CROND[30640]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:52:00 mr-fox CROND[30640]: pam_unix(crond:session): session closed for user root
May 26 07:52:00 mr-fox CROND[26497]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:52:00 mr-fox CROND[26497]: pam_unix(crond:session): session closed for user root
May 26 07:52:00 mr-fox CROND[30639]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:52:00 mr-fox CROND[30639]: pam_unix(crond:session): session closed for user root
May 26 07:53:00 mr-fox crond[32141]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:53:00 mr-fox crond[32139]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:53:00 mr-fox crond[32136]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:53:00 mr-fox crond[32137]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:53:00 mr-fox crond[32142]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:53:00 mr-fox CROND[32145]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:53:00 mr-fox CROND[32146]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:53:00 mr-fox CROND[32147]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:53:00 mr-fox CROND[32149]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:53:00 mr-fox CROND[32151]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:53:00 mr-fox CROND[32142]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:53:00 mr-fox CROND[32142]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:53:00 mr-fox CROND[32141]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:53:00 mr-fox CROND[32141]: pam_unix(crond:session): session closed for user root
May 26 07:53:00 mr-fox CROND[32139]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:53:00 mr-fox CROND[32139]: pam_unix(crond:session): session closed for user root
May 26 07:53:01 mr-fox CROND[30638]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:53:01 mr-fox CROND[30638]: pam_unix(crond:session): session closed for user root
May 26 07:53:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:53:45 mr-fox kernel: rcu: 	21-....: (1680143 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=672038
May 26 07:53:45 mr-fox kernel: rcu: 	(t=1680144 jiffies g=8794409 q=48172632 ncpus=32)
May 26 07:53:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:53:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:53:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:53:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 07:53:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 07:53:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 07:53:45 mr-fox kernel: RAX: ffff988ac07566ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:53:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:53:45 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:53:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:53:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:53:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:53:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:53:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:53:45 mr-fox kernel: PKRU: 55555554
May 26 07:53:45 mr-fox kernel: Call Trace:
May 26 07:53:45 mr-fox kernel: <IRQ>
May 26 07:53:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:53:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:53:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 07:53:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:53:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:53:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:53:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:53:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:53:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:53:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:53:45 mr-fox kernel: </IRQ>
May 26 07:53:45 mr-fox kernel: <TASK>
May 26 07:53:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:53:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 07:53:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:53:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:53:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:53:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:53:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:53:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:53:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:53:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:53:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:53:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:53:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:53:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:53:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:53:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:53:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:53:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:53:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:53:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:53:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:53:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:53:45 mr-fox kernel: </TASK>
May 26 07:53:57 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1683055 jiffies s: 491905 root: 0x2/.
May 26 07:53:57 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:53:57 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:53:57 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:53:57 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:53:57 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:53:57 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:53:57 mr-fox kernel: RIP: 0010:__dev_queue_xmit+0x3ff/0x9a0
May 26 07:53:57 mr-fox kernel: Code: 69 fd ff ff c7 85 84 00 00 00 ff ff ff ff c6 85 80 00 00 00 00 e9 17 fe ff ff a8 04 0f 84 ac 01 00 00 49 8b 84 24 d8 00 00 00 <a8> 0c 0f 85 9c 01 00 00 4d 8d ac 24 44 01 00 00 4c 89 ef e8 99 9e
May 26 07:53:57 mr-fox kernel: RSP: 0018:ffffa401005f0b40 EFLAGS: 00000202
May 26 07:53:57 mr-fox kernel: RAX: 0000000000000000 RBX: ffff988b22535f00 RCX: 0000000000000000
May 26 07:53:57 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:53:57 mr-fox kernel: RBP: ffff988ac76ec140 R08: 0000000000000000 R09: ffff988e055ee400
May 26 07:53:57 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac5503800
May 26 07:53:57 mr-fox kernel: R13: 0000000000000000 R14: 0000000000000010 R15: ffff988ac4bb4000
May 26 07:53:57 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:53:57 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:53:57 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:53:57 mr-fox kernel: PKRU: 55555554
May 26 07:53:57 mr-fox kernel: Call Trace:
May 26 07:53:57 mr-fox kernel: <NMI>
May 26 07:53:57 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:53:57 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:53:57 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:53:57 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:53:57 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:53:57 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:53:57 mr-fox kernel: ? __dev_queue_xmit+0x3ff/0x9a0
May 26 07:53:57 mr-fox kernel: ? __dev_queue_xmit+0x3ff/0x9a0
May 26 07:53:57 mr-fox kernel: ? __dev_queue_xmit+0x3ff/0x9a0
May 26 07:53:57 mr-fox kernel: </NMI>
May 26 07:53:57 mr-fox kernel: <IRQ>
May 26 07:53:57 mr-fox kernel: ? ip6t_do_table+0x30b/0x590
May 26 07:53:57 mr-fox kernel: ip6_finish_output2+0x2c0/0x610
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: ? ip6_output+0xa7/0x290
May 26 07:53:57 mr-fox kernel: ip6_xmit+0x3fc/0x600
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: ? ip6_output+0x290/0x290
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 07:53:57 mr-fox kernel: inet6_csk_xmit+0xe9/0x160
May 26 07:53:57 mr-fox kernel: __tcp_transmit_skb+0x5d0/0xd30
May 26 07:53:57 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 07:53:57 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 07:53:57 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 07:53:57 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 07:53:57 mr-fox kernel: __run_timers+0x20a/0x240
May 26 07:53:57 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 07:53:57 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:53:57 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:53:57 mr-fox kernel: </IRQ>
May 26 07:53:57 mr-fox kernel: <TASK>
May 26 07:53:57 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:53:57 mr-fox kernel: RIP: 0010:xas_descend+0xc/0xd0
May 26 07:53:57 mr-fox kernel: Code: e9 60 fe ff ff e9 69 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 <48> 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3
May 26 07:53:57 mr-fox kernel: RSP: 0018:ffffa401077e3928 EFLAGS: 00000246
May 26 07:53:57 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 07:53:57 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 07:53:57 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:53:57 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:53:57 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:53:57 mr-fox kernel: xas_load+0x49/0x60
May 26 07:53:57 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:53:57 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:53:57 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:53:57 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:53:57 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:53:57 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:53:57 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:53:57 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:53:57 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:53:57 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:53:57 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:53:57 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:53:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:53:57 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:53:57 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:53:57 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:53:57 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:53:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:53:57 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:53:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:53:57 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:53:57 mr-fox kernel: </TASK>
May 26 07:54:00 mr-fox crond[379]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:54:00 mr-fox crond[380]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:54:00 mr-fox crond[382]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:54:00 mr-fox crond[381]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:54:00 mr-fox crond[384]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:54:00 mr-fox CROND[386]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:54:00 mr-fox CROND[388]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:54:00 mr-fox CROND[387]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:54:00 mr-fox CROND[389]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:54:00 mr-fox CROND[390]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:54:00 mr-fox CROND[384]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:54:00 mr-fox CROND[384]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:54:00 mr-fox CROND[382]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:54:00 mr-fox CROND[382]: pam_unix(crond:session): session closed for user root
May 26 07:54:00 mr-fox CROND[381]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:54:00 mr-fox CROND[381]: pam_unix(crond:session): session closed for user root
May 26 07:54:01 mr-fox CROND[32137]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:54:01 mr-fox CROND[32137]: pam_unix(crond:session): session closed for user root
May 26 07:55:00 mr-fox crond[2083]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 07:55:00 mr-fox crond[2085]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:55:00 mr-fox crond[2084]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:55:00 mr-fox crond[2086]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:55:00 mr-fox crond[2087]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:55:00 mr-fox crond[2088]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:55:00 mr-fox crond[2089]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:55:00 mr-fox CROND[2094]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:55:00 mr-fox CROND[2095]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:55:00 mr-fox CROND[2096]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:55:00 mr-fox CROND[2098]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:55:00 mr-fox crond[2091]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:55:00 mr-fox CROND[2112]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:55:00 mr-fox CROND[2113]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:55:00 mr-fox CROND[2116]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 07:55:00 mr-fox CROND[2118]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:55:00 mr-fox CROND[2083]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 07:55:00 mr-fox CROND[2083]: pam_unix(crond:session): session closed for user torproject
May 26 07:55:00 mr-fox CROND[2091]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:55:00 mr-fox CROND[2091]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:55:00 mr-fox CROND[2087]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:55:00 mr-fox CROND[2087]: pam_unix(crond:session): session closed for user root
May 26 07:55:00 mr-fox CROND[2088]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 07:55:00 mr-fox CROND[2088]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:55:00 mr-fox CROND[2086]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:55:00 mr-fox CROND[2086]: pam_unix(crond:session): session closed for user root
May 26 07:55:00 mr-fox CROND[380]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:55:00 mr-fox CROND[380]: pam_unix(crond:session): session closed for user root
May 26 07:56:00 mr-fox crond[5062]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:56:00 mr-fox crond[5063]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:56:00 mr-fox crond[5064]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:56:00 mr-fox crond[5065]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:56:00 mr-fox crond[5061]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:56:00 mr-fox CROND[5069]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:56:00 mr-fox CROND[5072]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:56:00 mr-fox CROND[5073]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:56:00 mr-fox CROND[5074]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:56:00 mr-fox CROND[5075]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:56:00 mr-fox CROND[5065]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:56:00 mr-fox CROND[5065]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:56:00 mr-fox CROND[5064]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:56:00 mr-fox CROND[5064]: pam_unix(crond:session): session closed for user root
May 26 07:56:00 mr-fox CROND[5063]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:56:00 mr-fox CROND[5063]: pam_unix(crond:session): session closed for user root
May 26 07:56:00 mr-fox CROND[2085]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:56:00 mr-fox CROND[2085]: pam_unix(crond:session): session closed for user root
May 26 07:56:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:56:45 mr-fox kernel: rcu: 	21-....: (1725147 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=691167
May 26 07:56:45 mr-fox kernel: rcu: 	(t=1725148 jiffies g=8794409 q=49192845 ncpus=32)
May 26 07:56:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:56:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:56:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:56:45 mr-fox kernel: RIP: 0010:xas_load+0x23/0x60
May 26 07:56:45 mr-fox kernel: Code: 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b <5d> 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe
May 26 07:56:45 mr-fox kernel: RSP: 0018:ffffa401077e3958 EFLAGS: 00000246
May 26 07:56:45 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 07:56:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 07:56:45 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:56:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 07:56:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:56:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:56:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:56:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:56:45 mr-fox kernel: PKRU: 55555554
May 26 07:56:45 mr-fox kernel: Call Trace:
May 26 07:56:45 mr-fox kernel: <IRQ>
May 26 07:56:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:56:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:56:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:56:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:56:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:56:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:56:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:56:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:56:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:56:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:56:45 mr-fox kernel: </IRQ>
May 26 07:56:45 mr-fox kernel: <TASK>
May 26 07:56:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:56:45 mr-fox kernel: ? xas_load+0x23/0x60
May 26 07:56:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:56:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:56:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:56:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:56:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:56:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:56:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:56:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:56:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:56:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:56:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:56:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:56:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:56:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:56:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:56:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:56:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:56:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:56:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:56:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:56:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:56:45 mr-fox kernel: </TASK>
May 26 07:56:57 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1728111 jiffies s: 491905 root: 0x2/.
May 26 07:56:57 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:56:57 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:56:57 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:56:57 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:56:57 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:56:57 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:56:57 mr-fox kernel: RIP: 0010:__sock_wfree+0x24/0x60
May 26 07:56:57 mr-fox kernel: Code: f4 ff ff 0f 1f 00 f3 0f 1e fa 48 89 f8 8b 90 d0 00 00 00 48 8b 7f 18 89 d0 4c 8d 87 44 01 00 00 f7 d8 f0 0f c1 87 44 01 00 00 <39> c2 74 1a 89 c1 29 d1 09 c1 78 17 31 c0 31 d2 31 c9 31 f6 31 ff
May 26 07:56:57 mr-fox kernel: RSP: 0018:ffffa401005f0dd0 EFLAGS: 00000213
May 26 07:56:57 mr-fox kernel: RAX: 0000000000000003 RBX: ffff989b3a303100 RCX: 0000000000000000
May 26 07:56:57 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000040 RDI: ffff98938d66d340
May 26 07:56:57 mr-fox kernel: RBP: 000000000000163c R08: ffff98938d66d484 R09: 0000000000000000
May 26 07:56:57 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa40100119ad0
May 26 07:56:57 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000ffffffad R15: ffffa40100119ae0
May 26 07:56:57 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:56:57 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:56:57 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:56:57 mr-fox kernel: PKRU: 55555554
May 26 07:56:57 mr-fox kernel: Call Trace:
May 26 07:56:57 mr-fox kernel: <NMI>
May 26 07:56:57 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:56:57 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:56:57 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:56:57 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:56:57 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:56:57 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:56:57 mr-fox kernel: ? __sock_wfree+0x24/0x60
May 26 07:56:57 mr-fox kernel: ? __sock_wfree+0x24/0x60
May 26 07:56:57 mr-fox kernel: ? __sock_wfree+0x24/0x60
May 26 07:56:57 mr-fox kernel: </NMI>
May 26 07:56:57 mr-fox kernel: <IRQ>
May 26 07:56:57 mr-fox kernel: skb_release_head_state+0x22/0x80
May 26 07:56:57 mr-fox kernel: napi_consume_skb+0x2e/0xc0
May 26 07:56:57 mr-fox kernel: igb_poll+0xea/0x1370
May 26 07:56:57 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:56:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:56:57 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:56:57 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:56:57 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:56:57 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:56:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:56:57 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:56:57 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:56:57 mr-fox kernel: </IRQ>
May 26 07:56:57 mr-fox kernel: <TASK>
May 26 07:56:57 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:56:57 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 07:56:57 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 07:56:57 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 07:56:57 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 07:56:57 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 07:56:57 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:56:57 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:56:57 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:56:57 mr-fox kernel: xas_load+0x49/0x60
May 26 07:56:57 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:56:57 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:56:57 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:56:57 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:56:57 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:56:57 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:56:57 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:56:57 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:56:57 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:56:57 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:56:57 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:56:57 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:56:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:56:57 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:56:57 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:56:57 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:56:57 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:56:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:56:57 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:56:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:56:57 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:56:57 mr-fox kernel: </TASK>
May 26 07:57:00 mr-fox crond[5612]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:57:00 mr-fox crond[5610]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:57:00 mr-fox crond[5611]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:57:00 mr-fox crond[5614]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:57:00 mr-fox crond[5615]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:57:00 mr-fox CROND[5619]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:57:00 mr-fox CROND[5618]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:57:00 mr-fox CROND[5617]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:57:00 mr-fox CROND[5621]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:57:00 mr-fox CROND[5622]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:57:00 mr-fox CROND[5615]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:57:00 mr-fox CROND[5615]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:57:00 mr-fox CROND[5614]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:57:00 mr-fox CROND[5614]: pam_unix(crond:session): session closed for user root
May 26 07:57:01 mr-fox CROND[5612]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:57:01 mr-fox CROND[5612]: pam_unix(crond:session): session closed for user root
May 26 07:57:01 mr-fox CROND[5062]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:57:01 mr-fox CROND[5062]: pam_unix(crond:session): session closed for user root
May 26 07:58:00 mr-fox crond[7582]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:58:00 mr-fox crond[7584]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:58:00 mr-fox crond[7583]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:58:00 mr-fox crond[7585]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:58:00 mr-fox crond[7586]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:58:00 mr-fox CROND[7591]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:58:00 mr-fox CROND[7590]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:58:00 mr-fox CROND[7592]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:58:00 mr-fox CROND[7593]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:58:00 mr-fox CROND[7594]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:58:00 mr-fox CROND[7586]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:58:00 mr-fox CROND[7586]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:58:00 mr-fox CROND[7585]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:58:00 mr-fox CROND[7585]: pam_unix(crond:session): session closed for user root
May 26 07:58:00 mr-fox CROND[7584]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:58:00 mr-fox CROND[7584]: pam_unix(crond:session): session closed for user root
May 26 07:58:01 mr-fox CROND[5611]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:58:01 mr-fox CROND[5611]: pam_unix(crond:session): session closed for user root
May 26 07:59:00 mr-fox crond[9494]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:59:00 mr-fox crond[9496]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:59:00 mr-fox crond[9495]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:59:00 mr-fox CROND[9501]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 07:59:00 mr-fox CROND[9502]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 07:59:00 mr-fox crond[9498]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 07:59:00 mr-fox crond[9497]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 07:59:00 mr-fox CROND[9506]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:59:00 mr-fox CROND[9507]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 07:59:00 mr-fox CROND[9508]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:59:00 mr-fox CROND[9498]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 07:59:00 mr-fox CROND[9498]: pam_unix(crond:session): session closed for user tinderbox
May 26 07:59:00 mr-fox CROND[9497]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 07:59:00 mr-fox CROND[9497]: pam_unix(crond:session): session closed for user root
May 26 07:59:00 mr-fox CROND[9496]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 07:59:00 mr-fox CROND[9496]: pam_unix(crond:session): session closed for user root
May 26 07:59:00 mr-fox CROND[7583]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 07:59:00 mr-fox CROND[7583]: pam_unix(crond:session): session closed for user root
May 26 07:59:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 07:59:45 mr-fox kernel: rcu: 	21-....: (1770151 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=710169
May 26 07:59:45 mr-fox kernel: rcu: 	(t=1770152 jiffies g=8794409 q=50199720 ncpus=32)
May 26 07:59:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:59:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:59:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:59:45 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 07:59:45 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 07:59:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000202
May 26 07:59:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: 0000000000000012 RCX: 0000000000000006
May 26 07:59:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988d667606d8 RDI: ffffa401077e3970
May 26 07:59:45 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:59:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:59:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:59:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:59:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:59:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:59:45 mr-fox kernel: PKRU: 55555554
May 26 07:59:45 mr-fox kernel: Call Trace:
May 26 07:59:45 mr-fox kernel: <IRQ>
May 26 07:59:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 07:59:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 07:59:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 07:59:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 07:59:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 07:59:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 07:59:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:59:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 07:59:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 07:59:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 07:59:45 mr-fox kernel: </IRQ>
May 26 07:59:45 mr-fox kernel: <TASK>
May 26 07:59:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:59:45 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 07:59:45 mr-fox kernel: xas_load+0x49/0x60
May 26 07:59:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:59:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:59:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:59:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:59:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:59:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:59:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:59:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:59:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:59:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:59:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:59:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:59:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:59:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:59:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:59:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:59:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:59:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:59:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:59:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:59:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:59:45 mr-fox kernel: </TASK>
May 26 07:59:57 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1773168 jiffies s: 491905 root: 0x2/.
May 26 07:59:57 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 07:59:57 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 07:59:57 mr-fox kernel: NMI backtrace for cpu 21
May 26 07:59:57 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 07:59:57 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 07:59:57 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 07:59:57 mr-fox kernel: RIP: 0010:napi_consume_skb+0x1b/0xc0
May 26 07:59:57 mr-fox kernel: Code: 20 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 53 48 89 fb 85 f6 0f 84 8c 00 00 00 48 85 ff 74 7b 8b 87 d4 00 00 00 <83> f8 01 75 53 48 89 df f6 43 7e 0c 75 25 e8 62 87 ff ff 48 83 bb
May 26 07:59:57 mr-fox kernel: RSP: 0018:ffffa401005f0de8 EFLAGS: 00000286
May 26 07:59:57 mr-fox kernel: RAX: 0000000000000001 RBX: ffff989969201418 RCX: 0000000000000000
May 26 07:59:57 mr-fox kernel: RDX: 0000000000000001 RSI: 0000000000000040 RDI: ffff989969201418
May 26 07:59:57 mr-fox kernel: RBP: 0000000000004ec0 R08: 0000000000000000 R09: 0000000000000000
May 26 07:59:57 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa40100119740
May 26 07:59:57 mr-fox kernel: R13: ffff988ac873fa40 R14: 00000000ffffff74 R15: ffffa40100119760
May 26 07:59:57 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 07:59:57 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 07:59:57 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 07:59:57 mr-fox kernel: PKRU: 55555554
May 26 07:59:57 mr-fox kernel: Call Trace:
May 26 07:59:57 mr-fox kernel: <NMI>
May 26 07:59:57 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 07:59:57 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 07:59:57 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 07:59:57 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 07:59:57 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 07:59:57 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 07:59:57 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 07:59:57 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 07:59:57 mr-fox kernel: ? napi_consume_skb+0x1b/0xc0
May 26 07:59:57 mr-fox kernel: </NMI>
May 26 07:59:57 mr-fox kernel: <IRQ>
May 26 07:59:57 mr-fox kernel: igb_poll+0xea/0x1370
May 26 07:59:57 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 07:59:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:59:57 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 07:59:57 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 07:59:57 mr-fox kernel: net_rx_action+0x202/0x590
May 26 07:59:57 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 07:59:57 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 07:59:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:59:57 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 07:59:57 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 07:59:57 mr-fox kernel: </IRQ>
May 26 07:59:57 mr-fox kernel: <TASK>
May 26 07:59:57 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 07:59:57 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 07:59:57 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 07:59:57 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000212
May 26 07:59:57 mr-fox kernel: RAX: ffff988acf90e6ca RBX: 000000000000000c RCX: 0000000000000018
May 26 07:59:57 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 07:59:57 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 07:59:57 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 07:59:57 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 07:59:57 mr-fox kernel: xas_load+0x49/0x60
May 26 07:59:57 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 07:59:57 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 07:59:57 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 07:59:57 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 07:59:57 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 07:59:57 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 07:59:57 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 07:59:57 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 07:59:57 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 07:59:57 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 07:59:57 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 07:59:57 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 07:59:57 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 07:59:57 mr-fox kernel: process_one_work+0x16a/0x280
May 26 07:59:57 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 07:59:57 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 07:59:57 mr-fox kernel: kthread+0xcb/0xf0
May 26 07:59:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:59:57 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 07:59:57 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 07:59:57 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 07:59:57 mr-fox kernel: </TASK>
May 26 08:00:00 mr-fox crond[12148]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox crond[12150]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox crond[12149]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox crond[12147]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:00:00 mr-fox crond[12153]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox CROND[12161]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:00:00 mr-fox crond[12154]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox CROND[12162]: (root) CMD (/etc/conf.d/ipv4-rules.sh update; /etc/conf.d/ipv6-rules.sh update)
May 26 08:00:00 mr-fox crond[12151]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:00:00 mr-fox CROND[12164]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:00:00 mr-fox CROND[12165]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:00:00 mr-fox crond[12155]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:00:00 mr-fox CROND[12163]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:00:00 mr-fox CROND[12167]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:00:00 mr-fox CROND[12168]: (tinderbox) CMD (sudo /opt/tb/bin/collect_data.sh 2>/dev/null)
May 26 08:00:00 mr-fox crond[12156]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:00:00 mr-fox crond[12159]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:00:00 mr-fox CROND[12169]: (root) CMD (/opt/torutils/update_tor.sh)
May 26 08:00:00 mr-fox crond[12160]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:00:00 mr-fox crond[12157]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:00:00 mr-fox CROND[12172]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:00:00 mr-fox CROND[12173]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:00:00 mr-fox CROND[12174]: (tinderbox) CMD (sudo /opt/tb/bin/house_keeping.sh >/dev/null)
May 26 08:00:00 mr-fox CROND[12170]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:00:00 mr-fox sudo[12179]: tinderbox : PWD=/home/tinderbox ; USER=root ; COMMAND=/opt/tb/bin/collect_data.sh
May 26 08:00:00 mr-fox sudo[12179]: pam_unix(sudo:session): session opened for user root(uid=0) by tinderbox(uid=1003)
May 26 08:00:00 mr-fox CROND[12147]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:00:00 mr-fox CROND[12147]: pam_unix(crond:session): session closed for user torproject
May 26 08:00:00 mr-fox sudo[12181]: tinderbox : PWD=/home/tinderbox ; USER=root ; COMMAND=/opt/tb/bin/house_keeping.sh
May 26 08:00:00 mr-fox sudo[12181]: pam_unix(sudo:session): session opened for user root(uid=0) by tinderbox(uid=1003)
May 26 08:00:00 mr-fox CROND[12160]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:00:00 mr-fox CROND[12160]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:00:00 mr-fox CROND[12154]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:00:00 mr-fox CROND[12154]: pam_unix(crond:session): session closed for user root
May 26 08:00:00 mr-fox CROND[12156]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:00:00 mr-fox CROND[12156]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:00:00 mr-fox CROND[9495]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:00:00 mr-fox CROND[9495]: pam_unix(crond:session): session closed for user root
May 26 08:00:00 mr-fox CROND[12153]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:00:00 mr-fox CROND[12153]: pam_unix(crond:session): session closed for user root
May 26 08:00:04 mr-fox CROND[12151]: (root) CMDEND (/opt/torutils/update_tor.sh)
May 26 08:00:04 mr-fox CROND[12151]: pam_unix(crond:session): session closed for user root
May 26 08:00:08 mr-fox CROND[12150]: (root) CMDEND (/etc/conf.d/ipv4-rules.sh update; /etc/conf.d/ipv6-rules.sh update)
May 26 08:00:08 mr-fox CROND[12150]: pam_unix(crond:session): session closed for user root
May 26 08:01:00 mr-fox CROND[13072]: (root) CMD (run-parts /etc/cron.hourly)
May 26 08:01:00 mr-fox crond[13071]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:01:00 mr-fox crond[13069]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:01:00 mr-fox crond[13068]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:01:00 mr-fox CROND[13080]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:01:00 mr-fox CROND[13079]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:01:00 mr-fox crond[13075]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:01:00 mr-fox CROND[13081]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:01:00 mr-fox CROND[13082]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:01:00 mr-fox crond[13073]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:01:00 mr-fox CROND[13085]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:01:00 mr-fox CROND[13067]: (root) CMDEND (run-parts /etc/cron.hourly)
May 26 08:01:00 mr-fox CROND[13075]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:01:00 mr-fox CROND[13075]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:01:00 mr-fox CROND[13073]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:01:00 mr-fox CROND[13073]: pam_unix(crond:session): session closed for user root
May 26 08:01:00 mr-fox CROND[13071]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:01:00 mr-fox CROND[13071]: pam_unix(crond:session): session closed for user root
May 26 08:01:01 mr-fox CROND[12149]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:01:01 mr-fox CROND[12149]: pam_unix(crond:session): session closed for user root
May 26 08:02:00 mr-fox crond[16112]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:02:00 mr-fox crond[16113]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:02:00 mr-fox crond[16114]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:02:00 mr-fox crond[16116]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:02:00 mr-fox crond[16115]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:02:00 mr-fox CROND[16120]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:02:00 mr-fox CROND[16121]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:02:00 mr-fox CROND[16125]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:02:00 mr-fox CROND[16124]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:02:00 mr-fox CROND[16126]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:02:00 mr-fox CROND[16116]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:02:00 mr-fox CROND[16116]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:02:00 mr-fox CROND[16115]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:02:00 mr-fox CROND[16115]: pam_unix(crond:session): session closed for user root
May 26 08:02:00 mr-fox CROND[16114]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:02:00 mr-fox CROND[16114]: pam_unix(crond:session): session closed for user root
May 26 08:02:00 mr-fox CROND[13069]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:02:00 mr-fox CROND[13069]: pam_unix(crond:session): session closed for user root
May 26 08:02:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:02:45 mr-fox kernel: rcu: 	21-....: (1815155 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=729107
May 26 08:02:45 mr-fox kernel: rcu: 	(t=1815156 jiffies g=8794409 q=51244979 ncpus=32)
May 26 08:02:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:02:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:02:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:02:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:02:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:02:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:02:45 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:02:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:02:45 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:02:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:02:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:02:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:02:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:02:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:02:45 mr-fox kernel: PKRU: 55555554
May 26 08:02:45 mr-fox kernel: Call Trace:
May 26 08:02:45 mr-fox kernel: <IRQ>
May 26 08:02:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:02:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:02:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:02:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:02:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:02:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:02:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:02:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:02:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:02:45 mr-fox kernel: </IRQ>
May 26 08:02:45 mr-fox kernel: <TASK>
May 26 08:02:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:02:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:02:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:02:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:02:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:02:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:02:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:02:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:02:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:02:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:02:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:02:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:02:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:02:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:02:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:02:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:02:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:02:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:02:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:02:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:02:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:02:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:02:45 mr-fox kernel: </TASK>
May 26 08:02:58 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1818223 jiffies s: 491905 root: 0x2/.
May 26 08:02:58 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:02:58 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:02:58 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:02:58 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:02:58 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:02:58 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:02:58 mr-fox kernel: RIP: 0010:__ip_dev_find+0x57/0x100
May 26 08:02:58 mr-fox kernel: Code: 47 86 c8 61 c1 e8 18 48 8b 14 c5 20 a8 ad a6 48 85 d2 75 0a eb 58 48 8b 12 48 85 d2 74 50 3b 72 30 75 f3 48 8b 42 18 48 8b 00 <4c> 3b 80 08 01 00 00 75 e3 48 85 c0 74 0e 84 db 74 0a 48 8b 90 50
May 26 08:02:58 mr-fox kernel: RSP: 0018:ffffa401005f0c30 EFLAGS: 00000246
May 26 08:02:58 mr-fox kernel: RAX: ffff988ac4bb4000 RBX: 0000000000000000 RCX: 0000000000000000
May 26 08:02:58 mr-fox kernel: RDX: ffff988ac4b50580 RSI: 000000000d5e1541 RDI: ffffffffa6ad7f40
May 26 08:02:58 mr-fox kernel: RBP: ffffffffa6ad7f40 R08: ffffffffa6ad7f40 R09: 000000000000fa80
May 26 08:02:58 mr-fox kernel: R10: 0000000000000001 R11: 0000000000000000 R12: ffffa401005f0d10
May 26 08:02:58 mr-fox kernel: R13: 00000000e324dcbd R14: 0000000000000000 R15: 0000000000000000
May 26 08:02:58 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:02:58 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:02:58 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:02:58 mr-fox kernel: PKRU: 55555554
May 26 08:02:58 mr-fox kernel: Call Trace:
May 26 08:02:58 mr-fox kernel: <NMI>
May 26 08:02:58 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:02:58 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:02:58 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:02:58 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:02:58 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:02:58 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:02:58 mr-fox kernel: ? __ip_dev_find+0x57/0x100
May 26 08:02:58 mr-fox kernel: ? __ip_dev_find+0x57/0x100
May 26 08:02:58 mr-fox kernel: ? __ip_dev_find+0x57/0x100
May 26 08:02:58 mr-fox kernel: </NMI>
May 26 08:02:58 mr-fox kernel: <IRQ>
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 08:02:58 mr-fox kernel: ip_route_output_key_hash_rcu+0x4e2/0x780
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? inet6_csk_xmit+0xe9/0x160
May 26 08:02:58 mr-fox kernel: ip_route_output_flow+0x55/0xa0
May 26 08:02:58 mr-fox kernel: inet_sk_rebuild_header+0x185/0x410
May 26 08:02:58 mr-fox kernel: __tcp_retransmit_skb+0x9e/0x800
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? lock_timer_base+0x2f/0xc0
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? retransmits_timed_out.part.0+0x8d/0x170
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: tcp_retransmit_skb+0x11/0xa0
May 26 08:02:58 mr-fox kernel: tcp_retransmit_timer+0x492/0xa60
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 08:02:58 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 08:02:58 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:02:58 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:02:58 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:02:58 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:02:58 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:02:58 mr-fox kernel: </IRQ>
May 26 08:02:58 mr-fox kernel: <TASK>
May 26 08:02:58 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:02:58 mr-fox kernel: RIP: 0010:filemap_get_entry+0x89/0x160
May 26 08:02:58 mr-fox kernel: Code: 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 48 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8 01 75 56 8b 40 34 <85> c0 74 ca 8d 50 01 f0 0f b1 53 34 75 f2 4c 8b 64 24 20 4d 85 e4
May 26 08:02:58 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 08:02:58 mr-fox kernel: RAX: 0000000000000000 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 08:02:58 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:02:58 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:02:58 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:02:58 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:02:58 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 08:02:58 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:02:58 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:02:58 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:02:58 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:02:58 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:02:58 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:02:58 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:02:58 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:02:58 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:02:58 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:02:58 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:02:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:02:58 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:02:58 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:02:58 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:02:58 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:02:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:02:58 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:02:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:02:58 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:02:58 mr-fox kernel: </TASK>
May 26 08:03:00 mr-fox crond[18365]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:03:00 mr-fox crond[18367]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:03:00 mr-fox crond[18370]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:03:00 mr-fox crond[18366]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:03:00 mr-fox crond[18372]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:03:00 mr-fox CROND[18375]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:03:00 mr-fox CROND[18374]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:03:00 mr-fox CROND[18376]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:03:00 mr-fox CROND[18377]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:03:00 mr-fox CROND[18378]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:03:00 mr-fox CROND[18372]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:03:00 mr-fox CROND[18372]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:03:00 mr-fox CROND[18370]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:03:00 mr-fox CROND[18370]: pam_unix(crond:session): session closed for user root
May 26 08:03:00 mr-fox CROND[18367]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:03:00 mr-fox CROND[18367]: pam_unix(crond:session): session closed for user root
May 26 08:03:00 mr-fox CROND[16113]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:03:00 mr-fox CROND[16113]: pam_unix(crond:session): session closed for user root
May 26 08:04:00 mr-fox crond[22180]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:04:00 mr-fox crond[22182]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:04:00 mr-fox crond[22183]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:04:00 mr-fox crond[22181]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:04:00 mr-fox crond[22184]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:04:00 mr-fox CROND[22188]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:04:00 mr-fox CROND[22189]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:04:00 mr-fox CROND[22190]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:04:00 mr-fox CROND[22191]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:04:00 mr-fox CROND[22192]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:04:01 mr-fox CROND[22184]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:04:01 mr-fox CROND[22184]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:04:01 mr-fox CROND[22183]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:04:01 mr-fox CROND[22183]: pam_unix(crond:session): session closed for user root
May 26 08:04:01 mr-fox CROND[22182]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:04:01 mr-fox CROND[22182]: pam_unix(crond:session): session closed for user root
May 26 08:04:01 mr-fox CROND[18366]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:04:01 mr-fox CROND[18366]: pam_unix(crond:session): session closed for user root
May 26 08:05:00 mr-fox crond[23607]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:05:00 mr-fox crond[23606]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:05:00 mr-fox crond[23608]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:05:00 mr-fox crond[23609]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:05:00 mr-fox crond[23610]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:05:00 mr-fox crond[23612]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:05:00 mr-fox CROND[23619]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:05:00 mr-fox CROND[23618]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:05:00 mr-fox crond[23613]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:05:00 mr-fox CROND[23617]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:05:00 mr-fox crond[23614]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:05:00 mr-fox CROND[23620]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:05:00 mr-fox CROND[23622]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:05:00 mr-fox CROND[23624]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:05:00 mr-fox CROND[23623]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:05:00 mr-fox CROND[23625]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:05:00 mr-fox CROND[23606]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:05:00 mr-fox CROND[23606]: pam_unix(crond:session): session closed for user torproject
May 26 08:05:00 mr-fox CROND[23614]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:05:00 mr-fox CROND[23614]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:05:00 mr-fox CROND[23610]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:05:00 mr-fox CROND[23610]: pam_unix(crond:session): session closed for user root
May 26 08:05:00 mr-fox CROND[23612]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:05:00 mr-fox CROND[23612]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:05:00 mr-fox CROND[23609]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:05:00 mr-fox CROND[23609]: pam_unix(crond:session): session closed for user root
May 26 08:05:01 mr-fox CROND[22181]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:05:01 mr-fox CROND[22181]: pam_unix(crond:session): session closed for user root
May 26 08:05:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:05:45 mr-fox kernel: rcu: 	21-....: (1860159 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=748064
May 26 08:05:45 mr-fox kernel: rcu: 	(t=1860160 jiffies g=8794409 q=52270626 ncpus=32)
May 26 08:05:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:05:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:05:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:05:45 mr-fox kernel: RIP: 0010:xas_descend+0x40/0xd0
May 26 08:05:45 mr-fox kernel: Code: 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 <75> 08 48 3d fd 00 00 00 76 2f 41 88 5c 24 12 48 83 c4 08 5b 5d 41
May 26 08:05:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000293
May 26 08:05:45 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: 0000000000000021 RCX: 0000000000000000
May 26 08:05:45 mr-fox kernel: RDX: 0000000000000000 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 08:05:45 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:05:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:05:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:05:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:05:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:05:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:05:45 mr-fox kernel: PKRU: 55555554
May 26 08:05:45 mr-fox kernel: Call Trace:
May 26 08:05:45 mr-fox kernel: <IRQ>
May 26 08:05:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:05:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:05:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:05:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:05:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:05:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:05:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:05:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:05:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:05:45 mr-fox kernel: </IRQ>
May 26 08:05:45 mr-fox kernel: <TASK>
May 26 08:05:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:05:45 mr-fox kernel: ? xas_descend+0x40/0xd0
May 26 08:05:45 mr-fox kernel: xas_load+0x49/0x60
May 26 08:05:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:05:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:05:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:05:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:05:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:05:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:05:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:05:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:05:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:05:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:05:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:05:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:05:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:05:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:05:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:05:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:05:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:05:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:05:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:05:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:05:45 mr-fox kernel: </TASK>
May 26 08:05:58 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1863280 jiffies s: 491905 root: 0x2/.
May 26 08:05:58 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:05:58 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:05:58 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:05:58 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:05:58 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:05:58 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:05:58 mr-fox kernel: RIP: 0010:fetch_pte+0x78/0x170
May 26 08:05:58 mr-fox kernel: Code: 00 01 00 00 83 eb 01 44 8d 34 db 41 8d 4e 0c 83 f9 3f 0f 87 0e f9 2d 00 4c 89 e0 49 8b 95 08 01 00 00 48 d3 e8 25 ff 01 00 00 <48> 8d 04 c2 ba 01 00 00 00 48 d3 e2 48 89 55 00 85 db 0f 8e b2 00
May 26 08:05:58 mr-fox kernel: RSP: 0018:ffffa401005f0c38 EFLAGS: 00000206
May 26 08:05:58 mr-fox kernel: RAX: 0000000000000003 RBX: 0000000000000002 RCX: 000000000000001e
May 26 08:05:58 mr-fox kernel: RDX: ffff988ac41fa000 RSI: 00000000cfefd000 RDI: ffff988ac2211088
May 26 08:05:58 mr-fox kernel: RBP: ffffa401005f0c80 R08: 0000000000000000 R09: 0000000000000000
May 26 08:05:58 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cfefd000
May 26 08:05:58 mr-fox kernel: R13: ffff988ac2211088 R14: 0000000000000012 R15: ffff988ac2211088
May 26 08:05:58 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:05:58 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:05:58 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:05:58 mr-fox kernel: PKRU: 55555554
May 26 08:05:58 mr-fox kernel: Call Trace:
May 26 08:05:58 mr-fox kernel: <NMI>
May 26 08:05:58 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:05:58 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:05:58 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:05:58 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:05:58 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:05:58 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:05:58 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 08:05:58 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 08:05:58 mr-fox kernel: ? fetch_pte+0x78/0x170
May 26 08:05:58 mr-fox kernel: </NMI>
May 26 08:05:58 mr-fox kernel: <IRQ>
May 26 08:05:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:58 mr-fox kernel: iommu_v1_unmap_pages+0x81/0x140
May 26 08:05:58 mr-fox kernel: amd_iommu_unmap_pages+0x40/0x130
May 26 08:05:58 mr-fox kernel: __iommu_unmap+0xbf/0x120
May 26 08:05:58 mr-fox kernel: __iommu_dma_unmap+0xb5/0x170
May 26 08:05:58 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 08:05:58 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:05:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:58 mr-fox kernel: ? tcp_delack_timer_handler+0xa9/0x110
May 26 08:05:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:58 mr-fox kernel: ? tcp_delack_timer+0xb5/0xf0
May 26 08:05:58 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:05:58 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:05:58 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:05:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:58 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:05:58 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:05:58 mr-fox kernel: </IRQ>
May 26 08:05:58 mr-fox kernel: <TASK>
May 26 08:05:58 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:05:58 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:05:58 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:05:58 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:05:58 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:05:58 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:05:58 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:05:58 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:05:58 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:05:58 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:05:58 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:05:58 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:05:58 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:05:58 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:05:58 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:05:58 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:05:58 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:05:58 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:05:58 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:05:58 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:05:58 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:05:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:05:58 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:05:58 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:05:58 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:05:58 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:05:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:05:58 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:05:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:05:58 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:05:58 mr-fox kernel: </TASK>
May 26 08:06:00 mr-fox crond[29160]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:06:00 mr-fox crond[29162]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:06:00 mr-fox crond[29161]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:06:00 mr-fox crond[29164]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:06:00 mr-fox CROND[29167]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:06:00 mr-fox crond[29165]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:06:00 mr-fox CROND[29168]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:06:00 mr-fox CROND[29170]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:06:00 mr-fox CROND[29171]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:06:00 mr-fox CROND[29172]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:06:00 mr-fox CROND[29165]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:06:00 mr-fox CROND[29165]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:06:00 mr-fox CROND[29164]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:06:00 mr-fox CROND[29164]: pam_unix(crond:session): session closed for user root
May 26 08:06:00 mr-fox CROND[29162]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:06:00 mr-fox CROND[29162]: pam_unix(crond:session): session closed for user root
May 26 08:06:00 mr-fox CROND[23608]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:06:00 mr-fox CROND[23608]: pam_unix(crond:session): session closed for user root
May 26 08:07:00 mr-fox crond[498]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:07:00 mr-fox crond[499]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:07:00 mr-fox crond[501]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:07:00 mr-fox crond[500]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:07:00 mr-fox crond[502]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:07:00 mr-fox CROND[505]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:07:00 mr-fox CROND[506]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:07:00 mr-fox CROND[507]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:07:00 mr-fox CROND[509]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:07:00 mr-fox CROND[510]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:07:00 mr-fox CROND[502]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:07:00 mr-fox CROND[502]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:07:00 mr-fox CROND[501]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:07:00 mr-fox CROND[501]: pam_unix(crond:session): session closed for user root
May 26 08:07:00 mr-fox CROND[500]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:07:00 mr-fox CROND[500]: pam_unix(crond:session): session closed for user root
May 26 08:07:01 mr-fox CROND[29161]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:07:01 mr-fox CROND[29161]: pam_unix(crond:session): session closed for user root
May 26 08:08:00 mr-fox crond[3977]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:08:00 mr-fox crond[3976]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:08:00 mr-fox crond[3979]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:08:00 mr-fox crond[3978]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:08:00 mr-fox crond[3980]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:08:00 mr-fox CROND[3985]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:08:00 mr-fox CROND[3986]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:08:00 mr-fox CROND[3987]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:08:00 mr-fox CROND[3988]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:08:00 mr-fox CROND[3984]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:08:00 mr-fox CROND[3980]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:08:00 mr-fox CROND[3980]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:08:00 mr-fox CROND[3979]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:08:00 mr-fox CROND[3979]: pam_unix(crond:session): session closed for user root
May 26 08:08:00 mr-fox CROND[3978]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:08:00 mr-fox CROND[3978]: pam_unix(crond:session): session closed for user root
May 26 08:08:01 mr-fox CROND[499]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:08:01 mr-fox CROND[499]: pam_unix(crond:session): session closed for user root
May 26 08:08:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:08:45 mr-fox kernel: rcu: 	21-....: (1905163 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=767107
May 26 08:08:45 mr-fox kernel: rcu: 	(t=1905164 jiffies g=8794409 q=53285729 ncpus=32)
May 26 08:08:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:08:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:08:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:08:45 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 08:08:45 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 08:08:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 08:08:45 mr-fox kernel: RAX: ffff988acf90e6ca RBX: 0000000000000036 RCX: 0000000000000012
May 26 08:08:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 08:08:45 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:08:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:08:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:08:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:08:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:08:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:08:45 mr-fox kernel: PKRU: 55555554
May 26 08:08:45 mr-fox kernel: Call Trace:
May 26 08:08:45 mr-fox kernel: <IRQ>
May 26 08:08:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:08:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:08:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 08:08:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:08:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:08:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:08:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:08:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:08:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:08:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:08:45 mr-fox kernel: </IRQ>
May 26 08:08:45 mr-fox kernel: <TASK>
May 26 08:08:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:08:45 mr-fox kernel: ? xas_descend+0x26/0xd0
May 26 08:08:45 mr-fox kernel: xas_load+0x49/0x60
May 26 08:08:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:08:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:08:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:08:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:08:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:08:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:08:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:08:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:08:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:08:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:08:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:08:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:08:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:08:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:08:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:08:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:08:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:08:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:08:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:08:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:08:45 mr-fox kernel: </TASK>
May 26 08:08:58 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1908336 jiffies s: 491905 root: 0x2/.
May 26 08:08:58 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:08:58 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:08:58 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:08:58 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:08:58 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:08:58 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:08:58 mr-fox kernel: RIP: 0010:ip6_xmit+0x27e/0x600
May 26 08:08:58 mr-fox kernel: Code: 0f 85 e0 02 00 00 e8 21 1d 01 00 3b 43 70 0f 82 e7 00 00 00 48 8b 44 24 20 48 85 c0 74 0c 48 8b 80 98 03 00 00 65 48 ff 40 28 <48> 8b 4c 24 28 48 8b 81 a0 01 00 00 65 48 ff 40 28 48 85 db 0f 84
May 26 08:08:58 mr-fox kernel: RSP: 0018:ffffa401005f0c30 EFLAGS: 00000202
May 26 08:08:58 mr-fox kernel: RAX: 00002b5790e21310 RBX: ffff9891f26e6700 RCX: 0000000000000040
May 26 08:08:58 mr-fox kernel: RDX: 0000000000000000 RSI: 4f244001f804012a RDI: 0000000000000000
May 26 08:08:58 mr-fox kernel: RBP: ffffa401005f0d20 R08: 0000000000000040 R09: 0000000000000000
May 26 08:08:58 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000020 R12: 0000000000002000
May 26 08:08:58 mr-fox kernel: R13: ffff988d9e786e00 R14: ffffa401005f0d38 R15: ffff988ae91ac300
May 26 08:08:58 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:08:58 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:08:58 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:08:58 mr-fox kernel: PKRU: 55555554
May 26 08:08:58 mr-fox kernel: Call Trace:
May 26 08:08:58 mr-fox kernel: <NMI>
May 26 08:08:58 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:08:58 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:08:58 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:08:58 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:08:58 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:08:58 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:08:58 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 08:08:58 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 08:08:58 mr-fox kernel: ? ip6_xmit+0x27e/0x600
May 26 08:08:58 mr-fox kernel: </NMI>
May 26 08:08:58 mr-fox kernel: <IRQ>
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: ? sch_direct_xmit+0x8d/0x290
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: ? __sk_dst_check+0x34/0xa0
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: ? inet6_csk_route_socket+0x132/0x210
May 26 08:08:58 mr-fox kernel: inet6_csk_xmit+0xe9/0x160
May 26 08:08:58 mr-fox kernel: __tcp_transmit_skb+0x5d0/0xd30
May 26 08:08:58 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 08:08:58 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 08:08:58 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 08:08:58 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:08:58 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:08:58 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:08:58 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:08:58 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:08:58 mr-fox kernel: </IRQ>
May 26 08:08:58 mr-fox kernel: <TASK>
May 26 08:08:58 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:08:58 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 08:08:58 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 08:08:58 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 08:08:58 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:08:58 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:08:58 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:08:58 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:08:58 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:08:58 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:08:58 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:08:58 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:08:58 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:08:58 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:08:58 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:08:58 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:08:58 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:08:58 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:08:58 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:08:58 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:08:58 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:08:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:08:58 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:08:58 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:08:58 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:08:58 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:08:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:08:58 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:08:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:08:58 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:08:58 mr-fox kernel: </TASK>
May 26 08:09:00 mr-fox crond[7565]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:09:00 mr-fox crond[7566]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:09:00 mr-fox crond[7568]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:09:00 mr-fox crond[7570]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:09:00 mr-fox crond[7569]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:09:00 mr-fox CROND[7572]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:09:00 mr-fox CROND[7574]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:09:00 mr-fox CROND[7573]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:09:00 mr-fox CROND[7576]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:09:00 mr-fox CROND[7577]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:09:00 mr-fox CROND[7570]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:09:00 mr-fox CROND[7570]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:09:00 mr-fox CROND[7569]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:09:00 mr-fox CROND[7569]: pam_unix(crond:session): session closed for user root
May 26 08:09:00 mr-fox CROND[7568]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:09:00 mr-fox CROND[7568]: pam_unix(crond:session): session closed for user root
May 26 08:09:00 mr-fox CROND[3977]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:09:00 mr-fox CROND[3977]: pam_unix(crond:session): session closed for user root
May 26 08:10:00 mr-fox crond[12547]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:10:00 mr-fox crond[12548]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:10:00 mr-fox crond[12544]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:10:00 mr-fox crond[12545]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:10:00 mr-fox crond[12550]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:10:00 mr-fox crond[12552]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:10:00 mr-fox crond[12553]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:10:00 mr-fox CROND[12557]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:10:00 mr-fox CROND[12558]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:10:00 mr-fox CROND[12559]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:10:00 mr-fox CROND[12560]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:10:00 mr-fox CROND[12563]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:10:00 mr-fox CROND[12564]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:10:00 mr-fox crond[12554]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:10:00 mr-fox CROND[12562]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:10:00 mr-fox CROND[12566]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:10:00 mr-fox CROND[12544]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:10:00 mr-fox CROND[12544]: pam_unix(crond:session): session closed for user torproject
May 26 08:10:00 mr-fox CROND[12554]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:10:00 mr-fox CROND[12554]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:10:00 mr-fox CROND[12550]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:10:00 mr-fox CROND[12550]: pam_unix(crond:session): session closed for user root
May 26 08:10:00 mr-fox CROND[12552]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:10:00 mr-fox CROND[12552]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:10:00 mr-fox CROND[12548]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:10:00 mr-fox CROND[12548]: pam_unix(crond:session): session closed for user root
May 26 08:10:00 mr-fox CROND[7566]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:10:00 mr-fox CROND[7566]: pam_unix(crond:session): session closed for user root
May 26 08:11:00 mr-fox crond[10418]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:11:00 mr-fox crond[10419]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:11:00 mr-fox crond[10422]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:11:00 mr-fox crond[10421]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:11:00 mr-fox crond[10423]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:11:00 mr-fox CROND[10427]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:11:00 mr-fox CROND[10429]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:11:00 mr-fox CROND[10430]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:11:00 mr-fox CROND[10431]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:11:00 mr-fox CROND[10432]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:11:00 mr-fox CROND[10423]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:11:00 mr-fox CROND[10423]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:11:00 mr-fox CROND[10422]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:11:00 mr-fox CROND[10422]: pam_unix(crond:session): session closed for user root
May 26 08:11:01 mr-fox CROND[10421]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:11:01 mr-fox CROND[10421]: pam_unix(crond:session): session closed for user root
May 26 08:11:01 mr-fox CROND[12547]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:11:01 mr-fox CROND[12547]: pam_unix(crond:session): session closed for user root
May 26 08:11:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:11:45 mr-fox kernel: rcu: 	21-....: (1950167 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=786099
May 26 08:11:45 mr-fox kernel: rcu: 	(t=1950168 jiffies g=8794409 q=54324356 ncpus=32)
May 26 08:11:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:11:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:11:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:11:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:11:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:11:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:11:45 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:11:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:11:45 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:11:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:11:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:11:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:11:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:11:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:11:45 mr-fox kernel: PKRU: 55555554
May 26 08:11:45 mr-fox kernel: Call Trace:
May 26 08:11:45 mr-fox kernel: <IRQ>
May 26 08:11:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:11:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:11:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:11:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:11:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:11:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:11:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:11:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:11:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:11:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:11:45 mr-fox kernel: </IRQ>
May 26 08:11:45 mr-fox kernel: <TASK>
May 26 08:11:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:11:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:11:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:11:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:11:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:11:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:11:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:11:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:11:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:11:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:11:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:11:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:11:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:11:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:11:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:11:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:11:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:11:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:11:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:11:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:11:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:11:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:11:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:11:45 mr-fox kernel: </TASK>
May 26 08:11:58 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1953391 jiffies s: 491905 root: 0x2/.
May 26 08:11:58 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:11:58 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:11:58 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:11:58 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:11:58 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:11:58 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:11:58 mr-fox kernel: RIP: 0010:iommu_dma_unmap_page+0x30/0xa0
May 26 08:11:58 mr-fox kernel: Code: 4d 89 c7 41 56 41 89 ce 41 55 49 89 d5 41 54 49 89 f4 55 53 48 89 fb e8 3e ec ff ff 4c 89 e6 48 89 c7 e8 13 bb ff ff 48 85 c0 <74> 46 4c 89 ea 4c 89 e6 48 89 df 48 89 c5 e8 3d fe ff ff 48 8b 83
May 26 08:11:58 mr-fox kernel: RSP: 0018:ffffa401005f0dc0 EFLAGS: 00000202
May 26 08:11:58 mr-fox kernel: RAX: 000000140662993e RBX: ffff988ac1a460c0 RCX: 0000000000000000
May 26 08:11:58 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:11:58 mr-fox kernel: RBP: 0000000000008bbf R08: 0000000000000000 R09: 0000000000000000
May 26 08:11:58 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000fec7293e
May 26 08:11:58 mr-fox kernel: R13: 0000000000000042 R14: 0000000000000001 R15: 0000000000000000
May 26 08:11:58 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:11:58 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:11:58 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:11:58 mr-fox kernel: PKRU: 55555554
May 26 08:11:58 mr-fox kernel: Call Trace:
May 26 08:11:58 mr-fox kernel: <NMI>
May 26 08:11:58 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:11:58 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:11:58 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:11:58 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:11:58 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:11:58 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:11:58 mr-fox kernel: ? iommu_dma_unmap_page+0x30/0xa0
May 26 08:11:58 mr-fox kernel: ? iommu_dma_unmap_page+0x30/0xa0
May 26 08:11:58 mr-fox kernel: ? iommu_dma_unmap_page+0x30/0xa0
May 26 08:11:58 mr-fox kernel: </NMI>
May 26 08:11:58 mr-fox kernel: <IRQ>
May 26 08:11:58 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:11:58 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:11:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:11:58 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:11:58 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:11:58 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:11:58 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:11:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:11:58 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:11:58 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:11:58 mr-fox kernel: </IRQ>
May 26 08:11:58 mr-fox kernel: <TASK>
May 26 08:11:58 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:11:58 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 08:11:58 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 08:11:58 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000202
May 26 08:11:58 mr-fox kernel: RAX: ffff988ac0754ffa RBX: 0000000000000001 RCX: 000000000000001e
May 26 08:11:58 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 08:11:58 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:11:58 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:11:58 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:11:58 mr-fox kernel: xas_load+0x49/0x60
May 26 08:11:58 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:11:58 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:11:58 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:11:58 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:11:58 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:11:58 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:11:58 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:11:58 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:11:58 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:11:58 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:11:58 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:11:58 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:11:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:11:58 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:11:58 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:11:58 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:11:58 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:11:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:11:58 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:11:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:11:58 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:11:58 mr-fox kernel: </TASK>
May 26 08:12:00 mr-fox crond[11454]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:12:00 mr-fox crond[11455]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:12:00 mr-fox crond[11453]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:12:00 mr-fox crond[11450]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:12:00 mr-fox crond[11452]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:12:00 mr-fox CROND[11458]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:12:00 mr-fox CROND[11459]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:12:00 mr-fox CROND[11461]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:12:00 mr-fox CROND[11462]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:12:00 mr-fox CROND[11460]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:12:00 mr-fox CROND[11455]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:12:00 mr-fox CROND[11455]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:12:00 mr-fox CROND[11454]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:12:00 mr-fox CROND[11454]: pam_unix(crond:session): session closed for user root
May 26 08:12:00 mr-fox CROND[11453]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:12:00 mr-fox CROND[11453]: pam_unix(crond:session): session closed for user root
May 26 08:12:01 mr-fox CROND[10419]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:12:01 mr-fox CROND[10419]: pam_unix(crond:session): session closed for user root
May 26 08:13:00 mr-fox crond[5538]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:13:00 mr-fox crond[5539]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:13:00 mr-fox crond[5541]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:13:00 mr-fox crond[5540]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:13:00 mr-fox crond[5537]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:13:00 mr-fox CROND[5545]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:13:00 mr-fox CROND[5550]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:13:00 mr-fox CROND[5551]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:13:00 mr-fox CROND[5552]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:13:00 mr-fox CROND[5554]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:13:00 mr-fox CROND[5541]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:13:00 mr-fox CROND[5541]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:13:00 mr-fox CROND[5540]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:13:00 mr-fox CROND[5540]: pam_unix(crond:session): session closed for user root
May 26 08:13:00 mr-fox CROND[5539]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:13:00 mr-fox CROND[5539]: pam_unix(crond:session): session closed for user root
May 26 08:13:00 mr-fox CROND[11452]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:13:00 mr-fox CROND[11452]: pam_unix(crond:session): session closed for user root
May 26 08:14:00 mr-fox crond[8572]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:14:00 mr-fox crond[8571]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:14:00 mr-fox crond[8570]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:14:00 mr-fox crond[8574]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:14:00 mr-fox crond[8569]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:14:00 mr-fox CROND[8577]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:14:00 mr-fox CROND[8578]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:14:00 mr-fox CROND[8579]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:14:00 mr-fox CROND[8580]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:14:00 mr-fox CROND[8582]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:14:00 mr-fox CROND[8574]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:14:00 mr-fox CROND[8574]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:14:00 mr-fox CROND[8572]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:14:00 mr-fox CROND[8572]: pam_unix(crond:session): session closed for user root
May 26 08:14:00 mr-fox CROND[8571]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:14:00 mr-fox CROND[8571]: pam_unix(crond:session): session closed for user root
May 26 08:14:01 mr-fox CROND[5538]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:14:01 mr-fox CROND[5538]: pam_unix(crond:session): session closed for user root
May 26 08:14:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:14:45 mr-fox kernel: rcu: \x0921-....: (1995171 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=805121
May 26 08:14:45 mr-fox kernel: rcu: \x09(t=1995172 jiffies g=8794409 q=55371266 ncpus=32)
May 26 08:14:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:14:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:14:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:14:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:14:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:14:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:14:45 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:14:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:14:45 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:14:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:14:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:14:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:14:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:14:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:14:45 mr-fox kernel: PKRU: 55555554
May 26 08:14:45 mr-fox kernel: Call Trace:
May 26 08:14:45 mr-fox kernel: <IRQ>
May 26 08:14:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:14:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:14:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:14:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:14:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:14:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:14:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:14:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:14:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:14:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:14:45 mr-fox kernel: </IRQ>
May 26 08:14:45 mr-fox kernel: <TASK>
May 26 08:14:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:14:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:14:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:14:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:14:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:14:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:14:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:14:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:14:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:14:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:14:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:14:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:14:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:14:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:14:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:14:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:14:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:14:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:14:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:14:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:14:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:14:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:14:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:14:45 mr-fox kernel: </TASK>
May 26 08:14:58 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 1998448 jiffies s: 491905 root: 0x2/.
May 26 08:14:58 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:14:58 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:14:58 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:14:58 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:14:58 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:14:58 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:14:58 mr-fox kernel: RIP: 0010:amd_iommu_iova_to_phys+0x4/0x20
May 26 08:14:58 mr-fox kernel: Code: b8 ea ff ff ff 31 d2 31 c9 31 f6 31 ff 45 31 c0 45 31 c9 45 31 d2 45 31 db e9 73 9e 45 00 0f 1f 84 00 00 00 00 00 f3 0f 1e fa <48> 8d 97 58 01 00 00 48 8b 87 60 01 00 00 48 89 d7 e9 86 75 30 00
May 26 08:14:58 mr-fox kernel: RSP: 0018:ffffa401005f0db8 EFLAGS: 00000202
May 26 08:14:58 mr-fox kernel: RAX: ffffffffa56aa2a0 RBX: ffff988ac1a460c0 RCX: 0000000000000001
May 26 08:14:58 mr-fox kernel: RDX: 0000000000000056 RSI: 00000000cf5d046a RDI: ffff988ac2211010
May 26 08:14:58 mr-fox kernel: RBP: 000000000000782a R08: 0000000000000000 R09: 0000000000000000
May 26 08:14:58 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cf5d046a
May 26 08:14:58 mr-fox kernel: R13: 0000000000000056 R14: 0000000000000001 R15: 0000000000000000
May 26 08:14:58 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:14:58 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:14:58 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:14:58 mr-fox kernel: PKRU: 55555554
May 26 08:14:58 mr-fox kernel: Call Trace:
May 26 08:14:58 mr-fox kernel: <NMI>
May 26 08:14:58 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:14:58 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:14:58 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:14:58 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:14:58 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:14:58 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:14:58 mr-fox kernel: ? amd_iommu_map_pages+0x70/0x70
May 26 08:14:58 mr-fox kernel: ? amd_iommu_iova_to_phys+0x4/0x20
May 26 08:14:58 mr-fox kernel: ? amd_iommu_iova_to_phys+0x4/0x20
May 26 08:14:58 mr-fox kernel: ? amd_iommu_iova_to_phys+0x4/0x20
May 26 08:14:58 mr-fox kernel: </NMI>
May 26 08:14:58 mr-fox kernel: <IRQ>
May 26 08:14:58 mr-fox kernel: iommu_dma_unmap_page+0x2d/0xa0
May 26 08:14:58 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:14:58 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:14:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:14:58 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:14:58 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:14:58 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:14:58 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:14:58 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:14:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:14:58 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:14:58 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:14:58 mr-fox kernel: </IRQ>
May 26 08:14:58 mr-fox kernel: <TASK>
May 26 08:14:58 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:14:58 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:14:58 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:14:58 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:14:58 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:14:58 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:14:58 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:14:58 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:14:58 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:14:58 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:14:58 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:14:58 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:14:58 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:14:58 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:14:58 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:14:58 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:14:58 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:14:58 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:14:58 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:14:58 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:14:58 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:14:58 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:14:58 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:14:58 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:14:58 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:14:58 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:14:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:14:58 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:14:58 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:14:58 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:14:58 mr-fox kernel: </TASK>
May 26 08:15:00 mr-fox crond[9428]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:15:00 mr-fox crond[9429]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:15:00 mr-fox crond[9427]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:15:00 mr-fox crond[9430]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:15:00 mr-fox crond[9431]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:15:00 mr-fox crond[9432]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:15:00 mr-fox CROND[9437]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:15:00 mr-fox CROND[9438]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:15:00 mr-fox CROND[9439]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:15:00 mr-fox CROND[9440]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:15:00 mr-fox CROND[9442]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:15:00 mr-fox CROND[9441]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:15:00 mr-fox crond[9433]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:15:00 mr-fox crond[9435]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:15:00 mr-fox CROND[9444]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:15:00 mr-fox CROND[9445]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:15:00 mr-fox CROND[9427]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:15:00 mr-fox CROND[9427]: pam_unix(crond:session): session closed for user torproject
May 26 08:15:00 mr-fox CROND[9435]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:15:00 mr-fox CROND[9435]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:15:00 mr-fox CROND[9431]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:15:00 mr-fox CROND[9431]: pam_unix(crond:session): session closed for user root
May 26 08:15:00 mr-fox CROND[9432]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:15:00 mr-fox CROND[9432]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:15:00 mr-fox CROND[9430]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:15:00 mr-fox CROND[9430]: pam_unix(crond:session): session closed for user root
May 26 08:15:01 mr-fox CROND[8570]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:15:01 mr-fox CROND[8570]: pam_unix(crond:session): session closed for user root
May 26 08:16:00 mr-fox crond[13321]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:16:00 mr-fox crond[13324]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:16:00 mr-fox crond[13325]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:16:00 mr-fox crond[13323]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:16:00 mr-fox crond[13326]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:16:00 mr-fox CROND[13331]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:16:00 mr-fox CROND[13330]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:16:00 mr-fox CROND[13332]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:16:00 mr-fox CROND[13333]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:16:00 mr-fox CROND[13334]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:16:00 mr-fox CROND[13326]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:16:00 mr-fox CROND[13326]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:16:00 mr-fox CROND[13325]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:16:00 mr-fox CROND[13325]: pam_unix(crond:session): session closed for user root
May 26 08:16:00 mr-fox CROND[13324]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:16:00 mr-fox CROND[13324]: pam_unix(crond:session): session closed for user root
May 26 08:16:00 mr-fox CROND[9429]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:16:00 mr-fox CROND[9429]: pam_unix(crond:session): session closed for user root
May 26 08:17:00 mr-fox crond[16938]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:17:00 mr-fox crond[16936]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:17:00 mr-fox crond[16937]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:17:00 mr-fox crond[16940]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:17:00 mr-fox crond[16941]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:17:00 mr-fox CROND[16948]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:17:00 mr-fox CROND[16949]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:17:00 mr-fox CROND[16947]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:17:00 mr-fox CROND[16950]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:17:00 mr-fox CROND[16951]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:17:00 mr-fox CROND[16941]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:17:00 mr-fox CROND[16941]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:17:00 mr-fox CROND[16940]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:17:00 mr-fox CROND[16940]: pam_unix(crond:session): session closed for user root
May 26 08:17:00 mr-fox CROND[16938]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:17:00 mr-fox CROND[16938]: pam_unix(crond:session): session closed for user root
May 26 08:17:00 mr-fox CROND[13323]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:17:00 mr-fox CROND[13323]: pam_unix(crond:session): session closed for user root
May 26 08:17:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:17:45 mr-fox kernel: rcu: 	21-....: (2040175 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=824116
May 26 08:17:45 mr-fox kernel: rcu: 	(t=2040176 jiffies g=8794409 q=56388062 ncpus=32)
May 26 08:17:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:17:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:17:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:17:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:17:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:17:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:17:45 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:17:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:17:45 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:17:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:17:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:17:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:17:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:17:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:17:45 mr-fox kernel: PKRU: 55555554
May 26 08:17:45 mr-fox kernel: Call Trace:
May 26 08:17:45 mr-fox kernel: <IRQ>
May 26 08:17:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:17:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:17:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:17:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:17:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:17:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:17:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:17:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:17:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:17:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:17:45 mr-fox kernel: </IRQ>
May 26 08:17:45 mr-fox kernel: <TASK>
May 26 08:17:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:17:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:17:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:17:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:17:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:17:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:17:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:17:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:17:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:17:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:17:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:17:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:17:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:17:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:17:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:17:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:17:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:17:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:17:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:17:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:17:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:17:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:17:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:17:45 mr-fox kernel: </TASK>
May 26 08:17:59 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2043504 jiffies s: 491905 root: 0x2/.
May 26 08:17:59 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:17:59 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:17:59 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:17:59 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:17:59 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:17:59 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:17:59 mr-fox kernel: RIP: 0010:rb_first+0x13/0x30
May 26 08:17:59 mr-fox kernel: Code: c0 45 31 c9 45 31 d2 e9 a6 66 1b 00 66 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 48 8b 07 48 85 c0 74 18 48 89 c2 48 8b 40 10 <48> 85 c0 75 f4 48 89 d0 31 d2 31 ff e9 77 66 1b 00 31 d2 eb f0 0f
May 26 08:17:59 mr-fox kernel: RSP: 0018:ffffa401005f0e50 EFLAGS: 00000282
May 26 08:17:59 mr-fox kernel: RAX: ffff988d8f31cc40 RBX: ffff9891f581ae40 RCX: 0000000000000000
May 26 08:17:59 mr-fox kernel: RDX: ffff989b4c319340 RSI: 0000000000000000 RDI: ffff9891f581af90
May 26 08:17:59 mr-fox kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
May 26 08:17:59 mr-fox kernel: R10: 0000000000000005 R11: 0000000000000000 R12: ffffffffa6ad7f40
May 26 08:17:59 mr-fox kernel: R13: ffff9891f581af90 R14: ffffa401005f0f00 R15: ffff98a96ed5bb40
May 26 08:17:59 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:17:59 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:17:59 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:17:59 mr-fox kernel: PKRU: 55555554
May 26 08:17:59 mr-fox kernel: Call Trace:
May 26 08:17:59 mr-fox kernel: <NMI>
May 26 08:17:59 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:17:59 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:17:59 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:17:59 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:17:59 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:17:59 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:17:59 mr-fox kernel: ? rb_first+0x13/0x30
May 26 08:17:59 mr-fox kernel: ? rb_first+0x13/0x30
May 26 08:17:59 mr-fox kernel: ? rb_first+0x13/0x30
May 26 08:17:59 mr-fox kernel: </NMI>
May 26 08:17:59 mr-fox kernel: <IRQ>
May 26 08:17:59 mr-fox kernel: tcp_retransmit_timer+0x1ad/0xa60
May 26 08:17:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:17:59 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 08:17:59 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 08:17:59 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:17:59 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:17:59 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:17:59 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:17:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:17:59 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:17:59 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:17:59 mr-fox kernel: </IRQ>
May 26 08:17:59 mr-fox kernel: <TASK>
May 26 08:17:59 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:17:59 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:17:59 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:17:59 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:17:59 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:17:59 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:17:59 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:17:59 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:17:59 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:17:59 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:17:59 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:17:59 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:17:59 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:17:59 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:17:59 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:17:59 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:17:59 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:17:59 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:17:59 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:17:59 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:17:59 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:17:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:17:59 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:17:59 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:17:59 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:17:59 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:17:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:17:59 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:17:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:17:59 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:17:59 mr-fox kernel: </TASK>
May 26 08:18:00 mr-fox crond[20542]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:18:00 mr-fox crond[20544]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:18:00 mr-fox crond[20541]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:18:00 mr-fox crond[20546]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:18:00 mr-fox crond[20545]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:18:00 mr-fox CROND[20548]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:18:00 mr-fox CROND[20552]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:18:00 mr-fox CROND[20550]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:18:00 mr-fox CROND[20553]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:18:00 mr-fox CROND[20549]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:18:00 mr-fox CROND[20546]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:18:00 mr-fox CROND[20546]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:18:00 mr-fox CROND[20545]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:18:00 mr-fox CROND[20545]: pam_unix(crond:session): session closed for user root
May 26 08:18:01 mr-fox CROND[20544]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:18:01 mr-fox CROND[20544]: pam_unix(crond:session): session closed for user root
May 26 08:18:01 mr-fox CROND[16937]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:18:01 mr-fox CROND[16937]: pam_unix(crond:session): session closed for user root
May 26 08:19:00 mr-fox crond[22433]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:19:00 mr-fox crond[22432]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:19:00 mr-fox crond[22431]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:19:00 mr-fox crond[22436]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:19:00 mr-fox crond[22434]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:19:00 mr-fox CROND[22439]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:19:00 mr-fox CROND[22440]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:19:00 mr-fox CROND[22441]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:19:00 mr-fox CROND[22442]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:19:00 mr-fox CROND[22443]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:19:00 mr-fox CROND[22436]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:19:00 mr-fox CROND[22436]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:19:00 mr-fox CROND[22434]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:19:00 mr-fox CROND[22434]: pam_unix(crond:session): session closed for user root
May 26 08:19:00 mr-fox CROND[22433]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:19:00 mr-fox CROND[22433]: pam_unix(crond:session): session closed for user root
May 26 08:19:01 mr-fox CROND[20542]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:19:01 mr-fox CROND[20542]: pam_unix(crond:session): session closed for user root
May 26 08:20:00 mr-fox crond[28810]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:20:00 mr-fox crond[28812]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:20:00 mr-fox crond[28809]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:20:00 mr-fox crond[28811]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:20:00 mr-fox crond[28814]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:20:00 mr-fox crond[28808]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:20:00 mr-fox CROND[28819]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:20:00 mr-fox CROND[28820]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:20:00 mr-fox CROND[28821]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:20:00 mr-fox CROND[28822]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:20:00 mr-fox CROND[28823]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:20:00 mr-fox crond[28816]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:20:00 mr-fox CROND[28824]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:20:00 mr-fox crond[28815]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:20:00 mr-fox CROND[28827]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:20:00 mr-fox CROND[28828]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:20:00 mr-fox CROND[28808]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:20:00 mr-fox CROND[28808]: pam_unix(crond:session): session closed for user torproject
May 26 08:20:00 mr-fox CROND[28816]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:20:00 mr-fox CROND[28816]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:20:00 mr-fox CROND[28812]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:20:00 mr-fox CROND[28812]: pam_unix(crond:session): session closed for user root
May 26 08:20:00 mr-fox CROND[28814]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:20:00 mr-fox CROND[28814]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:20:00 mr-fox CROND[28811]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:20:00 mr-fox CROND[28811]: pam_unix(crond:session): session closed for user root
May 26 08:20:00 mr-fox CROND[22432]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:20:00 mr-fox CROND[22432]: pam_unix(crond:session): session closed for user root
May 26 08:20:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:20:45 mr-fox kernel: rcu: 	21-....: (2085179 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=843479
May 26 08:20:45 mr-fox kernel: rcu: 	(t=2085180 jiffies g=8794409 q=57410221 ncpus=32)
May 26 08:20:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:20:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:20:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:20:45 mr-fox kernel: RIP: 0010:xas_descend+0x31/0xd0
May 26 08:20:45 mr-fox kernel: Code: 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 <49> 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00
May 26 08:20:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 08:20:45 mr-fox kernel: RAX: ffff988ac07566ca RBX: 0000000000000001 RCX: 000000000000001e
May 26 08:20:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 08:20:45 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:20:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:20:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:20:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:20:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:20:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:20:45 mr-fox kernel: PKRU: 55555554
May 26 08:20:45 mr-fox kernel: Call Trace:
May 26 08:20:45 mr-fox kernel: <IRQ>
May 26 08:20:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:20:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:20:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 08:20:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:20:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:20:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:20:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:20:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:20:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:20:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:20:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:20:45 mr-fox kernel: </IRQ>
May 26 08:20:45 mr-fox kernel: <TASK>
May 26 08:20:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:20:45 mr-fox kernel: ? xas_descend+0x31/0xd0
May 26 08:20:45 mr-fox kernel: xas_load+0x49/0x60
May 26 08:20:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:20:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:20:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:20:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:20:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:20:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:20:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:20:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:20:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:20:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:20:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:20:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:20:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:20:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:20:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:20:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:20:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:20:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:20:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:20:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:20:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:20:45 mr-fox kernel: </TASK>
May 26 08:20:59 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2088559 jiffies s: 491905 root: 0x2/.
May 26 08:20:59 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:20:59 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:20:59 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:20:59 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:20:59 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:20:59 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:20:59 mr-fox kernel: RIP: 0010:_raw_spin_lock+0xf/0x30
May 26 08:20:59 mr-fox kernel: Code: 31 ff e9 af 2f 15 00 e9 0f 03 00 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa 31 c0 ba 01 00 00 00 f0 0f b1 17 <75> 0d 31 c0 31 d2 31 f6 31 ff e9 7d 2f 15 00 89 c6 e9 cb 00 00 00
May 26 08:20:59 mr-fox kernel: RSP: 0018:ffffa401005f0e88 EFLAGS: 00000246
May 26 08:20:59 mr-fox kernel: RAX: 0000000000000000 RBX: ffff988dd4110e08 RCX: 0000000000000000
May 26 08:20:59 mr-fox kernel: RDX: 0000000000000001 RSI: ffffffffa587d6d0 RDI: ffff988dd4110a98
May 26 08:20:59 mr-fox kernel: RBP: ffff988dd4110a00 R08: 0000000000000000 R09: 0000000000000000
May 26 08:20:59 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffa401005f0ee8
May 26 08:20:59 mr-fox kernel: R13: ffff98a96ed5bbb0 R14: ffffa401005f0ef0 R15: ffff98a96ed5bb40
May 26 08:20:59 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:20:59 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:20:59 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:20:59 mr-fox kernel: PKRU: 55555554
May 26 08:20:59 mr-fox kernel: Call Trace:
May 26 08:20:59 mr-fox kernel: <NMI>
May 26 08:20:59 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:20:59 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:20:59 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:20:59 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:20:59 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:20:59 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:20:59 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 08:20:59 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 08:20:59 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 08:20:59 mr-fox kernel: ? _raw_spin_lock+0xf/0x30
May 26 08:20:59 mr-fox kernel: </NMI>
May 26 08:20:59 mr-fox kernel: <IRQ>
May 26 08:20:59 mr-fox kernel: tcp_write_timer+0x1e/0xd0
May 26 08:20:59 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 08:20:59 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:20:59 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:20:59 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:20:59 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:20:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:20:59 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:20:59 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:20:59 mr-fox kernel: </IRQ>
May 26 08:20:59 mr-fox kernel: <TASK>
May 26 08:20:59 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:20:59 mr-fox kernel: RIP: 0010:filemap_get_entry+0x6d/0x160
May 26 08:20:59 mr-fox kernel: Code: 00 00 48 c7 44 24 30 00 00 00 00 48 c7 44 24 38 00 00 00 00 48 c7 44 24 20 03 00 00 00 48 8d 7c 24 08 e8 56 70 78 00 48 89 c3 <48> 3d 02 04 00 00 74 e2 48 3d 06 04 00 00 74 da 48 85 c0 74 5a a8
May 26 08:20:59 mr-fox kernel: RSP: 0018:ffffa401077e3968 EFLAGS: 00000246
May 26 08:20:59 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 08:20:59 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:20:59 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:20:59 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:20:59 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:20:59 mr-fox kernel: ? filemap_get_entry+0x6a/0x160
May 26 08:20:59 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:20:59 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:20:59 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:20:59 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:20:59 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:20:59 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:20:59 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:20:59 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:20:59 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:20:59 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:20:59 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:20:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:20:59 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:20:59 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:20:59 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:20:59 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:20:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:20:59 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:20:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:20:59 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:20:59 mr-fox kernel: </TASK>
May 26 08:21:00 mr-fox crond[986]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:21:00 mr-fox crond[987]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:21:00 mr-fox crond[988]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:21:00 mr-fox crond[989]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:21:00 mr-fox crond[990]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:21:00 mr-fox CROND[993]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:21:00 mr-fox CROND[996]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:21:00 mr-fox CROND[997]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:21:00 mr-fox CROND[999]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:21:00 mr-fox CROND[1001]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:21:00 mr-fox CROND[990]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:21:00 mr-fox CROND[990]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:21:00 mr-fox CROND[989]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:21:00 mr-fox CROND[989]: pam_unix(crond:session): session closed for user root
May 26 08:21:00 mr-fox CROND[988]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:21:00 mr-fox CROND[988]: pam_unix(crond:session): session closed for user root
May 26 08:21:01 mr-fox CROND[28810]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:21:01 mr-fox CROND[28810]: pam_unix(crond:session): session closed for user root
May 26 08:22:00 mr-fox crond[1873]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:22:00 mr-fox crond[1871]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:22:00 mr-fox crond[1869]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:22:00 mr-fox crond[1870]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:22:00 mr-fox crond[1874]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:22:00 mr-fox CROND[1878]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:22:00 mr-fox CROND[1879]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:22:00 mr-fox CROND[1881]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:22:00 mr-fox CROND[1880]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:22:00 mr-fox CROND[1887]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:22:00 mr-fox CROND[1874]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:22:00 mr-fox CROND[1874]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:22:00 mr-fox CROND[1873]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:22:00 mr-fox CROND[1873]: pam_unix(crond:session): session closed for user root
May 26 08:22:00 mr-fox CROND[1871]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:22:00 mr-fox CROND[1871]: pam_unix(crond:session): session closed for user root
May 26 08:22:01 mr-fox CROND[987]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:22:01 mr-fox CROND[987]: pam_unix(crond:session): session closed for user root
May 26 08:23:00 mr-fox crond[6998]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:23:00 mr-fox crond[6999]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:23:00 mr-fox crond[7001]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:23:00 mr-fox crond[7000]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:23:00 mr-fox crond[6997]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:23:00 mr-fox CROND[7005]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:23:00 mr-fox CROND[7006]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:23:00 mr-fox CROND[7007]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:23:00 mr-fox CROND[7009]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:23:00 mr-fox CROND[7010]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:23:00 mr-fox CROND[7001]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:23:00 mr-fox CROND[7001]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:23:00 mr-fox CROND[7000]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:23:00 mr-fox CROND[7000]: pam_unix(crond:session): session closed for user root
May 26 08:23:00 mr-fox CROND[6999]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:23:00 mr-fox CROND[6999]: pam_unix(crond:session): session closed for user root
May 26 08:23:00 mr-fox CROND[1870]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:23:00 mr-fox CROND[1870]: pam_unix(crond:session): session closed for user root
May 26 08:23:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:23:45 mr-fox kernel: rcu: 	21-....: (2130183 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=863037
May 26 08:23:45 mr-fox kernel: rcu: 	(t=2130184 jiffies g=8794409 q=58426312 ncpus=32)
May 26 08:23:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:23:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:23:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:23:45 mr-fox kernel: RIP: 0010:xas_descend+0x26/0xd0
May 26 08:23:45 mr-fox kernel: Code: 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e 48 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f <89> d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03
May 26 08:23:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000206
May 26 08:23:45 mr-fox kernel: RAX: ffff988ac07566ca RBX: 000000000000000c RCX: 0000000000000018
May 26 08:23:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac07566c8 RDI: ffffa401077e3970
May 26 08:23:45 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:23:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:23:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:23:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:23:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:23:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:23:45 mr-fox kernel: PKRU: 55555554
May 26 08:23:45 mr-fox kernel: Call Trace:
May 26 08:23:45 mr-fox kernel: <IRQ>
May 26 08:23:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:23:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:23:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:23:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:23:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:23:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:23:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:23:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:23:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:23:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:23:45 mr-fox kernel: </IRQ>
May 26 08:23:45 mr-fox kernel: <TASK>
May 26 08:23:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:23:45 mr-fox kernel: ? xas_descend+0x26/0xd0
May 26 08:23:45 mr-fox kernel: xas_load+0x49/0x60
May 26 08:23:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:23:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:23:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:23:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:23:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:23:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:23:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:23:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:23:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:23:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:23:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:23:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:23:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:23:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:23:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:23:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:23:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:23:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:23:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:23:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:23:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:23:45 mr-fox kernel: </TASK>
May 26 08:23:59 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2133616 jiffies s: 491905 root: 0x2/.
May 26 08:23:59 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:23:59 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:23:59 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:23:59 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:23:59 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:23:59 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:23:59 mr-fox kernel: RIP: 0010:__iommu_dma_unmap+0xb0/0x170
May 26 08:23:59 mr-fox kernel: Code: 48 c7 44 24 18 00 00 00 00 48 01 d5 48 c7 44 24 08 ff ff ff ff 48 21 c5 49 8b 85 e0 00 00 00 48 89 ea 48 85 c0 0f 95 44 24 30 <e8> db b5 ff ff 48 39 e8 0f 85 9a 00 00 00 0f b6 44 24 30 3c 01 0f
May 26 08:23:59 mr-fox kernel: RSP: 0018:ffffa401005f0d48 EFLAGS: 00000282
May 26 08:23:59 mr-fox kernel: RAX: ffff988ac2211010 RBX: 00000000cf593000 RCX: ffffa401005f0d50
May 26 08:23:59 mr-fox kernel: RDX: 0000000000001000 RSI: 00000000cf593000 RDI: ffff988ac2211010
May 26 08:23:59 mr-fox kernel: RBP: 0000000000001000 R08: 0000000000000000 R09: 0000000000000000
May 26 08:23:59 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac2211010
May 26 08:23:59 mr-fox kernel: R13: ffff988ac1292400 R14: ffffa401005f0d50 R15: ffffa401005f0d68
May 26 08:23:59 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:23:59 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:23:59 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:23:59 mr-fox kernel: PKRU: 55555554
May 26 08:23:59 mr-fox kernel: Call Trace:
May 26 08:23:59 mr-fox kernel: <NMI>
May 26 08:23:59 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:23:59 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:23:59 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:23:59 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:23:59 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:23:59 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:23:59 mr-fox kernel: ? __iommu_dma_unmap+0xb0/0x170
May 26 08:23:59 mr-fox kernel: ? __iommu_dma_unmap+0xb0/0x170
May 26 08:23:59 mr-fox kernel: ? __iommu_dma_unmap+0xb0/0x170
May 26 08:23:59 mr-fox kernel: </NMI>
May 26 08:23:59 mr-fox kernel: <IRQ>
May 26 08:23:59 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 08:23:59 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:23:59 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:23:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:23:59 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:23:59 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:23:59 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:23:59 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:23:59 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:23:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:23:59 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:23:59 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:23:59 mr-fox kernel: </IRQ>
May 26 08:23:59 mr-fox kernel: <TASK>
May 26 08:23:59 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:23:59 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:23:59 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:23:59 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:23:59 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:23:59 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:23:59 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:23:59 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:23:59 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:23:59 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:23:59 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:23:59 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:23:59 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:23:59 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:23:59 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:23:59 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:23:59 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:23:59 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:23:59 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:23:59 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:23:59 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:23:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:23:59 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:23:59 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:23:59 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:23:59 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:23:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:23:59 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:23:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:23:59 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:23:59 mr-fox kernel: </TASK>
May 26 08:24:00 mr-fox crond[10341]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:24:00 mr-fox crond[10343]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:24:00 mr-fox crond[10344]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:24:00 mr-fox crond[10345]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:24:00 mr-fox crond[10346]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:24:00 mr-fox CROND[10350]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:24:00 mr-fox CROND[10351]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:24:00 mr-fox CROND[10352]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:24:00 mr-fox CROND[10353]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:24:00 mr-fox CROND[10355]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:24:00 mr-fox CROND[10346]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:24:00 mr-fox CROND[10346]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:24:00 mr-fox CROND[10345]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:24:00 mr-fox CROND[10345]: pam_unix(crond:session): session closed for user root
May 26 08:24:00 mr-fox CROND[10344]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:24:00 mr-fox CROND[10344]: pam_unix(crond:session): session closed for user root
May 26 08:24:00 mr-fox CROND[6998]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:24:00 mr-fox CROND[6998]: pam_unix(crond:session): session closed for user root
May 26 08:25:00 mr-fox crond[12535]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:25:00 mr-fox crond[12537]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:25:00 mr-fox crond[12533]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:25:00 mr-fox crond[12534]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:25:00 mr-fox CROND[12544]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:25:00 mr-fox CROND[12547]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:25:00 mr-fox CROND[12548]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:25:00 mr-fox crond[12541]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:25:00 mr-fox crond[12540]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:25:00 mr-fox crond[12539]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:25:00 mr-fox CROND[12550]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:25:00 mr-fox crond[12538]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:25:00 mr-fox CROND[12551]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:25:00 mr-fox CROND[12552]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:25:00 mr-fox CROND[12554]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:25:00 mr-fox CROND[12556]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:25:00 mr-fox CROND[12533]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:25:00 mr-fox CROND[12533]: pam_unix(crond:session): session closed for user torproject
May 26 08:25:00 mr-fox CROND[12541]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:25:00 mr-fox CROND[12541]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:25:00 mr-fox CROND[12538]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:25:00 mr-fox CROND[12538]: pam_unix(crond:session): session closed for user root
May 26 08:25:01 mr-fox CROND[12539]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:25:01 mr-fox CROND[12539]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:25:01 mr-fox CROND[12537]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:25:01 mr-fox CROND[12537]: pam_unix(crond:session): session closed for user root
May 26 08:25:01 mr-fox CROND[10343]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:25:01 mr-fox CROND[10343]: pam_unix(crond:session): session closed for user root
May 26 08:26:00 mr-fox crond[13067]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:26:00 mr-fox crond[13066]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:26:00 mr-fox crond[13071]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:26:00 mr-fox crond[13069]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:26:00 mr-fox crond[13072]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:26:00 mr-fox CROND[13075]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:26:00 mr-fox CROND[13076]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:26:00 mr-fox CROND[13078]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:26:00 mr-fox CROND[13079]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:26:00 mr-fox CROND[13081]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:26:00 mr-fox CROND[13072]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:26:00 mr-fox CROND[13072]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:26:00 mr-fox CROND[13071]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:26:00 mr-fox CROND[13071]: pam_unix(crond:session): session closed for user root
May 26 08:26:00 mr-fox CROND[13069]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:26:00 mr-fox CROND[13069]: pam_unix(crond:session): session closed for user root
May 26 08:26:01 mr-fox CROND[12535]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:26:01 mr-fox CROND[12535]: pam_unix(crond:session): session closed for user root
May 26 08:26:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:26:45 mr-fox kernel: rcu: 	21-....: (2175187 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=882654
May 26 08:26:45 mr-fox kernel: rcu: 	(t=2175188 jiffies g=8794409 q=59457709 ncpus=32)
May 26 08:26:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:26:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:26:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:26:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:26:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:26:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 08:26:45 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:26:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:26:45 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:26:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:26:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:26:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:26:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:26:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:26:45 mr-fox kernel: PKRU: 55555554
May 26 08:26:45 mr-fox kernel: Call Trace:
May 26 08:26:45 mr-fox kernel: <IRQ>
May 26 08:26:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:26:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:26:45 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 08:26:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:26:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:26:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:26:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:26:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:26:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:26:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:26:45 mr-fox kernel: </IRQ>
May 26 08:26:45 mr-fox kernel: <TASK>
May 26 08:26:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:26:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:26:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:26:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:26:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:26:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:26:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:26:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:26:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:26:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:26:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:26:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:26:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:26:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:26:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:26:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:26:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:26:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:26:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:26:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:26:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:26:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:26:45 mr-fox kernel: </TASK>
May 26 08:26:59 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2178672 jiffies s: 491905 root: 0x2/.
May 26 08:26:59 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:26:59 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:26:59 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:26:59 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:26:59 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:26:59 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:26:59 mr-fox kernel: RIP: 0010:skb_release_data.isra.0+0x47/0x1e0
May 26 08:26:59 mr-fox kernel: Code: 00 00 00 a8 01 74 2b 83 e0 02 3c 01 89 c2 19 c0 0d ff ff fe ff 80 fa 01 19 d2 66 31 d2 81 c2 01 00 01 00 f0 41 0f c1 44 24 20 <39> c2 0f 85 eb 00 00 00 4d 85 f6 74 57 41 8b 86 b8 00 00 00 49 03
May 26 08:26:59 mr-fox kernel: RSP: 0018:ffffa401005f0db8 EFLAGS: 00000213
May 26 08:26:59 mr-fox kernel: RAX: 0000000000010002 RBX: ffff98998f71de98 RCX: 0000000000000b5f
May 26 08:26:59 mr-fox kernel: RDX: 0000000000000001 RSI: 0000000000000002 RDI: ffff98998f71de98
May 26 08:26:59 mr-fox kernel: RBP: 0000000000009184 R08: 0000000000000000 R09: 0000000000000000
May 26 08:26:59 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988e1dc06980
May 26 08:26:59 mr-fox kernel: R13: 0000000000000002 R14: ffff98998f71de98 R15: ffffa40100119f80
May 26 08:26:59 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:26:59 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:26:59 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:26:59 mr-fox kernel: PKRU: 55555554
May 26 08:26:59 mr-fox kernel: Call Trace:
May 26 08:26:59 mr-fox kernel: <NMI>
May 26 08:26:59 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:26:59 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:26:59 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:26:59 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:26:59 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:26:59 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:26:59 mr-fox kernel: ? skb_release_data.isra.0+0x47/0x1e0
May 26 08:26:59 mr-fox kernel: ? skb_release_data.isra.0+0x47/0x1e0
May 26 08:26:59 mr-fox kernel: ? skb_release_data.isra.0+0x47/0x1e0
May 26 08:26:59 mr-fox kernel: </NMI>
May 26 08:26:59 mr-fox kernel: <IRQ>
May 26 08:26:59 mr-fox kernel: napi_consume_skb+0x6a/0xc0
May 26 08:26:59 mr-fox kernel: igb_poll+0xea/0x1370
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:26:59 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:26:59 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:26:59 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:26:59 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:26:59 mr-fox kernel: </IRQ>
May 26 08:26:59 mr-fox kernel: <TASK>
May 26 08:26:59 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:26:59 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:26:59 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:26:59 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:26:59 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:26:59 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:26:59 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:26:59 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:26:59 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:26:59 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:26:59 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:26:59 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:26:59 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:26:59 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:26:59 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:26:59 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:26:59 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:26:59 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:26:59 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:26:59 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:26:59 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:26:59 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:26:59 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:26:59 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:26:59 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:26:59 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:26:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:26:59 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:26:59 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:26:59 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:26:59 mr-fox kernel: </TASK>
May 26 08:27:00 mr-fox crond[17016]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:27:00 mr-fox crond[17017]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:27:00 mr-fox crond[17018]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:27:00 mr-fox crond[17019]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:27:00 mr-fox CROND[17024]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:27:00 mr-fox CROND[17025]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:27:00 mr-fox crond[17020]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:27:00 mr-fox CROND[17026]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:27:00 mr-fox CROND[17027]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:27:00 mr-fox CROND[17028]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:27:00 mr-fox CROND[17020]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:27:00 mr-fox CROND[17020]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:27:00 mr-fox CROND[17019]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:27:00 mr-fox CROND[17019]: pam_unix(crond:session): session closed for user root
May 26 08:27:00 mr-fox CROND[17018]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:27:00 mr-fox CROND[17018]: pam_unix(crond:session): session closed for user root
May 26 08:27:00 mr-fox CROND[13067]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:27:00 mr-fox CROND[13067]: pam_unix(crond:session): session closed for user root
May 26 08:28:00 mr-fox crond[20473]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:28:00 mr-fox crond[20472]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:28:00 mr-fox crond[20475]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:28:00 mr-fox crond[20471]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:28:00 mr-fox CROND[20481]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:28:00 mr-fox CROND[20482]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:28:00 mr-fox CROND[20483]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:28:00 mr-fox CROND[20485]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:28:00 mr-fox crond[20476]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:28:00 mr-fox CROND[20488]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:28:00 mr-fox CROND[20476]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:28:00 mr-fox CROND[20476]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:28:00 mr-fox CROND[20475]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:28:00 mr-fox CROND[20475]: pam_unix(crond:session): session closed for user root
May 26 08:28:00 mr-fox CROND[20473]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:28:00 mr-fox CROND[20473]: pam_unix(crond:session): session closed for user root
May 26 08:28:01 mr-fox CROND[17017]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:28:01 mr-fox CROND[17017]: pam_unix(crond:session): session closed for user root
May 26 08:29:00 mr-fox crond[23306]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:29:00 mr-fox crond[23308]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:29:00 mr-fox crond[23305]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:29:00 mr-fox crond[23307]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:29:00 mr-fox crond[23309]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:29:00 mr-fox CROND[23314]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:29:00 mr-fox CROND[23315]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:29:00 mr-fox CROND[23318]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:29:00 mr-fox CROND[23317]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:29:00 mr-fox CROND[23319]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:29:00 mr-fox CROND[23309]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:29:00 mr-fox CROND[23309]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:29:00 mr-fox CROND[23308]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:29:00 mr-fox CROND[23308]: pam_unix(crond:session): session closed for user root
May 26 08:29:00 mr-fox CROND[23307]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:29:00 mr-fox CROND[23307]: pam_unix(crond:session): session closed for user root
May 26 08:29:01 mr-fox CROND[20472]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:29:01 mr-fox CROND[20472]: pam_unix(crond:session): session closed for user root
May 26 08:29:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:29:45 mr-fox kernel: rcu: 	21-....: (2220191 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=902227
May 26 08:29:45 mr-fox kernel: rcu: 	(t=2220192 jiffies g=8794409 q=60466260 ncpus=32)
May 26 08:29:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:29:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:29:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:29:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:29:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:29:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 08:29:45 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:29:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:29:45 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:29:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:29:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:29:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:29:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:29:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:29:45 mr-fox kernel: PKRU: 55555554
May 26 08:29:45 mr-fox kernel: Call Trace:
May 26 08:29:45 mr-fox kernel: <IRQ>
May 26 08:29:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:29:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:29:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 08:29:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:29:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:29:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:29:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:29:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:29:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:29:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:29:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:29:45 mr-fox kernel: </IRQ>
May 26 08:29:45 mr-fox kernel: <TASK>
May 26 08:29:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:29:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:29:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:29:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:29:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:29:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:29:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:29:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:29:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:29:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:29:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:29:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:29:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:29:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:29:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:29:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:29:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:29:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:29:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:29:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:29:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:29:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:29:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:29:45 mr-fox kernel: </TASK>
May 26 08:30:00 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2223728 jiffies s: 491905 root: 0x2/.
May 26 08:30:00 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:30:00 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:30:00 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:30:00 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:30:00 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:30:00 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:30:00 mr-fox kernel: RIP: 0010:fetch_pte+0x145/0x170
May 26 08:30:00 mr-fox kernel: Code: 74 25 48 8b 10 f6 c2 01 75 81 31 c0 48 83 c4 10 5b 5d 41 5c 41 5d 41 5e 41 5f 31 d2 31 c9 31 f6 31 ff e9 29 18 45 00 48 8b 38 <81> e7 00 0e 00 00 48 81 ff 00 0e 00 00 75 d3 48 83 c4 10 48 89 ee
May 26 08:30:00 mr-fox kernel: RSP: 0018:ffffa401005f0d58 EFLAGS: 00000246
May 26 08:30:00 mr-fox kernel: RAX: ffff988d028ee2e0 RBX: 0000000000000000 RCX: 0000000000000003
May 26 08:30:00 mr-fox kernel: RDX: 0000000000001000 RSI: 0000000000000001 RDI: 3000000358174e01
May 26 08:30:00 mr-fox kernel: RBP: ffffa401005f0da0 R08: 0000000000000000 R09: 0000000000000000
May 26 08:30:00 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cdc5cc46
May 26 08:30:00 mr-fox kernel: R13: 000ffffffffff000 R14: 0000000000000003 R15: 0000000000000000
May 26 08:30:00 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:30:00 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:30:00 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:30:00 mr-fox kernel: PKRU: 55555554
May 26 08:30:00 mr-fox kernel: Call Trace:
May 26 08:30:00 mr-fox kernel: <NMI>
May 26 08:30:00 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:30:00 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:30:00 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:30:00 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:30:00 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:30:00 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:30:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:30:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:30:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:30:00 mr-fox kernel: </NMI>
May 26 08:30:00 mr-fox kernel: <IRQ>
May 26 08:30:00 mr-fox kernel: iommu_v1_iova_to_phys+0x2b/0xa0
May 26 08:30:00 mr-fox kernel: iommu_dma_unmap_page+0x2d/0xa0
May 26 08:30:00 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:30:00 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:30:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:30:00 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:30:00 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:30:00 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:30:00 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:30:00 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:30:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:30:00 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:30:00 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:30:00 mr-fox kernel: </IRQ>
May 26 08:30:00 mr-fox kernel: <TASK>
May 26 08:30:00 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:30:00 mr-fox kernel: RIP: 0010:xas_descend+0x40/0xd0
May 26 08:30:00 mr-fox kernel: Code: 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 <75> 08 48 3d fd 00 00 00 76 2f 41 88 5c 24 12 48 83 c4 08 5b 5d 41
May 26 08:30:00 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000293
May 26 08:30:00 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: 0000000000000021 RCX: 0000000000000000
May 26 08:30:00 mr-fox kernel: RDX: 0000000000000000 RSI: ffff988d55f18ff8 RDI: ffffa401077e3970
May 26 08:30:00 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:30:00 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:30:00 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:30:00 mr-fox kernel: xas_load+0x49/0x60
May 26 08:30:00 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:30:00 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:30:00 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:30:00 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:30:00 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:30:00 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:30:00 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:30:00 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:30:00 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:30:00 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:30:00 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:30:00 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:30:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:30:00 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:30:00 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:30:00 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:30:00 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:30:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:30:00 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:30:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:30:00 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:30:00 mr-fox kernel: </TASK>
May 26 08:30:00 mr-fox crond[28641]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:30:00 mr-fox crond[28644]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:30:00 mr-fox crond[28643]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:30:00 mr-fox crond[28642]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:30:00 mr-fox CROND[28651]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:30:00 mr-fox crond[28646]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:30:00 mr-fox CROND[28652]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:30:00 mr-fox CROND[28653]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:30:00 mr-fox CROND[28654]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:30:00 mr-fox crond[28648]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:30:00 mr-fox crond[28645]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:30:00 mr-fox crond[28649]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:30:00 mr-fox CROND[28655]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:30:00 mr-fox CROND[28657]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:30:00 mr-fox CROND[28659]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:30:00 mr-fox CROND[28658]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:30:00 mr-fox CROND[28641]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:30:00 mr-fox CROND[28641]: pam_unix(crond:session): session closed for user torproject
May 26 08:30:00 mr-fox CROND[28649]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:30:00 mr-fox CROND[28649]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:30:00 mr-fox CROND[28645]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:30:00 mr-fox CROND[28645]: pam_unix(crond:session): session closed for user root
May 26 08:30:00 mr-fox CROND[28646]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:30:00 mr-fox CROND[28646]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:30:00 mr-fox CROND[28644]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:30:00 mr-fox CROND[28644]: pam_unix(crond:session): session closed for user root
May 26 08:30:00 mr-fox CROND[23306]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:30:00 mr-fox CROND[23306]: pam_unix(crond:session): session closed for user root
May 26 08:31:00 mr-fox crond[31070]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:31:00 mr-fox crond[31067]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:31:00 mr-fox crond[31068]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:31:00 mr-fox crond[31069]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:31:00 mr-fox CROND[31076]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:31:00 mr-fox CROND[31074]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:31:00 mr-fox CROND[31075]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:31:00 mr-fox crond[31071]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:31:00 mr-fox CROND[31078]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:31:00 mr-fox CROND[31080]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:31:00 mr-fox CROND[31071]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:31:00 mr-fox CROND[31071]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:31:00 mr-fox CROND[31070]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:31:00 mr-fox CROND[31070]: pam_unix(crond:session): session closed for user root
May 26 08:31:00 mr-fox CROND[31069]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:31:00 mr-fox CROND[31069]: pam_unix(crond:session): session closed for user root
May 26 08:31:00 mr-fox CROND[28643]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:31:00 mr-fox CROND[28643]: pam_unix(crond:session): session closed for user root
May 26 08:31:32 mr-fox sshd[20093]: pam_unix(sshd:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:32:00 mr-fox crond[24925]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:32:00 mr-fox crond[24924]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:32:00 mr-fox crond[24926]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:32:00 mr-fox crond[24927]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:32:00 mr-fox CROND[24931]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:32:00 mr-fox crond[24928]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:32:00 mr-fox CROND[24932]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:32:00 mr-fox CROND[24933]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:32:00 mr-fox CROND[24934]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:32:00 mr-fox CROND[24935]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:32:00 mr-fox CROND[24928]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:32:00 mr-fox CROND[24928]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:32:00 mr-fox CROND[24927]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:32:00 mr-fox CROND[24927]: pam_unix(crond:session): session closed for user root
May 26 08:32:01 mr-fox CROND[24926]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:32:01 mr-fox CROND[24926]: pam_unix(crond:session): session closed for user root
May 26 08:32:01 mr-fox CROND[31068]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:32:01 mr-fox CROND[31068]: pam_unix(crond:session): session closed for user root
May 26 08:32:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:32:45 mr-fox kernel: rcu: 	21-....: (2265194 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=921657
May 26 08:32:45 mr-fox kernel: rcu: 	(t=2265195 jiffies g=8794409 q=61523075 ncpus=32)
May 26 08:32:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:32:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:32:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:32:45 mr-fox kernel: RIP: 0010:xas_load+0x20/0x60
May 26 08:32:45 mr-fox kernel: Code: ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 <77> 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48
May 26 08:32:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 08:32:45 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:32:45 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:32:45 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:32:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:32:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:32:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:32:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:32:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:32:45 mr-fox kernel: PKRU: 55555554
May 26 08:32:45 mr-fox kernel: Call Trace:
May 26 08:32:45 mr-fox kernel: <IRQ>
May 26 08:32:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:32:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:32:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:32:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:32:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:32:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:32:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:32:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:32:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:32:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:32:45 mr-fox kernel: </IRQ>
May 26 08:32:45 mr-fox kernel: <TASK>
May 26 08:32:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:32:45 mr-fox kernel: ? xas_load+0x20/0x60
May 26 08:32:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:32:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:32:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:32:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:32:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:32:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:32:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:32:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:32:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:32:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:32:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:32:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:32:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:32:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:32:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:32:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:32:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:32:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:32:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:32:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:32:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:32:45 mr-fox kernel: </TASK>
May 26 08:33:00 mr-fox crond[27777]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:33:00 mr-fox crond[27776]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:33:00 mr-fox crond[27774]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:33:00 mr-fox crond[27778]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:33:00 mr-fox crond[27779]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:33:00 mr-fox CROND[27783]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:33:00 mr-fox CROND[27785]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:33:00 mr-fox CROND[27784]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:33:00 mr-fox CROND[27786]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:33:00 mr-fox CROND[27787]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:33:00 mr-fox CROND[27779]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:33:00 mr-fox CROND[27779]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:33:00 mr-fox CROND[27778]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:33:00 mr-fox CROND[27778]: pam_unix(crond:session): session closed for user root
May 26 08:33:00 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2268783 jiffies s: 491905 root: 0x2/.
May 26 08:33:00 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:33:00 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:33:00 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:33:00 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:33:00 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:33:00 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:33:00 mr-fox kernel: RIP: 0010:fetch_pte+0x145/0x170
May 26 08:33:00 mr-fox kernel: Code: 74 25 48 8b 10 f6 c2 01 75 81 31 c0 48 83 c4 10 5b 5d 41 5c 41 5d 41 5e 41 5f 31 d2 31 c9 31 f6 31 ff e9 29 18 45 00 48 8b 38 <81> e7 00 0e 00 00 48 81 ff 00 0e 00 00 75 d3 48 83 c4 10 48 89 ee
May 26 08:33:00 mr-fox kernel: RSP: 0018:ffffa401005f0d58 EFLAGS: 00000246
May 26 08:33:00 mr-fox kernel: RAX: ffff988acb51fa80 RBX: 0000000000000000 RCX: 0000000000000003
May 26 08:33:00 mr-fox kernel: RDX: 0000000000001000 RSI: 0000000000000001 RDI: 3000000176fe4001
May 26 08:33:00 mr-fox kernel: RBP: ffffa401005f0da0 R08: 0000000000000000 R09: 0000000000000000
May 26 08:33:00 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cff5073e
May 26 08:33:00 mr-fox kernel: R13: 000ffffffffff000 R14: 0000000000000003 R15: 0000000000000000
May 26 08:33:00 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:33:00 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:33:00 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:33:00 mr-fox kernel: PKRU: 55555554
May 26 08:33:00 mr-fox kernel: Call Trace:
May 26 08:33:00 mr-fox kernel: <NMI>
May 26 08:33:00 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:33:00 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:33:00 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:33:00 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:33:00 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:33:00 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:33:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:33:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:33:00 mr-fox kernel: ? fetch_pte+0x145/0x170
May 26 08:33:00 mr-fox kernel: </NMI>
May 26 08:33:00 mr-fox kernel: <IRQ>
May 26 08:33:00 mr-fox kernel: ? skb_release_data.isra.0+0x1b2/0x1e0
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: iommu_v1_iova_to_phys+0x2b/0xa0
May 26 08:33:00 mr-fox kernel: iommu_dma_unmap_page+0x2d/0xa0
May 26 08:33:00 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: ? task_tick_fair+0x85/0x470
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: ? wq_worker_tick+0xd/0xd0
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:33:00 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:33:00 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:33:00 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:33:00 mr-fox kernel: </IRQ>
May 26 08:33:00 mr-fox kernel: <TASK>
May 26 08:33:00 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:33:00 mr-fox kernel: RIP: 0010:xas_load+0x35/0x60
May 26 08:33:00 mr-fox kernel: Code: f7 ff ff 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 <48> 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d
May 26 08:33:00 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000286
May 26 08:33:00 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:33:00 mr-fox kernel: RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:33:00 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:33:00 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:33:00 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:33:00 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:33:00 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:33:00 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:33:00 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:33:00 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:33:00 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:33:00 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:33:00 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:33:00 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:33:00 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:33:00 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:33:00 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:33:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:33:00 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:33:00 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:33:00 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:33:00 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:33:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:33:00 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:33:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:33:00 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:33:00 mr-fox kernel: </TASK>
May 26 08:33:00 mr-fox CROND[27777]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:33:00 mr-fox CROND[27777]: pam_unix(crond:session): session closed for user root
May 26 08:33:01 mr-fox CROND[24925]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:33:01 mr-fox CROND[24925]: pam_unix(crond:session): session closed for user root
May 26 08:33:11 mr-fox sshd[12450]: pam_unix(sshd:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:34:00 mr-fox crond[30835]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:34:00 mr-fox crond[30837]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:34:00 mr-fox crond[30836]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:34:00 mr-fox CROND[30843]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:34:00 mr-fox crond[30838]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:34:00 mr-fox crond[30840]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:34:00 mr-fox CROND[30846]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:34:00 mr-fox CROND[30847]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:34:00 mr-fox CROND[30849]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:34:00 mr-fox CROND[30850]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:34:00 mr-fox CROND[30840]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:34:00 mr-fox CROND[30840]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:34:00 mr-fox CROND[30838]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:34:00 mr-fox CROND[30838]: pam_unix(crond:session): session closed for user root
May 26 08:34:00 mr-fox CROND[30837]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:34:00 mr-fox CROND[30837]: pam_unix(crond:session): session closed for user root
May 26 08:34:00 mr-fox CROND[27776]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:34:00 mr-fox CROND[27776]: pam_unix(crond:session): session closed for user root
May 26 08:34:30 mr-fox sshd[15895]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:35:00 mr-fox crond[974]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:35:00 mr-fox crond[973]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:35:00 mr-fox crond[975]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:35:00 mr-fox crond[979]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:35:00 mr-fox crond[983]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:35:00 mr-fox crond[971]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:35:00 mr-fox crond[972]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:35:00 mr-fox CROND[988]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:35:00 mr-fox CROND[989]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:35:00 mr-fox CROND[990]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:35:00 mr-fox CROND[991]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:35:00 mr-fox CROND[992]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:35:00 mr-fox CROND[996]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:35:00 mr-fox crond[984]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:35:00 mr-fox CROND[999]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:35:00 mr-fox CROND[1001]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:35:00 mr-fox CROND[971]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:35:00 mr-fox CROND[971]: pam_unix(crond:session): session closed for user torproject
May 26 08:35:00 mr-fox CROND[984]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:35:00 mr-fox CROND[984]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:35:00 mr-fox CROND[975]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:35:00 mr-fox CROND[975]: pam_unix(crond:session): session closed for user root
May 26 08:35:00 mr-fox CROND[979]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:35:00 mr-fox CROND[979]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:35:00 mr-fox CROND[974]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:35:00 mr-fox CROND[974]: pam_unix(crond:session): session closed for user root
May 26 08:35:01 mr-fox CROND[30836]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:35:01 mr-fox CROND[30836]: pam_unix(crond:session): session closed for user root
May 26 08:35:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:35:45 mr-fox kernel: rcu: 	21-....: (2310198 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=941259
May 26 08:35:45 mr-fox kernel: rcu: 	(t=2310199 jiffies g=8794409 q=62543459 ncpus=32)
May 26 08:35:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:35:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:35:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:35:45 mr-fox kernel: RIP: 0010:xas_descend+0x48/0xd0
May 26 08:35:45 mr-fox kernel: Code: 04 00 48 d3 eb 83 e3 3f 89 d8 48 83 c0 04 48 8b 44 c5 08 49 89 6c 24 18 48 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d fd 00 00 00 <76> 2f 41 88 5c 24 12 48 83 c4 08 5b 5d 41 5c 41 5d 31 d2 31 c9 31
May 26 08:35:45 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000296
May 26 08:35:45 mr-fox kernel: RAX: ffff988b1fdbb47a RBX: 0000000000000036 RCX: 0000000000000012
May 26 08:35:45 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988acf90e6c8 RDI: ffffa401077e3970
May 26 08:35:45 mr-fox kernel: RBP: ffff988acf90e6c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:35:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:35:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:35:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:35:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:35:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:35:45 mr-fox kernel: PKRU: 55555554
May 26 08:35:45 mr-fox kernel: Call Trace:
May 26 08:35:45 mr-fox kernel: <IRQ>
May 26 08:35:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:35:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:35:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:35:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:35:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:35:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:35:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:35:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:35:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:35:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:35:45 mr-fox kernel: </IRQ>
May 26 08:35:45 mr-fox kernel: <TASK>
May 26 08:35:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:35:45 mr-fox kernel: ? xas_descend+0x48/0xd0
May 26 08:35:45 mr-fox kernel: xas_load+0x49/0x60
May 26 08:35:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:35:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:35:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:35:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:35:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:35:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:35:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:35:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:35:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:35:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:35:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:35:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:35:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:35:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:35:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:35:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:35:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:35:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:35:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:35:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:35:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:35:45 mr-fox kernel: </TASK>
May 26 08:36:00 mr-fox crond[873]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:36:00 mr-fox crond[874]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:36:00 mr-fox crond[876]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:36:00 mr-fox crond[875]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:36:00 mr-fox crond[878]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:36:00 mr-fox CROND[881]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:36:00 mr-fox CROND[882]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:36:00 mr-fox CROND[880]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:36:00 mr-fox CROND[883]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:36:00 mr-fox CROND[884]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:36:00 mr-fox CROND[878]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:36:00 mr-fox CROND[878]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:36:00 mr-fox CROND[876]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:36:00 mr-fox CROND[876]: pam_unix(crond:session): session closed for user root
May 26 08:36:00 mr-fox CROND[875]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:36:00 mr-fox CROND[875]: pam_unix(crond:session): session closed for user root
May 26 08:36:00 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2313839 jiffies s: 491905 root: 0x2/.
May 26 08:36:00 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:36:00 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:36:00 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:36:00 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:36:00 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:36:00 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:36:00 mr-fox kernel: RIP: 0010:iommu_dma_unmap_page+0x4a/0xa0
May 26 08:36:00 mr-fox kernel: Code: ff ff 4c 89 e6 48 89 c7 e8 13 bb ff ff 48 85 c0 74 46 4c 89 ea 4c 89 e6 48 89 df 48 89 c5 e8 3d fe ff ff 48 8b 83 40 02 00 00 <48> 85 c0 74 2b 48 3b 28 72 26 48 3b 68 08 73 20 4d 89 f8 44 89 f1
May 26 08:36:00 mr-fox kernel: RSP: 0018:ffffa401005f0dc0 EFLAGS: 00000246
May 26 08:36:00 mr-fox kernel: RAX: ffffffffa6aaa880 RBX: ffff988ac1a460c0 RCX: 0000000000000000
May 26 08:36:00 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:36:00 mr-fox kernel: RBP: 0000000f922b097e R08: 0000000000000000 R09: 0000000000000000
May 26 08:36:00 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000cf9f797e
May 26 08:36:00 mr-fox kernel: R13: 0000000000000042 R14: 0000000000000001 R15: 0000000000000000
May 26 08:36:00 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:36:00 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:36:00 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:36:00 mr-fox kernel: PKRU: 55555554
May 26 08:36:00 mr-fox kernel: Call Trace:
May 26 08:36:00 mr-fox kernel: <NMI>
May 26 08:36:00 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:36:00 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:36:00 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:36:00 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:36:00 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:36:00 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:36:00 mr-fox kernel: ? iommu_dma_unmap_page+0x4a/0xa0
May 26 08:36:00 mr-fox kernel: ? iommu_dma_unmap_page+0x4a/0xa0
May 26 08:36:00 mr-fox kernel: ? iommu_dma_unmap_page+0x4a/0xa0
May 26 08:36:00 mr-fox kernel: </NMI>
May 26 08:36:00 mr-fox kernel: <IRQ>
May 26 08:36:00 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:36:00 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:36:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:36:00 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:36:00 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:36:00 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:36:00 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:36:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:36:00 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:36:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:36:00 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:36:00 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:36:00 mr-fox kernel: </IRQ>
May 26 08:36:00 mr-fox kernel: <TASK>
May 26 08:36:00 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:36:00 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:36:00 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:36:00 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:36:00 mr-fox kernel: RAX: ffff988acf90e6ca RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:36:00 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:36:00 mr-fox kernel: RBP: ffff988ac07566c8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:36:00 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:36:00 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:36:00 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:36:00 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:36:00 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:36:00 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:36:00 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:36:00 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:36:00 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:36:00 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:36:00 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:36:00 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:36:00 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:36:00 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:36:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:36:00 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:36:00 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:36:00 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:36:00 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:36:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:36:00 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:36:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:36:00 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:36:00 mr-fox kernel: </TASK>
May 26 08:36:01 mr-fox CROND[973]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:36:01 mr-fox CROND[973]: pam_unix(crond:session): session closed for user root
May 26 08:36:49 mr-fox sshd[12450]: pam_unix(sshd:session): session closed for user tinderbox
May 26 08:37:00 mr-fox crond[4213]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:37:00 mr-fox crond[4214]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:37:00 mr-fox crond[4212]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:37:00 mr-fox crond[4215]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:37:00 mr-fox crond[4216]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:37:00 mr-fox CROND[4221]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:37:00 mr-fox CROND[4220]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:37:00 mr-fox CROND[4223]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:37:00 mr-fox CROND[4222]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:37:00 mr-fox CROND[4224]: (root) CMD (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:37:00 mr-fox CROND[4216]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:37:00 mr-fox CROND[4216]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:37:00 mr-fox CROND[4215]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:37:00 mr-fox CROND[4215]: pam_unix(crond:session): session closed for user root
May 26 08:37:00 mr-fox CROND[4214]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:37:00 mr-fox CROND[4214]: pam_unix(crond:session): session closed for user root
May 26 08:37:00 mr-fox CROND[874]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:37:00 mr-fox CROND[874]: pam_unix(crond:session): session closed for user root
May 26 08:37:20 mr-fox crontab[5320]: (root) BEGIN EDIT (root)
May 26 08:37:45 mr-fox crontab[5320]: (root) REPLACE (root)
May 26 08:37:45 mr-fox crontab[5320]: (root) END EDIT (root)
May 26 08:37:47 mr-fox crontab[19009]: (root) LIST (root)
May 26 08:38:00 mr-fox crond[3071]: (root) RELOAD (/var/spool/cron/crontabs/root)
May 26 08:38:00 mr-fox crond[7100]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:38:00 mr-fox crond[7099]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:38:00 mr-fox crond[7101]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:38:00 mr-fox crond[7098]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:38:00 mr-fox CROND[7105]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:38:00 mr-fox CROND[7106]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:38:00 mr-fox CROND[7108]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:38:00 mr-fox CROND[7109]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:38:00 mr-fox CROND[7098]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:38:00 mr-fox CROND[7098]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:38:00 mr-fox CROND[7101]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:38:00 mr-fox CROND[7101]: pam_unix(crond:session): session closed for user root
May 26 08:38:00 mr-fox CROND[7100]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:38:00 mr-fox CROND[7100]: pam_unix(crond:session): session closed for user root
May 26 08:38:00 mr-fox CROND[4213]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:38:00 mr-fox CROND[4213]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[873]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[2340]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[8569]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[873]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[986]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[8569]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[2340]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[6492]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[379]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[4109]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[2749]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[3442]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[6492]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[7565]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[19516]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[12534]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[7565]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[19516]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[3442]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[7582]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[31549]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[16112]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[11625]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[972]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[10408]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[5610]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[23305]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[2525]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[10408]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[5610]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[5449]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[5449]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[12534]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[20541]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[12349]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[10418]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[30375]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[4483]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[21416]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[12110]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[3098]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[12110]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[5537]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[13321]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[20471]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[5537]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[31751]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[18689]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[27159]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[9494]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[14290]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[9428]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[14930]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[18100]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[30522]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[22271]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[14930]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[13066]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[22271]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[24341]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[13066]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[22180]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[18365]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[2082]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[22431]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[28642]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[25209]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[2082]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[12237]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[25460]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[28642]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[27412]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[30956]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[17016]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[7075]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[21217]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[15676]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[19580]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[17016]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[26989]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[29652]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[29652]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[4517]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[14400]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[13068]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[4517]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[7770]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[29160]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[18458]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[29160]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[10341]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[10341]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[24924]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[31641]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[11450]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[8780]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[25247]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[16936]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[8780]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[25247]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[31870]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[19723]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[31870]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[27774]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[19723]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[26846]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[11047]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[7750]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[27774]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[26846]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[14213]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[29059]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[19991]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[15610]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[27690]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[23097]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[32136]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[15610]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[20917]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[24747]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[24422]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[986]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[24747]: pam_unix(crond:session): session closed for user root
May 26 08:38:23 mr-fox CROND[28809]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[31067]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[28903]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[31686]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[25325]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
May 26 08:38:23 mr-fox CROND[26184]: (root) CMDEND (for i in {0..3}; do /opt/tb/bin/metrics.sh &>/dev/null ; sleep 15; done)
--
May 26 08:38:45 mr-fox kernel: RIP: 0010:xas_load+0x11/0x60
May 26 08:38:45 mr-fox kernel: Code: 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 <83> e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31
May 26 08:38:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 08:38:45 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:38:45 mr-fox kernel: RDX: ffff988ac0754ffa RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:38:45 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:38:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:38:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:38:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:38:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:38:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:38:45 mr-fox kernel: PKRU: 55555554
May 26 08:38:45 mr-fox kernel: Call Trace:
May 26 08:38:45 mr-fox kernel: <IRQ>
May 26 08:38:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:38:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:38:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:38:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:38:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:38:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:38:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:38:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:38:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:38:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:38:45 mr-fox kernel: </IRQ>
May 26 08:38:45 mr-fox kernel: <TASK>
May 26 08:38:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:38:45 mr-fox kernel: ? xas_load+0x11/0x60
May 26 08:38:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:38:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:38:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:38:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:38:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:38:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:38:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:38:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:38:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:38:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:38:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:38:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:38:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:38:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:38:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:38:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:38:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:38:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:38:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:38:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:38:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:38:45 mr-fox kernel: </TASK>
May 26 08:39:00 mr-fox crond[8369]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:39:00 mr-fox crond[8368]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:39:00 mr-fox crond[8370]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:39:00 mr-fox crond[8371]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:39:00 mr-fox CROND[8375]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:39:00 mr-fox CROND[8376]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:39:00 mr-fox CROND[8378]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:39:00 mr-fox CROND[8377]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:39:00 mr-fox CROND[8368]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:39:00 mr-fox CROND[8368]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:39:00 mr-fox CROND[8371]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:39:00 mr-fox CROND[8371]: pam_unix(crond:session): session closed for user root
May 26 08:39:00 mr-fox CROND[8370]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:39:00 mr-fox CROND[8370]: pam_unix(crond:session): session closed for user root
May 26 08:39:00 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2358895 jiffies s: 491905 root: 0x2/.
May 26 08:39:00 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:39:00 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:39:00 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:39:00 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:39:00 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:39:00 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:39:00 mr-fox kernel: RIP: 0010:read_tsc+0x7/0x20
May 26 08:39:00 mr-fox kernel: Code: 00 00 00 f3 0f 1e fa 8b 05 66 26 48 01 e9 6c 64 ae 00 90 f3 0f 1e fa e9 62 64 ae 00 0f 1f 80 00 00 00 00 f3 0f 1e fa 0f 01 f9 <66> 90 48 c1 e2 20 48 09 d0 31 d2 31 c9 e9 42 64 ae 00 0f 1f 80 00
May 26 08:39:00 mr-fox kernel: RSP: 0018:ffffa401005f0e28 EFLAGS: 00000246
May 26 08:39:00 mr-fox kernel: RAX: 0000000064243c8e RBX: 00000000014239de RCX: 0000000000000015
May 26 08:39:00 mr-fox kernel: RDX: 00000000000082c3 RSI: 0000000000000000 RDI: ffffffffa60201c0
May 26 08:39:00 mr-fox kernel: RBP: 000026697e5906c2 R08: 0000000000000000 R09: 0000000000000000
May 26 08:39:00 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
May 26 08:39:00 mr-fox kernel: R13: ffff98a96ed5bbb0 R14: ffffa401005f0ef0 R15: ffff98a96ed5bb40
May 26 08:39:00 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:39:00 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:39:00 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:39:00 mr-fox kernel: PKRU: 55555554
May 26 08:39:00 mr-fox kernel: Call Trace:
May 26 08:39:00 mr-fox kernel: <NMI>
May 26 08:39:00 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:39:00 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:39:00 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:39:00 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:39:00 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:39:00 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:39:00 mr-fox kernel: ? read_tsc+0x7/0x20
May 26 08:39:00 mr-fox kernel: ? read_tsc+0x7/0x20
May 26 08:39:00 mr-fox kernel: ? read_tsc+0x7/0x20
May 26 08:39:00 mr-fox kernel: </NMI>
May 26 08:39:00 mr-fox kernel: <IRQ>
May 26 08:39:00 mr-fox kernel: ktime_get+0x42/0xb0
May 26 08:39:00 mr-fox kernel: tcp_mstamp_refresh+0xd/0x40
May 26 08:39:00 mr-fox kernel: tcp_write_timer_handler+0x5d/0x280
May 26 08:39:00 mr-fox kernel: tcp_write_timer+0x9f/0xd0
May 26 08:39:00 mr-fox kernel: ? tcp_write_timer_handler+0x280/0x280
May 26 08:39:00 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:39:00 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:39:00 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:39:00 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:39:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:39:00 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:39:00 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:39:00 mr-fox kernel: </IRQ>
May 26 08:39:00 mr-fox kernel: <TASK>
May 26 08:39:00 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:39:00 mr-fox kernel: RIP: 0010:xas_load+0x11/0x60
May 26 08:39:00 mr-fox kernel: Code: 92 4c 89 ee 48 c7 c7 e0 eb 49 a6 e8 a9 58 c6 ff eb be 0f 1f 80 00 00 00 00 f3 0f 1e fa 55 53 48 89 fb e8 f2 f7 ff ff 48 89 c2 <83> e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 0f 5b 5d 31 d2 31
May 26 08:39:00 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 08:39:00 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:39:00 mr-fox kernel: RDX: ffff988ac0754ffa RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:39:00 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:39:00 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:39:00 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:39:00 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:39:00 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:39:00 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:39:00 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:39:00 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:39:00 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:39:00 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:39:00 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:39:00 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:39:00 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:39:00 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:39:00 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:39:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:39:00 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:39:00 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:39:00 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:39:00 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:39:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:39:00 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:39:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:39:00 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:39:00 mr-fox kernel: </TASK>
May 26 08:39:01 mr-fox CROND[7099]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:39:01 mr-fox CROND[7099]: pam_unix(crond:session): session closed for user root
May 26 08:39:16 mr-fox sshd[20093]: pam_unix(sshd:session): session closed for user tinderbox
May 26 08:39:32 mr-fox crontab[29139]: (root) BEGIN EDIT (root)
May 26 08:39:42 mr-fox crontab[29139]: (root) REPLACE (root)
May 26 08:39:42 mr-fox crontab[29139]: (root) END EDIT (root)
May 26 08:39:46 mr-fox crontab[20171]: (root) BEGIN EDIT (root)
May 26 08:39:54 mr-fox crontab[20171]: (root) REPLACE (root)
May 26 08:39:54 mr-fox crontab[20171]: (root) END EDIT (root)
May 26 08:40:00 mr-fox crond[3071]: (root) RELOAD (/var/spool/cron/crontabs/root)
May 26 08:40:00 mr-fox crond[10107]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:40:00 mr-fox crond[10104]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:40:00 mr-fox crond[10108]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:40:00 mr-fox crond[10109]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:40:00 mr-fox crond[10110]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:40:00 mr-fox CROND[10115]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:40:00 mr-fox CROND[10116]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:40:00 mr-fox CROND[10114]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:40:00 mr-fox CROND[10118]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:40:00 mr-fox crond[10111]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:40:00 mr-fox crond[10112]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:40:00 mr-fox CROND[10119]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:40:00 mr-fox CROND[10120]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:40:00 mr-fox CROND[10121]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:40:00 mr-fox CROND[10104]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:40:00 mr-fox CROND[10104]: pam_unix(crond:session): session closed for user torproject
May 26 08:40:00 mr-fox CROND[10109]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:40:00 mr-fox CROND[10109]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:40:00 mr-fox CROND[10112]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:40:00 mr-fox CROND[10112]: pam_unix(crond:session): session closed for user root
May 26 08:40:00 mr-fox CROND[10107]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:40:00 mr-fox CROND[10107]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:40:00 mr-fox CROND[10111]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:40:00 mr-fox CROND[10111]: pam_unix(crond:session): session closed for user root
May 26 08:40:00 mr-fox CROND[8369]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:40:00 mr-fox CROND[8369]: pam_unix(crond:session): session closed for user root
May 26 08:41:00 mr-fox CROND[10110]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:41:00 mr-fox CROND[10110]: pam_unix(crond:session): session closed for user root
May 26 08:41:00 mr-fox crond[12997]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:41:00 mr-fox crond[12998]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:41:00 mr-fox crond[12996]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:41:00 mr-fox CROND[13002]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:41:00 mr-fox CROND[13004]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:41:00 mr-fox CROND[13003]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:41:00 mr-fox crond[12999]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:41:00 mr-fox CROND[13006]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:41:00 mr-fox CROND[12996]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:41:00 mr-fox CROND[12996]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:41:00 mr-fox CROND[12999]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:41:00 mr-fox CROND[12999]: pam_unix(crond:session): session closed for user root
May 26 08:41:00 mr-fox CROND[12998]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:41:00 mr-fox CROND[12998]: pam_unix(crond:session): session closed for user root
May 26 08:41:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:41:45 mr-fox kernel: rcu: 	21-....: (2400206 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=980025
May 26 08:41:45 mr-fox kernel: rcu: 	(t=2400207 jiffies g=8794409 q=64677320 ncpus=32)
May 26 08:41:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:41:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:41:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:41:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:41:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:41:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:41:45 mr-fox kernel: RAX: ffff988d667606da RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:41:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:41:45 mr-fox kernel: RBP: ffff988b1fdbb478 R08: 0000000000000000 R09: 0000000000000000
May 26 08:41:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:41:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:41:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:41:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:41:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:41:45 mr-fox kernel: PKRU: 55555554
May 26 08:41:45 mr-fox kernel: Call Trace:
May 26 08:41:45 mr-fox kernel: <IRQ>
May 26 08:41:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:41:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:41:45 mr-fox kernel: ? tcp_write_xmit+0x1e7/0x13b0
May 26 08:41:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:41:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:41:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:41:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:41:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:41:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:41:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:41:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:41:45 mr-fox kernel: </IRQ>
May 26 08:41:45 mr-fox kernel: <TASK>
May 26 08:41:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:41:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:41:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:41:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:41:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:41:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:41:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:41:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:41:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:41:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:41:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:41:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:41:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:41:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:41:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:41:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:41:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:41:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:41:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:41:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:41:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:41:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:41:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:41:45 mr-fox kernel: </TASK>
May 26 08:41:46 mr-fox sSMTP[21339]: Creating SSL connection to host
May 26 08:41:46 mr-fox sSMTP[21339]: SSL connection using TLS_AES_256_GCM_SHA384
May 26 08:41:51 mr-fox sSMTP[21339]: Sent mail for root@zwiebeltoralf.de (221 www325.your-server.de closing connection) uid=0 username=root outbytes=4034
May 26 08:42:00 mr-fox crond[14079]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:42:00 mr-fox crond[14080]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:42:00 mr-fox crond[14081]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:42:00 mr-fox crond[14082]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:42:00 mr-fox CROND[14085]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:42:00 mr-fox CROND[14086]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:42:00 mr-fox CROND[14087]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:42:00 mr-fox CROND[14088]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:42:00 mr-fox CROND[14079]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:42:00 mr-fox CROND[14079]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:42:00 mr-fox CROND[14082]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:42:00 mr-fox CROND[14082]: pam_unix(crond:session): session closed for user root
May 26 08:42:00 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2403952 jiffies s: 491905 root: 0x2/.
May 26 08:42:00 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:42:00 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:42:00 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:42:00 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:42:00 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:42:00 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:42:00 mr-fox kernel: RIP: 0010:__slab_free+0xb8/0x420
May 26 08:42:00 mr-fox kernel: Code: 89 c1 66 89 74 24 58 66 85 f6 0f 84 df 00 00 00 41 8b 46 08 45 84 c9 75 09 4d 85 ed 0f 84 75 01 00 00 45 31 ff 48 8b 4c 24 58 <a9> 00 00 00 40 0f 84 85 00 00 00 4c 89 e8 f0 49 0f c7 4c 24 20 0f
May 26 08:42:00 mr-fox kernel: RSP: 0018:ffffa401005f0cc0 EFLAGS: 00000246
May 26 08:42:00 mr-fox kernel: RAX: 0000000040042000 RBX: ffff988c27bb5440 RCX: 00000000801c001b
May 26 08:42:00 mr-fox kernel: RDX: 00000000801c001c RSI: 00000000801c001b RDI: ffffa401005f0d30
May 26 08:42:00 mr-fox kernel: RBP: ffffa401005f0d60 R08: 0000000000000001 R09: 0000000000000001
May 26 08:42:00 mr-fox kernel: R10: ffff988c27bb5440 R11: 0000000000000000 R12: ffffcf62c99eed00
May 26 08:42:00 mr-fox kernel: R13: 0000000000000000 R14: ffff988ac0137700 R15: 0000000000000000
May 26 08:42:00 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:42:00 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:42:00 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:42:00 mr-fox kernel: PKRU: 55555554
May 26 08:42:00 mr-fox kernel: Call Trace:
May 26 08:42:00 mr-fox kernel: <NMI>
May 26 08:42:00 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:42:00 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:42:00 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:42:00 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:42:00 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:42:00 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:42:00 mr-fox kernel: ? __slab_free+0xb8/0x420
May 26 08:42:00 mr-fox kernel: ? __slab_free+0xb8/0x420
May 26 08:42:00 mr-fox kernel: ? __slab_free+0xb8/0x420
May 26 08:42:00 mr-fox kernel: </NMI>
May 26 08:42:00 mr-fox kernel: <IRQ>
May 26 08:42:00 mr-fox kernel: ? amd_iommu_unmap_pages+0x40/0x130
May 26 08:42:00 mr-fox kernel: ? skb_release_data.isra.0+0x1b2/0x1e0
May 26 08:42:00 mr-fox kernel: kmem_cache_free+0x2c2/0x360
May 26 08:42:00 mr-fox kernel: skb_release_data.isra.0+0x1b2/0x1e0
May 26 08:42:00 mr-fox kernel: napi_consume_skb+0x45/0xc0
May 26 08:42:00 mr-fox kernel: igb_poll+0xea/0x1370
May 26 08:42:00 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:42:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:42:00 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:42:00 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:42:00 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:42:00 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:42:00 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:42:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:42:00 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:42:00 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:42:00 mr-fox kernel: </IRQ>
May 26 08:42:00 mr-fox kernel: <TASK>
May 26 08:42:00 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:42:00 mr-fox kernel: RIP: 0010:xas_descend+0x13/0xd0
May 26 08:42:00 mr-fox kernel: Code: 87 04 00 e9 a3 87 04 00 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 41 55 41 54 49 89 fc 55 48 89 f5 53 48 83 ec 08 0f b6 0e <48> 8b 5f 08 80 f9 3f 0f 87 8e 87 04 00 48 d3 eb 83 e3 3f 89 d8 48
May 26 08:42:00 mr-fox kernel: RSP: 0018:ffffa401077e3920 EFLAGS: 00000282
May 26 08:42:00 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 000000000000001e
May 26 08:42:00 mr-fox kernel: RDX: 0000000000000002 RSI: ffff988ac0754ff8 RDI: ffffa401077e3970
May 26 08:42:00 mr-fox kernel: RBP: ffff988ac0754ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:42:00 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: ffffa401077e3970
May 26 08:42:00 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:42:00 mr-fox kernel: xas_load+0x49/0x60
May 26 08:42:00 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:42:00 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:42:00 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:42:00 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:42:00 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:42:00 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:42:00 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:42:00 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:42:00 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:42:00 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:42:00 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:42:00 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:42:00 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:42:00 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:42:00 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:42:00 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:42:00 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:42:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:42:00 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:42:00 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:42:00 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:42:00 mr-fox kernel: </TASK>
May 26 08:42:01 mr-fox CROND[14081]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:42:01 mr-fox CROND[14081]: pam_unix(crond:session): session closed for user root
May 26 08:42:01 mr-fox CROND[12997]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:42:01 mr-fox CROND[12997]: pam_unix(crond:session): session closed for user root
May 26 08:42:37 mr-fox su[10021]: (to tinderbox) root on pts/2
May 26 08:42:37 mr-fox su[10021]: pam_unix(su-l:session): session opened for user tinderbox(uid=1003) by root(uid=0)
May 26 08:42:55 mr-fox su[10021]: pam_unix(su-l:session): session closed for user tinderbox
May 26 08:43:00 mr-fox crond[14832]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:43:00 mr-fox crond[14831]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:43:00 mr-fox crond[14834]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:43:00 mr-fox crond[14835]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:43:00 mr-fox CROND[14838]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:43:00 mr-fox CROND[14837]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:43:00 mr-fox CROND[14839]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:43:00 mr-fox CROND[14841]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:43:00 mr-fox CROND[14831]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:43:00 mr-fox CROND[14831]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:43:00 mr-fox CROND[14835]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:43:00 mr-fox CROND[14835]: pam_unix(crond:session): session closed for user root
May 26 08:43:00 mr-fox CROND[14834]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:43:00 mr-fox CROND[14834]: pam_unix(crond:session): session closed for user root
May 26 08:43:01 mr-fox CROND[14080]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:43:01 mr-fox CROND[14080]: pam_unix(crond:session): session closed for user root
May 26 08:44:00 mr-fox crond[14874]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:44:00 mr-fox crond[14872]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:44:00 mr-fox crond[14871]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:44:00 mr-fox crond[14873]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:44:00 mr-fox CROND[14878]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:44:00 mr-fox CROND[14880]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:44:00 mr-fox CROND[14879]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:44:00 mr-fox CROND[14881]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:44:00 mr-fox CROND[14871]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:44:00 mr-fox CROND[14871]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:44:00 mr-fox CROND[14874]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:44:00 mr-fox CROND[14874]: pam_unix(crond:session): session closed for user root
May 26 08:44:00 mr-fox CROND[14873]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:44:00 mr-fox CROND[14873]: pam_unix(crond:session): session closed for user root
May 26 08:44:00 mr-fox CROND[14832]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:44:00 mr-fox CROND[14832]: pam_unix(crond:session): session closed for user root
May 26 08:44:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:44:45 mr-fox kernel: rcu: 	21-....: (2445210 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=998833
May 26 08:44:45 mr-fox kernel: rcu: 	(t=2445211 jiffies g=8794409 q=70987250 ncpus=32)
May 26 08:44:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:44:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:44:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:44:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:44:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:44:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:44:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:44:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:44:45 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:44:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:44:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:44:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:44:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:44:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:44:45 mr-fox kernel: PKRU: 55555554
May 26 08:44:45 mr-fox kernel: Call Trace:
May 26 08:44:45 mr-fox kernel: <IRQ>
May 26 08:44:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:44:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:44:45 mr-fox kernel: ? tcp_write_xmit+0xe3/0x13b0
May 26 08:44:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:44:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:44:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:44:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:44:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:44:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:44:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:44:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:44:45 mr-fox kernel: </IRQ>
May 26 08:44:45 mr-fox kernel: <TASK>
May 26 08:44:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:44:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:44:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:44:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:44:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:44:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:44:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:44:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:44:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:44:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:44:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:44:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:44:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:44:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:44:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:44:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:44:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:44:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:44:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:44:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:44:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:44:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:44:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:44:45 mr-fox kernel: </TASK>
May 26 08:44:45 mr-fox sSMTP[21376]: Creating SSL connection to host
May 26 08:44:45 mr-fox sSMTP[21376]: SSL connection using TLS_AES_256_GCM_SHA384
May 26 08:44:50 mr-fox sSMTP[21376]: Sent mail for root@zwiebeltoralf.de (221 www325.your-server.de closing connection) uid=0 username=root outbytes=4033
May 26 08:45:00 mr-fox crond[12608]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:45:00 mr-fox crond[12606]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:45:00 mr-fox crond[12602]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:45:00 mr-fox crond[12607]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:45:00 mr-fox crond[12604]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:45:00 mr-fox crond[12603]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:45:00 mr-fox crond[12605]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:45:00 mr-fox CROND[12620]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:45:00 mr-fox CROND[12618]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:45:00 mr-fox CROND[12617]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:45:00 mr-fox CROND[12621]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:45:00 mr-fox CROND[12623]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:45:00 mr-fox CROND[12622]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:45:00 mr-fox CROND[12619]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:45:00 mr-fox CROND[12602]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:45:00 mr-fox CROND[12602]: pam_unix(crond:session): session closed for user torproject
May 26 08:45:00 mr-fox CROND[12605]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:45:00 mr-fox CROND[12605]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:45:00 mr-fox CROND[12608]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:45:00 mr-fox CROND[12608]: pam_unix(crond:session): session closed for user root
May 26 08:45:01 mr-fox CROND[12603]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:45:01 mr-fox CROND[12603]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:45:01 mr-fox CROND[12607]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:45:01 mr-fox CROND[12607]: pam_unix(crond:session): session closed for user root
May 26 08:45:01 mr-fox CROND[14872]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:45:01 mr-fox CROND[14872]: pam_unix(crond:session): session closed for user root
May 26 08:45:01 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2449007 jiffies s: 491905 root: 0x2/.
May 26 08:45:01 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:45:01 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:45:01 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:45:01 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:45:01 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:45:01 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:45:01 mr-fox kernel: RIP: 0010:tcp_ack_update_rtt+0x1b0/0x350
May 26 08:45:01 mr-fox kernel: Code: 74 14 48 8b 52 10 48 83 e2 fc f7 02 00 20 00 00 0f 85 79 01 00 00 89 c7 e8 bd 0b 8f ff 89 83 64 07 00 00 f6 83 98 08 00 00 08 <0f> 85 41 01 00 00 8b ab 90 05 00 00 e9 27 ff ff ff 48 83 f9 ff 0f
May 26 08:45:01 mr-fox kernel: RSP: 0018:ffffa401005f0940 EFLAGS: 00000246
May 26 08:45:01 mr-fox kernel: RAX: 0000000000030d40 RBX: ffff988d825e8000 RCX: 000000000001bece
May 26 08:45:01 mr-fox kernel: RDX: ffffffffa5cbcea0 RSI: fffffffffffff558 RDI: 0000000000000000
May 26 08:45:01 mr-fox kernel: RBP: 000000000007d2d8 R08: fffffffffffe4bda R09: ffffa401005f09f0
May 26 08:45:01 mr-fox kernel: R10: 0000000000000000 R11: 0000000000004502 R12: 000000000011c444
May 26 08:45:01 mr-fox kernel: R13: ffffa401005f09f0 R14: 0000000000000001 R15: ffff98a7546b4d00
May 26 08:45:01 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:45:01 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:45:01 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:45:01 mr-fox kernel: PKRU: 55555554
May 26 08:45:01 mr-fox kernel: Call Trace:
May 26 08:45:01 mr-fox kernel: <NMI>
May 26 08:45:01 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:45:01 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:45:01 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:45:01 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:45:01 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:45:01 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:45:01 mr-fox kernel: ? tcp_ack_update_rtt+0x1b0/0x350
May 26 08:45:01 mr-fox kernel: ? tcp_ack_update_rtt+0x1b0/0x350
May 26 08:45:01 mr-fox kernel: ? tcp_ack_update_rtt+0x1b0/0x350
May 26 08:45:01 mr-fox kernel: </NMI>
May 26 08:45:01 mr-fox kernel: <IRQ>
May 26 08:45:01 mr-fox kernel: tcp_ack+0xe1d/0x14b0
May 26 08:45:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:45:01 mr-fox kernel: tcp_rcv_established+0x146/0x6c0
May 26 08:45:01 mr-fox kernel: ? sk_filter_trim_cap+0x40/0x220
May 26 08:45:01 mr-fox kernel: tcp_v4_do_rcv+0x153/0x240
May 26 08:45:01 mr-fox kernel: tcp_v4_rcv+0xe00/0xea0
May 26 08:45:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:45:01 mr-fox kernel: ip_protocol_deliver_rcu+0x32/0x180
May 26 08:45:01 mr-fox kernel: ip_local_deliver_finish+0x74/0xa0
May 26 08:45:01 mr-fox kernel: ip_sublist_rcv_finish+0x7f/0x90
May 26 08:45:01 mr-fox kernel: ip_sublist_rcv+0x176/0x1c0
May 26 08:45:01 mr-fox kernel: ? ip_sublist_rcv+0x1c0/0x1c0
May 26 08:45:01 mr-fox kernel: ip_list_rcv+0x138/0x170
May 26 08:45:01 mr-fox kernel: __netif_receive_skb_list_core+0x2a5/0x2d0
May 26 08:45:01 mr-fox kernel: netif_receive_skb_list_internal+0x1db/0x320
May 26 08:45:01 mr-fox kernel: napi_gro_receive+0xcf/0x1b0
May 26 08:45:01 mr-fox kernel: igb_poll+0x605/0x1370
May 26 08:45:01 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:45:01 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:45:01 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:45:01 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:45:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:45:01 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:45:01 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:45:01 mr-fox kernel: </IRQ>
May 26 08:45:01 mr-fox kernel: <TASK>
May 26 08:45:01 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:45:01 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:45:01 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:45:01 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000246
May 26 08:45:01 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:45:01 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:45:01 mr-fox kernel: RBP: ffff988d55f18ff8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:45:01 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:45:01 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:45:01 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:45:01 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:45:01 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:45:01 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:45:01 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:45:01 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:45:01 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:45:01 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:45:01 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:45:01 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:45:01 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:45:01 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:45:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:45:01 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:45:01 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:45:01 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:45:01 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:45:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:45:01 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:45:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:45:01 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:45:01 mr-fox kernel: </TASK>
May 26 08:46:00 mr-fox crond[20139]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:46:00 mr-fox crond[20138]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:46:00 mr-fox crond[20140]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:46:00 mr-fox crond[20141]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:46:00 mr-fox CROND[20164]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:46:00 mr-fox CROND[20166]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:46:00 mr-fox CROND[20165]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:46:00 mr-fox CROND[20163]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:46:00 mr-fox CROND[20138]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:46:00 mr-fox CROND[20138]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:46:00 mr-fox CROND[20141]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:46:00 mr-fox CROND[20141]: pam_unix(crond:session): session closed for user root
May 26 08:46:00 mr-fox CROND[20140]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:46:00 mr-fox CROND[20140]: pam_unix(crond:session): session closed for user root
May 26 08:46:02 mr-fox CROND[12606]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:46:02 mr-fox CROND[12606]: pam_unix(crond:session): session closed for user root
May 26 08:47:00 mr-fox crond[21926]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:47:00 mr-fox crond[21928]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:47:00 mr-fox crond[21929]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:47:00 mr-fox crond[21925]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:47:00 mr-fox CROND[21939]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:47:00 mr-fox CROND[21940]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:47:00 mr-fox CROND[21941]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:47:00 mr-fox CROND[21942]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:47:00 mr-fox CROND[21925]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:47:00 mr-fox CROND[21925]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:47:00 mr-fox CROND[21929]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:47:00 mr-fox CROND[21929]: pam_unix(crond:session): session closed for user root
May 26 08:47:00 mr-fox CROND[21928]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:47:00 mr-fox CROND[21928]: pam_unix(crond:session): session closed for user root
May 26 08:47:02 mr-fox CROND[20139]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:47:02 mr-fox CROND[20139]: pam_unix(crond:session): session closed for user root
May 26 08:47:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:47:45 mr-fox kernel: rcu: 	21-....: (2490214 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=1018371
May 26 08:47:45 mr-fox kernel: rcu: 	(t=2490215 jiffies g=8794409 q=73773020 ncpus=32)
May 26 08:47:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:47:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:47:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:47:45 mr-fox kernel: RIP: 0010:xas_start+0x33/0x120
May 26 08:47:45 mr-fox kernel: Code: 48 8b 6f 18 48 89 e8 83 e0 03 0f 84 81 00 00 00 48 81 fd 05 c0 ff ff 76 06 48 83 f8 02 74 5d 48 8b 03 48 8b 73 08 48 8b 40 08 <48> 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 20 48 85
May 26 08:47:45 mr-fox kernel: RSP: 0018:ffffa401077e3930 EFLAGS: 00000213
May 26 08:47:45 mr-fox kernel: RAX: ffff988ac0754ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:47:45 mr-fox kernel: RDX: 0000000000000000 RSI: 000000004cd994a1 RDI: ffffa401077e3970
May 26 08:47:45 mr-fox kernel: RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
May 26 08:47:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:47:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:47:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:47:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:47:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:47:45 mr-fox kernel: PKRU: 55555554
May 26 08:47:45 mr-fox kernel: Call Trace:
May 26 08:47:45 mr-fox kernel: <IRQ>
May 26 08:47:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:47:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:47:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:47:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:47:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:47:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:47:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:47:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:47:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:47:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:47:45 mr-fox kernel: </IRQ>
May 26 08:47:45 mr-fox kernel: <TASK>
May 26 08:47:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:47:45 mr-fox kernel: ? xas_start+0x33/0x120
May 26 08:47:45 mr-fox kernel: xas_load+0xe/0x60
May 26 08:47:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:47:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:47:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:47:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:47:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:47:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:47:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:47:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:47:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:47:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:47:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:47:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:47:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:47:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:47:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:47:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:47:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:47:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:47:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:47:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:47:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:47:45 mr-fox kernel: </TASK>
May 26 08:47:45 mr-fox sSMTP[21706]: Creating SSL connection to host
May 26 08:47:45 mr-fox sSMTP[21706]: SSL connection using TLS_AES_256_GCM_SHA384
May 26 08:47:50 mr-fox sSMTP[21706]: Sent mail for root@zwiebeltoralf.de (221 www325.your-server.de closing connection) uid=0 username=root outbytes=4026
May 26 08:48:00 mr-fox crond[9366]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:48:00 mr-fox crond[9364]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:48:00 mr-fox crond[9365]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:48:00 mr-fox crond[9363]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:48:00 mr-fox CROND[9373]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:48:00 mr-fox CROND[9371]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:48:00 mr-fox CROND[9374]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:48:00 mr-fox CROND[9372]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:48:00 mr-fox CROND[9363]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:48:00 mr-fox CROND[9363]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:48:00 mr-fox CROND[9366]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:48:00 mr-fox CROND[9366]: pam_unix(crond:session): session closed for user root
May 26 08:48:00 mr-fox CROND[9365]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:48:00 mr-fox CROND[9365]: pam_unix(crond:session): session closed for user root
May 26 08:48:00 mr-fox CROND[21926]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:48:00 mr-fox CROND[21926]: pam_unix(crond:session): session closed for user root
May 26 08:48:01 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2494064 jiffies s: 491905 root: 0x2/.
May 26 08:48:01 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:48:01 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:48:01 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:48:01 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:48:01 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:48:01 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:48:01 mr-fox kernel: RIP: 0010:__iommu_dma_unmap+0xc3/0x170
May 26 08:48:01 mr-fox kernel: Code: ff ff 48 21 c5 49 8b 85 e0 00 00 00 48 89 ea 48 85 c0 0f 95 44 24 30 e8 db b5 ff ff 48 39 e8 0f 85 9a 00 00 00 0f b6 44 24 30 <3c> 01 0f 87 66 8b 2d 00 a8 01 74 3e 4c 89 f1 48 89 ea 48 89 de 4c
May 26 08:48:01 mr-fox kernel: RSP: 0018:ffffa401005f0d48 EFLAGS: 00000246
May 26 08:48:01 mr-fox kernel: RAX: 0000000000000001 RBX: 00000000cf3fb000 RCX: 0000000000000000
May 26 08:48:01 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:48:01 mr-fox kernel: RBP: 0000000000001000 R08: 0000000000000000 R09: 0000000000000000
May 26 08:48:01 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff988ac2211010
May 26 08:48:01 mr-fox kernel: R13: ffff988ac1292400 R14: ffffa401005f0d50 R15: ffffa401005f0d68
May 26 08:48:01 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:48:01 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:48:01 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:48:01 mr-fox kernel: PKRU: 55555554
May 26 08:48:01 mr-fox kernel: Call Trace:
May 26 08:48:01 mr-fox kernel: <NMI>
May 26 08:48:01 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:48:01 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:48:01 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:48:01 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:48:01 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:48:01 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:48:01 mr-fox kernel: ? __iommu_dma_unmap+0xc3/0x170
May 26 08:48:01 mr-fox kernel: ? __iommu_dma_unmap+0xc3/0x170
May 26 08:48:01 mr-fox kernel: ? __iommu_dma_unmap+0xc3/0x170
May 26 08:48:01 mr-fox kernel: </NMI>
May 26 08:48:01 mr-fox kernel: <IRQ>
May 26 08:48:01 mr-fox kernel: iommu_dma_unmap_page+0x43/0xa0
May 26 08:48:01 mr-fox kernel: igb_poll+0x106/0x1370
May 26 08:48:01 mr-fox kernel: ? free_unref_page_commit+0x8f/0x3b0
May 26 08:48:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:48:01 mr-fox kernel: ? free_unref_page+0xd7/0x170
May 26 08:48:01 mr-fox kernel: __napi_poll+0x28/0x1a0
May 26 08:48:01 mr-fox kernel: net_rx_action+0x202/0x590
May 26 08:48:01 mr-fox kernel: ? enqueue_hrtimer.isra.0+0x3b/0x60
May 26 08:48:01 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:48:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:48:01 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:48:01 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:48:01 mr-fox kernel: </IRQ>
May 26 08:48:01 mr-fox kernel: <TASK>
May 26 08:48:01 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:48:01 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:48:01 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:48:01 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:48:01 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:48:01 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:48:01 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:48:01 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:48:01 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:48:01 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:48:01 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:48:01 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:48:01 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:48:01 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:48:01 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:48:01 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:48:01 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:48:01 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:48:01 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:48:01 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:48:01 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:48:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:48:01 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:48:01 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:48:01 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:48:01 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:48:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:48:01 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:48:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:48:01 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:48:01 mr-fox kernel: </TASK>
May 26 08:49:00 mr-fox CROND[9364]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:49:00 mr-fox CROND[9364]: pam_unix(crond:session): session closed for user root
May 26 08:49:00 mr-fox crond[23867]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:49:00 mr-fox crond[23866]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:49:00 mr-fox crond[23868]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:49:00 mr-fox CROND[23874]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:49:00 mr-fox CROND[23873]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:49:00 mr-fox crond[23869]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:49:00 mr-fox CROND[23872]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:49:00 mr-fox CROND[23875]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:49:00 mr-fox CROND[23866]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:49:00 mr-fox CROND[23866]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:49:00 mr-fox CROND[23869]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:49:00 mr-fox CROND[23869]: pam_unix(crond:session): session closed for user root
May 26 08:49:00 mr-fox CROND[23868]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:49:00 mr-fox CROND[23868]: pam_unix(crond:session): session closed for user root
May 26 08:49:28 mr-fox sshd[26503]: pam_unix(sshd:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:49:29 mr-fox sshd[26503]: pam_unix(sshd:session): session closed for user tinderbox
May 26 08:50:00 mr-fox sshd[7712]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:50:00 mr-fox crond[9530]: pam_unix(crond:session): session opened for user torproject(uid=1001) by torproject(uid=0)
May 26 08:50:00 mr-fox crond[9531]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:50:00 mr-fox crond[9532]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:50:00 mr-fox CROND[9539]: (torproject) CMD (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:50:00 mr-fox crond[9533]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:50:00 mr-fox CROND[9540]: (tinderbox) CMD (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:50:00 mr-fox crond[9536]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:50:00 mr-fox crond[9537]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:50:00 mr-fox crond[9535]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:50:00 mr-fox CROND[9541]: (tinderbox) CMD (sleep 10; /opt/tb/bin/index.sh)
May 26 08:50:00 mr-fox CROND[9543]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:50:00 mr-fox CROND[9544]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:50:00 mr-fox CROND[9545]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:50:00 mr-fox CROND[9547]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:50:00 mr-fox CROND[9530]: (torproject) CMDEND (/opt/fuzz-utils/fuzz.sh -p -f -o 1 -t 1)
May 26 08:50:00 mr-fox CROND[9530]: pam_unix(crond:session): session closed for user torproject
May 26 08:50:00 mr-fox CROND[9533]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:50:00 mr-fox CROND[9533]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:50:00 mr-fox CROND[9537]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:50:00 mr-fox CROND[9537]: pam_unix(crond:session): session closed for user root
May 26 08:50:00 mr-fox CROND[9531]: (tinderbox) CMDEND (f=/tmp/replace_img.$(date +%s).$$.log; /opt/tb/bin/replace_img.sh &>$f; [[ -s $f ]] && cat $f; rm $f)
May 26 08:50:00 mr-fox CROND[9531]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:50:00 mr-fox CROND[9536]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:50:00 mr-fox CROND[9536]: pam_unix(crond:session): session closed for user root
May 26 08:50:01 mr-fox CROND[23867]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:50:01 mr-fox CROND[23867]: pam_unix(crond:session): session closed for user root
May 26 08:50:45 mr-fox kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 26 08:50:45 mr-fox kernel: rcu: 	21-....: (2535218 ticks this GP) idle=6004/1/0x4000000000000000 softirq=11646198/11646198 fqs=1036877
May 26 08:50:45 mr-fox kernel: rcu: 	(t=2535219 jiffies g=8794409 q=74685064 ncpus=32)
May 26 08:50:45 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:50:45 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:50:45 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:50:45 mr-fox kernel: RIP: 0010:xas_load+0x4d/0x60
May 26 08:50:45 mr-fox kernel: Code: 5d 31 d2 31 c9 31 f6 31 ff e9 fa bf 1a 00 0f b6 4b 10 48 8d 68 fe 38 48 fe 72 e4 48 89 ee 48 89 df e8 e7 fe ff ff 80 7d 00 00 <75> bf eb d1 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e
May 26 08:50:45 mr-fox kernel: RSP: 0018:ffffa401077e3950 EFLAGS: 00000206
May 26 08:50:45 mr-fox kernel: RAX: ffff988d55f18ffa RBX: ffffa401077e3970 RCX: 0000000000000000
May 26 08:50:45 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:50:45 mr-fox kernel: RBP: ffff988d667606d8 R08: 0000000000000000 R09: 0000000000000000
May 26 08:50:45 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:50:45 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:50:45 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:50:45 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:50:45 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:50:45 mr-fox kernel: PKRU: 55555554
May 26 08:50:45 mr-fox kernel: Call Trace:
May 26 08:50:45 mr-fox kernel: <IRQ>
May 26 08:50:45 mr-fox kernel: ? rcu_dump_cpu_stacks+0x107/0x1e0
May 26 08:50:45 mr-fox kernel: ? rcu_sched_clock_irq+0x3db/0xba0
May 26 08:50:45 mr-fox kernel: ? update_process_times+0x6f/0xb0
May 26 08:50:45 mr-fox kernel: ? tick_nohz_highres_handler+0x94/0xb0
May 26 08:50:45 mr-fox kernel: ? tick_sched_do_timer+0x80/0x80
May 26 08:50:45 mr-fox kernel: ? __hrtimer_run_queues+0xe8/0x170
May 26 08:50:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:50:45 mr-fox kernel: ? hrtimer_interrupt+0xf3/0x250
May 26 08:50:45 mr-fox kernel: ? __sysvec_apic_timer_interrupt+0x42/0xa0
May 26 08:50:45 mr-fox kernel: ? sysvec_apic_timer_interrupt+0x65/0x80
May 26 08:50:45 mr-fox kernel: </IRQ>
May 26 08:50:45 mr-fox kernel: <TASK>
May 26 08:50:45 mr-fox kernel: ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:50:45 mr-fox kernel: ? xas_load+0x4d/0x60
May 26 08:50:45 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:50:45 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:50:45 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:50:45 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:50:45 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:50:45 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:50:45 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:50:45 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:50:45 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:50:45 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:50:45 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:50:45 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:50:45 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:50:45 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:50:45 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:50:45 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:50:45 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:50:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:50:45 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:50:45 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:50:45 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:50:45 mr-fox kernel: </TASK>
May 26 08:50:45 mr-fox sSMTP[4770]: Creating SSL connection to host
May 26 08:50:45 mr-fox sSMTP[4770]: SSL connection using TLS_AES_256_GCM_SHA384
May 26 08:50:50 mr-fox sSMTP[4770]: Sent mail for root@zwiebeltoralf.de (221 www325.your-server.de closing connection) uid=0 username=root outbytes=4032
May 26 08:51:00 mr-fox crond[24919]: pam_unix(crond:session): session opened for user tinderbox(uid=1003) by tinderbox(uid=0)
May 26 08:51:00 mr-fox crond[24922]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:51:00 mr-fox crond[24921]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:51:00 mr-fox crond[24923]: pam_unix(crond:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:51:00 mr-fox CROND[24925]: (tinderbox) CMD (/opt/tb/bin/logcheck.sh)
May 26 08:51:00 mr-fox CROND[24926]: (root) CMD (/opt/torutils/restart_service.sh)
May 26 08:51:00 mr-fox CROND[24928]: (root) CMD (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:51:00 mr-fox CROND[24927]: (root) CMD (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:51:00 mr-fox CROND[24919]: (tinderbox) CMDEND (/opt/tb/bin/logcheck.sh)
May 26 08:51:00 mr-fox CROND[24919]: pam_unix(crond:session): session closed for user tinderbox
May 26 08:51:00 mr-fox CROND[24923]: (root) CMDEND (/usr/lib/sa/sa1 1 1 -S XALL)
May 26 08:51:00 mr-fox CROND[24923]: pam_unix(crond:session): session closed for user root
May 26 08:51:00 mr-fox CROND[24922]: (root) CMDEND (/opt/torutils/restart_service.sh)
May 26 08:51:00 mr-fox CROND[24922]: pam_unix(crond:session): session closed for user root
May 26 08:51:01 mr-fox CROND[9535]: (root) CMDEND (for i in {0..3}; do /opt/torutils/metrics.sh &>/dev/null; sleep 15; done)
May 26 08:51:01 mr-fox CROND[9535]: pam_unix(crond:session): session closed for user root
May 26 08:51:01 mr-fox kernel: rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 21-.... } 2539119 jiffies s: 491905 root: 0x2/.
May 26 08:51:01 mr-fox kernel: rcu: blocking rcu_node structures (internal RCU debug): l=1:16-31:0x20/.
May 26 08:51:01 mr-fox kernel: Sending NMI from CPU 27 to CPUs 21:
May 26 08:51:01 mr-fox kernel: NMI backtrace for cpu 21
May 26 08:51:01 mr-fox kernel: CPU: 21 PID: 31911 Comm: kworker/u65:20 Tainted: G B T 6.8.11 #13
May 26 08:51:01 mr-fox kernel: Hardware name: ASUS System Product Name/Pro WS 565-ACE, BIOS 0502 01/15/2021
May 26 08:51:01 mr-fox kernel: Workqueue: btrfs-endio-write btrfs_work_helper
May 26 08:51:01 mr-fox kernel: RIP: 0010:__ip_queue_xmit+0x14a/0x480
May 26 08:51:01 mr-fox kernel: Code: 85 c9 0f 85 a4 00 00 00 f6 46 06 40 74 0d f6 83 80 00 00 00 08 0f 84 86 00 00 00 48 8b 3c 24 e8 fc 2d ff ff 8b 85 b0 01 00 00 <48> 8b 3c 24 48 89 da 48 89 ee 89 83 8c 00 00 00 8b 85 b4 01 00 00
May 26 08:51:01 mr-fox kernel: RSP: 0018:ffffa401005f0d80 EFLAGS: 00000202
May 26 08:51:01 mr-fox kernel: RAX: 0000000000000000 RBX: ffff989a4145f000 RCX: 0000000600010000
May 26 08:51:01 mr-fox kernel: RDX: 000000000002b45e RSI: ffff988a85fe1b8c RDI: 0000000000000000
May 26 08:51:01 mr-fox kernel: RBP: ffff988dd7ea4a00 R08: 0000000000000000 R09: 0000000000000000
May 26 08:51:01 mr-fox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000045
May 26 08:51:01 mr-fox kernel: R13: 0000000000000000 R14: ffff98979531d440 R15: ffff988dd7ea4d48
May 26 08:51:01 mr-fox kernel: FS: 0000000000000000(0000) GS:ffff98a96ed40000(0000) knlGS:0000000000000000
May 26 08:51:01 mr-fox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 26 08:51:01 mr-fox kernel: CR2: 000055f569daa8f0 CR3: 0000000412016000 CR4: 0000000000f50ef0
May 26 08:51:01 mr-fox kernel: PKRU: 55555554
May 26 08:51:01 mr-fox kernel: Call Trace:
May 26 08:51:01 mr-fox kernel: <NMI>
May 26 08:51:01 mr-fox kernel: ? nmi_cpu_backtrace+0x9f/0x110
May 26 08:51:01 mr-fox kernel: ? nmi_cpu_backtrace_handler+0xc/0x20
May 26 08:51:01 mr-fox kernel: ? nmi_handle+0x56/0x100
May 26 08:51:01 mr-fox kernel: ? default_do_nmi+0x40/0x240
May 26 08:51:01 mr-fox kernel: ? exc_nmi+0x10c/0x190
May 26 08:51:01 mr-fox kernel: ? end_repeat_nmi+0xf/0x4e
May 26 08:51:01 mr-fox kernel: ? __ip_queue_xmit+0x14a/0x480
May 26 08:51:01 mr-fox kernel: ? __ip_queue_xmit+0x14a/0x480
May 26 08:51:01 mr-fox kernel: ? __ip_queue_xmit+0x14a/0x480
May 26 08:51:01 mr-fox kernel: </NMI>
May 26 08:51:01 mr-fox kernel: <IRQ>
May 26 08:51:01 mr-fox kernel: __tcp_transmit_skb+0xbad/0xd30
May 26 08:51:01 mr-fox kernel: tcp_delack_timer_handler+0xa9/0x110
May 26 08:51:01 mr-fox kernel: tcp_delack_timer+0xb5/0xf0
May 26 08:51:01 mr-fox kernel: ? tcp_delack_timer_handler+0x110/0x110
May 26 08:51:01 mr-fox kernel: call_timer_fn.isra.0+0x13/0xa0
May 26 08:51:01 mr-fox kernel: __run_timers+0x20a/0x240
May 26 08:51:01 mr-fox kernel: run_timer_softirq+0x27/0x60
May 26 08:51:01 mr-fox kernel: __do_softirq+0xd1/0x2a4
May 26 08:51:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:51:01 mr-fox kernel: irq_exit_rcu+0x83/0xa0
May 26 08:51:01 mr-fox kernel: sysvec_apic_timer_interrupt+0x6a/0x80
May 26 08:51:01 mr-fox kernel: </IRQ>
May 26 08:51:01 mr-fox kernel: <TASK>
May 26 08:51:01 mr-fox kernel: asm_sysvec_apic_timer_interrupt+0x1a/0x20
May 26 08:51:01 mr-fox kernel: RIP: 0010:srso_alias_return_thunk+0x0/0xfbef5
May 26 08:51:01 mr-fox kernel: Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 48 8d 64 24 08 c3 cc <e8> f4 ff ff ff 0f 0b cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
May 26 08:51:01 mr-fox kernel: RSP: 0018:ffffa401077e3960 EFLAGS: 00000246
May 26 08:51:01 mr-fox kernel: RAX: ffffcf62c5775d40 RBX: ffffcf62c5775d40 RCX: 0000000000000000
May 26 08:51:01 mr-fox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 26 08:51:01 mr-fox kernel: RBP: ffff988ac61fdd70 R08: 0000000000000000 R09: 0000000000000000
May 26 08:51:01 mr-fox kernel: R10: ffffcf62f511df80 R11: 0000000000000000 R12: 0000000000000001
May 26 08:51:01 mr-fox kernel: R13: 0000000000000002 R14: ffff988ac80b7000 R15: ffff988ac61fdd70
May 26 08:51:01 mr-fox kernel: filemap_get_entry+0x6a/0x160
May 26 08:51:01 mr-fox kernel: __filemap_get_folio+0x34/0x2c0
May 26 08:51:01 mr-fox kernel: alloc_extent_buffer+0x264/0xa00
May 26 08:51:01 mr-fox kernel: read_tree_block+0x17/0xa0
May 26 08:51:01 mr-fox kernel: read_block_for_search+0x252/0x370
May 26 08:51:01 mr-fox kernel: btrfs_search_slot+0x371/0x1030
May 26 08:51:01 mr-fox kernel: btrfs_lookup_csum+0x6e/0x170
May 26 08:51:01 mr-fox kernel: btrfs_csum_file_blocks+0x1af/0x770
May 26 08:51:01 mr-fox kernel: ? join_transaction+0x1e/0x470
May 26 08:51:01 mr-fox kernel: ? btrfs_global_root+0x30/0x70
May 26 08:51:01 mr-fox kernel: btrfs_finish_one_ordered+0x6c3/0xa40
May 26 08:51:01 mr-fox kernel: btrfs_work_helper+0xb1/0x200
May 26 08:51:01 mr-fox kernel: ? srso_alias_return_thunk+0x5/0xfbef5
May 26 08:51:01 mr-fox kernel: process_one_work+0x16a/0x280
May 26 08:51:01 mr-fox kernel: worker_thread+0x281/0x3a0
May 26 08:51:01 mr-fox kernel: ? flush_delayed_work+0x40/0x40
May 26 08:51:01 mr-fox kernel: kthread+0xcb/0xf0
May 26 08:51:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:51:01 mr-fox kernel: ret_from_fork+0x2f/0x50
May 26 08:51:01 mr-fox kernel: ? kthread_complete_and_exit+0x20/0x20
May 26 08:51:01 mr-fox kernel: ret_from_fork_asm+0x11/0x20
May 26 08:51:01 mr-fox kernel: </TASK>
May 26 08:51:43 mr-fox sshd[17350]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
May 26 08:51:47 mr-fox sshd[17350]: pam_unix(sshd:session): session closed for user root
May 26 08:51:52 mr-fox sshd[30002]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
* [amir73il:sb_write_barrier] [fs] 8829cb6189: stress-ng.fault.ops_per_sec -2.3% regression
@ 2024-05-23 2:58 4% kernel test robot
From: kernel test robot @ 2024-05-23 2:58 UTC (permalink / raw)
To: Amir Goldstein; +Cc: oe-lkp, lkp, oliver.sang
Hello,
kernel test robot noticed a -2.3% regression of stress-ng.fault.ops_per_sec on:
commit: 8829cb6189b7a6b5283b9ffc870df13c085f1cd6 ("fs: hold s_write_srcu for pre-modify permission events on write")
https://github.com/amir73il/linux sb_write_barrier
testcase: stress-ng
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:
nr_threads: 100%
testtime: 60s
test: fault
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202405231056.66ecbb94-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240523/202405231056.66ecbb94-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
gcc-13/performance/x86_64-rhel-8.3/100%/debian-12-x86_64-20240206.cgz/lkp-icl-2sp9/fault/stress-ng/60s
commit:
3f7a9d8157 ("fs: add srcu variants for mnt_{want,drop}_write() helpers")
8829cb6189 ("fs: hold s_write_srcu for pre-modify permission events on write")
3f7a9d815783aeff 8829cb6189b7a6b5283b9ffc870
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:6 17% 1:6 dmesg.RIP:native_queued_spin_lock_slowpath
:6 17% 1:6 dmesg.RIP:setup_pebs_adaptive_sample_data
:6 17% 1:6 dmesg.WARNING:at_arch/x86/events/intel/ds.c:#setup_pebs_adaptive_sample_data
%stddev %change %stddev
\ | \
155.51 ± 12% +23.3% 191.81 ± 13% sched_debug.cfs_rq:/.util_est.stddev
5270 ±141% +378.6% 25225 ± 79% sched_debug.cpu.max_idle_balance_cost.stddev
0.63 ± 2% -0.0 0.59 perf-stat.i.branch-miss-rate%
2.61 ± 2% +3.5% 2.70 perf-stat.i.cpi
0.40 ± 5% -5.2% 0.38 perf-stat.i.ipc
53250 -2.3% 52032 stress-ng.fault.minor_page_faults_per_sec
51143720 -2.3% 49967689 stress-ng.fault.ops
852394 -2.3% 832793 stress-ng.fault.ops_per_sec
2.046e+08 -2.3% 1.999e+08 stress-ng.time.minor_page_faults
1.157e+08 -2.2% 1.132e+08 proc-vmstat.numa_hit
1.157e+08 -2.2% 1.131e+08 proc-vmstat.numa_local
51220291 -2.4% 49995156 proc-vmstat.pgactivate
1.377e+08 -2.1% 1.349e+08 proc-vmstat.pgalloc_normal
2.053e+08 -2.4% 2.003e+08 proc-vmstat.pgfault
1.368e+08 -2.2% 1.338e+08 proc-vmstat.pgfree
51073893 -2.4% 49869748 proc-vmstat.unevictable_pgs_culled
24.17 ± 2% -1.7 22.46 ± 2% perf-profile.calltrace.cycles-pp.__madvise
23.20 ± 2% -1.7 21.52 ± 2% perf-profile.calltrace.cycles-pp.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
23.33 ± 2% -1.7 21.65 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__madvise
23.24 ± 2% -1.7 21.55 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
23.31 ± 2% -1.7 21.62 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
22.51 ± 2% -1.7 20.83 ± 2% perf-profile.calltrace.cycles-pp.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.38 ± 3% -1.5 16.87 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain
18.12 ± 3% -1.5 16.62 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu
17.63 -1.2 16.39 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
17.36 -1.2 16.14 perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
17.61 -1.2 16.38 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
17.48 -1.2 16.25 perf-profile.calltrace.cycles-pp.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.64 ± 2% -1.2 14.49 perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap
15.03 ± 2% -1.1 13.91 ± 2% perf-profile.calltrace.cycles-pp.zap_page_range_single.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64
13.49 ± 3% -1.1 12.38 ± 2% perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.zap_page_range_single.madvise_vma_behavior
13.51 ± 3% -1.1 12.41 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain.zap_page_range_single.madvise_vma_behavior.do_madvise.__x64_sys_madvise
13.51 ± 3% -1.1 12.41 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.zap_page_range_single.madvise_vma_behavior.do_madvise
12.53 ± 3% -1.0 11.50 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.zap_page_range_single
36.55 -1.0 35.54 perf-profile.calltrace.cycles-pp.__munmap
36.27 -1.0 35.27 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
36.26 -1.0 35.26 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
12.06 ± 2% -1.0 11.08 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs
7.33 -0.6 6.72 ± 2% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
7.10 ± 2% -0.6 6.49 ± 2% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region.do_vmi_align_munmap
7.11 ± 2% -0.6 6.50 ± 2% perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region.do_vmi_align_munmap.do_vmi_munmap
7.02 ± 2% -0.6 6.42 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.unmap_region
7.01 ± 2% -0.6 6.40 ± 2% perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu
6.99 ± 3% -0.6 6.42 ± 2% perf-profile.calltrace.cycles-pp.__walk_page_range.walk_page_range.madvise_pageout.madvise_vma_behavior.do_madvise
7.38 ± 2% -0.6 6.82 ± 2% perf-profile.calltrace.cycles-pp.madvise_pageout.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64
6.94 ± 3% -0.6 6.38 ± 2% perf-profile.calltrace.cycles-pp.walk_p4d_range.walk_pgd_range.__walk_page_range.walk_page_range.madvise_pageout
6.29 ± 2% -0.6 5.73 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain.free_pages_and_swap_cache
6.97 ± 2% -0.6 6.41 ± 2% perf-profile.calltrace.cycles-pp.walk_pgd_range.__walk_page_range.walk_page_range.madvise_pageout.madvise_vma_behavior
6.38 ± 2% -0.6 5.82 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages
6.16 ± 2% -0.6 5.60 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain
6.92 ± 3% -0.6 6.36 ± 2% perf-profile.calltrace.cycles-pp.walk_pud_range.walk_p4d_range.walk_pgd_range.__walk_page_range.walk_page_range
7.10 ± 2% -0.6 6.54 ± 2% perf-profile.calltrace.cycles-pp.walk_page_range.madvise_pageout.madvise_vma_behavior.do_madvise.__x64_sys_madvise
6.90 ± 3% -0.6 6.34 ± 2% perf-profile.calltrace.cycles-pp.walk_pmd_range.walk_pud_range.walk_p4d_range.walk_pgd_range.__walk_page_range
6.88 ± 3% -0.6 6.32 ± 2% perf-profile.calltrace.cycles-pp.madvise_cold_or_pageout_pte_range.walk_pmd_range.walk_pud_range.walk_p4d_range.walk_pgd_range
6.97 -0.6 6.42 perf-profile.calltrace.cycles-pp.folios_put_refs.shmem_undo_range.shmem_evict_inode.evict.__dentry_kill
6.54 ± 3% -0.5 5.99 ± 2% perf-profile.calltrace.cycles-pp.folio_isolate_lru.madvise_cold_or_pageout_pte_range.walk_pmd_range.walk_pud_range.walk_p4d_range
7.84 -0.5 7.29 perf-profile.calltrace.cycles-pp.shmem_evict_inode.evict.__dentry_kill.dput.__fput
7.72 -0.5 7.17 perf-profile.calltrace.cycles-pp.shmem_undo_range.shmem_evict_inode.evict.__dentry_kill.dput
6.27 ± 3% -0.5 5.75 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.folio_isolate_lru.madvise_cold_or_pageout_pte_range.walk_pmd_range.walk_pud_range
6.17 ± 3% -0.5 5.65 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.folio_isolate_lru.madvise_cold_or_pageout_pte_range.walk_pmd_range
6.09 ± 3% -0.5 5.57 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.folio_isolate_lru.madvise_cold_or_pageout_pte_range
6.42 -0.5 5.90 perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.shmem_undo_range.shmem_evict_inode.evict
6.58 ± 3% -0.5 6.07 ± 2% perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region.do_vmi_align_munmap
6.25 -0.5 5.73 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.shmem_undo_range.shmem_evict_inode
6.62 ± 2% -0.5 6.10 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.do_vmi_align_munmap.do_vmi_munmap
6.62 ± 3% -0.5 6.11 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
6.18 -0.5 5.66 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.shmem_undo_range
6.16 ± 3% -0.5 5.67 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region
6.14 ± 2% -0.5 5.68 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.truncate_inode_pages_range.evict
6.07 ± 2% -0.5 5.60 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.truncate_inode_pages_range
3.43 ± 2% -0.3 3.17 perf-profile.calltrace.cycles-pp.folios_put_refs.truncate_inode_pages_range.evict.__dentry_kill.dput
3.75 ± 2% -0.2 3.50 perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
3.41 ± 2% -0.2 3.17 perf-profile.calltrace.cycles-pp.folios_put_refs.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink
3.14 ± 3% -0.2 2.91 perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.truncate_inode_pages_range.evict.__dentry_kill
3.74 ± 2% -0.2 3.50 perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64
3.13 ± 2% -0.2 2.91 perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.truncate_inode_pages_range.evict.do_unlinkat
0.51 -0.2 0.33 ± 70% perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.stress_fault
5.64 -0.1 5.50 perf-profile.calltrace.cycles-pp.stress_fault
4.70 -0.1 4.60 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.stress_fault
4.16 -0.1 4.07 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.stress_fault
4.12 -0.1 4.03 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.stress_fault
3.59 -0.1 3.52 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.stress_fault
2.13 -0.1 2.08 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.92 -0.0 0.88 perf-profile.calltrace.cycles-pp.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write.__x64_sys_pwrite64
0.71 -0.0 0.67 perf-profile.calltrace.cycles-pp.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write
1.13 -0.0 1.09 perf-profile.calltrace.cycles-pp.generic_perform_write.generic_file_write_iter.vfs_write.__x64_sys_pwrite64.do_syscall_64
0.81 -0.0 0.79 perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault.do_read_fault
0.75 -0.0 0.73 perf-profile.calltrace.cycles-pp.alloc_inode.new_inode.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup
0.60 -0.0 0.58 perf-profile.calltrace.cycles-pp.perf_event_mmap_event.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
18.59 +0.2 18.83 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
18.33 +0.2 18.58 perf-profile.calltrace.cycles-pp.task_work_run.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
17.29 +0.3 17.54 perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.syscall_exit_to_user_mode.do_syscall_64
18.08 +0.3 18.34 perf-profile.calltrace.cycles-pp.__fput.task_work_run.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.19 +0.3 17.45 perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.syscall_exit_to_user_mode
1.96 +0.3 2.23 perf-profile.calltrace.cycles-pp.__libc_pwrite
1.84 +0.3 2.12 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
1.85 +0.3 2.13 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_pwrite
1.82 +0.3 2.10 perf-profile.calltrace.cycles-pp.__x64_sys_pwrite64.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
1.78 +0.3 2.06 perf-profile.calltrace.cycles-pp.vfs_write.__x64_sys_pwrite64.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_pwrite
15.98 +0.3 16.27 perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
8.24 +0.9 9.15 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
8.24 +0.9 9.16 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
8.36 +0.9 9.27 ± 2% perf-profile.calltrace.cycles-pp.unlink
8.04 +0.9 8.96 ± 2% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
8.20 +0.9 9.13 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.36 +1.0 6.37 ± 3% perf-profile.calltrace.cycles-pp.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.64 ± 6% +1.0 4.68 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.evict.__dentry_kill.dput
3.88 ± 6% +1.1 4.94 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.evict.__dentry_kill.dput.__fput
1.35 ± 11% +1.2 2.52 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.evict.do_unlinkat.__x64_sys_unlink
1.42 ± 10% +1.2 2.62 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64
8.34 ± 5% +2.3 10.62 ± 6% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
8.36 ± 5% +2.3 10.64 ± 6% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
8.53 ± 5% +2.3 10.81 ± 6% perf-profile.calltrace.cycles-pp.open64
8.39 ± 5% +2.3 10.67 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
8.40 ± 5% +2.3 10.68 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
8.01 ± 5% +2.3 10.30 ± 6% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.98 ± 5% +2.3 10.27 ± 6% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
2.87 ± 10% +2.3 5.19 ± 11% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.new_inode.ramfs_get_inode.ramfs_mknod
5.99 ± 6% +2.3 8.32 ± 7% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
4.04 ± 7% +2.3 6.38 ± 9% perf-profile.calltrace.cycles-pp.new_inode.ramfs_get_inode.ramfs_mknod.lookup_open.open_last_lookups
3.06 ± 9% +2.4 5.44 ± 11% perf-profile.calltrace.cycles-pp._raw_spin_lock.new_inode.ramfs_get_inode.ramfs_mknod.lookup_open
5.53 ± 6% +2.4 7.91 ± 8% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
4.53 ± 7% +2.4 6.92 ± 9% perf-profile.calltrace.cycles-pp.ramfs_mknod.lookup_open.open_last_lookups.path_openat.do_filp_open
4.39 ± 7% +2.4 6.79 ± 9% perf-profile.calltrace.cycles-pp.ramfs_get_inode.ramfs_mknod.lookup_open.open_last_lookups.path_openat
37.47 ± 2% -3.1 34.42 ± 2% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
36.99 ± 2% -3.0 33.94 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
27.20 ± 2% -2.2 24.96 ± 2% perf-profile.children.cycles-pp.lru_add_drain
27.10 ± 2% -2.2 24.88 ± 2% perf-profile.children.cycles-pp.folio_batch_move_lru
24.23 ± 2% -1.7 22.52 ± 2% perf-profile.children.cycles-pp.__madvise
23.22 ± 2% -1.7 21.53 ± 2% perf-profile.children.cycles-pp.do_madvise
23.24 ± 2% -1.7 21.56 ± 2% perf-profile.children.cycles-pp.__x64_sys_madvise
22.52 ± 2% -1.7 20.84 ± 2% perf-profile.children.cycles-pp.madvise_vma_behavior
20.19 ± 3% -1.6 18.56 ± 2% perf-profile.children.cycles-pp.lru_add_drain_cpu
17.60 -1.2 16.37 perf-profile.children.cycles-pp.do_vmi_munmap
17.63 -1.2 16.40 perf-profile.children.cycles-pp.__x64_sys_munmap
17.62 -1.2 16.39 perf-profile.children.cycles-pp.__vm_munmap
17.38 -1.2 16.16 perf-profile.children.cycles-pp.do_vmi_align_munmap
15.65 ± 2% -1.2 14.50 perf-profile.children.cycles-pp.unmap_region
15.04 ± 2% -1.1 13.91 ± 2% perf-profile.children.cycles-pp.zap_page_range_single
14.24 ± 2% -1.1 13.19 perf-profile.children.cycles-pp.folios_put_refs
36.59 -1.0 35.58 perf-profile.children.cycles-pp.__munmap
12.71 ± 2% -1.0 11.72 perf-profile.children.cycles-pp.__page_cache_release
7.75 -0.6 7.13 perf-profile.children.cycles-pp.tlb_finish_mmu
7.33 ± 2% -0.6 6.72 ± 2% perf-profile.children.cycles-pp.free_pages_and_swap_cache
7.44 -0.6 6.82 perf-profile.children.cycles-pp.__tlb_batch_free_encoded_pages
7.39 ± 2% -0.6 6.83 ± 2% perf-profile.children.cycles-pp.madvise_pageout
6.95 ± 3% -0.6 6.38 ± 2% perf-profile.children.cycles-pp.walk_p4d_range
7.10 ± 2% -0.6 6.54 ± 2% perf-profile.children.cycles-pp.walk_page_range
6.98 ± 2% -0.6 6.42 ± 2% perf-profile.children.cycles-pp.walk_pgd_range
6.99 ± 3% -0.6 6.43 ± 2% perf-profile.children.cycles-pp.__walk_page_range
6.92 ± 3% -0.6 6.36 ± 2% perf-profile.children.cycles-pp.walk_pud_range
6.88 ± 3% -0.6 6.32 ± 2% perf-profile.children.cycles-pp.madvise_cold_or_pageout_pte_range
6.90 ± 3% -0.6 6.34 ± 2% perf-profile.children.cycles-pp.walk_pmd_range
6.55 ± 3% -0.6 6.00 ± 2% perf-profile.children.cycles-pp.folio_isolate_lru
7.84 -0.5 7.30 perf-profile.children.cycles-pp.shmem_evict_inode
7.73 -0.5 7.18 perf-profile.children.cycles-pp.shmem_undo_range
6.28 ± 3% -0.5 5.75 ± 2% perf-profile.children.cycles-pp.folio_lruvec_lock_irq
6.30 ± 3% -0.5 5.78 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irq
7.50 ± 2% -0.5 7.02 perf-profile.children.cycles-pp.truncate_inode_pages_range
6.62 -0.2 6.47 perf-profile.children.cycles-pp.stress_fault
5.72 -0.1 5.60 perf-profile.children.cycles-pp.asm_exc_page_fault
2.28 -0.1 2.15 perf-profile.children.cycles-pp.__do_softirq
2.26 -0.1 2.14 perf-profile.children.cycles-pp.rcu_do_batch
2.26 -0.1 2.15 perf-profile.children.cycles-pp.rcu_core
2.12 -0.1 2.01 perf-profile.children.cycles-pp.irq_exit_rcu
2.00 -0.1 1.91 perf-profile.children.cycles-pp.kmem_cache_free
0.25 ± 2% -0.1 0.16 ± 2% perf-profile.children.cycles-pp.vfs_fallocate
2.34 -0.1 2.25 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
4.17 -0.1 4.08 perf-profile.children.cycles-pp.exc_page_fault
2.32 -0.1 2.23 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.29 ± 2% -0.1 0.20 perf-profile.children.cycles-pp.__x64_sys_fallocate
4.14 -0.1 4.05 perf-profile.children.cycles-pp.do_user_addr_fault
0.42 ± 3% -0.1 0.34 ± 2% perf-profile.children.cycles-pp.posix_fallocate64
3.60 -0.1 3.53 perf-profile.children.cycles-pp.handle_mm_fault
1.70 ± 2% -0.1 1.63 perf-profile.children.cycles-pp.alloc_inode
2.96 -0.1 2.90 perf-profile.children.cycles-pp.do_fault
0.17 ± 3% -0.1 0.11 ± 3% perf-profile.children.cycles-pp.rw_verify_area
1.03 -0.0 0.99 perf-profile.children.cycles-pp.__slab_free
0.92 -0.0 0.88 perf-profile.children.cycles-pp.simple_write_begin
0.64 ± 2% -0.0 0.59 perf-profile.children.cycles-pp.inode_init_always
1.16 -0.0 1.12 perf-profile.children.cycles-pp.generic_perform_write
0.46 ± 3% -0.0 0.42 ± 2% perf-profile.children.cycles-pp.mnt_want_write
0.84 -0.0 0.80 perf-profile.children.cycles-pp.__filemap_get_folio
1.12 -0.0 1.08 perf-profile.children.cycles-pp.perf_event_mmap
1.08 -0.0 1.05 perf-profile.children.cycles-pp.perf_event_mmap_event
0.15 -0.0 0.12 ± 3% perf-profile.children.cycles-pp.__fsnotify_parent
0.23 ± 3% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.may_open
0.58 -0.0 0.55 perf-profile.children.cycles-pp.mas_prev_slot
0.28 -0.0 0.26 ± 4% perf-profile.children.cycles-pp.__count_memcg_events
0.45 ± 2% -0.0 0.42 ± 2% perf-profile.children.cycles-pp.filemap_add_folio
0.18 ± 2% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.security_inode_alloc
0.57 -0.0 0.54 perf-profile.children.cycles-pp.__cond_resched
0.26 -0.0 0.24 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.68 -0.0 0.66 perf-profile.children.cycles-pp.flush_tlb_mm_range
0.32 ± 2% -0.0 0.30 perf-profile.children.cycles-pp.generic_file_mmap
0.14 ± 3% -0.0 0.12 ± 7% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.31 ± 2% -0.0 0.29 ± 2% perf-profile.children.cycles-pp.touch_atime
0.50 ± 2% -0.0 0.48 perf-profile.children.cycles-pp.mas_rev_awalk
0.32 -0.0 0.30 ± 2% perf-profile.children.cycles-pp.alloc_pages_mpol
0.22 ± 2% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.shmem_alloc_folio
0.17 ± 2% -0.0 0.16 ± 3% perf-profile.children.cycles-pp.fsnotify
0.12 ± 4% -0.0 0.10 ± 3% perf-profile.children.cycles-pp.blk_finish_plug
0.42 -0.0 0.40 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.17 ± 2% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.folio_alloc
0.31 -0.0 0.30 perf-profile.children.cycles-pp.mas_ascend
0.18 ± 2% -0.0 0.17 perf-profile.children.cycles-pp.fsnotify_grab_connector
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.kfree
0.19 ± 2% -0.0 0.18 perf-profile.children.cycles-pp.xas_start
0.64 -0.0 0.62 perf-profile.children.cycles-pp.lru_add_fn
0.09 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.prepend_path
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.simple_getattr
0.20 ± 2% -0.0 0.19 ± 2% perf-profile.children.cycles-pp.fsnotify_destroy_marks
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.08 ± 9% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.security_current_getsecid_subj
0.10 ± 7% +0.0 0.12 perf-profile.children.cycles-pp.security_file_post_open
0.09 ± 6% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.ima_file_check
0.02 ± 99% +0.0 0.06 perf-profile.children.cycles-pp.__x64_sys_fcntl
0.55 ± 2% +0.1 0.62 ± 2% perf-profile.children.cycles-pp.inode_wait_for_writeback
91.01 +0.2 91.25 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
18.74 +0.2 18.98 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
90.84 +0.2 91.08 perf-profile.children.cycles-pp.do_syscall_64
0.00 +0.2 0.24 ± 3% perf-profile.children.cycles-pp.file_start_write_area
18.34 +0.2 18.59 perf-profile.children.cycles-pp.task_work_run
17.46 +0.3 17.72 perf-profile.children.cycles-pp.dput
17.25 +0.3 17.50 perf-profile.children.cycles-pp.__dentry_kill
18.09 +0.3 18.35 perf-profile.children.cycles-pp.__fput
1.98 +0.3 2.25 perf-profile.children.cycles-pp.__libc_pwrite
1.82 +0.3 2.10 perf-profile.children.cycles-pp.vfs_write
1.82 +0.3 2.10 perf-profile.children.cycles-pp.__x64_sys_pwrite64
8.38 +0.9 9.29 ± 2% perf-profile.children.cycles-pp.unlink
8.21 +0.9 9.13 ± 2% perf-profile.children.cycles-pp.__x64_sys_unlink
8.04 +0.9 8.96 ± 2% perf-profile.children.cycles-pp.do_unlinkat
21.35 +1.3 22.65 perf-profile.children.cycles-pp.evict
8.11 ± 5% +2.0 10.10 ± 3% perf-profile.children.cycles-pp.new_inode
8.36 ± 5% +2.3 10.64 ± 6% perf-profile.children.cycles-pp.do_sys_openat2
8.37 ± 5% +2.3 10.65 ± 6% perf-profile.children.cycles-pp.__x64_sys_openat
8.55 ± 5% +2.3 10.83 ± 6% perf-profile.children.cycles-pp.open64
7.99 ± 5% +2.3 10.28 ± 6% perf-profile.children.cycles-pp.path_openat
8.02 ± 5% +2.3 10.31 ± 6% perf-profile.children.cycles-pp.do_filp_open
6.00 ± 6% +2.3 8.33 ± 7% perf-profile.children.cycles-pp.open_last_lookups
5.54 ± 6% +2.4 7.92 ± 8% perf-profile.children.cycles-pp.lookup_open
4.54 ± 7% +2.4 6.93 ± 9% perf-profile.children.cycles-pp.ramfs_mknod
4.40 ± 7% +2.4 6.79 ± 9% perf-profile.children.cycles-pp.ramfs_get_inode
12.62 ± 6% +4.4 16.99 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
1.00 -0.0 0.95 perf-profile.self.cycles-pp.__slab_free
0.10 ± 4% -0.0 0.06 perf-profile.self.cycles-pp.vfs_fallocate
1.25 -0.0 1.21 perf-profile.self.cycles-pp.stress_fault
0.44 ± 5% -0.0 0.41 ± 4% perf-profile.self.cycles-pp.apparmor_file_alloc_security
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.__fsnotify_parent
0.26 -0.0 0.23 ± 4% perf-profile.self.cycles-pp.__count_memcg_events
0.25 -0.0 0.23 ± 2% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.17 -0.0 0.15 perf-profile.self.cycles-pp.fsnotify
0.21 -0.0 0.19 perf-profile.self.cycles-pp.mas_prev_slot
0.35 -0.0 0.34 perf-profile.self.cycles-pp.__cond_resched
0.17 ± 2% -0.0 0.16 perf-profile.self.cycles-pp.xas_start
0.12 ± 4% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.__srcu_read_lock
0.13 -0.0 0.12 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.09 -0.0 0.08 perf-profile.self.cycles-pp.mas_store_gfp
0.07 -0.0 0.06 perf-profile.self.cycles-pp.unmap_region
0.06 ± 6% +0.0 0.07 perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.15 ± 3% +0.0 0.19 perf-profile.self.cycles-pp.ramfs_get_inode
0.12 ± 3% +0.1 0.26 ± 2% perf-profile.self.cycles-pp.vfs_write
1.60 ± 2% +0.2 1.76 perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.2 0.22 ± 3% perf-profile.self.cycles-pp.file_start_write_area
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* + udmabuf-pin-the-pages-using-memfd_pin_folios-api.patch added to mm-unstable branch
@ 2024-05-22 22:07 4% Andrew Morton
From: Andrew Morton @ 2024-05-22 22:07 UTC (permalink / raw)
To: mm-commits, willy, shuah, peterx, mike.kravetz, kraxel,
junxiao.chang, jgg, hughd, hch, hch, dongwon.kim, david,
daniel.vetter, vivek.kasireddy, akpm
The patch titled
Subject: udmabuf: pin the pages using memfd_pin_folios() API
has been added to the -mm mm-unstable branch. Its filename is
udmabuf-pin-the-pages-using-memfd_pin_folios-api.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/udmabuf-pin-the-pages-using-memfd_pin_folios-api.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: udmabuf: pin the pages using memfd_pin_folios() API
Date: Wed, 10 Apr 2024 23:59:43 -0700
Using memfd_pin_folios() ensures that the pages are pinned correctly
using FOLL_PIN. It also ensures that we don't accidentally break
features such as memory hotunplug, since the API does not allow pinning
pages in the movable zone.
Using this new API also simplifies the code as we no longer have to deal
with extracting individual pages from their mappings or handle shmem and
hugetlb cases separately.
Link: https://lkml.kernel.org/r/20240411070157.3318425-8-vivek.kasireddy@intel.com
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
drivers/dma-buf/udmabuf.c | 153 ++++++++++++++++++------------------
1 file changed, 78 insertions(+), 75 deletions(-)
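The commit message above says the driver no longer extracts individual pages, but vmf_insert_pfn() and the sg table still want one (folio, offset) entry per PAGE_SIZE page, so udmabuf_create() expands the folios returned by memfd_pin_folios() in the j/k loop visible in the diff below. A minimal user-space model of that expansion (names and the list-of-tuples representation are illustrative, not the kernel API):

```python
PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT

def expand_folios(folio_nr_pages, pgcnt, first_pgoff):
    """Model of the j/k loop in udmabuf_create().

    folio_nr_pages: pages per pinned folio, in pin order
    pgcnt:          number of PAGE_SIZE pages requested
    first_pgoff:    page offset into the first folio (pgoff >> PAGE_SHIFT)
    Returns one (folio_index, byte_offset) entry per page.
    """
    entries = []
    pgoff = first_pgoff
    k = 0  # index into the pinned-folio array
    for _ in range(pgcnt):
        entries.append((k, pgoff << PAGE_SHIFT))
        pgoff += 1
        if pgoff == folio_nr_pages[k]:  # crossed into the next folio
            pgoff = 0
            k += 1
            if k == len(folio_nr_pages):
                break
    return entries
```

For example, two 2-page folios covering a 4-page request produce two entries per folio, with the byte offset resetting at each folio boundary, exactly as ubuf->offsets[] does in the diff.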
--- a/drivers/dma-buf/udmabuf.c~udmabuf-pin-the-pages-using-memfd_pin_folios-api
+++ a/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
+ struct list_head unpin_list;
+};
+
+struct udmabuf_folio {
+ struct folio *folio;
+ struct list_head list;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -153,17 +159,43 @@ static void unmap_udmabuf(struct dma_buf
return put_sg_table(at->dev, sg, direction);
}
+static void unpin_all_folios(struct list_head *unpin_list)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ while (!list_empty(unpin_list)) {
+ ubuf_folio = list_first_entry(unpin_list,
+ struct udmabuf_folio, list);
+ unpin_folio(ubuf_folio->folio);
+
+ list_del(&ubuf_folio->list);
+ kfree(ubuf_folio);
+ }
+}
+
+static int add_to_unpin_list(struct list_head *unpin_list,
+ struct folio *folio)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+ if (!ubuf_folio)
+ return -ENOMEM;
+
+ ubuf_folio->folio = folio;
+ list_add_tail(&ubuf_folio->list, unpin_list);
+ return 0;
+}
+
static void release_udmabuf(struct dma_buf *buf)
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
- pgoff_t pg;
if (ubuf->sg)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
- for (pg = 0; pg < ubuf->pagecount; pg++)
- folio_put(ubuf->folios[pg]);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
@@ -218,64 +250,6 @@ static const struct dma_buf_ops udmabuf_
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- struct hstate *hpstate = hstate_file(memfd);
- pgoff_t mapidx = offset >> huge_page_shift(hpstate);
- pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
- pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct folio *folio = NULL;
- pgoff_t pgidx;
-
- mapidx <<= huge_page_order(hpstate);
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!folio) {
- folio = __filemap_get_folio(memfd->f_mapping,
- mapidx,
- FGP_ACCESSED, 0);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
- }
-
- folio_get(folio);
- ubuf->folios[*pgbuf] = folio;
- ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
- (*pgbuf)++;
- if (++subpgoff == maxsubpgs) {
- folio_put(folio);
- folio = NULL;
- subpgoff = 0;
- mapidx += pages_per_huge_page(hpstate);
- }
- }
-
- if (folio)
- folio_put(folio);
-
- return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct folio *folio = NULL;
-
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
-
- ubuf->folios[*pgbuf] = folio;
- (*pgbuf)++;
- }
-
- return 0;
-}
-
static int check_memfd_seals(struct file *memfd)
{
int seals;
@@ -321,16 +295,19 @@ static long udmabuf_create(struct miscde
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- pgoff_t pgcnt, pgbuf = 0, pglimit;
+ pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+ long nr_folios, ret = -EINVAL;
struct file *memfd = NULL;
+ struct folio **folios;
struct udmabuf *ubuf;
- int ret = -EINVAL;
- u32 i, flags;
+ u32 i, j, k, flags;
+ loff_t end;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
if (!ubuf)
return -ENOMEM;
+ INIT_LIST_HEAD(&ubuf->unpin_list);
pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
@@ -366,17 +343,44 @@ static long udmabuf_create(struct miscde
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
- if (is_file_hugepages(memfd))
- ret = handle_hugetlb_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- else
- ret = handle_shmem_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- if (ret < 0)
+ folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
goto err;
+ }
+
+ end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+
+ nr_folios = ret;
+ pgoff >>= PAGE_SHIFT;
+ for (j = 0, k = 0; j < pgcnt; j++) {
+ ubuf->folios[pgbuf] = folios[k];
+ ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+
+ if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+ ret = add_to_unpin_list(&ubuf->unpin_list,
+ folios[k]);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+ }
+
+ pgbuf++;
+ if (++pgoff == folio_nr_pages(folios[k])) {
+ pgoff = 0;
+ if (++k == nr_folios)
+ break;
+ }
+ }
+ kfree(folios);
fput(memfd);
}
@@ -388,10 +392,9 @@ static long udmabuf_create(struct miscde
return ret;
err:
- while (pgbuf > 0)
- folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
_
Patches currently in -mm which might be from vivek.kasireddy@intel.com are
mm-gup-introduce-unpin_folio-unpin_folios-helpers.patch
mm-gup-introduce-check_and_migrate_movable_folios.patch
mm-gup-introduce-memfd_pin_folios-for-pinning-memfd-folios.patch
udmabuf-use-vmf_insert_pfn-and-vm_pfnmap-for-handling-mmap.patch
udmabuf-add-back-support-for-mapping-hugetlb-pages.patch
udmabuf-convert-udmabuf-driver-to-use-folios.patch
udmabuf-pin-the-pages-using-memfd_pin_folios-api.patch
selftests-udmabuf-add-tests-to-verify-data-after-page-migration.patch
* + udmabuf-convert-udmabuf-driver-to-use-folios.patch added to mm-unstable branch
@ 2024-05-22 22:07 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-05-22 22:07 UTC (permalink / raw)
To: mm-commits, willy, shuah, peterx, mike.kravetz, kraxel,
junxiao.chang, jgg, hughd, hch, hch, dongwon.kim, david,
daniel.vetter, vivek.kasireddy, akpm
The patch titled
Subject: udmabuf: convert udmabuf driver to use folios
has been added to the -mm mm-unstable branch. Its filename is
udmabuf-convert-udmabuf-driver-to-use-folios.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/udmabuf-convert-udmabuf-driver-to-use-folios.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: udmabuf: convert udmabuf driver to use folios
Date: Wed, 10 Apr 2024 23:59:42 -0700
This is mainly a preparatory patch to use memfd_pin_folios() API for
pinning folios. Using folios instead of pages makes sense as the udmabuf
driver needs to handle both shmem and hugetlb cases. And, using the
memfd_pin_folios() API makes this easier as we no longer need to
separately handle shmem vs hugetlb cases in the udmabuf driver.
Note that the function vmap_udmabuf() still needs a list of pages, so we
collect all the head pages into a local array in this case.
Other changes in this patch include the addition of helpers for checking
the memfd seals and exporting dmabuf. Moving code from udmabuf_create()
into these helpers improves readability given that udmabuf_create() is a
bit long.
Link: https://lkml.kernel.org/r/20240411070157.3318425-7-vivek.kasireddy@intel.com
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
drivers/dma-buf/udmabuf.c | 140 +++++++++++++++++++++---------------
1 file changed, 83 insertions(+), 57 deletions(-)
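One of the helpers factored out of udmabuf_create(), check_memfd_seals(), gates buffer creation on the memfd's seal mask: F_SEAL_SHRINK must be set and F_SEAL_WRITE must be clear. A small sketch of just that mask logic (the F_SEAL_* values match include/uapi/linux/fcntl.h; the Python wrapper is illustrative):

```python
# Seal constants from include/uapi/linux/fcntl.h
F_SEAL_SHRINK = 0x0002
F_SEAL_WRITE = 0x0008

SEALS_WANTED = F_SEAL_SHRINK   # seals that must be present
SEALS_DENIED = F_SEAL_WRITE    # seals that must be absent

def seals_ok(seals):
    """Mirror of the check in check_memfd_seals() in the diff below."""
    return (seals & SEALS_WANTED) == SEALS_WANTED and \
           (seals & SEALS_DENIED) == 0
```

So a memfd sealed only against shrinking passes, while one that is also write-sealed (or not shrink-sealed at all) is rejected with -EINVAL.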
--- a/drivers/dma-buf/udmabuf.c~udmabuf-convert-udmabuf-driver-to-use-folios
+++ a/drivers/dma-buf/udmabuf.c
@@ -26,7 +26,7 @@ MODULE_PARM_DESC(size_limit_mb, "Max siz
struct udmabuf {
pgoff_t pagecount;
- struct page **pages;
+ struct folio **folios;
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
@@ -42,7 +42,7 @@ static vm_fault_t udmabuf_vm_fault(struc
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
- pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn = folio_pfn(ubuf->folios[pgoff]);
pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
return vmf_insert_pfn(vma, vmf->address, pfn);
@@ -68,11 +68,21 @@ static int mmap_udmabuf(struct dma_buf *
static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
{
struct udmabuf *ubuf = buf->priv;
+ struct page **pages;
void *vaddr;
+ pgoff_t pg;
dma_resv_assert_held(buf->resv);
- vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
+ pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ for (pg = 0; pg < ubuf->pagecount; pg++)
+ pages[pg] = &ubuf->folios[pg]->page;
+
+ vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+ kfree(pages);
if (!vaddr)
return -EINVAL;
@@ -107,7 +117,8 @@ static struct sg_table *get_sg_table(str
goto err_alloc;
for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
- sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+ sg_set_folio(sgl, ubuf->folios[i], PAGE_SIZE,
+ ubuf->offsets[i]);
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
@@ -152,9 +163,9 @@ static void release_udmabuf(struct dma_b
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
for (pg = 0; pg < ubuf->pagecount; pg++)
- put_page(ubuf->pages[pg]);
+ folio_put(ubuf->folios[pg]);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
}
@@ -215,36 +226,33 @@ static int handle_hugetlb_pages(struct u
pgoff_t mapidx = offset >> huge_page_shift(hpstate);
pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct page *hpage = NULL;
- struct folio *folio;
+ struct folio *folio = NULL;
pgoff_t pgidx;
mapidx <<= huge_page_order(hpstate);
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!hpage) {
+ if (!folio) {
folio = __filemap_get_folio(memfd->f_mapping,
mapidx,
FGP_ACCESSED, 0);
if (IS_ERR(folio))
return PTR_ERR(folio);
-
- hpage = &folio->page;
}
- get_page(hpage);
- ubuf->pages[*pgbuf] = hpage;
+ folio_get(folio);
+ ubuf->folios[*pgbuf] = folio;
ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
(*pgbuf)++;
if (++subpgoff == maxsubpgs) {
- put_page(hpage);
- hpage = NULL;
+ folio_put(folio);
+ folio = NULL;
subpgoff = 0;
mapidx += pages_per_huge_page(hpstate);
}
}
- if (hpage)
- put_page(hpage);
+ if (folio)
+ folio_put(folio);
return 0;
}
@@ -254,31 +262,69 @@ static int handle_shmem_pages(struct udm
pgoff_t *pgbuf)
{
pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio = NULL;
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(memfd->f_mapping,
- pgoff + pgidx);
- if (IS_ERR(page))
- return PTR_ERR(page);
+ folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
- ubuf->pages[*pgbuf] = page;
+ ubuf->folios[*pgbuf] = folio;
(*pgbuf)++;
}
return 0;
}
+static int check_memfd_seals(struct file *memfd)
+{
+ int seals;
+
+ if (!memfd)
+ return -EBADFD;
+
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+ seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
+ if (seals == -EINVAL)
+ return -EBADFD;
+
+ if ((seals & SEALS_WANTED) != SEALS_WANTED ||
+ (seals & SEALS_DENIED) != 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int export_udmabuf(struct udmabuf *ubuf,
+ struct miscdevice *device,
+ u32 flags)
+{
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+ exp_info.size = ubuf->pagecount << PAGE_SHIFT;
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+ buf = dma_buf_export(&exp_info);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
+ return dma_buf_fd(buf, flags);
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
struct file *memfd = NULL;
struct udmabuf *ubuf;
- struct dma_buf *buf;
- pgoff_t pgcnt, pgbuf = 0, pglimit;
- int seals, ret = -EINVAL;
+ int ret = -EINVAL;
u32 i, flags;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -299,9 +345,9 @@ static long udmabuf_create(struct miscde
if (!ubuf->pagecount)
goto err;
- ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
GFP_KERNEL);
- if (!ubuf->pages) {
+ if (!ubuf->folios) {
ret = -ENOMEM;
goto err;
}
@@ -314,18 +360,9 @@ static long udmabuf_create(struct miscde
pgbuf = 0;
for (i = 0; i < head->count; i++) {
- ret = -EBADFD;
memfd = fget(list[i].memfd);
- if (!memfd)
- goto err;
- if (!shmem_file(memfd) && !is_file_hugepages(memfd))
- goto err;
- seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
- if (seals == -EINVAL)
- goto err;
- ret = -EINVAL;
- if ((seals & SEALS_WANTED) != SEALS_WANTED ||
- (seals & SEALS_DENIED) != 0)
+ ret = check_memfd_seals(memfd);
+ if (ret < 0)
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
@@ -341,33 +378,22 @@ static long udmabuf_create(struct miscde
goto err;
fput(memfd);
- memfd = NULL;
}
- exp_info.ops = &udmabuf_ops;
- exp_info.size = ubuf->pagecount << PAGE_SHIFT;
- exp_info.priv = ubuf;
- exp_info.flags = O_RDWR;
-
- ubuf->device = device;
- buf = dma_buf_export(&exp_info);
- if (IS_ERR(buf)) {
- ret = PTR_ERR(buf);
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+ ret = export_udmabuf(ubuf, device, flags);
+ if (ret < 0)
goto err;
- }
- flags = 0;
- if (head->flags & UDMABUF_FLAGS_CLOEXEC)
- flags |= O_CLOEXEC;
- return dma_buf_fd(buf, flags);
+ return ret;
err:
while (pgbuf > 0)
- put_page(ubuf->pages[--pgbuf]);
+ folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
return ret;
}
_
* + udmabuf-add-back-support-for-mapping-hugetlb-pages.patch added to mm-unstable branch
@ 2024-05-22 22:07 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-05-22 22:07 UTC (permalink / raw)
To: mm-commits, willy, shuah, peterx, mike.kravetz, kraxel,
junxiao.chang, jgg, hughd, hch, hch, dongwon.kim, david,
daniel.vetter, vivek.kasireddy, akpm
The patch titled
Subject: udmabuf: add back support for mapping hugetlb pages
has been added to the -mm mm-unstable branch. Its filename is
udmabuf-add-back-support-for-mapping-hugetlb-pages.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/udmabuf-add-back-support-for-mapping-hugetlb-pages.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: udmabuf: add back support for mapping hugetlb pages
Date: Wed, 10 Apr 2024 23:59:41 -0700
A user or admin can configure a VMM (Qemu) Guest's memory to be backed by
hugetlb pages for various reasons. However, a Guest OS would still
allocate (and pin) buffers that are backed by regular 4k sized pages. In
order to map these buffers and create dma-bufs for them on the Host, we
first need to find the hugetlb pages where the buffer allocations are
located and then determine the offsets of individual chunks (within those
pages) and use this information to eventually populate a scatterlist.
Testcase: the options default_hugepagesz=2M hugepagesz=2M hugepages=2500
were passed to the Host kernel, and Qemu was launched with these
relevant options: qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-display gtk,gl=on
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
Replacing -display gtk,gl=on with -display gtk,gl=off above would
exercise the mmap handler.
Link: https://lkml.kernel.org/r/20240411070157.3318425-6-vivek.kasireddy@intel.com
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com> [v2]
Cc: David Hildenbrand <david@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
drivers/dma-buf/udmabuf.c | 122 +++++++++++++++++++++++++++++-------
1 file changed, 101 insertions(+), 21 deletions(-)
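The commit message above describes finding the hugetlb page containing each buffer chunk and the 4K offsets within it; in handle_hugetlb_pages() below that is the mapidx/subpgoff arithmetic. A user-space model of just that index math (function name and return shape are illustrative):

```python
PAGE_SHIFT = 12

def hugetlb_indices(offset, huge_page_shift):
    """For a byte offset into a hugetlb memfd, compute:
    - mapidx:   page-cache index of the containing huge page, scaled
                to 4K units (the `mapidx <<= huge_page_order` step)
    - subpgoff: 4K subpage offset within that huge page
    """
    huge_page_size = 1 << huge_page_shift
    mapidx = (offset >> huge_page_shift) << (huge_page_shift - PAGE_SHIFT)
    # offset & ~huge_page_mask in the kernel == offset mod huge_page_size
    subpgoff = (offset & (huge_page_size - 1)) >> PAGE_SHIFT
    return mapidx, subpgoff
```

With 2M huge pages (shift 21), a 5 MiB offset lands in the third huge page (mapidx 1024 in 4K units) at subpage 256, which is where the loop in the diff starts filling ubuf->offsets[].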
--- a/drivers/dma-buf/udmabuf.c~udmabuf-add-back-support-for-mapping-hugetlb-pages
+++ a/drivers/dma-buf/udmabuf.c
@@ -10,6 +10,7 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/shmem_fs.h>
+#include <linux/hugetlb.h>
#include <linux/slab.h>
#include <linux/udmabuf.h>
#include <linux/vmalloc.h>
@@ -28,6 +29,7 @@ struct udmabuf {
struct page **pages;
struct sg_table *sg;
struct miscdevice *device;
+ pgoff_t *offsets;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -41,6 +43,8 @@ static vm_fault_t udmabuf_vm_fault(struc
return VM_FAULT_SIGBUS;
pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
+
return vmf_insert_pfn(vma, vmf->address, pfn);
}
@@ -90,23 +94,29 @@ static struct sg_table *get_sg_table(str
{
struct udmabuf *ubuf = buf->priv;
struct sg_table *sg;
+ struct scatterlist *sgl;
+ unsigned int i = 0;
int ret;
sg = kzalloc(sizeof(*sg), GFP_KERNEL);
if (!sg)
return ERR_PTR(-ENOMEM);
- ret = sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount,
- 0, ubuf->pagecount << PAGE_SHIFT,
- GFP_KERNEL);
+
+ ret = sg_alloc_table(sg, ubuf->pagecount, GFP_KERNEL);
if (ret < 0)
- goto err;
+ goto err_alloc;
+
+ for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
+ sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
- goto err;
+ goto err_map;
return sg;
-err:
+err_map:
sg_free_table(sg);
+err_alloc:
kfree(sg);
return ERR_PTR(ret);
}
@@ -143,6 +153,7 @@ static void release_udmabuf(struct dma_b
for (pg = 0; pg < ubuf->pagecount; pg++)
put_page(ubuf->pages[pg]);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
}
@@ -196,17 +207,77 @@ static const struct dma_buf_ops udmabuf_
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
+static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ struct hstate *hpstate = hstate_file(memfd);
+ pgoff_t mapidx = offset >> huge_page_shift(hpstate);
+ pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+ pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+ struct page *hpage = NULL;
+ struct folio *folio;
+ pgoff_t pgidx;
+
+ mapidx <<= huge_page_order(hpstate);
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ if (!hpage) {
+ folio = __filemap_get_folio(memfd->f_mapping,
+ mapidx,
+ FGP_ACCESSED, 0);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+
+ hpage = &folio->page;
+ }
+
+ get_page(hpage);
+ ubuf->pages[*pgbuf] = hpage;
+ ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
+ (*pgbuf)++;
+ if (++subpgoff == maxsubpgs) {
+ put_page(hpage);
+ hpage = NULL;
+ subpgoff = 0;
+ mapidx += pages_per_huge_page(hpstate);
+ }
+ }
+
+ if (hpage)
+ put_page(hpage);
+
+ return 0;
+}
+
+static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
+ struct page *page;
+
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ page = shmem_read_mapping_page(memfd->f_mapping,
+ pgoff + pgidx);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ ubuf->pages[*pgbuf] = page;
+ (*pgbuf)++;
+ }
+
+ return 0;
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
- struct address_space *mapping = NULL;
struct udmabuf *ubuf;
struct dma_buf *buf;
- pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
- struct page *page;
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
int seals, ret = -EINVAL;
u32 i, flags;
@@ -234,6 +305,12 @@ static long udmabuf_create(struct miscde
ret = -ENOMEM;
goto err;
}
+ ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
pgbuf = 0;
for (i = 0; i < head->count; i++) {
@@ -241,8 +318,7 @@ static long udmabuf_create(struct miscde
memfd = fget(list[i].memfd);
if (!memfd)
goto err;
- mapping = memfd->f_mapping;
- if (!shmem_mapping(mapping))
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
goto err;
seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
if (seals == -EINVAL)
@@ -251,16 +327,19 @@ static long udmabuf_create(struct miscde
if ((seals & SEALS_WANTED) != SEALS_WANTED ||
(seals & SEALS_DENIED) != 0)
goto err;
- pgoff = list[i].offset >> PAGE_SHIFT;
- pgcnt = list[i].size >> PAGE_SHIFT;
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(mapping, pgoff + pgidx);
- if (IS_ERR(page)) {
- ret = PTR_ERR(page);
- goto err;
- }
- ubuf->pages[pgbuf++] = page;
- }
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+ if (is_file_hugepages(memfd))
+ ret = handle_hugetlb_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ else
+ ret = handle_shmem_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ if (ret < 0)
+ goto err;
+
fput(memfd);
memfd = NULL;
}
@@ -287,6 +366,7 @@ err:
put_page(ubuf->pages[--pgbuf]);
if (memfd)
fput(memfd);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
return ret;
_
* [syzbot] [net?] INFO: task hung in addrconf_dad_work (4)
@ 2024-05-20 3:26 3% syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-05-20 3:26 UTC (permalink / raw)
To: davem, dsahern, edumazet, kuba, linux-kernel, netdev, pabeni,
syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: a5131c3fdf26 Merge tag 'x86-shstk-2024-05-13' of git://git..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13f0ecf0980000
kernel config: https://syzkaller.appspot.com/x/.config?x=e50cba0cee5dac3a
dashboard link: https://syzkaller.appspot.com/bug?extid=46af9e85f01be0118283
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/890a976d962e/disk-a5131c3f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/03885a88739f/vmlinux-a5131c3f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ec1af8562020/bzImage-a5131c3f.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+46af9e85f01be0118283@syzkaller.appspotmail.com
INFO: task kworker/u8:5:145 blocked for more than 143 seconds.
Not tainted 6.9.0-syzkaller-01768-ga5131c3fdf26 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:5 state:D stack:14192 pid:145 tgid:145 ppid:2 flags:0x00004000
Workqueue: ipv6_addrconf addrconf_dad_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5409 [inline]
__schedule+0x1796/0x4a00 kernel/sched/core.c:6746
__schedule_loop kernel/sched/core.c:6823 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6838
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6895
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4192
process_one_work kernel/workqueue.c:3267 [inline]
process_scheduled_works+0xa10/0x17c0 kernel/workqueue.c:3348
worker_thread+0x86d/0xd70 kernel/workqueue.c:3429
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
Showing all locks held in the system:
2 locks held by init/1:
2 locks held by kworker/u8:1/12:
#0: ffff888015089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
#0: ffff888015089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
#1: ffffc90000117d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
#1: ffffc90000117d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
3 locks held by kworker/1:0/25:
#0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
#0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
#1: ffffc900001f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
#1: ffffc900001f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
#2: ffffffff8f599888 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
1 lock held by khungtaskd/30:
#0: ffffffff8e336020 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#0: ffffffff8e336020 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#0: ffffffff8e336020 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
5 locks held by kworker/u9:0/53:
#0: ffff8881a659a948 ((wq_completion)hci5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
#0: ffff8881a659a948 ((wq_completion)hci5){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
#1: ffffc90000bd7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
#1: ffffc90000bd7d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
#2: ffff8881e35a1060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:309
#3: ffff8881e35a0078 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1ea/0xde0 net/bluetooth/hci_sync.c:5548
#4: ffff8880b943e658 (&rq->__lock){-.-.}-{2:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#4: ffff8880b943e658 (&rq->__lock){-.-.}-{2:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#4: ffff8880b943e658 (&rq->__lock){-.-.}-{2:2}, at: shrink_slab+0x12b/0x14d0 mm/shrinker.c:649
1 lock held by kswapd1/89:
3 locks held by kworker/u8:5/145:
#0: ffff888029f86148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
#0: ffff888029f86148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
#1: ffffc90002d17d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
#1: ffffc90002d17d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
#2: ffffffff8f599888 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4192
4 locks held by kworker/u8:6/2426:
6 locks held by kworker/u8:8/2822:
1 lock held by jbd2/sda1-8/4493:
1 lock held by udevd/4530:
2 locks held by getty/4832:
#0: ffff88802a0ac0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90002f0e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
2 locks held by sshd/5067:
#0: ffff8880221ec420 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:165 [inline]
#0: ffff8880221ec420 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5633 [inline]
#0: ffff8880221ec420 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x32/0x2f0 mm/memory.c:5693
#1: ffffffff8e42a400 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3771 [inline]
#1: ffffffff8e42a400 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3796 [inline]
#1: ffffffff8e42a400 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0xd31/0x23d0 mm/page_alloc.c:4202
3 locks held by syz-fuzzer/5070:
2 locks held by syz-fuzzer/5075:
3 locks held by syz-fuzzer/5087:
4 locks held by kworker/u9:1/17744:
1 lock held by syz-executor.2/18180:
#0: ffff888065c500e0 (&type->s_umount_key#60){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
#0: ffff888065c500e0 (&type->s_umount_key#60){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
#0: ffff888065c500e0 (&type->s_umount_key#60){++++}-{3:3}, at: deactivate_super+0xb5/0xf0 fs/super.c:504
4 locks held by kworker/u9:2/18553:
#0: ffff88805e87e148 ((wq_completion)hci0){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
#0: ffff88805e87e148 ((wq_completion)hci0){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
#1: ffffc90003377d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
#1: ffffc90003377d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
#2: ffff888073ef9060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:309
#3: ffff888073ef8078 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1ea/0xde0 net/bluetooth/hci_sync.c:5548
2 locks held by syz-executor.4/18796:
3 locks held by syz-executor.2/19219:
2 locks held by syz-executor.1/19286:
2 locks held by rm/19503:
3 locks held by modprobe/19505:
5 locks held by kworker/u8:7/19507:
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.9.0-syzkaller-01768-ga5131c3fdf26 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
watchdog+0xfde/0x1020 kernel/hung_task.c:380
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5075 Comm: syz-fuzzer Not tainted 6.9.0-syzkaller-01768-ga5131c3fdf26 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
RIP: 0010:__ref_is_percpu include/linux/percpu-refcount.h:174 [inline]
RIP: 0010:percpu_ref_tryget_many include/linux/percpu-refcount.h:243 [inline]
RIP: 0010:percpu_ref_tryget+0x82/0x180 include/linux/percpu-refcount.h:266
Code: 1f c6 05 fa 02 92 0d 01 48 c7 c7 e0 3b d7 8b be 0f 03 00 00 48 c7 c2 20 3c d7 8b e8 f8 73 72 ff 49 bf 00 00 00 00 00 fc ff df <48> 89 d8 48 c1 e8 03 42 80 3c 38 00 74 08 48 89 df e8 08 15 f7 ff
RSP: 0000:ffffc900034c6c30 EFLAGS: 00000202
RAX: 0000000000000001 RBX: ffff888078a32010 RCX: ffff88807a811e00
RDX: dffffc0000000000 RSI: ffffffff8c1ec8c0 RDI: ffffffff8c1ec880
RBP: ffff888078a32000 R08: ffffffff92f0b587 R09: 1ffffffff25e16b0
R10: dffffc0000000000 R11: fffffbfff25e16b1 R12: ffff888078a32054
R13: ffff888016ac4000 R14: ffffffff82009d14 R15: dffffc0000000000
FS: 000000c000d1e490(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055dff07a1696 CR3: 0000000075c00000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
css_tryget include/linux/cgroup_refcnt.h:45 [inline]
mem_cgroup_iter+0x2d3/0x560 mm/memcontrol.c:1228
shrink_node_memcgs mm/vmscan.c:5884 [inline]
shrink_node+0x135b/0x2d60 mm/vmscan.c:5908
shrink_zones mm/vmscan.c:6152 [inline]
do_try_to_free_pages+0x695/0x1af0 mm/vmscan.c:6214
try_to_free_pages+0x760/0x1100 mm/vmscan.c:6449
__perform_reclaim mm/page_alloc.c:3774 [inline]
__alloc_pages_direct_reclaim mm/page_alloc.c:3796 [inline]
__alloc_pages_slowpath+0xdc3/0x23d0 mm/page_alloc.c:4202
__alloc_pages+0x43e/0x6c0 mm/page_alloc.c:4588
alloc_pages_mpol+0x3e8/0x680 mm/mempolicy.c:2264
alloc_pages mm/mempolicy.c:2335 [inline]
folio_alloc+0x128/0x180 mm/mempolicy.c:2342
filemap_alloc_folio+0xdf/0x500 mm/filemap.c:984
__filemap_get_folio+0x41a/0xbb0 mm/filemap.c:1926
filemap_fault+0xba2/0x1760 mm/filemap.c:3299
__do_fault+0x135/0x460 mm/memory.c:4531
do_read_fault mm/memory.c:4894 [inline]
do_fault mm/memory.c:5024 [inline]
do_pte_missing mm/memory.c:3880 [inline]
handle_pte_fault mm/memory.c:5300 [inline]
__handle_mm_fault+0x45fe/0x7250 mm/memory.c:5441
handle_mm_fault+0x27f/0x770 mm/memory.c:5606
do_user_addr_fault arch/x86/mm/fault.c:1332 [inline]
handle_page_fault arch/x86/mm/fault.c:1475 [inline]
exc_page_fault+0x446/0x8a0 arch/x86/mm/fault.c:1533
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x43d098
Code: Unable to access opcode bytes at 0x43d06e.
RSP: 002b:000000c000d378e0 EFLAGS: 00010202
RAX: 00000000013b25a8 RBX: 00000000014b5540 RCX: 0000000000000000
RDX: 00000000013b25a8 RSI: 00000000009c053f RDI: 0000000000e4fcc0
RBP: 000000c000d37950 R08: 00000000ffffffff R09: 0000000000000080
R10: 0000000000000001 R11: 000000000001960e R12: 0000000000000002
R13: ffffffffffffffff R14: 000000c0001c4820 R15: 0000000000000000
</TASK>
ICMPv6: ndisc: ndisc_alloc_skb failed to allocate an skb
ICMPv6: ndisc: ndisc_alloc_skb failed to allocate an skb
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply [relevance 3%]
* Re: [syzbot] [nilfs?] possible deadlock in nilfs_evict_inode (2)
2024-05-18 18:53 4% [syzbot] [nilfs?] possible deadlock in nilfs_evict_inode (2) syzbot
@ 2024-05-18 19:20 0% ` Ryusuke Konishi
0 siblings, 0 replies; 200+ results
From: Ryusuke Konishi @ 2024-05-18 19:20 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, linux-nilfs, syzkaller-bugs
On Sun, May 19, 2024 at 3:53 AM syzbot
<syzbot+c48f1971ba117125f94c@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 6bfd2d442af5 Merge tag 'irq-core-2024-05-12' of git://git...
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=13aefc20980000
> kernel config: https://syzkaller.appspot.com/x/.config?x=395546166dcfe360
> dashboard link: https://syzkaller.appspot.com/bug?extid=c48f1971ba117125f94c
> compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> userspace arch: i386
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-6bfd2d44.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/7ad901fe99c6/vmlinux-6bfd2d44.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/8d6ef2df621f/bzImage-6bfd2d44.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+c48f1971ba117125f94c@syzkaller.appspotmail.com
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.9.0-syzkaller-01893-g6bfd2d442af5 #0 Not tainted
> ------------------------------------------------------
> kswapd0/111 is trying to acquire lock:
> ffff888018e7e610 (sb_internal#4){.+.+}-{0:0}, at: nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
>
> but task is already holding lock:
> ffffffff8d9390c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (fs_reclaim){+.+.}-{0:0}:
> __fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
> fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
> might_alloc include/linux/sched/mm.h:312 [inline]
> prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
> __alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
> alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
> folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
> filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
> __filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
> pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
> block_write_begin+0x38/0x4a0 fs/buffer.c:2209
> nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
> page_symlink+0x356/0x450 fs/namei.c:5236
> nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
> vfs_symlink fs/namei.c:4489 [inline]
> vfs_symlink+0x3e8/0x630 fs/namei.c:4473
> do_symlinkat+0x263/0x310 fs/namei.c:4515
> __do_sys_symlink fs/namei.c:4536 [inline]
> __se_sys_symlink fs/namei.c:4534 [inline]
> __ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
> do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
> __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
> do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
> entry_SYSENTER_compat_after_hwframe+0x84/0x8e
>
> -> #1 (&nilfs->ns_segctor_sem){++++}-{3:3}:
> down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
> nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
> nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
> vfs_symlink fs/namei.c:4489 [inline]
> vfs_symlink+0x3e8/0x630 fs/namei.c:4473
> do_symlinkat+0x263/0x310 fs/namei.c:4515
> __do_sys_symlink fs/namei.c:4536 [inline]
> __se_sys_symlink fs/namei.c:4534 [inline]
> __ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
> do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
> __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
> do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
> entry_SYSENTER_compat_after_hwframe+0x84/0x8e
>
> -> #0 (sb_internal#4){.+.+}-{0:0}:
> check_prev_add kernel/locking/lockdep.c:3134 [inline]
> check_prevs_add kernel/locking/lockdep.c:3253 [inline]
> validate_chain kernel/locking/lockdep.c:3869 [inline]
> __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
> lock_acquire kernel/locking/lockdep.c:5754 [inline]
> lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
> percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> __sb_start_write include/linux/fs.h:1661 [inline]
> sb_start_intwrite include/linux/fs.h:1844 [inline]
> nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
> nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
> evict+0x2ed/0x6c0 fs/inode.c:667
> iput_final fs/inode.c:1741 [inline]
> iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
> iput+0x5c/0x80 fs/inode.c:1757
> dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
> __dentry_kill+0x1d0/0x600 fs/dcache.c:603
> shrink_kill fs/dcache.c:1048 [inline]
> shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
> prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
> super_cache_scan+0x32a/0x550 fs/super.c:221
> do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
> shrink_slab_memcg mm/shrinker.c:548 [inline]
> shrink_slab+0xa87/0x1310 mm/shrinker.c:626
> shrink_one+0x493/0x7c0 mm/vmscan.c:4774
> shrink_many mm/vmscan.c:4835 [inline]
> lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
> shrink_node mm/vmscan.c:5894 [inline]
> kswapd_shrink_node mm/vmscan.c:6704 [inline]
> balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
> kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
> kthread+0x2c1/0x3a0 kernel/kthread.c:388
> ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> other info that might help us debug this:
>
> Chain exists of:
> sb_internal#4 --> &nilfs->ns_segctor_sem --> fs_reclaim
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(fs_reclaim);
> lock(&nilfs->ns_segctor_sem);
> lock(fs_reclaim);
> rlock(sb_internal#4);
>
> *** DEADLOCK ***
>
> 2 locks held by kswapd0/111:
> #0: ffffffff8d9390c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
> #1: ffff888018e7e0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
> #1: ffff888018e7e0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196
>
> stack backtrace:
> CPU: 2 PID: 111 Comm: kswapd0 Not tainted 6.9.0-syzkaller-01893-g6bfd2d442af5 #0
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
> check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
> check_prev_add kernel/locking/lockdep.c:3134 [inline]
> check_prevs_add kernel/locking/lockdep.c:3253 [inline]
> validate_chain kernel/locking/lockdep.c:3869 [inline]
> __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
> lock_acquire kernel/locking/lockdep.c:5754 [inline]
> lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
> percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> __sb_start_write include/linux/fs.h:1661 [inline]
> sb_start_intwrite include/linux/fs.h:1844 [inline]
> nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
> nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
> evict+0x2ed/0x6c0 fs/inode.c:667
> iput_final fs/inode.c:1741 [inline]
> iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
> iput+0x5c/0x80 fs/inode.c:1757
> dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
> __dentry_kill+0x1d0/0x600 fs/dcache.c:603
> shrink_kill fs/dcache.c:1048 [inline]
> shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
> prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
> super_cache_scan+0x32a/0x550 fs/super.c:221
> do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
> shrink_slab_memcg mm/shrinker.c:548 [inline]
> shrink_slab+0xa87/0x1310 mm/shrinker.c:626
> shrink_one+0x493/0x7c0 mm/vmscan.c:4774
> shrink_many mm/vmscan.c:4835 [inline]
> lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
> shrink_node mm/vmscan.c:5894 [inline]
> kswapd_shrink_node mm/vmscan.c:6704 [inline]
> balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
> kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
> kthread+0x2c1/0x3a0 kernel/kthread.c:388
> ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> </TASK>
>
>
Similarly, this seems to have the same root cause as the report below, but I
can't confirm it right now, so I'll leave it for now.
https://syzkaller.appspot.com/bug?extid=ca73f5a22aec76875d85
Again, the GFP flags on the symlink's page cache allocation
appear to be causing this circular lock dependency.
Ryusuke Konishi
^ permalink raw reply [relevance 0%]
* Re: [syzbot] [nilfs?] possible deadlock in nilfs_transaction_begin
2024-05-18 12:29 5% [syzbot] [nilfs?] possible deadlock in nilfs_transaction_begin syzbot
@ 2024-05-18 19:16 0% ` Ryusuke Konishi
0 siblings, 0 replies; 200+ results
From: Ryusuke Konishi @ 2024-05-18 19:16 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, linux-nilfs, syzkaller-bugs
On Sat, May 18, 2024 at 9:29 PM syzbot
<syzbot+77c39f023a0cb2e4c149@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: a5131c3fdf26 Merge tag 'x86-shstk-2024-05-13' of git://git..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=144c6e04980000
> kernel config: https://syzkaller.appspot.com/x/.config?x=fdb182f40cdd66f7
> dashboard link: https://syzkaller.appspot.com/bug?extid=77c39f023a0cb2e4c149
> compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> userspace arch: i386
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-a5131c3f.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/6d23116dab9c/vmlinux-a5131c3f.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/dd8b9de9af4f/bzImage-a5131c3f.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+77c39f023a0cb2e4c149@syzkaller.appspotmail.com
>
> NILFS (loop2): inode bitmap is inconsistent for reserved inodes
> NILFS (loop2): repaired inode bitmap for reserved inodes
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.9.0-syzkaller-01768-ga5131c3fdf26 #0 Not tainted
> ------------------------------------------------------
> syz-executor.2/23478 is trying to acquire lock:
> ffffffff8d938460 (fs_reclaim){+.+.}-{0:0}, at: might_alloc include/linux/sched/mm.h:312 [inline]
> ffffffff8d938460 (fs_reclaim){+.+.}-{0:0}, at: prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
>
> but task is already holding lock:
> ffff888026c5c2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (&nilfs->ns_segctor_sem){++++}-{3:3}:
> down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
> nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
> nilfs_create+0xb7/0x320 fs/nilfs2/namei.c:82
> lookup_open.isra.0+0x10a1/0x13c0 fs/namei.c:3505
> open_last_lookups fs/namei.c:3574 [inline]
> path_openat+0x92f/0x2990 fs/namei.c:3804
> do_filp_open+0x1dc/0x430 fs/namei.c:3834
> do_sys_openat2+0x17a/0x1e0 fs/open.c:1406
> do_sys_open fs/open.c:1421 [inline]
> __do_compat_sys_openat fs/open.c:1481 [inline]
> __se_compat_sys_openat fs/open.c:1479 [inline]
> __ia32_compat_sys_openat+0x16e/0x210 fs/open.c:1479
> do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
> __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
> do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
> entry_SYSENTER_compat_after_hwframe+0x84/0x8e
>
> -> #1 (sb_internal#5){.+.+}-{0:0}:
> percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> __sb_start_write include/linux/fs.h:1661 [inline]
> sb_start_intwrite include/linux/fs.h:1844 [inline]
> nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
> nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
> __mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
> mark_inode_dirty_sync include/linux/fs.h:2426 [inline]
> iput.part.0+0x5b/0x7f0 fs/inode.c:1764
> iput+0x5c/0x80 fs/inode.c:1757
> dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
> __dentry_kill+0x1d0/0x600 fs/dcache.c:603
> shrink_kill fs/dcache.c:1048 [inline]
> shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
> prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
> super_cache_scan+0x32a/0x550 fs/super.c:221
> do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
> shrink_slab_memcg mm/shrinker.c:548 [inline]
> shrink_slab+0xa87/0x1310 mm/shrinker.c:626
> shrink_one+0x493/0x7c0 mm/vmscan.c:4774
> shrink_many mm/vmscan.c:4835 [inline]
> lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
> shrink_node mm/vmscan.c:5894 [inline]
> kswapd_shrink_node mm/vmscan.c:6704 [inline]
> balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
> kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
> kthread+0x2c1/0x3a0 kernel/kthread.c:388
> ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> -> #0 (fs_reclaim){+.+.}-{0:0}:
> check_prev_add kernel/locking/lockdep.c:3134 [inline]
> check_prevs_add kernel/locking/lockdep.c:3253 [inline]
> validate_chain kernel/locking/lockdep.c:3869 [inline]
> __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
> lock_acquire kernel/locking/lockdep.c:5754 [inline]
> lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
> __fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
> fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
> might_alloc include/linux/sched/mm.h:312 [inline]
> prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
> __alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
> alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
> folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
> filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
> __filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
> pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
> block_write_begin+0x38/0x4a0 fs/buffer.c:2209
> nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
> page_symlink+0x356/0x450 fs/namei.c:5236
> nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
> vfs_symlink fs/namei.c:4489 [inline]
> vfs_symlink+0x3e8/0x630 fs/namei.c:4473
> do_symlinkat+0x263/0x310 fs/namei.c:4515
> __do_sys_symlink fs/namei.c:4536 [inline]
> __se_sys_symlink fs/namei.c:4534 [inline]
> __ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
> do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
> __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
> do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
> entry_SYSENTER_compat_after_hwframe+0x84/0x8e
>
> other info that might help us debug this:
>
> Chain exists of:
> fs_reclaim --> sb_internal#5 --> &nilfs->ns_segctor_sem
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> rlock(&nilfs->ns_segctor_sem);
> lock(sb_internal#5);
> lock(&nilfs->ns_segctor_sem);
> lock(fs_reclaim);
>
> *** DEADLOCK ***
>
> 4 locks held by syz-executor.2/23478:
> #0: ffff888000c0c420 (sb_writers#32){.+.+}-{0:0}, at: filename_create+0x10d/0x530 fs/namei.c:3893
> #1: ffff88804b284f88 (&type->i_mutex_dir_key#23/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
> #1: ffff88804b284f88 (&type->i_mutex_dir_key#23/1){+.+.}-{3:3}, at: filename_create+0x1c2/0x530 fs/namei.c:3900
> #2: ffff888000c0c610 (sb_internal#5){.+.+}-{0:0}, at: nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
> #3: ffff888026c5c2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
>
> stack backtrace:
> CPU: 2 PID: 23478 Comm: syz-executor.2 Not tainted 6.9.0-syzkaller-01768-ga5131c3fdf26 #0
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
> check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
> check_prev_add kernel/locking/lockdep.c:3134 [inline]
> check_prevs_add kernel/locking/lockdep.c:3253 [inline]
> validate_chain kernel/locking/lockdep.c:3869 [inline]
> __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
> lock_acquire kernel/locking/lockdep.c:5754 [inline]
> lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
> __fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
> fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
> might_alloc include/linux/sched/mm.h:312 [inline]
> prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
> __alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
> alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
> folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
> filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
> __filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
> pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
> block_write_begin+0x38/0x4a0 fs/buffer.c:2209
> nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
> page_symlink+0x356/0x450 fs/namei.c:5236
> nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
> vfs_symlink fs/namei.c:4489 [inline]
> vfs_symlink+0x3e8/0x630 fs/namei.c:4473
> do_symlinkat+0x263/0x310 fs/namei.c:4515
> __do_sys_symlink fs/namei.c:4536 [inline]
> __se_sys_symlink fs/namei.c:4534 [inline]
> __ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
> do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
> __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
> do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
> entry_SYSENTER_compat_after_hwframe+0x84/0x8e
> RIP: 0023:0xf734f579
> Code: b8 01 10 06 03 74 b4 01 10 07 03 74 b0 01 10 08 03 74 d8 01 00 00 00 00 00 00 00 00 00 00 00 00 00 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00
> RSP: 002b:00000000f5f415ac EFLAGS: 00000292 ORIG_RAX: 0000000000000053
> RAX: ffffffffffffffda RBX: 0000000020000340 RCX: 0000000020000100
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
> RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000292 R12: 0000000000000000
> R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
> </TASK>
> ----------------
> Code disassembly (best guess), 2 bytes skipped:
> 0: 10 06 adc %al,(%rsi)
> 2: 03 74 b4 01 add 0x1(%rsp,%rsi,4),%esi
> 6: 10 07 adc %al,(%rdi)
> 8: 03 74 b0 01 add 0x1(%rax,%rsi,4),%esi
> c: 10 08 adc %cl,(%rax)
> e: 03 74 d8 01 add 0x1(%rax,%rbx,8),%esi
> 1e: 00 51 52 add %dl,0x52(%rcx)
> 21: 55 push %rbp
> 22: 89 e5 mov %esp,%ebp
> 24: 0f 34 sysenter
> 26: cd 80 int $0x80
> * 28: 5d pop %rbp <-- trapping instruction
> 29: 5a pop %rdx
> 2a: 59 pop %rcx
> 2b: c3 ret
> 2c: 90 nop
> 2d: 90 nop
> 2e: 90 nop
> 2f: 90 nop
> 30: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
> 37: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
>
>
This issue seems to be the same as the report below, but I don't have
time to look into it in detail right now, so I'll put it on hold.
https://syzkaller.appspot.com/bug?extid=ca73f5a22aec76875d85
It looks like the GFP flags in the symlink's page cache allocation are
causing the circular lock dependency.
Ryusuke Konishi
^ permalink raw reply [relevance 0%]
* [syzbot] [nilfs?] possible deadlock in nilfs_evict_inode (2)
@ 2024-05-18 18:53 4% syzbot
2024-05-18 19:20 0% ` Ryusuke Konishi
0 siblings, 1 reply; 200+ results
From: syzbot @ 2024-05-18 18:53 UTC (permalink / raw)
To: konishi.ryusuke, linux-kernel, linux-nilfs, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 6bfd2d442af5 Merge tag 'irq-core-2024-05-12' of git://git...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13aefc20980000
kernel config: https://syzkaller.appspot.com/x/.config?x=395546166dcfe360
dashboard link: https://syzkaller.appspot.com/bug?extid=c48f1971ba117125f94c
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-6bfd2d44.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7ad901fe99c6/vmlinux-6bfd2d44.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8d6ef2df621f/bzImage-6bfd2d44.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c48f1971ba117125f94c@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.9.0-syzkaller-01893-g6bfd2d442af5 #0 Not tainted
------------------------------------------------------
kswapd0/111 is trying to acquire lock:
ffff888018e7e610 (sb_internal#4){.+.+}-{0:0}, at: nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
but task is already holding lock:
ffffffff8d9390c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
might_alloc include/linux/sched/mm.h:312 [inline]
prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
__alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
__filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
block_write_begin+0x38/0x4a0 fs/buffer.c:2209
nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
page_symlink+0x356/0x450 fs/namei.c:5236
nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
vfs_symlink fs/namei.c:4489 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4473
do_symlinkat+0x263/0x310 fs/namei.c:4515
__do_sys_symlink fs/namei.c:4536 [inline]
__se_sys_symlink fs/namei.c:4534 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
-> #1 (&nilfs->ns_segctor_sem){++++}-{3:3}:
down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
vfs_symlink fs/namei.c:4489 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4473
do_symlinkat+0x263/0x310 fs/namei.c:4515
__do_sys_symlink fs/namei.c:4536 [inline]
__se_sys_symlink fs/namei.c:4534 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
-> #0 (sb_internal#4){.+.+}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1661 [inline]
sb_start_intwrite include/linux/fs.h:1844 [inline]
nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
evict+0x2ed/0x6c0 fs/inode.c:667
iput_final fs/inode.c:1741 [inline]
iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Chain exists of:
sb_internal#4 --> &nilfs->ns_segctor_sem --> fs_reclaim
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&nilfs->ns_segctor_sem);
                               lock(fs_reclaim);
  rlock(sb_internal#4);
*** DEADLOCK ***
2 locks held by kswapd0/111:
#0: ffffffff8d9390c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
#1: ffff888018e7e0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
#1: ffff888018e7e0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196
stack backtrace:
CPU: 2 PID: 111 Comm: kswapd0 Not tainted 6.9.0-syzkaller-01893-g6bfd2d442af5 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1661 [inline]
sb_start_intwrite include/linux/fs.h:1844 [inline]
nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
nilfs_evict_inode+0x157/0x550 fs/nilfs2/inode.c:924
evict+0x2ed/0x6c0 fs/inode.c:667
iput_final fs/inode.c:1741 [inline]
iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
* [syzbot] [nilfs?] possible deadlock in nilfs_transaction_begin
@ 2024-05-18 12:29 5% syzbot
2024-05-18 19:16 0% ` Ryusuke Konishi
0 siblings, 1 reply; 200+ results
From: syzbot @ 2024-05-18 12:29 UTC (permalink / raw)
To: konishi.ryusuke, linux-kernel, linux-nilfs, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: a5131c3fdf26 Merge tag 'x86-shstk-2024-05-13' of git://git..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=144c6e04980000
kernel config: https://syzkaller.appspot.com/x/.config?x=fdb182f40cdd66f7
dashboard link: https://syzkaller.appspot.com/bug?extid=77c39f023a0cb2e4c149
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-a5131c3f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6d23116dab9c/vmlinux-a5131c3f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/dd8b9de9af4f/bzImage-a5131c3f.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+77c39f023a0cb2e4c149@syzkaller.appspotmail.com
NILFS (loop2): inode bitmap is inconsistent for reserved inodes
NILFS (loop2): repaired inode bitmap for reserved inodes
======================================================
WARNING: possible circular locking dependency detected
6.9.0-syzkaller-01768-ga5131c3fdf26 #0 Not tainted
------------------------------------------------------
syz-executor.2/23478 is trying to acquire lock:
ffffffff8d938460 (fs_reclaim){+.+.}-{0:0}, at: might_alloc include/linux/sched/mm.h:312 [inline]
ffffffff8d938460 (fs_reclaim){+.+.}-{0:0}, at: prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
but task is already holding lock:
ffff888026c5c2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&nilfs->ns_segctor_sem){++++}-{3:3}:
down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
nilfs_create+0xb7/0x320 fs/nilfs2/namei.c:82
lookup_open.isra.0+0x10a1/0x13c0 fs/namei.c:3505
open_last_lookups fs/namei.c:3574 [inline]
path_openat+0x92f/0x2990 fs/namei.c:3804
do_filp_open+0x1dc/0x430 fs/namei.c:3834
do_sys_openat2+0x17a/0x1e0 fs/open.c:1406
do_sys_open fs/open.c:1421 [inline]
__do_compat_sys_openat fs/open.c:1481 [inline]
__se_compat_sys_openat fs/open.c:1479 [inline]
__ia32_compat_sys_openat+0x16e/0x210 fs/open.c:1479
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
-> #1 (sb_internal#5){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1661 [inline]
sb_start_intwrite include/linux/fs.h:1844 [inline]
nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
__mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
mark_inode_dirty_sync include/linux/fs.h:2426 [inline]
iput.part.0+0x5b/0x7f0 fs/inode.c:1764
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
-> #0 (fs_reclaim){+.+.}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
__fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
might_alloc include/linux/sched/mm.h:312 [inline]
prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
__alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
__filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
block_write_begin+0x38/0x4a0 fs/buffer.c:2209
nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
page_symlink+0x356/0x450 fs/namei.c:5236
nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
vfs_symlink fs/namei.c:4489 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4473
do_symlinkat+0x263/0x310 fs/namei.c:4515
__do_sys_symlink fs/namei.c:4536 [inline]
__se_sys_symlink fs/namei.c:4534 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
other info that might help us debug this:
Chain exists of:
fs_reclaim --> sb_internal#5 --> &nilfs->ns_segctor_sem
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  rlock(&nilfs->ns_segctor_sem);
                               lock(sb_internal#5);
                               lock(&nilfs->ns_segctor_sem);
  lock(fs_reclaim);
*** DEADLOCK ***
4 locks held by syz-executor.2/23478:
#0: ffff888000c0c420 (sb_writers#32){.+.+}-{0:0}, at: filename_create+0x10d/0x530 fs/namei.c:3893
#1: ffff88804b284f88 (&type->i_mutex_dir_key#23/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
#1: ffff88804b284f88 (&type->i_mutex_dir_key#23/1){+.+.}-{3:3}, at: filename_create+0x1c2/0x530 fs/namei.c:3900
#2: ffff888000c0c610 (sb_internal#5){.+.+}-{0:0}, at: nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
#3: ffff888026c5c2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
stack backtrace:
CPU: 2 PID: 23478 Comm: syz-executor.2 Not tainted 6.9.0-syzkaller-01768-ga5131c3fdf26 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
__fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
might_alloc include/linux/sched/mm.h:312 [inline]
prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
__alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
__filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
block_write_begin+0x38/0x4a0 fs/buffer.c:2209
nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
page_symlink+0x356/0x450 fs/namei.c:5236
nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
vfs_symlink fs/namei.c:4489 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4473
do_symlinkat+0x263/0x310 fs/namei.c:4515
__do_sys_symlink fs/namei.c:4536 [inline]
__se_sys_symlink fs/namei.c:4534 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4534
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
RIP: 0023:0xf734f579
Code: b8 01 10 06 03 74 b4 01 10 07 03 74 b0 01 10 08 03 74 d8 01 00 00 00 00 00 00 00 00 00 00 00 00 00 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00
RSP: 002b:00000000f5f415ac EFLAGS: 00000292 ORIG_RAX: 0000000000000053
RAX: ffffffffffffffda RBX: 0000000020000340 RCX: 0000000020000100
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000292 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
</TASK>
----------------
Code disassembly (best guess), 2 bytes skipped:
0: 10 06 adc %al,(%rsi)
2: 03 74 b4 01 add 0x1(%rsp,%rsi,4),%esi
6: 10 07 adc %al,(%rdi)
8: 03 74 b0 01 add 0x1(%rax,%rsi,4),%esi
c: 10 08 adc %cl,(%rax)
e: 03 74 d8 01 add 0x1(%rax,%rbx,8),%esi
1e: 00 51 52 add %dl,0x52(%rcx)
21: 55 push %rbp
22: 89 e5 mov %esp,%ebp
24: 0f 34 sysenter
26: cd 80 int $0x80
* 28: 5d pop %rbp <-- trapping instruction
29: 5a pop %rdx
2a: 59 pop %rcx
2b: c3 ret
2c: 90 nop
2d: 90 nop
2e: 90 nop
2f: 90 nop
30: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
37: 8d b4 26 00 00 00 00 lea 0x0(%rsi,%riz,1),%esi
---
* [syzbot] [udf?] possible deadlock in udf_setsize
@ 2024-05-17 13:54 4% syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-05-17 13:54 UTC (permalink / raw)
To: jack, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: a38297e3fb01 Linux 6.9
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13228d24980000
kernel config: https://syzkaller.appspot.com/x/.config?x=edefe34e4544d70e
dashboard link: https://syzkaller.appspot.com/bug?extid=0333a6f4b88bcd68a62f
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-a38297e3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/eb6ef4d9e74f/vmlinux-a38297e3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ce2fb6bcfd40/bzImage-a38297e3.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0333a6f4b88bcd68a62f@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.9.0-syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/110 is trying to acquire lock:
ffff88804a5cbdf0 (mapping.invalidate_lock#4){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:840 [inline]
ffff88804a5cbdf0 (mapping.invalidate_lock#4){++++}-{3:3}, at: udf_setsize+0x256/0x1180 fs/udf/inode.c:1254
but task is already holding lock:
ffffffff8d937180 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
might_alloc include/linux/sched/mm.h:312 [inline]
prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
__alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
__filemap_get_folio+0x527/0xa90 mm/filemap.c:1926
filemap_fault+0x610/0x38c0 mm/filemap.c:3299
__do_fault+0x10a/0x490 mm/memory.c:4531
do_shared_fault mm/memory.c:4954 [inline]
do_fault mm/memory.c:5028 [inline]
do_pte_missing mm/memory.c:3880 [inline]
handle_pte_fault mm/memory.c:5300 [inline]
__handle_mm_fault+0x3148/0x4a80 mm/memory.c:5441
handle_mm_fault+0x476/0xa00 mm/memory.c:5606
do_user_addr_fault+0x426/0x1030 arch/x86/mm/fault.c:1331
handle_page_fault arch/x86/mm/fault.c:1474 [inline]
exc_page_fault+0x5c/0xc0 arch/x86/mm/fault.c:1532
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
-> #0 (mapping.invalidate_lock#4){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
down_write+0x3a/0x50 kernel/locking/rwsem.c:1579
filemap_invalidate_lock include/linux/fs.h:840 [inline]
udf_setsize+0x256/0x1180 fs/udf/inode.c:1254
udf_evict_inode+0x361/0x590 fs/udf/inode.c:144
evict+0x2ed/0x6c0 fs/inode.c:667
iput_final fs/inode.c:1741 [inline]
iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(mapping.invalidate_lock#4);
                               lock(fs_reclaim);
  lock(mapping.invalidate_lock#4);
*** DEADLOCK ***
2 locks held by kswapd0/110:
#0: ffffffff8d937180 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
#1: ffff88801cd260e0 (&type->s_umount_key#57){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
#1: ffff88801cd260e0 (&type->s_umount_key#57){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196
stack backtrace:
CPU: 2 PID: 110 Comm: kswapd0 Not tainted 6.9.0-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
down_write+0x3a/0x50 kernel/locking/rwsem.c:1579
filemap_invalidate_lock include/linux/fs.h:840 [inline]
udf_setsize+0x256/0x1180 fs/udf/inode.c:1254
udf_evict_inode+0x361/0x590 fs/udf/inode.c:144
evict+0x2ed/0x6c0 fs/inode.c:667
iput_final fs/inode.c:1741 [inline]
iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
---
* Re: [PATCH stable] block/mq-deadline: fix different priority request on the same zone
@ 2024-05-17 1:44 5% ` Wu Bo
0 siblings, 0 replies; 200+ results
From: Wu Bo @ 2024-05-17 1:44 UTC (permalink / raw)
To: bvanassche
Cc: axboe, bo.wu, dlemoal, linux-block, linux-kernel, stable, wubo.oduw
On Thu, May 16, 2024 at 07:45:21AM -0600, Bart Van Assche wrote:
> On 5/16/24 03:28, Wu Bo wrote:
> > Zoned devices require sequential writing within the same zone. That
> > means if two requests target the same zone, the request at the lower
> > position must be dispatched to the device first.
> > But since each priority level has its own tree & list, a request with
> > higher priority is dispatched first.
> > So if requestA and requestB are on the same zone, requestA is BE at
> > pos X+0 and requestB is RT at pos X+1, then requestB will be
> > dispatched before requestA, which gets an ERROR from the zoned device.
> >
> > This was found in a practical scenario using F2FS on a zoned device,
> > and it is very easy to reproduce:
> > 1. Use fsstress to run 8 test processes
> > 2. Use ionice to change 4/8 processes to RT priority
>
> Hi Wu,
>
> I agree that there is a problem related to the interaction of I/O
> priority and zoned storage. A solution with a lower runtime overhead
> is available here:
> https://lore.kernel.org/linux-block/20231218211342.2179689-1-bvanassche@acm.org/T/#me97b088c535278fe3d1dc5846b388ed58aa53f46
Hi Bart,
I have tried setting all sequential write requests to the same priority:
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6a05dd86e8ca..b560846c63cb 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -841,7 +841,10 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
*/
blk_req_zone_write_unlock(rq);
- prio = ioprio_class_to_prio[ioprio_class];
+ if (blk_rq_is_seq_zoned_write(rq))
+ prio = DD_BE_PRIO;
+ else
+ prio = ioprio_class_to_prio[ioprio_class];
per_prio = &dd->per_prio[prio];
if (!rq->elv.priv[0]) {
per_prio->stats.inserted++;
I think this has the same effect as the patch you mentioned. Unfortunately,
this fix causes another issue: all write requests are set to the same
priority while read requests still have different priorities. This makes
f2fs prone to hangs under stress testing:
[129412.105440][T1100129] vkhungtaskd: INFO: task "f2fs_ckpt-254:5":769 blocked for more than 193 seconds.
[129412.106629][T1100129] vkhungtaskd: 6.1.25-android14-11-maybe-dirty #1
[129412.107624][T1100129] vkhungtaskd: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[129412.108873][T1100129] vkhungtaskd: task:f2fs_ckpt-254:5 state:D stack:10496 pid:769 ppid:2 flags:0x00000408
[129412.110194][T1100129] vkhungtaskd: Call trace:
[129412.110769][T1100129] vkhungtaskd: __switch_to+0x174/0x338
[129412.111566][T1100129] vkhungtaskd: __schedule+0x604/0x9e4
[129412.112275][T1100129] vkhungtaskd: schedule+0x7c/0xe8
[129412.112938][T1100129] vkhungtaskd: rwsem_down_write_slowpath+0x4cc/0xf98
[129412.113813][T1100129] vkhungtaskd: down_write+0x38/0x40
[129412.114500][T1100129] vkhungtaskd: __write_checkpoint_sync+0x8c/0x11c
[129412.115409][T1100129] vkhungtaskd: __checkpoint_and_complete_reqs+0x54/0x1dc
[129412.116323][T1100129] vkhungtaskd: issue_checkpoint_thread+0x8c/0xec
[129412.117148][T1100129] vkhungtaskd: kthread+0x110/0x224
[129412.117826][T1100129] vkhungtaskd: ret_from_fork+0x10/0x20
[129412.484027][T1700129] vkhungtaskd: task:f2fs_gc-254:55 state:D stack:10832 pid:771 ppid:2 flags:0x00000408
[129412.485337][T1700129] vkhungtaskd: Call trace:
[129412.485906][T1700129] vkhungtaskd: __switch_to+0x174/0x338
[129412.486618][T1700129] vkhungtaskd: __schedule+0x604/0x9e4
[129412.487327][T1700129] vkhungtaskd: schedule+0x7c/0xe8
[129412.487985][T1700129] vkhungtaskd: io_schedule+0x38/0xc4
[129412.488675][T1700129] vkhungtaskd: folio_wait_bit_common+0x3d8/0x4f8
[129412.489496][T1700129] vkhungtaskd: __folio_lock+0x1c/0x2c
[129412.490196][T1700129] vkhungtaskd: __folio_lock_io+0x24/0x44
[129412.490936][T1700129] vkhungtaskd: __filemap_get_folio+0x190/0x400
[129412.491736][T1700129] vkhungtaskd: pagecache_get_page+0x1c/0x5c
[129412.492501][T1700129] vkhungtaskd: f2fs_wait_on_block_writeback+0x60/0xf8
[129412.493376][T1700129] vkhungtaskd: do_garbage_collect+0x1100/0x223c
[129412.494185][T1700129] vkhungtaskd: f2fs_gc+0x284/0x778
[129412.494858][T1700129] vkhungtaskd: gc_thread_func+0x304/0x838
[129412.495603][T1700129] vkhungtaskd: kthread+0x110/0x224
[129412.496271][T1700129] vkhungtaskd: ret_from_fork+0x10/0x20
I think this is because f2fs is a CoW filesystem: some threads do a lot of
reading and writing at the same time while holding locks. When such a
thread's reads and writes run at different priorities, the operation takes
very long, and other FS operations are blocked.
That is why I came up with this solution to fix the priority issue on zoned
devices. It does raise the overhead, but it does fix the problem.
Thanks,
Wu Bo
>
> Are you OK with that alternative solution?
>
> Thanks,
>
> Bart.
* Re: [PATCH 12/12] shmem: add large folio support to the write and fallocate paths
2024-05-15 5:57 11% ` [PATCH 12/12] shmem: add large folio support to the write and fallocate paths Daniel Gomez
@ 2024-05-15 18:59 5% ` kernel test robot
0 siblings, 0 replies; 200+ results
From: kernel test robot @ 2024-05-15 18:59 UTC (permalink / raw)
To: Daniel Gomez, hughd, akpm, willy, jack, mcgrof
Cc: oe-kbuild-all, linux-mm, linux-xfs, djwong, Pankaj Raghav,
dagmcr, yosryahmed, baolin.wang, ritesh.list, lsf-pc, david,
chandan.babu, linux-kernel, brauner, Daniel Gomez
Hi Daniel,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on xfs-linux/for-next brauner-vfs/vfs.all linus/master v6.9 next-20240515]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Daniel-Gomez/splice-don-t-check-for-uptodate-if-partially-uptodate-is-impl/20240515-135925
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20240515055719.32577-13-da.gomez%40samsung.com
patch subject: [PATCH 12/12] shmem: add large folio support to the write and fallocate paths
config: openrisc-defconfig (https://download.01.org/0day-ci/archive/20240516/202405160245.2EBqOCyg-lkp@intel.com/config)
compiler: or1k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240516/202405160245.2EBqOCyg-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202405160245.2EBqOCyg-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> mm/shmem.c:1864: warning: Function parameter or struct member 'sbinfo' not described in 'shmem_mapping_size_order'
mm/shmem.c:2427: warning: Function parameter or struct member 'len' not described in 'shmem_get_folio'
vim +1864 mm/shmem.c
1845
1846 /**
1847 * shmem_mapping_size_order - Get maximum folio order for the given file size.
1848 * @mapping: Target address_space.
1849 * @index: The page index.
1850 * @size: The suggested size of the folio to create.
1851 *
1852 * This returns a high order for folios (when supported) based on the file size
1853 * which the mapping currently allows at the given index. The index is relevant
1854 * due to alignment considerations the mapping might have. The returned order
1855 * may be less than the size passed.
1856 *
1857 * Like __filemap_get_folio order calculation.
1858 *
1859 * Return: The order.
1860 */
1861 static inline unsigned int
1862 shmem_mapping_size_order(struct address_space *mapping, pgoff_t index,
1863 size_t size, struct shmem_sb_info *sbinfo)
> 1864 {
1865 unsigned int order = ilog2(size);
1866
1867 if ((order <= PAGE_SHIFT) ||
1868 (!mapping_large_folio_support(mapping) || !sbinfo->noswap))
1869 return 0;
1870
1871 order -= PAGE_SHIFT;
1872
1873 /* If we're not aligned, allocate a smaller folio */
1874 if (index & ((1UL << order) - 1))
1875 order = __ffs(index);
1876
1877 order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
1878
1879 /* Order-1 not supported due to THP dependency */
1880 return (order == 1) ? 0 : order;
1881 }
1882
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [PATCH 12/12] shmem: add large folio support to the write and fallocate paths
[not found] ` <CGME20240515055740eucas1p1bf112e73a7009a0f9b2bbf09c989a51b@eucas1p1.samsung.com>
@ 2024-05-15 5:57 11% ` Daniel Gomez
2024-05-15 18:59 5% ` kernel test robot
0 siblings, 1 reply; 200+ results
From: Daniel Gomez @ 2024-05-15 5:57 UTC (permalink / raw)
To: hughd, akpm, willy, jack, mcgrof
Cc: linux-mm, linux-xfs, djwong, Pankaj Raghav, dagmcr, yosryahmed,
baolin.wang, ritesh.list, lsf-pc, david, chandan.babu,
linux-kernel, brauner, Daniel Gomez
Add large folio support for the shmem write and fallocate paths, matching
the high-order preference mechanism used in the iomap buffered IO path and
in __filemap_get_folio().
Add shmem_mapping_size_order() to get a hint for the order of the folio
based on the file size which takes care of the mapping requirements.
Swap does not support high order folios for now, so make it order-0 in
case swap is enabled.
Skip high order folio allocation loop when reclaim path returns with no
space left (ENOSPC).
Add __GFP_COMP flag for high order folios allocation path to fix a
memory leak.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 47 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index fcd2c9befe19..9308a334a940 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1836,23 +1836,63 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
struct page *page;
mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
- page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
+ page = alloc_pages_mpol(gfp | __GFP_COMP, order, mpol, ilx,
+ numa_node_id());
mpol_cond_put(mpol);
return page_rmappable_folio(page);
}
+/**
+ * shmem_mapping_size_order - Get maximum folio order for the given file size.
+ * @mapping: Target address_space.
+ * @index: The page index.
+ * @size: The suggested size of the folio to create.
+ *
+ * This returns a high order for folios (when supported) based on the file size
+ * which the mapping currently allows at the given index. The index is relevant
+ * due to alignment considerations the mapping might have. The returned order
+ * may correspond to a size smaller than the one passed.
+ *
+ * This mirrors the order calculation in __filemap_get_folio().
+ *
+ * Return: The order.
+ */
+static inline unsigned int
+shmem_mapping_size_order(struct address_space *mapping, pgoff_t index,
+ size_t size, struct shmem_sb_info *sbinfo)
+{
+ unsigned int order = ilog2(size);
+
+ if ((order <= PAGE_SHIFT) ||
+ (!mapping_large_folio_support(mapping) || !sbinfo->noswap))
+ return 0;
+
+ order -= PAGE_SHIFT;
+
+ /* If we're not aligned, allocate a smaller folio */
+ if (index & ((1UL << order) - 1))
+ order = __ffs(index);
+
+ order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
+
+ /* Order-1 not supported due to THP dependency */
+ return (order == 1) ? 0 : order;
+}
+
static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct inode *inode, pgoff_t index,
struct mm_struct *fault_mm, bool huge, size_t len)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- unsigned int order = 0;
+ unsigned int order = shmem_mapping_size_order(mapping, index, len,
+ SHMEM_SB(inode->i_sb));
struct folio *folio;
long pages;
int error;
+neworder:
if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
huge = false;
@@ -1937,6 +1977,11 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
unlock:
folio_unlock(folio);
folio_put(folio);
+ if ((error != -ENOSPC) && (order > 0)) {
+ if (--order == 1)
+ order = 0;
+ goto neworder;
+ }
return ERR_PTR(error);
}
--
2.43.0
* [PATCH] fsverity: support block-based Merkle tree caching
From: Eric Biggers @ 2024-05-15 1:53 UTC (permalink / raw)
To: fsverity; +Cc: linux-fsdevel, linux-xfs, Darrick J . Wong, Andrey Albershteyn
From: Eric Biggers <ebiggers@google.com>
Currently fs/verity/ assumes that filesystems cache Merkle tree blocks
in the page cache. Specifically, it requires that filesystems provide a
->read_merkle_tree_page() method which returns a page of blocks. It
also stores the "is the block verified" flag in PG_checked, or (if there
are multiple blocks per page) in a bitmap, with PG_checked used to
detect cache evictions instead. This solution is specific to the page
cache, as a different cache would store the flag in a different way.
To allow XFS to use a custom Merkle tree block cache, this patch
refactors the Merkle tree caching interface to be based around the
concept of reading and dropping blocks (not pages), where the storage of
the "is the block verified" flag is up to the implementation.
The existing pagecache based solution, used by ext4, f2fs, and btrfs, is
reimplemented using this interface.
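As a rough illustration of the read/drop contract described above, here is a
userspace toy. All names and fields here are hypothetical stand-ins loosely
modeled on this patch's fsverity_readmerkle / fsverity_blockbuf structures;
the real kernel interface differs. The point is the pairing: a read fills the
buffer and reports whether the block was already verified, and the matching
drop records any newly verified status before releasing the buffer — so where
the "verified" flag lives is entirely up to the implementation.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-ins for the request/buffer pair described above. */
struct readmerkle_req {
	unsigned long long pos;	/* byte position of the block in the tree */
	unsigned int size;	/* Merkle tree block size */
};

struct blockbuf {
	void *kaddr;		/* block contents, valid until dropped */
	void *context;		/* implementation-private cache handle */
	int verified;		/* was this block verified before this read? */
	int newly_verified;	/* set by the verifier before dropping */
};

static int toy_verified_count;	/* toy stand-in for persistent verified state */

/* "Read": allocate a zeroed block and report its prior verification state. */
static int toy_read_block(const struct readmerkle_req *req,
			  struct blockbuf *block)
{
	block->kaddr = calloc(1, req->size);
	if (!block->kaddr)
		return -1;
	block->context = block->kaddr;
	block->verified = 0;
	block->newly_verified = 0;
	return 0;
}

/* "Drop": persist the newly-verified status, then release the buffer. */
static void toy_drop_block(struct blockbuf *block)
{
	if (block->newly_verified)
		toy_verified_count++;	/* real code would set a bit or flag */
	free(block->context);
	block->kaddr = NULL;
	block->context = NULL;
}
```

A caller reads a block, verifies it, marks it newly verified, and drops it;
a later read of the same block would then see it as already verified.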
Co-developed-by: Andrey Albershteyn <aalbersh@redhat.com>
Signed-off-by: Andrey Albershteyn <aalbersh@redhat.com>
Co-developed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
This reworks the block-based caching patch to clean up many different
things, including putting the pagecache based caching behind the same
interface as suggested by Christoph. This applies to mainline commit
a5131c3fdf26. It corresponds to the following patches in Darrick's v5.6
patchset:
fsverity: convert verification to use byte instead of page offsets
fsverity: support block-based Merkle tree caching
fsverity: pass the merkle tree block level to fsverity_read_merkle_tree_block
fsverity: pass the zero-hash value to the implementation
(I don't really understand the split between the first two, as I see
them as being logically part of the same change. The new parameters
would make sense to split out though.)
If we do go with my version of the patch, also let me know if there are
any preferences for who should be author / co-developer / etc.
fs/btrfs/verity.c | 36 +++---
fs/ext4/verity.c | 20 ++--
fs/f2fs/verity.c | 20 ++--
fs/verity/fsverity_private.h | 13 ++-
fs/verity/open.c | 38 ++++--
fs/verity/read_metadata.c | 68 +++++------
fs/verity/verify.c | 216 +++++++++++++++++++++++++----------
include/linux/fsverity.h | 112 +++++++++++++++---
8 files changed, 366 insertions(+), 157 deletions(-)
diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
index 4042dd6437ae..c4ecae418669 100644
--- a/fs/btrfs/verity.c
+++ b/fs/btrfs/verity.c
@@ -699,33 +699,28 @@ int btrfs_get_verity_descriptor(struct inode *inode, void *buf, size_t buf_size)
}
/*
* fsverity op that reads and caches a merkle tree page.
*
- * @inode: inode to read a merkle tree page for
- * @index: page index relative to the start of the merkle tree
- * @num_ra_pages: number of pages to readahead. Optional, we ignore it
- *
* The Merkle tree is stored in the filesystem btree, but its pages are cached
* with a logical position past EOF in the inode's mapping.
- *
- * Returns the page we read, or an ERR_PTR on error.
*/
-static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages)
+static int btrfs_read_merkle_tree_block(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block)
{
+ struct inode *inode = req->inode;
struct folio *folio;
- u64 off = (u64)index << PAGE_SHIFT;
+ u64 off = req->pos;
loff_t merkle_pos = merkle_file_pos(inode);
+ pgoff_t index;
int ret;
if (merkle_pos < 0)
- return ERR_PTR(merkle_pos);
+ return merkle_pos;
if (merkle_pos > inode->i_sb->s_maxbytes - off - PAGE_SIZE)
- return ERR_PTR(-EFBIG);
- index += merkle_pos >> PAGE_SHIFT;
+ return -EFBIG;
+ index = (merkle_pos + off) >> PAGE_SHIFT;
again:
folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
if (!IS_ERR(folio)) {
if (folio_test_uptodate(folio))
goto out;
@@ -733,28 +728,28 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
folio_lock(folio);
/* If it's not uptodate after we have the lock, we got a read error. */
if (!folio_test_uptodate(folio)) {
folio_unlock(folio);
folio_put(folio);
- return ERR_PTR(-EIO);
+ return -EIO;
}
folio_unlock(folio);
goto out;
}
folio = filemap_alloc_folio(mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS),
0);
if (!folio)
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
ret = filemap_add_folio(inode->i_mapping, folio, index, GFP_NOFS);
if (ret) {
folio_put(folio);
/* Did someone else insert a folio here? */
if (ret == -EEXIST)
goto again;
- return ERR_PTR(ret);
+ return ret;
}
/*
* Merkle item keys are indexed from byte 0 in the merkle tree.
* They have the form:
@@ -763,20 +758,21 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
*/
ret = read_key_bytes(BTRFS_I(inode), BTRFS_VERITY_MERKLE_ITEM_KEY, off,
folio_address(folio), PAGE_SIZE, &folio->page);
if (ret < 0) {
folio_put(folio);
- return ERR_PTR(ret);
+ return ret;
}
if (ret < PAGE_SIZE)
folio_zero_segment(folio, ret, PAGE_SIZE);
folio_mark_uptodate(folio);
folio_unlock(folio);
out:
- return folio_file_page(folio, index);
+ fsverity_set_block_page(req, block, folio_file_page(folio, index));
+ return 0;
}
/*
* fsverity op that writes a Merkle tree block into the btree.
*
@@ -800,11 +796,13 @@ static int btrfs_write_merkle_tree_block(struct inode *inode, const void *buf,
return write_key_bytes(BTRFS_I(inode), BTRFS_VERITY_MERKLE_ITEM_KEY,
pos, buf, size);
}
const struct fsverity_operations btrfs_verityops = {
+ .uses_page_based_merkle_caching = 1,
.begin_enable_verity = btrfs_begin_enable_verity,
.end_enable_verity = btrfs_end_enable_verity,
.get_verity_descriptor = btrfs_get_verity_descriptor,
- .read_merkle_tree_page = btrfs_read_merkle_tree_page,
+ .read_merkle_tree_block = btrfs_read_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merkle_tree_block,
.write_merkle_tree_block = btrfs_write_merkle_tree_block,
};
diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index 2f37e1ea3955..5a3a3991d661 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -355,31 +355,33 @@ static int ext4_get_verity_descriptor(struct inode *inode, void *buf,
return err;
}
return desc_size;
}
-static struct page *ext4_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages)
+static int ext4_read_merkle_tree_block(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block)
{
+ struct inode *inode = req->inode;
+ pgoff_t index = (req->pos +
+ ext4_verity_metadata_pos(inode)) >> PAGE_SHIFT;
+ unsigned long num_ra_pages = req->ra_bytes >> PAGE_SHIFT;
struct folio *folio;
- index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
-
folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
if (!IS_ERR(folio))
folio_put(folio);
else if (num_ra_pages > 1)
page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
folio = read_mapping_folio(inode->i_mapping, index, NULL);
if (IS_ERR(folio))
- return ERR_CAST(folio);
+ return PTR_ERR(folio);
}
- return folio_file_page(folio, index);
+ fsverity_set_block_page(req, block, folio_file_page(folio, index));
+ return 0;
}
static int ext4_write_merkle_tree_block(struct inode *inode, const void *buf,
u64 pos, unsigned int size)
{
@@ -387,11 +389,13 @@ static int ext4_write_merkle_tree_block(struct inode *inode, const void *buf,
return pagecache_write(inode, buf, size, pos);
}
const struct fsverity_operations ext4_verityops = {
+ .uses_page_based_merkle_caching = 1,
.begin_enable_verity = ext4_begin_enable_verity,
.end_enable_verity = ext4_end_enable_verity,
.get_verity_descriptor = ext4_get_verity_descriptor,
- .read_merkle_tree_page = ext4_read_merkle_tree_page,
+ .read_merkle_tree_block = ext4_read_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merkle_tree_block,
.write_merkle_tree_block = ext4_write_merkle_tree_block,
};
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index f7bb0c54502c..859ab2d8d734 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -252,31 +252,33 @@ static int f2fs_get_verity_descriptor(struct inode *inode, void *buf,
return res;
}
return size;
}
-static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages)
+static int f2fs_read_merkle_tree_block(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block)
{
+ struct inode *inode = req->inode;
+ pgoff_t index = (req->pos +
+ f2fs_verity_metadata_pos(inode)) >> PAGE_SHIFT;
+ unsigned long num_ra_pages = req->ra_bytes >> PAGE_SHIFT;
struct folio *folio;
- index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
-
folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
if (!IS_ERR(folio))
folio_put(folio);
else if (num_ra_pages > 1)
page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
folio = read_mapping_folio(inode->i_mapping, index, NULL);
if (IS_ERR(folio))
- return ERR_CAST(folio);
+ return PTR_ERR(folio);
}
- return folio_file_page(folio, index);
+ fsverity_set_block_page(req, block, folio_file_page(folio, index));
+ return 0;
}
static int f2fs_write_merkle_tree_block(struct inode *inode, const void *buf,
u64 pos, unsigned int size)
{
@@ -284,11 +286,13 @@ static int f2fs_write_merkle_tree_block(struct inode *inode, const void *buf,
return pagecache_write(inode, buf, size, pos);
}
const struct fsverity_operations f2fs_verityops = {
+ .uses_page_based_merkle_caching = 1,
.begin_enable_verity = f2fs_begin_enable_verity,
.end_enable_verity = f2fs_end_enable_verity,
.get_verity_descriptor = f2fs_get_verity_descriptor,
- .read_merkle_tree_page = f2fs_read_merkle_tree_page,
+ .read_merkle_tree_block = f2fs_read_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merkle_tree_block,
.write_merkle_tree_block = f2fs_write_merkle_tree_block,
};
diff --git a/fs/verity/fsverity_private.h b/fs/verity/fsverity_private.h
index b3506f56e180..da8ba0d626d6 100644
--- a/fs/verity/fsverity_private.h
+++ b/fs/verity/fsverity_private.h
@@ -45,10 +45,13 @@ struct merkle_tree_params {
u8 log_blocks_per_page; /* log2(blocks_per_page) */
unsigned int num_levels; /* number of levels in Merkle tree */
u64 tree_size; /* Merkle tree size in bytes */
unsigned long tree_pages; /* Merkle tree size in pages */
+ /* The hash of a merkle block-sized buffer of zeroes */
+ u8 zero_digest[FS_VERITY_MAX_DIGEST_SIZE];
+
/*
* Starting block index for each tree level, ordered from leaf level (0)
* to root level ('num_levels - 1')
*/
unsigned long level_start[FS_VERITY_MAX_LEVELS];
@@ -59,11 +62,11 @@ struct merkle_tree_params {
*
* When a verity file is first opened, an instance of this struct is allocated
* and stored in ->i_verity_info; it remains until the inode is evicted. It
* caches information about the Merkle tree that's needed to efficiently verify
* data read from the file. It also caches the file digest. The Merkle tree
- * pages themselves are not cached here, but the filesystem may cache them.
+ * blocks themselves are not cached here, but the filesystem may cache them.
*/
struct fsverity_info {
struct merkle_tree_params tree_params;
u8 root_hash[FS_VERITY_MAX_DIGEST_SIZE];
u8 file_digest[FS_VERITY_MAX_DIGEST_SIZE];
@@ -150,8 +153,16 @@ static inline void fsverity_init_signature(void)
}
#endif /* !CONFIG_FS_VERITY_BUILTIN_SIGNATURES */
/* verify.c */
+int fsverity_read_merkle_tree_block(struct inode *inode,
+ const struct merkle_tree_params *params,
+ int level, u64 pos, unsigned long ra_bytes,
+ struct fsverity_blockbuf *block);
+
+void fsverity_drop_merkle_tree_block(struct inode *inode,
+ struct fsverity_blockbuf *block);
+
void __init fsverity_init_workqueue(void);
#endif /* _FSVERITY_PRIVATE_H */
diff --git a/fs/verity/open.c b/fs/verity/open.c
index fdeb95eca3af..daa37007adfd 100644
--- a/fs/verity/open.c
+++ b/fs/verity/open.c
@@ -10,10 +10,22 @@
#include <linux/mm.h>
#include <linux/slab.h>
static struct kmem_cache *fsverity_info_cachep;
+/*
+ * If the filesystem caches Merkle tree blocks in the pagecache, and the Merkle
+ * tree block size differs from the page size, then a bitmap is needed to keep
+ * track of which hash blocks have been verified.
+ */
+static bool needs_bitmap(const struct inode *inode,
+ const struct merkle_tree_params *params)
+{
+ return inode->i_sb->s_vop->uses_page_based_merkle_caching &&
+ params->block_size != PAGE_SIZE;
+}
+
/**
* fsverity_init_merkle_tree_params() - initialize Merkle tree parameters
* @params: the parameters struct to initialize
* @inode: the inode for which the Merkle tree is being built
* @hash_algorithm: number of hash algorithm to use
@@ -124,28 +136,36 @@ int fsverity_init_merkle_tree_params(struct merkle_tree_params *params,
params->level_start[level] = offset;
offset += blocks_in_level[level];
}
/*
- * With block_size != PAGE_SIZE, an in-memory bitmap will need to be
- * allocated to track the "verified" status of hash blocks. Don't allow
- * this bitmap to get too large. For now, limit it to 1 MiB, which
- * limits the file size to about 4.4 TB with SHA-256 and 4K blocks.
+ * If an in-memory bitmap will need to be allocated to track the
+ * "verified" status of hash blocks, don't allow this bitmap to get too
+ * large. For now, limit it to 1 MiB, which limits the file size to
+ * about 4.4 TB with SHA-256 and 4K blocks.
*
* Together with the fact that the data, and thus also the Merkle tree,
* cannot have more than ULONG_MAX pages, this implies that hash block
* indices can always fit in an 'unsigned long'. But to be safe, we
* explicitly check for that too. Note, this is only for hash block
* indices; data block indices might not fit in an 'unsigned long'.
*/
- if ((params->block_size != PAGE_SIZE && offset > 1 << 23) ||
+ if ((needs_bitmap(inode, params) && offset > 1 << 23) ||
offset > ULONG_MAX) {
fsverity_err(inode, "Too many blocks in Merkle tree");
err = -EFBIG;
goto out_err;
}
+ /* Calculate the digest of the all-zeroes block. */
+ err = fsverity_hash_block(params, inode, page_address(ZERO_PAGE(0)),
+ params->zero_digest);
+ if (err) {
+ fsverity_err(inode, "Error %d computing zero digest", err);
+ goto out_err;
+ }
+
params->tree_size = offset << log_blocksize;
params->tree_pages = PAGE_ALIGN(params->tree_size) >> PAGE_SHIFT;
return 0;
out_err:
@@ -211,16 +231,14 @@ struct fsverity_info *fsverity_create_info(const struct inode *inode,
err = fsverity_verify_signature(vi, desc->signature,
le32_to_cpu(desc->sig_size));
if (err)
goto fail;
- if (vi->tree_params.block_size != PAGE_SIZE) {
+ if (needs_bitmap(inode, &vi->tree_params)) {
/*
- * When the Merkle tree block size and page size differ, we use
- * a bitmap to keep track of which hash blocks have been
- * verified. This bitmap must contain one bit per hash block,
- * including alignment to a page boundary at the end.
+ * The bitmap must contain one bit per hash block, including
+ * alignment to a page boundary at the end.
*
* Eventually, to support extremely large files in an efficient
* way, it might be necessary to make pages of this bitmap
* reclaimable. But for now, simply allocating the whole bitmap
* is a simple solution that works well on the files on which
diff --git a/fs/verity/read_metadata.c b/fs/verity/read_metadata.c
index f58432772d9e..61f419df1ea1 100644
--- a/fs/verity/read_metadata.c
+++ b/fs/verity/read_metadata.c
@@ -12,69 +12,59 @@
#include <linux/sched/signal.h>
#include <linux/uaccess.h>
static int fsverity_read_merkle_tree(struct inode *inode,
const struct fsverity_info *vi,
- void __user *buf, u64 offset, int length)
+ void __user *buf, u64 pos, int length)
{
- const struct fsverity_operations *vops = inode->i_sb->s_vop;
- u64 end_offset;
- unsigned int offs_in_page;
- pgoff_t index, last_index;
+ const struct merkle_tree_params *params = &vi->tree_params;
+ const u64 end_pos = min(pos + length, params->tree_size);
+ struct backing_dev_info *bdi = inode->i_sb->s_bdi;
+ const unsigned long max_ra_bytes =
+ min_t(u64, (u64)bdi->io_pages << PAGE_SHIFT, ULONG_MAX);
+ unsigned int offs_in_block = pos & (params->block_size - 1);
int retval = 0;
int err = 0;
- end_offset = min(offset + length, vi->tree_params.tree_size);
- if (offset >= end_offset)
- return 0;
- offs_in_page = offset_in_page(offset);
- last_index = (end_offset - 1) >> PAGE_SHIFT;
-
/*
- * Iterate through each Merkle tree page in the requested range and copy
- * the requested portion to userspace. Note that the Merkle tree block
- * size isn't important here, as we are returning a byte stream; i.e.,
- * we can just work with pages even if the tree block size != PAGE_SIZE.
+ * Iterate through each Merkle tree block in the requested range and
+ * copy the requested portion to userspace.
*/
- for (index = offset >> PAGE_SHIFT; index <= last_index; index++) {
- unsigned long num_ra_pages =
- min_t(unsigned long, last_index - index + 1,
- inode->i_sb->s_bdi->io_pages);
- unsigned int bytes_to_copy = min_t(u64, end_offset - offset,
- PAGE_SIZE - offs_in_page);
- struct page *page;
- const void *virt;
-
- page = vops->read_merkle_tree_page(inode, index, num_ra_pages);
- if (IS_ERR(page)) {
- err = PTR_ERR(page);
- fsverity_err(inode,
- "Error %d reading Merkle tree page %lu",
- err, index);
+ while (pos < end_pos) {
+ unsigned long ra_bytes;
+ unsigned int bytes_to_copy;
+ struct fsverity_blockbuf block;
+
+ ra_bytes = min_t(u64, end_pos - pos, max_ra_bytes);
+ bytes_to_copy = min_t(u64, end_pos - pos,
+ params->block_size - offs_in_block);
+
+ err = fsverity_read_merkle_tree_block(inode, params,
+ FSVERITY_STREAMING_READ,
+ pos - offs_in_block,
+ ra_bytes, &block);
+ if (err)
break;
- }
- virt = kmap_local_page(page);
- if (copy_to_user(buf, virt + offs_in_page, bytes_to_copy)) {
- kunmap_local(virt);
- put_page(page);
+ if (copy_to_user(buf, block.kaddr + offs_in_block,
+ bytes_to_copy)) {
+ fsverity_drop_merkle_tree_block(inode, &block);
err = -EFAULT;
break;
}
- kunmap_local(virt);
- put_page(page);
+ fsverity_drop_merkle_tree_block(inode, &block);
retval += bytes_to_copy;
buf += bytes_to_copy;
- offset += bytes_to_copy;
+ pos += bytes_to_copy;
if (fatal_signal_pending(current)) {
err = -EINTR;
break;
}
cond_resched();
- offs_in_page = 0;
+ offs_in_block = 0;
}
return retval ? retval : err;
}
/* Copy the requested portion of the buffer to userspace. */
diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index 4fcad0825a12..aa6f5ca719b3 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -76,10 +76,131 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage,
smp_wmb();
SetPageChecked(hpage);
return false;
}
+/**
+ * fsverity_set_block_page() - fill in a fsverity_blockbuf using a page
+ * @req: The Merkle tree block read request
+ * @block: The fsverity_blockbuf to initialize
+ * @page: The page containing the block's data at offset @req->pos % PAGE_SIZE.
+ *
+ * This is a helper function for filesystems that cache Merkle tree blocks in
+ * the pagecache. It should be called at the end of
+ * fsverity_operations::read_merkle_tree_block(). It takes ownership of a ref
+ * to the page, maps the page, and uses the PG_checked flag and (if needed) the
+ * fsverity_info::hash_block_verified bitmap to check whether the block has been
+ * verified or not. It initializes the fsverity_blockbuf accordingly.
+ *
+ * This must be paired with fsverity_drop_page_merkle_tree_block(), called from
+ * fsverity_operations::drop_merkle_tree_block().
+ */
+void fsverity_set_block_page(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block,
+ struct page *page)
+{
+ struct fsverity_info *vi = req->inode->i_verity_info;
+
+ block->kaddr = kmap_local_page(page) + (req->pos & ~PAGE_MASK);
+ block->context = page;
+ block->verified = is_hash_block_verified(vi, page, block->index);
+}
+EXPORT_SYMBOL_GPL(fsverity_set_block_page);
+
+/**
+ * fsverity_drop_page_merkle_tree_block() - drop a Merkle tree block for
+ * filesystems using page-based caching
+ * @inode: The inode to which the Merkle tree belongs
+ * @block: The fsverity_blockbuf to drop
+ *
+ * This pairs with fsverity_set_block_page(). It marks the block as verified if
+ * needed, and then it unmaps and puts the page. Filesystems that use
+ * fsverity_set_block_page() need to set ->drop_merkle_tree_block to this.
+ */
+void fsverity_drop_page_merkle_tree_block(struct inode *inode,
+ struct fsverity_blockbuf *block)
+{
+ struct fsverity_info *vi = inode->i_verity_info;
+ struct page *page = block->context;
+
+ if (block->newly_verified) {
+ /*
+ * This must be atomic and idempotent, as the same hash block
+ * might be verified by multiple threads concurrently.
+ */
+ if (vi->hash_block_verified != NULL)
+ set_bit(block->index, vi->hash_block_verified);
+ else
+ SetPageChecked(page);
+ }
+ unmap_and_put_page(page, block->kaddr);
+}
+EXPORT_SYMBOL_GPL(fsverity_drop_page_merkle_tree_block);
+
+/**
+ * fsverity_read_merkle_tree_block() - read a Merkle tree block
+ * @inode: inode to which the Merkle tree belongs
+ * @params: inode's Merkle tree parameters
+ * @level: level of the block, or FSVERITY_STREAMING_READ to indicate a
+ * streaming read. Level 0 means the leaf level.
+ * @pos: position of the block in the Merkle tree, in bytes
+ * @ra_bytes: on cache miss, try to read ahead this many bytes
+ * @block: struct in which the block is returned
+ *
+ * This function reads a block from a file's Merkle tree. It must be paired
+ * with fsverity_drop_merkle_tree_block().
+ *
+ * Return: 0 on success, -errno on failure
+ */
+int fsverity_read_merkle_tree_block(struct inode *inode,
+ const struct merkle_tree_params *params,
+ int level, u64 pos, unsigned long ra_bytes,
+ struct fsverity_blockbuf *block)
+{
+ struct fsverity_readmerkle req = {
+ .inode = inode,
+ .pos = pos,
+ .size = params->block_size,
+ .digest_size = params->digest_size,
+ .level = level,
+ .num_levels = params->num_levels,
+ .ra_bytes = ra_bytes,
+ .zero_digest = params->zero_digest,
+ };
+ int err;
+
+ memset(block, 0, sizeof(*block));
+ block->index = pos >> params->log_blocksize;
+
+ err = inode->i_sb->s_vop->read_merkle_tree_block(&req, block);
+ if (err)
+ fsverity_err(inode, "Error %d reading Merkle tree block %lu",
+ err, block->index);
+ block->newly_verified = false;
+ return err;
+}
+
+/**
+ * fsverity_drop_merkle_tree_block() - drop a Merkle tree block buffer
+ * @inode: inode to which the Merkle tree belongs
+ * @block: block buffer to be dropped
+ *
+ * This releases the resources that were acquired by
+ * fsverity_read_merkle_tree_block(). If the block is newly verified, it also
+ * saves a record of that in the appropriate location. If a process nests the
+ * reads of multiple blocks, they must be dropped in reverse order; this is
+ * needed to accommodate the use of local kmaps to map the blocks' contents.
+ */
+void fsverity_drop_merkle_tree_block(struct inode *inode,
+ struct fsverity_blockbuf *block)
+{
+ inode->i_sb->s_vop->drop_merkle_tree_block(inode, block);
+
+ block->context = NULL;
+ block->kaddr = NULL;
+}
+
/*
* Verify a single data block against the file's Merkle tree.
*
* In principle, we need to verify the entire path to the root node. However,
* for efficiency the filesystem may cache the hash blocks. Therefore we need
@@ -88,27 +209,24 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage,
*
* Return: %true if the data block is valid, else %false.
*/
static bool
verify_data_block(struct inode *inode, struct fsverity_info *vi,
- const void *data, u64 data_pos, unsigned long max_ra_pages)
+ const void *data, u64 data_pos, unsigned long max_ra_bytes)
{
const struct merkle_tree_params *params = &vi->tree_params;
const unsigned int hsize = params->digest_size;
int level;
+ unsigned long ra_bytes;
u8 _want_hash[FS_VERITY_MAX_DIGEST_SIZE];
const u8 *want_hash;
u8 real_hash[FS_VERITY_MAX_DIGEST_SIZE];
/* The hash blocks that are traversed, indexed by level */
struct {
- /* Page containing the hash block */
- struct page *page;
- /* Mapped address of the hash block (will be within @page) */
- const void *addr;
- /* Index of the hash block in the tree overall */
- unsigned long index;
- /* Byte offset of the wanted hash relative to @addr */
+ /* Buffer containing the hash block */
+ struct fsverity_blockbuf block;
+ /* Byte offset of the wanted hash in the block */
unsigned int hoffset;
} hblocks[FS_VERITY_MAX_LEVELS];
/*
* The index of the previous level's block within that level; also the
* index of that block's hash within the current level.
@@ -141,86 +259,67 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
* until we reach the root.
*/
for (level = 0; level < params->num_levels; level++) {
unsigned long next_hidx;
unsigned long hblock_idx;
- pgoff_t hpage_idx;
- unsigned int hblock_offset_in_page;
+ u64 hblock_pos;
unsigned int hoffset;
- struct page *hpage;
- const void *haddr;
+ struct fsverity_blockbuf *block = &hblocks[level].block;
/*
* The index of the block in the current level; also the index
* of that block's hash within the next level.
*/
next_hidx = hidx >> params->log_arity;
/* Index of the hash block in the tree overall */
hblock_idx = params->level_start[level] + next_hidx;
- /* Index of the hash page in the tree overall */
- hpage_idx = hblock_idx >> params->log_blocks_per_page;
-
- /* Byte offset of the hash block within the page */
- hblock_offset_in_page =
- (hblock_idx << params->log_blocksize) & ~PAGE_MASK;
+ /* Byte offset of the hash block in the tree overall */
+ hblock_pos = (u64)hblock_idx << params->log_blocksize;
/* Byte offset of the hash within the block */
hoffset = (hidx << params->log_digestsize) &
(params->block_size - 1);
- hpage = inode->i_sb->s_vop->read_merkle_tree_page(inode,
- hpage_idx, level == 0 ? min(max_ra_pages,
- params->tree_pages - hpage_idx) : 0);
- if (IS_ERR(hpage)) {
- fsverity_err(inode,
- "Error %ld reading Merkle tree page %lu",
- PTR_ERR(hpage), hpage_idx);
+ if (level == 0)
+ ra_bytes = min_t(u64, max_ra_bytes,
+ params->tree_size - hblock_pos);
+ else
+ ra_bytes = 0;
+
+ if (fsverity_read_merkle_tree_block(inode, params, level,
+ hblock_pos, ra_bytes,
+ block) != 0)
goto error;
- }
- haddr = kmap_local_page(hpage) + hblock_offset_in_page;
- if (is_hash_block_verified(vi, hpage, hblock_idx)) {
- memcpy(_want_hash, haddr + hoffset, hsize);
+
+ if (block->verified) {
+ memcpy(_want_hash, block->kaddr + hoffset, hsize);
want_hash = _want_hash;
- kunmap_local(haddr);
- put_page(hpage);
+ fsverity_drop_merkle_tree_block(inode, block);
goto descend;
}
- hblocks[level].page = hpage;
- hblocks[level].addr = haddr;
- hblocks[level].index = hblock_idx;
hblocks[level].hoffset = hoffset;
hidx = next_hidx;
}
want_hash = vi->root_hash;
descend:
/* Descend the tree verifying hash blocks. */
for (; level > 0; level--) {
- struct page *hpage = hblocks[level - 1].page;
- const void *haddr = hblocks[level - 1].addr;
- unsigned long hblock_idx = hblocks[level - 1].index;
+ struct fsverity_blockbuf *block = &hblocks[level - 1].block;
+ const void *haddr = block->kaddr;
unsigned int hoffset = hblocks[level - 1].hoffset;
if (fsverity_hash_block(params, inode, haddr, real_hash) != 0)
goto error;
if (memcmp(want_hash, real_hash, hsize) != 0)
goto corrupted;
- /*
- * Mark the hash block as verified. This must be atomic and
- * idempotent, as the same hash block might be verified by
- * multiple threads concurrently.
- */
- if (vi->hash_block_verified)
- set_bit(hblock_idx, vi->hash_block_verified);
- else
- SetPageChecked(hpage);
+ block->newly_verified = true;
memcpy(_want_hash, haddr + hoffset, hsize);
want_hash = _want_hash;
- kunmap_local(haddr);
- put_page(hpage);
+ fsverity_drop_merkle_tree_block(inode, block);
}
/* Finally, verify the data block. */
if (fsverity_hash_block(params, inode, data, real_hash) != 0)
goto error;
@@ -233,20 +332,18 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
"FILE CORRUPTED! pos=%llu, level=%d, want_hash=%s:%*phN, real_hash=%s:%*phN",
data_pos, level - 1,
params->hash_alg->name, hsize, want_hash,
params->hash_alg->name, hsize, real_hash);
error:
- for (; level > 0; level--) {
- kunmap_local(hblocks[level - 1].addr);
- put_page(hblocks[level - 1].page);
- }
+ for (; level > 0; level--)
+ fsverity_drop_merkle_tree_block(inode, &hblocks[level - 1].block);
return false;
}
static bool
verify_data_blocks(struct folio *data_folio, size_t len, size_t offset,
- unsigned long max_ra_pages)
+ unsigned long max_ra_bytes)
{
struct inode *inode = data_folio->mapping->host;
struct fsverity_info *vi = inode->i_verity_info;
const unsigned int block_size = vi->tree_params.block_size;
u64 pos = (u64)data_folio->index << PAGE_SHIFT;
@@ -260,11 +357,11 @@ verify_data_blocks(struct folio *data_folio, size_t len, size_t offset,
void *data;
bool valid;
data = kmap_local_folio(data_folio, offset);
valid = verify_data_block(inode, vi, data, pos + offset,
- max_ra_pages);
+ max_ra_bytes);
kunmap_local(data);
if (!valid)
return false;
offset += block_size;
len -= block_size;
@@ -306,28 +403,29 @@ EXPORT_SYMBOL_GPL(fsverity_verify_blocks);
* All filesystems must also call fsverity_verify_page() on holes.
*/
void fsverity_verify_bio(struct bio *bio)
{
struct folio_iter fi;
- unsigned long max_ra_pages = 0;
+ unsigned long max_ra_bytes = 0;
if (bio->bi_opf & REQ_RAHEAD) {
/*
* If this bio is for data readahead, then we also do readahead
* of the first (largest) level of the Merkle tree. Namely,
- * when a Merkle tree page is read, we also try to piggy-back on
- * some additional pages -- up to 1/4 the number of data pages.
+ * when there is a cache miss for a Merkle tree block, we try to
+ * piggy-back some additional blocks onto the read, with size up
+ * to 1/4 the size of the data being read.
*
* This improves sequential read performance, as it greatly
* reduces the number of I/O requests made to the Merkle tree.
*/
- max_ra_pages = bio->bi_iter.bi_size >> (PAGE_SHIFT + 2);
+ max_ra_bytes = bio->bi_iter.bi_size >> 2;
}
bio_for_each_folio_all(fi, bio) {
if (!verify_data_blocks(fi.folio, fi.length, fi.offset,
- max_ra_pages)) {
+ max_ra_bytes)) {
bio->bi_status = BLK_STS_IOERR;
break;
}
}
}
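The 1/4 readahead sizing described in the comment above reduces to a single shift. A minimal userspace sketch (the function name and signature are invented for illustration; this is not kernel code):

```c
#include <assert.h>

/*
 * Illustrative model of the Merkle tree readahead sizing in
 * fsverity_verify_bio(): for a REQ_RAHEAD bio, prefetch up to 1/4 of
 * the size of the data being read; otherwise request no extra
 * readahead.  The name merkle_ra_bytes is invented for this sketch.
 */
static unsigned long merkle_ra_bytes(unsigned long bio_bytes, int is_readahead)
{
	if (!is_readahead)
		return 0;
	return bio_bytes >> 2; /* 1/4 of the data read size */
}
```

Note that this is a hint only: per the @ra_bytes documentation later in the patch, implementations of ->read_merkle_tree_block may ignore it entirely.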
diff --git a/include/linux/fsverity.h b/include/linux/fsverity.h
index 1eb7eae580be..2b9137061379 100644
--- a/include/linux/fsverity.h
+++ b/include/linux/fsverity.h
@@ -24,13 +24,77 @@
#define FS_VERITY_MAX_DIGEST_SIZE SHA512_DIGEST_SIZE
/* Arbitrary limit to bound the kmalloc() size. Can be changed. */
#define FS_VERITY_MAX_DESCRIPTOR_SIZE 16384
+/**
+ * struct fsverity_blockbuf - Merkle tree block buffer
+ * @context: filesystem private context
+ * @kaddr: virtual address of the block's data
+ * @index: index of the block in the Merkle tree
+ * @verified: was this block already verified when it was requested?
+ * @newly_verified: was verification of this block just done?
+ *
+ * This struct describes a buffer containing a Merkle tree block. When a Merkle
+ * tree block needs to be read, this struct is passed to the filesystem's
+ * ->read_merkle_tree_block function, with just the @index field set. The
+ * filesystem sets @kaddr, and optionally @context and @verified. Filesystems
+ * must set @verified only if the filesystem was previously told that the same
+ * block was verified (via ->drop_merkle_tree_block() seeing @newly_verified)
+ * and the block wasn't evicted from cache in the intervening time.
+ *
+ * To release the resources acquired by a read, this struct is passed to
+ * ->drop_merkle_tree_block, with @newly_verified set if verification of the
+ * block was just done.
+ */
+struct fsverity_blockbuf {
+ void *context;
+ void *kaddr;
+ unsigned long index;
+ unsigned int verified : 1;
+ unsigned int newly_verified : 1;
+};
+
+/**
+ * struct fsverity_readmerkle - Request to read a Merkle tree block
+ * @inode: inode to which the Merkle tree belongs
+ * @pos: position of the block in the Merkle tree, in bytes
+ * @size: size of the Merkle tree block, in bytes
+ * @digest_size: size of zero_digest, in bytes
+ * @level: level of the block, or FSVERITY_STREAMING_READ to indicate a
+ * streaming read. Level 0 means the leaf level.
+ * @num_levels: total number of levels in the tree
+ * @ra_bytes: number of bytes that should be prefetched starting at @pos if the
+ * block isn't already cached. Implementations may ignore this
+ * argument; it's only a performance optimization.
+ * @zero_digest: hash of a Merkle tree block-sized buffer of zeroes
+ */
+struct fsverity_readmerkle {
+ struct inode *inode;
+ u64 pos;
+ unsigned int size;
+ unsigned int digest_size;
+ int level;
+ int num_levels;
+ unsigned long ra_bytes;
+ const u8 *zero_digest;
+};
+
+#define FSVERITY_STREAMING_READ (-1)
+
/* Verity operations for filesystems */
struct fsverity_operations {
+ /**
+ * This must be set if the filesystem chooses to cache Merkle tree
+ * blocks in the pagecache, i.e. if it uses fsverity_set_block_page()
+ * and fsverity_drop_page_merkle_tree_block(). It causes the allocation
+ * of the bitmap needed by those helper functions when the Merkle tree
+ * block size is less than the page size.
+ */
+ unsigned int uses_page_based_merkle_caching : 1;
+
/**
* Begin enabling verity on the given file.
*
* @filp: a readonly file descriptor for the file
*
@@ -83,29 +147,46 @@ struct fsverity_operations {
*/
int (*get_verity_descriptor)(struct inode *inode, void *buf,
size_t bufsize);
/**
- * Read a Merkle tree page of the given inode.
+ * Read a Merkle tree block of the given inode.
*
- * @inode: the inode
- * @index: 0-based index of the page within the Merkle tree
- * @num_ra_pages: The number of Merkle tree pages that should be
- * prefetched starting at @index if the page at @index
- * isn't already cached. Implementations may ignore this
- * argument; it's only a performance optimization.
+ * @req: read request; see struct fsverity_readmerkle
+ * @block: struct in which the filesystem returns the block.
+ * It also contains the block index.
*
* This can be called at any time on an open verity file. It may be
- * called by multiple processes concurrently, even with the same page.
+ * called by multiple processes concurrently.
+ *
+ * Implementations of this function should cache the Merkle tree blocks
+ * and issue I/O only if the block isn't already cached. The filesystem
+ * can implement a custom cache or use the pagecache based helpers.
+ *
+ * Return: 0 on success, -errno on failure
+ */
+ int (*read_merkle_tree_block)(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block);
+
+ /**
+ * Release a Merkle tree block buffer.
+ *
+ * @inode: the inode the block is being dropped for
+ * @block: the block buffer to release
*
- * Note that this must retrieve a *page*, not necessarily a *block*.
+ * This is called to release a Merkle tree block that was obtained with
+ * ->read_merkle_tree_block(). If multiple reads were nested, the drops
+ * are done in reverse order (to accommodate the use of local kmaps).
*
- * Return: the page on success, ERR_PTR() on failure
+ * If @block->newly_verified is true, then implementations of this
+ * function should cache a flag saying that the block is verified, and
+ * return that flag from later ->read_merkle_tree_block() for the same
+ * block if the block hasn't been evicted from the cache in the
+ * meantime. This avoids unnecessary revalidation of blocks.
*/
- struct page *(*read_merkle_tree_page)(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages);
+ void (*drop_merkle_tree_block)(struct inode *inode,
+ struct fsverity_blockbuf *block);
/**
* Write a Merkle tree block to the given inode.
*
* @inode: the inode for which the Merkle tree is being built
@@ -168,10 +249,15 @@ static inline void fsverity_cleanup_inode(struct inode *inode)
int fsverity_ioctl_read_metadata(struct file *filp, const void __user *uarg);
/* verify.c */
+void fsverity_set_block_page(const struct fsverity_readmerkle *req,
+ struct fsverity_blockbuf *block,
+ struct page *page);
+void fsverity_drop_page_merkle_tree_block(struct inode *inode,
+ struct fsverity_blockbuf *block);
bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset);
void fsverity_verify_bio(struct bio *bio);
void fsverity_enqueue_verify_work(struct work_struct *work);
#else /* !CONFIG_FS_VERITY */
base-commit: a5131c3fdf2608f1c15f3809e201cf540eb28489
--
2.45.0
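For readers following the new API: when the Merkle tree block size is smaller than the page size, the per-block verified state that @verified/@newly_verified describe is naturally a bitmap with several bits per page, which is what the uses_page_based_merkle_caching flag allocates. A minimal userspace model of that bookkeeping (all names here are invented for the sketch and are not the kernel helpers):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG_SK (sizeof(unsigned long) * CHAR_BIT)

/* One bit per Merkle tree block; setting a bit is idempotent, since
 * multiple threads may verify the same block concurrently. */
static unsigned long verified_bitmap[64];

/* Record that a block was just verified (safe to call repeatedly). */
static void mark_block_verified(unsigned long block_index)
{
	verified_bitmap[block_index / BITS_PER_LONG_SK] |=
		1UL << (block_index % BITS_PER_LONG_SK);
}

/* Later reads consult the bit to skip re-hashing a still-cached block. */
static int block_is_verified(unsigned long block_index)
{
	return (verified_bitmap[block_index / BITS_PER_LONG_SK] >>
		(block_index % BITS_PER_LONG_SK)) & 1;
}
```

In the patch, this corresponds to @newly_verified being reported to the filesystem via ->drop_merkle_tree_block(), and @verified being set on a later ->read_merkle_tree_block() only if the block was not evicted from the cache in the meantime.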
^ permalink raw reply related [relevance 4%]
* KASAN: use-after-free in ext4_find_extent in v6.9
@ 2024-05-15 0:40 1% Shuangpeng Bai
0 siblings, 0 replies; 200+ results
From: Shuangpeng Bai @ 2024-05-15 0:40 UTC (permalink / raw)
To: tytso, adilger.kernel; +Cc: linux-ext4, linux-kernel, syzkaller
[-- Attachment #1: Type: text/plain, Size: 17296 bytes --]
Hi Kernel Maintainers,
Our tool found a kernel bug: KASAN use-after-free in ext4_find_extent. Please see the details below.
Kernel commit: v6.9 (Commits on May 12, 2024)
Kernel config: attachment
C/Syz reproducer: attachment
We found that this bug was previously reported and marked as fixed. (https://syzkaller.appspot.com/bug?extid=7ec4ebe875a7076ebb31)
Our reproducer can still trigger this bug in v6.9, so the bug may not have been fixed correctly.
Please let me know if there is anything I can help with.
Best,
Shuangpeng
[ 104.471062][ T1049] ==================================================================
[ 104.473279][ T1049] BUG: KASAN: use-after-free in ext4_find_extent (fs/ext4/extents.c:837 fs/ext4/extents.c:953)
[ 104.475224][ T1049] Read of size 4 at addr ffff88815aec5d24 by task kworker/u10:7/1049
[ 104.477244][ T1049]
[ 104.477808][ T1049] CPU: 1 PID: 1049 Comm: kworker/u10:7 Not tainted 6.9.0 #7
[ 104.479677][ T1049] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 104.481942][ T1049] Workqueue: ext4-rsv-conversion ext4_end_io_rsv_work
[ 104.483662][ T1049] Call Trace:
[ 104.484507][ T1049] <TASK>
[ 104.485281][ T1049] dump_stack_lvl (lib/dump_stack.c:117)
[ 104.487750][ T1049] print_report (mm/kasan/report.c:378 mm/kasan/report.c:488)
[ 104.488874][ T1049] ? __phys_addr (arch/x86/mm/physaddr.c:32 (discriminator 4))
[ 104.490057][ T1049] ? ext4_find_extent (fs/ext4/extents.c:837 fs/ext4/extents.c:953)
[ 104.491357][ T1049] kasan_report (mm/kasan/report.c:603)
[ 104.492441][ T1049] ? ext4_find_extent (fs/ext4/extents.c:837 fs/ext4/extents.c:953)
[ 104.493455][ T1049] ext4_find_extent (fs/ext4/extents.c:837 fs/ext4/extents.c:953)
[ 104.494504][ T1049] ext4_ext_map_blocks (fs/ext4/extents.c:4144)
[ 104.495628][ T1049] ? preempt_count_add (./include/linux/ftrace.h:974 kernel/sched/core.c:5852 kernel/sched/core.c:5849 kernel/sched/core.c:5877)
[ 104.496730][ T1049] ? __pfx_copy_page_from_iter_atomic (lib/iov_iter.c:462)
[ 104.498034][ T1049] ? const_folio_flags.constprop.0 (./include/linux/page-flags.h:316)
[ 104.499327][ T1049] ? noop_dirty_folio (mm/page-writeback.c:2650)
[ 104.500338][ T1049] ? folio_flags.constprop.0 (./include/linux/page-flags.h:325)
[ 104.501532][ T1049] ? inode_to_bdi (mm/backing-dev.c:1097)
[ 104.502518][ T1049] ? __pfx_ext4_ext_map_blocks (fs/ext4/extents.c:4128)
[ 104.503705][ T1049] ? shmem_write_end (mm/shmem.c:2783)
[ 104.504958][ T1049] ? generic_perform_write (mm/filemap.c:3938)
[ 104.506371][ T1049] ? __pfx_generic_perform_write (mm/filemap.c:3938)
[ 104.507787][ T1049] ? percpu_counter_add_batch (./arch/x86/include/asm/irqflags.h:42 ./arch/x86/include/asm/irqflags.h:77 ./arch/x86/include/asm/irqflags.h:135 lib/percpu_counter.c:102)
[ 104.509268][ T1049] ? down_write (./arch/x86/include/asm/preempt.h:103 kernel/locking/rwsem.c:1309 kernel/locking/rwsem.c:1315 kernel/locking/rwsem.c:1580)
[ 104.510458][ T1049] ? __pfx_down_write (kernel/locking/rwsem.c:1577)
[ 104.511700][ T1049] ext4_map_blocks (fs/ext4/inode.c:637)
[ 104.512996][ T1049] ? __pfx_ext4_map_blocks (fs/ext4/inode.c:481)
[ 104.514325][ T1049] ? ext4_journal_check_start (fs/ext4/ext4_jbd2.c:88)
[ 104.515792][ T1049] ? __ext4_journal_start_sb (fs/ext4/ext4_jbd2.c:114)
[ 104.517222][ T1049] ? ext4_convert_unwritten_extents (fs/ext4/extents.c:4840)
[ 104.518882][ T1049] ext4_convert_unwritten_extents (fs/ext4/extents.c:4847)
[ 104.520471][ T1049] ? __pfx_ext4_convert_unwritten_extents (fs/ext4/extents.c:4818)
[ 104.522137][ T1049] ? wakeup_preempt (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/thread_info.h:118 ./include/linux/sched.h:1952 ./include/linux/sched.h:1967 kernel/sched/core.c:2248)
[ 104.523257][ T1049] ext4_convert_unwritten_io_end_vec (fs/ext4/extents.c:4887)
[ 104.524747][ T1049] ? try_to_wake_up (./arch/x86/include/asm/preempt.h:103 ./include/linux/preempt.h:480 ./include/linux/preempt.h:480 kernel/sched/core.c:4233)
[ 104.525878][ T1049] ext4_end_io_rsv_work (fs/ext4/page-io.c:187 fs/ext4/page-io.c:259 fs/ext4/page-io.c:273)
[ 104.527018][ T1049] ? __pfx_ext4_end_io_rsv_work (fs/ext4/page-io.c:270)
[ 104.528352][ T1049] ? kick_pool (kernel/workqueue.c:1290)
[ 104.529398][ T1049] process_one_work (kernel/workqueue.c:3272)
[ 104.530571][ T1049] ? kthread_data (kernel/kthread.c:77 kernel/kthread.c:244)
[ 104.531647][ T1049] worker_thread (kernel/workqueue.c:3342 kernel/workqueue.c:3429)
[ 104.532769][ T1049] ? __kthread_parkme (kernel/kthread.c:293)
[ 104.533912][ T1049] ? __pfx_worker_thread (kernel/workqueue.c:3375)
[ 104.535148][ T1049] kthread (kernel/kthread.c:388)
[ 104.536104][ T1049] ? __pfx_kthread (kernel/kthread.c:341)
[ 104.537159][ T1049] ret_from_fork (arch/x86/kernel/process.c:153)
[ 104.538230][ T1049] ? __pfx_kthread (kernel/kthread.c:341)
[ 104.539234][ T1049] ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
[ 104.540355][ T1049] </TASK>
[ 104.541051][ T1049]
[ 104.541606][ T1049] The buggy address belongs to the physical page:
[ 104.543248][ T1049] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x15aec5
[ 104.545380][ T1049] flags: 0x57ff00000000000(node=1|zone=2|lastcpupid=0x7ff)
[ 104.547104][ T1049] page_type: 0xffffffff()
[ 104.548186][ T1049] raw: 057ff00000000000 ffffea00056bb088 ffffea00056bb1c8 0000000000000000
[ 104.550181][ T1049] raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
[ 104.552298][ T1049] page dumped because: kasan: bad access detected
[ 104.553716][ T1049] page_owner tracks the page as freed
[ 104.554946][ T1049] page last allocated via order 0, migratetype Movable, gfp_mask 0x141cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP|__GFP_WRITE), pid 8103, tgid 8102 (4
[ 104.559217][ T1049] post_alloc_hook (./include/linux/page_owner.h:32 mm/page_alloc.c:1534)
[ 104.560336][ T1049] get_page_from_freelist (mm/page_alloc.c:1543 mm/page_alloc.c:3317)
[ 104.561656][ T1049] __alloc_pages (mm/page_alloc.c:4576)
[ 104.562758][ T1049] alloc_pages_mpol (mm/mempolicy.c:2266)
[ 104.563885][ T1049] folio_alloc (mm/mempolicy.c:2342)
[ 104.564870][ T1049] filemap_alloc_folio (mm/filemap.c:984)
[ 104.566055][ T1049] __filemap_get_folio (mm/filemap.c:1927)
[ 104.567272][ T1049] ext4_write_begin (fs/ext4/inode.c:1161)
[ 104.568419][ T1049] ext4_da_write_begin (fs/ext4/inode.c:2869)
[ 104.569641][ T1049] generic_perform_write (mm/filemap.c:3976)
[ 104.570938][ T1049] ext4_buffered_write_iter (./include/linux/fs.h:800 fs/ext4/file.c:302)
[ 104.572260][ T1049] ext4_file_write_iter (fs/ext4/file.c:698)
[ 104.573498][ T1049] vfs_write (fs/read_write.c:498 fs/read_write.c:590)
[ 104.574510][ T1049] ksys_write (fs/read_write.c:644)
[ 104.575533][ T1049] do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
[ 104.576688][ T1049] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
[ 104.578060][ T1049] page last free pid 8131 tgid 8102 stack trace:
[ 104.579478][ T1049] free_unref_page_prepare (./include/linux/page_owner.h:25 mm/page_alloc.c:1141 mm/page_alloc.c:2347)
[ 104.580787][ T1049] free_unref_folios (mm/page_alloc.c:2536)
[ 104.581977][ T1049] folios_put_refs (mm/swap.c:1034)
[ 104.583141][ T1049] truncate_inode_pages_range (./include/linux/sched.h:1988 mm/truncate.c:363)
[ 104.584525][ T1049] ext4_punch_hole (fs/ext4/ext4.h:1936 fs/ext4/inode.c:3964)
[ 104.585727][ T1049] ext4_fallocate (fs/ext4/extents.c:4803)
[ 104.586820][ T1049] vfs_fallocate (fs/open.c:339)
[ 104.587933][ T1049] __x64_sys_fallocate (./include/linux/file.h:47 fs/open.c:354 fs/open.c:361 fs/open.c:359 fs/open.c:359)
[ 104.589136][ T1049] do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
[ 104.590202][ T1049] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
[ 104.591635][ T1049]
[ 104.592637][ T1049] Memory state around the buggy address:
[ 104.594014][ T1049] ffff88815aec5c00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.595931][ T1049] ffff88815aec5c80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.597833][ T1049] >ffff88815aec5d00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.599718][ T1049] ^
[ 104.600903][ T1049] ffff88815aec5d80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.602821][ T1049] ffff88815aec5e00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[ 104.604620][ T1049] ==================================================================
[ 104.607028][ T8098] EXT4-fs (loop1): This should not happen!! Data will be lost
[ 104.607028][ T8098]
[ 104.610469][ T1048] EXT4-fs warning (device loop1): ext4_convert_unwritten_extents:4848: inode #15: block 1: len 1: ext4_ext_map_blocks returned -117
[ 104.613454][ T1048] EXT4-fs error (device loop1) in ext4_reserve_inode_write:5738: Corrupt filesystem
[ 104.615714][ T1048] EXT4-fs error (device loop1): ext4_convert_unwritten_extents:4853: inode #15: comm kworker/u10:6: mark_inode_dirty error
[ 104.618529][ T1048] EXT4-fs (loop1): failed to convert unwritten extents to written extents -- potential data loss! (inode 15, error -117)
[ 104.623679][ T8099] EXT4-fs (loop2): Delayed block allocation failed for inode 15 at logical offset 16 with max blocks 184 with error 117
[ 104.624339][ T8132] ------------[ cut here ]------------
[ 104.626580][ T8099] EXT4-fs (loop2): This should not happen!! Data will be lost
[ 104.626580][ T8099]
[ 104.627413][ T8132] kernel BUG at fs/ext4/extents.c:3180!
[ 104.630527][ T8132] invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
[ 104.631866][ T8132] CPU: 0 PID: 8132 Comm: a.out Not tainted 6.9.0 #7
[ 104.633183][ T8132] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 104.635028][ T8132] RIP: 0010:ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.636331][ T8132] Code: 48 c7 c7 80 b8 80 8a 48 8b 54 24 08 0f b7 43 08 4c 8d 04 40 49 c1 e0 04 49 01 d8 e8 ba 59 ff ff e9 e3 fc ff ff e8 90 7e 58 ff <0f> 0b e8f
All code
========
0: 48 c7 c7 80 b8 80 8a mov $0xffffffff8a80b880,%rdi
7: 48 8b 54 24 08 mov 0x8(%rsp),%rdx
c: 0f b7 43 08 movzwl 0x8(%rbx),%eax
10: 4c 8d 04 40 lea (%rax,%rax,2),%r8
14: 49 c1 e0 04 shl $0x4,%r8
18: 49 01 d8 add %rbx,%r8
1b: e8 ba 59 ff ff call 0xffffffffffff59da
20: e9 e3 fc ff ff jmp 0xfffffffffffffd08
25: e8 90 7e 58 ff call 0xffffffffff587eba
2a:* 0f 0b ud2 <-- trapping instruction
2c: 8f .byte 0x8f
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: 8f .byte 0x8f
[ 104.641847][ T8132] RSP: 0018:ffffc90003f5f9b0 EFLAGS: 00010293
[ 104.643350][ T8132] RAX: 0000000000000000 RBX: 000000000000003f RCX: ffffffff822bcfe1
[ 104.645037][ T8132] RDX: ffff88801ed2c900 RSI: ffffffff822bd5c0 RDI: 0000000000000004
[ 104.646994][ T8132] RBP: ffff88801c30f630 R08: 0000000000000004 R09: 0000000000000000
[ 104.648792][ T8132] R10: 000000000000003f R11: ffff888020ebd6e8 R12: ffff88815ba75428
[ 104.650663][ T8132] R13: 0000000000000000 R14: 0000000000000000 R15: ffff88815836b988
[ 104.652253][ T8132] FS: 00007f6bdd2cb700(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
[ 104.653844][ T8132] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 104.655856][ T8132] CR2: 000000002003d000 CR3: 0000000016fca000 CR4: 00000000000006f0
[ 104.657513][ T8132] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 104.658902][ T8132] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 104.660242][ T8132] Call Trace:
[ 104.660768][ T8132] <TASK>
[ 104.661232][ T8132] ? show_regs (arch/x86/kernel/dumpstack.c:479)
[ 104.661907][ T8132] ? die (arch/x86/kernel/dumpstack.c:421 arch/x86/kernel/dumpstack.c:434 arch/x86/kernel/dumpstack.c:447)
[ 104.662496][ T8132] ? do_trap (arch/x86/kernel/traps.c:114 arch/x86/kernel/traps.c:155)
[ 104.668396][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.669608][ T8132] ? do_error_trap+0xdc/0x150
[ 104.670642][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.671820][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.673321][ T8132] ? handle_invalid_op (arch/x86/kernel/traps.c:214)
[ 104.674485][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.675775][ T8132] ? exc_invalid_op (arch/x86/kernel/traps.c:267)
[ 104.676905][ T8132] ? asm_exc_invalid_op (./arch/x86/include/asm/idtentry.h:621)
[ 104.678234][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 2))
[ 104.679475][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.680790][ T8132] ? ext4_split_extent_at (fs/ext4/extents.c:3180 (discriminator 3))
[ 104.682020][ T8132] ? __read_extent_tree_block (fs/ext4/extents.c:590)
[ 104.683283][ T8132] ? __pfx_ext4_split_extent_at (fs/ext4/extents.c:3158)
[ 104.684482][ T8132] ? ext4_find_extent (fs/ext4/extents.c:967)
[ 104.685519][ T8132] ext4_ext_remove_space (fs/ext4/extents.c:2877)
[ 104.686615][ T8132] ? __pfx__raw_write_lock (kernel/locking/spinlock.c:299)
[ 104.687699][ T8132] ? __pfx__ext4_get_block (fs/ext4/inode.c:755)
[ 104.688773][ T8132] ? _raw_write_unlock (./arch/x86/include/asm/preempt.h:103 ./include/linux/rwlock_api_smp.h:226 kernel/locking/spinlock.c:342)
[ 104.689781][ T8132] ? ext4_discard_preallocations (fs/ext4/mballoc.c:5504)
[ 104.690958][ T8132] ? __pfx__raw_write_lock (kernel/locking/spinlock.c:299)
[ 104.692029][ T8132] ? ext4_da_release_space (fs/ext4/inode.c:1488)
[ 104.693114][ T8132] ? __pfx_ext4_ext_remove_space (fs/ext4/extents.c:2791)
[ 104.694249][ T8132] ? __pfx_ext4_es_remove_extent (fs/ext4/extents_status.c:1497)
[ 104.695404][ T8132] ? __pfx_down_write (kernel/locking/rwsem.c:1577)
[ 104.696407][ T8132] ? __ext4_journal_start_sb (fs/ext4/ext4_jbd2.c:110)
[ 104.697539][ T8132] ext4_punch_hole (fs/ext4/inode.c:3994)
[ 104.698502][ T8132] ? __pfx_rwsem_wake.isra.0 (kernel/locking/rwsem.c:1203)
[ 104.699566][ T8132] ext4_fallocate (fs/ext4/extents.c:4803)
[ 104.700515][ T8132] ? __pfx_ext4_fallocate (fs/ext4/extents.c:4709)
[ 104.701541][ T8132] ? avc_policy_seqno (security/selinux/avc.c:1205)
[ 104.702502][ T8132] ? selinux_file_permission (security/selinux/hooks.c:3643)
[ 104.703662][ T8132] ? __pfx_ext4_fallocate (fs/ext4/extents.c:4709)
[ 104.704710][ T8132] vfs_fallocate (fs/open.c:339)
[ 104.705647][ T8132] __x64_sys_fallocate (./include/linux/file.h:47 fs/open.c:354 fs/open.c:361 fs/open.c:359 fs/open.c:359)
[ 104.706660][ T8132] do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
[ 104.707607][ T8132] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
[ 104.708804][ T8132] RIP: 0033:0x7f6bdd40873d
[ 104.709686][ T8132] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d8
All code
========
0: 00 c3 add %al,%bl
2: 66 2e 0f 1f 84 00 00 cs nopw 0x0(%rax,%rax,1)
9: 00 00 00
c: 90 nop
d: f3 0f 1e fa endbr64
11: 48 89 f8 mov %rdi,%rax
14: 48 89 f7 mov %rsi,%rdi
17: 48 89 d6 mov %rdx,%rsi
1a: 48 89 ca mov %rcx,%rdx
1d: 4d 89 c2 mov %r8,%r10
20: 4d 89 c8 mov %r9,%r8
23: 4c 8b 4c 24 08 mov 0x8(%rsp),%r9
28: 0f 05 syscall
2a:* 48 rex.W <-- trapping instruction
2b: d8 .byte 0xd8
Code starting with the faulting instruction
===========================================
0: 48 rex.W
1: d8 .byte 0xd8
[ 104.713554][ T8132] RSP: 002b:00007f6bdd2cae98 EFLAGS: 00000207 ORIG_RAX: 000000000000011d
[ 104.715201][ T8132] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6bdd40873d
[ 104.716792][ T8132] RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000005
[ 104.718366][ T8132] RBP: 00007f6bdd2caec0 R08: 00007f6bdd2cb700 R09: 0000000000000000
[ 104.719961][ T8132] R10: 000000000000ffff R11: 0000000000000207 R12: 00007ffec136fe7e
[ 104.721498][ T8132] R13: 00007ffec136fe7f R14: 00007ffec136ff20 R15: 00007f6bdd2cafc0
[ 104.723020][ T8132] </TASK>
[ 104.723652][ T8132] Modules linked in:
[ 104.728923][ T1049] Kernel panic - not syncing: KASAN: panic_on_warn set ...
[ 104.730764][ T1049] Kernel Offset: disabled
[ 104.731710][ T1049] Rebooting in 86400 seconds..
[-- Attachment #2: repro.c --]
[-- Type: application/octet-stream, Size: 621958 bytes --]
// autogenerated by syzkaller (https://github.com/google/syzkaller)
#define _GNU_SOURCE
#include <dirent.h>
#include <endian.h>
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/mount.h>
#include <sys/prctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>
#include <linux/futex.h>
#include <linux/loop.h>
#ifndef __NR_memfd_create
#define __NR_memfd_create 319
#endif
#ifndef __NR_userfaultfd
#define __NR_userfaultfd 323
#endif
static unsigned long long procid;
static void sleep_ms(uint64_t ms)
{
usleep(ms * 1000);
}
static uint64_t current_time_ms(void)
{
struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts))
exit(1);
return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
}
static void thread_start(void* (*fn)(void*), void* arg)
{
pthread_t th;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, 128 << 10);
int i = 0;
for (; i < 100; i++) {
if (pthread_create(&th, &attr, fn, arg) == 0) {
pthread_attr_destroy(&attr);
return;
}
if (errno == EAGAIN) {
usleep(50);
continue;
}
break;
}
exit(1);
}
typedef struct {
int state;
} event_t;
static void event_init(event_t* ev)
{
ev->state = 0;
}
static void event_reset(event_t* ev)
{
ev->state = 0;
}
static void event_set(event_t* ev)
{
if (ev->state)
exit(1);
__atomic_store_n(&ev->state, 1, __ATOMIC_RELEASE);
syscall(SYS_futex, &ev->state, FUTEX_WAKE | FUTEX_PRIVATE_FLAG, 1000000);
}
static void event_wait(event_t* ev)
{
while (!__atomic_load_n(&ev->state, __ATOMIC_ACQUIRE))
syscall(SYS_futex, &ev->state, FUTEX_WAIT | FUTEX_PRIVATE_FLAG, 0, 0);
}
static int event_isset(event_t* ev)
{
return __atomic_load_n(&ev->state, __ATOMIC_ACQUIRE);
}
static int event_timedwait(event_t* ev, uint64_t timeout)
{
uint64_t start = current_time_ms();
uint64_t now = start;
for (;;) {
uint64_t remain = timeout - (now - start);
struct timespec ts;
ts.tv_sec = remain / 1000;
ts.tv_nsec = (remain % 1000) * 1000 * 1000;
syscall(SYS_futex, &ev->state, FUTEX_WAIT | FUTEX_PRIVATE_FLAG, 0, &ts);
if (__atomic_load_n(&ev->state, __ATOMIC_ACQUIRE))
return 1;
now = current_time_ms();
if (now - start > timeout)
return 0;
}
}
static bool write_file(const char* file, const char* what, ...)
{
char buf[1024];
va_list args;
va_start(args, what);
vsnprintf(buf, sizeof(buf), what, args);
va_end(args);
buf[sizeof(buf) - 1] = 0;
int len = strlen(buf);
int fd = open(file, O_WRONLY | O_CLOEXEC);
if (fd == -1)
return false;
if (write(fd, buf, len) != len) {
int err = errno;
close(fd);
errno = err;
return false;
}
close(fd);
return true;
}
static long syz_open_dev(volatile long a0, volatile long a1, volatile long a2)
{
if (a0 == 0xc || a0 == 0xb) {
char buf[128];
sprintf(buf, "/dev/%s/%d:%d", a0 == 0xc ? "char" : "block", (uint8_t)a1,
(uint8_t)a2);
return open(buf, O_RDWR, 0);
} else {
char buf[1024];
char* hash;
strncpy(buf, (char*)a0, sizeof(buf) - 1);
buf[sizeof(buf) - 1] = 0;
while ((hash = strchr(buf, '#'))) {
*hash = '0' + (char)(a1 % 10);
a1 /= 10;
}
return open(buf, a2, 0);
}
}
//% This code is derived from puff.{c,h}, found in the zlib development. The
//% original files come with the following copyright notice:
//% Copyright (C) 2002-2013 Mark Adler, all rights reserved
//% version 2.3, 21 Jan 2013
//% This software is provided 'as-is', without any express or implied
//% warranty. In no event will the author be held liable for any damages
//% arising from the use of this software.
//% Permission is granted to anyone to use this software for any purpose,
//% including commercial applications, and to alter it and redistribute it
//% freely, subject to the following restrictions:
//% 1. The origin of this software must not be misrepresented; you must not
//% claim that you wrote the original software. If you use this software
//% in a product, an acknowledgment in the product documentation would be
//% appreciated but is not required.
//% 2. Altered source versions must be plainly marked as such, and must not be
//% misrepresented as being the original software.
//% 3. This notice may not be removed or altered from any source distribution.
//% Mark Adler madler@alumni.caltech.edu
//% BEGIN CODE DERIVED FROM puff.{c,h}
#define MAXBITS 15
#define MAXLCODES 286
#define MAXDCODES 30
#define MAXCODES (MAXLCODES + MAXDCODES)
#define FIXLCODES 288
struct puff_state {
unsigned char* out;
unsigned long outlen;
unsigned long outcnt;
const unsigned char* in;
unsigned long inlen;
unsigned long incnt;
int bitbuf;
int bitcnt;
jmp_buf env;
};
static int puff_bits(struct puff_state* s, int need)
{
long val = s->bitbuf;
while (s->bitcnt < need) {
if (s->incnt == s->inlen)
longjmp(s->env, 1);
val |= (long)(s->in[s->incnt++]) << s->bitcnt;
s->bitcnt += 8;
}
s->bitbuf = (int)(val >> need);
s->bitcnt -= need;
return (int)(val & ((1L << need) - 1));
}
static int puff_stored(struct puff_state* s)
{
s->bitbuf = 0;
s->bitcnt = 0;
if (s->incnt + 4 > s->inlen)
return 2;
unsigned len = s->in[s->incnt++];
len |= s->in[s->incnt++] << 8;
if (s->in[s->incnt++] != (~len & 0xff) ||
s->in[s->incnt++] != ((~len >> 8) & 0xff))
return -2;
if (s->incnt + len > s->inlen)
return 2;
if (s->outcnt + len > s->outlen)
return 1;
for (; len--; s->outcnt++, s->incnt++) {
if (s->in[s->incnt])
s->out[s->outcnt] = s->in[s->incnt];
}
return 0;
}
struct puff_huffman {
short* count;
short* symbol;
};
static int puff_decode(struct puff_state* s, const struct puff_huffman* h)
{
int first = 0;
int index = 0;
int bitbuf = s->bitbuf;
int left = s->bitcnt;
int code = first = index = 0;
int len = 1;
short* next = h->count + 1;
while (1) {
while (left--) {
code |= bitbuf & 1;
bitbuf >>= 1;
int count = *next++;
if (code - count < first) {
s->bitbuf = bitbuf;
s->bitcnt = (s->bitcnt - len) & 7;
return h->symbol[index + (code - first)];
}
index += count;
first += count;
first <<= 1;
code <<= 1;
len++;
}
left = (MAXBITS + 1) - len;
if (left == 0)
break;
if (s->incnt == s->inlen)
longjmp(s->env, 1);
bitbuf = s->in[s->incnt++];
if (left > 8)
left = 8;
}
return -10;
}
static int puff_construct(struct puff_huffman* h, const short* length, int n)
{
int len;
for (len = 0; len <= MAXBITS; len++)
h->count[len] = 0;
int symbol;
for (symbol = 0; symbol < n; symbol++)
(h->count[length[symbol]])++;
if (h->count[0] == n)
return 0;
int left = 1;
for (len = 1; len <= MAXBITS; len++) {
left <<= 1;
left -= h->count[len];
if (left < 0)
return left;
}
short offs[MAXBITS + 1];
offs[1] = 0;
for (len = 1; len < MAXBITS; len++)
offs[len + 1] = offs[len] + h->count[len];
for (symbol = 0; symbol < n; symbol++)
if (length[symbol] != 0)
h->symbol[offs[length[symbol]]++] = symbol;
return left;
}
static int puff_codes(struct puff_state* s, const struct puff_huffman* lencode,
const struct puff_huffman* distcode)
{
static const short lens[29] = {3, 4, 5, 6, 7, 8, 9, 10, 11, 13,
15, 17, 19, 23, 27, 31, 35, 43, 51, 59,
67, 83, 99, 115, 131, 163, 195, 227, 258};
static const short lext[29] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2,
2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0};
static const short dists[30] = {
1, 2, 3, 4, 5, 7, 9, 13, 17, 25,
33, 49, 65, 97, 129, 193, 257, 385, 513, 769,
1025, 1537, 2049, 3073, 4097, 6145, 8193, 12289, 16385, 24577};
static const short dext[30] = {0, 0, 0, 0, 1, 1, 2, 2, 3, 3,
4, 4, 5, 5, 6, 6, 7, 7, 8, 8,
9, 9, 10, 10, 11, 11, 12, 12, 13, 13};
int symbol;
do {
symbol = puff_decode(s, lencode);
if (symbol < 0)
return symbol;
if (symbol < 256) {
if (s->outcnt == s->outlen)
return 1;
if (symbol)
s->out[s->outcnt] = symbol;
s->outcnt++;
} else if (symbol > 256) {
symbol -= 257;
if (symbol >= 29)
return -10;
int len = lens[symbol] + puff_bits(s, lext[symbol]);
symbol = puff_decode(s, distcode);
if (symbol < 0)
return symbol;
unsigned dist = dists[symbol] + puff_bits(s, dext[symbol]);
if (dist > s->outcnt)
return -11;
if (s->outcnt + len > s->outlen)
return 1;
while (len--) {
if (dist <= s->outcnt && s->out[s->outcnt - dist])
s->out[s->outcnt] = s->out[s->outcnt - dist];
s->outcnt++;
}
}
} while (symbol != 256);
return 0;
}
/* Process a fixed-codes block; the static length/distance tables are built
 * once on first use. */
static int puff_fixed(struct puff_state* s)
{
static int virgin = 1;
static short lencnt[MAXBITS + 1], lensym[FIXLCODES];
static short distcnt[MAXBITS + 1], distsym[MAXDCODES];
static struct puff_huffman lencode, distcode;
if (virgin) {
lencode.count = lencnt;
lencode.symbol = lensym;
distcode.count = distcnt;
distcode.symbol = distsym;
short lengths[FIXLCODES];
int symbol;
for (symbol = 0; symbol < 144; symbol++)
lengths[symbol] = 8;
for (; symbol < 256; symbol++)
lengths[symbol] = 9;
for (; symbol < 280; symbol++)
lengths[symbol] = 7;
for (; symbol < FIXLCODES; symbol++)
lengths[symbol] = 8;
puff_construct(&lencode, lengths, FIXLCODES);
for (symbol = 0; symbol < MAXDCODES; symbol++)
lengths[symbol] = 5;
puff_construct(&distcode, lengths, MAXDCODES);
virgin = 0;
}
return puff_codes(s, &lencode, &distcode);
}
/* Process a dynamic-codes block: read the code length code lengths, build
 * the literal/length and distance tables, then decode the block data. */
static int puff_dynamic(struct puff_state* s)
{
static const short order[19] = {16, 17, 18, 0, 8, 7, 9, 6, 10, 5,
11, 4, 12, 3, 13, 2, 14, 1, 15};
int nlen = puff_bits(s, 5) + 257;
int ndist = puff_bits(s, 5) + 1;
int ncode = puff_bits(s, 4) + 4;
if (nlen > MAXLCODES || ndist > MAXDCODES)
return -3;
short lengths[MAXCODES];
int index;
for (index = 0; index < ncode; index++)
lengths[order[index]] = puff_bits(s, 3);
for (; index < 19; index++)
lengths[order[index]] = 0;
short lencnt[MAXBITS + 1], lensym[MAXLCODES];
struct puff_huffman lencode = {lencnt, lensym};
int err = puff_construct(&lencode, lengths, 19);
if (err != 0)
return -4;
index = 0;
while (index < nlen + ndist) {
int symbol;
int len;
symbol = puff_decode(s, &lencode);
if (symbol < 0)
return symbol;
if (symbol < 16)
lengths[index++] = symbol;
else {
len = 0;
if (symbol == 16) {
if (index == 0)
return -5;
len = lengths[index - 1];
symbol = 3 + puff_bits(s, 2);
} else if (symbol == 17)
symbol = 3 + puff_bits(s, 3);
else
symbol = 11 + puff_bits(s, 7);
if (index + symbol > nlen + ndist)
return -6;
while (symbol--)
lengths[index++] = len;
}
}
if (lengths[256] == 0)
return -9;
err = puff_construct(&lencode, lengths, nlen);
if (err && (err < 0 || nlen != lencode.count[0] + lencode.count[1]))
return -7;
short distcnt[MAXBITS + 1], distsym[MAXDCODES];
struct puff_huffman distcode = {distcnt, distsym};
err = puff_construct(&distcode, lengths + nlen, ndist);
if (err && (err < 0 || ndist != distcode.count[0] + distcode.count[1]))
return -8;
return puff_codes(s, &lencode, &distcode);
}
/* Inflate the raw deflate stream in source[0..sourcelen) into dest; on
 * return *destlen holds the number of bytes produced. Returns 0 on success,
 * 1 if the output buffer filled, 2 if the input ran out, or a negative code
 * for an invalid stream. */
static int puff(unsigned char* dest, unsigned long* destlen,
const unsigned char* source, unsigned long sourcelen)
{
struct puff_state s = {
.out = dest,
.outlen = *destlen,
.outcnt = 0,
.in = source,
.inlen = sourcelen,
.incnt = 0,
.bitbuf = 0,
.bitcnt = 0,
};
int err;
if (setjmp(s.env) != 0)
err = 2; /* input ran out: longjmp() from the bit-reading helpers */
else {
int last;
do {
last = puff_bits(&s, 1);
int type = puff_bits(&s, 2);
err = type == 0 ? puff_stored(&s)
: (type == 1 ? puff_fixed(&s)
: (type == 2 ? puff_dynamic(&s) : -1));
if (err != 0)
break;
} while (!last);
}
*destlen = s.outcnt;
return err;
}
//% END CODE DERIVED FROM puff.{c,h}
#define ZLIB_HEADER_WIDTH 2 /* CMF and FLG bytes of the zlib header */
static int puff_zlib_to_file(const unsigned char* source,
unsigned long sourcelen, int dest_fd)
{
if (sourcelen < ZLIB_HEADER_WIDTH)
return 0;
source += ZLIB_HEADER_WIDTH;
sourcelen -= ZLIB_HEADER_WIDTH;
const unsigned long max_destlen = 132 << 20;
void* ret = mmap(0, max_destlen, PROT_WRITE | PROT_READ,
MAP_PRIVATE | MAP_ANON, -1, 0);
if (ret == MAP_FAILED)
return -1;
unsigned char* dest = (unsigned char*)ret;
unsigned long destlen = max_destlen;
int err = puff(dest, &destlen, source, sourcelen);
if (err) {
munmap(dest, max_destlen);
errno = -err;
return -1;
}
if (write(dest_fd, dest, destlen) != (ssize_t)destlen) {
munmap(dest, max_destlen);
return -1;
}
return munmap(dest, max_destlen);
}
/* Decompress the zlib-packed image into a memfd and attach it to the given
 * loop device; on success *loopfd_p holds the open loop fd. */
static int setup_loop_device(unsigned char* data, unsigned long size,
const char* loopname, int* loopfd_p)
{
int err = 0, loopfd = -1;
int memfd = syscall(__NR_memfd_create, "syzkaller", 0);
if (memfd == -1) {
err = errno;
goto error;
}
if (puff_zlib_to_file(data, size, memfd)) {
err = errno;
goto error_close_memfd;
}
loopfd = open(loopname, O_RDWR);
if (loopfd == -1) {
err = errno;
goto error_close_memfd;
}
if (ioctl(loopfd, LOOP_SET_FD, memfd)) {
if (errno != EBUSY) {
err = errno;
goto error_close_loop;
}
ioctl(loopfd, LOOP_CLR_FD, 0);
usleep(1000);
if (ioctl(loopfd, LOOP_SET_FD, memfd)) {
err = errno;
goto error_close_loop;
}
}
close(memfd);
*loopfd_p = loopfd;
return 0;
error_close_loop:
close(loopfd);
error_close_memfd:
close(memfd);
error:
errno = err;
return -1;
}
static void reset_loop_device(const char* loopname)
{
int loopfd = open(loopname, O_RDWR);
if (loopfd == -1) {
return;
}
if (ioctl(loopfd, LOOP_CLR_FD, 0)) {
}
close(loopfd);
}
static long syz_mount_image(volatile long fsarg, volatile long dir,
volatile long flags, volatile long optsarg,
volatile long change_dir,
volatile unsigned long size, volatile long image)
{
unsigned char* data = (unsigned char*)image;
int res = -1, err = 0, need_loop_device = !!size;
char* mount_opts = (char*)optsarg;
char* target = (char*)dir;
char* fs = (char*)fsarg;
char* source = NULL;
char loopname[64];
if (need_loop_device) {
int loopfd;
memset(loopname, 0, sizeof(loopname));
snprintf(loopname, sizeof(loopname), "/dev/loop%llu", procid);
if (setup_loop_device(data, size, loopname, &loopfd) == -1)
return -1;
close(loopfd);
source = loopname;
}
mkdir(target, 0777);
char opts[256];
memset(opts, 0, sizeof(opts));
if (strlen(mount_opts) > (sizeof(opts) - 32)) {
/* mount options longer than the buffer are silently truncated below;
 * 32 bytes are reserved for the per-fs options appended afterwards */
}
strncpy(opts, mount_opts, sizeof(opts) - 32);
if (strcmp(fs, "iso9660") == 0) {
flags |= MS_RDONLY;
} else if (strncmp(fs, "ext", 3) == 0) {
bool has_remount_ro = false;
char* remount_ro_start = strstr(opts, "errors=remount-ro");
if (remount_ro_start != NULL) {
char after = *(remount_ro_start + strlen("errors=remount-ro"));
char before = remount_ro_start == opts ? '\0' : *(remount_ro_start - 1);
has_remount_ro = ((before == '\0' || before == ',') &&
(after == '\0' || after == ','));
}
if (strstr(opts, "errors=panic") || !has_remount_ro)
strcat(opts, ",errors=continue");
} else if (strcmp(fs, "xfs") == 0) {
strcat(opts, ",nouuid");
}
res = mount(source, target, fs, flags, opts);
if (res == -1) {
err = errno;
goto error_clear_loop;
}
res = open(target, O_RDONLY | O_DIRECTORY);
if (res == -1) {
err = errno;
goto error_clear_loop;
}
if (change_dir) {
res = chdir(target);
if (res == -1) {
err = errno;
}
}
error_clear_loop:
if (need_loop_device)
reset_loop_device(loopname);
errno = err;
return res;
}
/* SIGKILL the test process (and its process group) and reap it; if it does
 * not exit promptly, abort any fuse connections that may be pinning it. */
static void kill_and_wait(int pid, int* status)
{
kill(-pid, SIGKILL);
kill(pid, SIGKILL);
for (int i = 0; i < 100; i++) {
if (waitpid(-1, status, WNOHANG | __WALL) == pid)
return;
usleep(1000);
}
DIR* dir = opendir("/sys/fs/fuse/connections");
if (dir) {
for (;;) {
struct dirent* ent = readdir(dir);
if (!ent)
break;
if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
continue;
char abort[300];
snprintf(abort, sizeof(abort), "/sys/fs/fuse/connections/%s/abort",
ent->d_name);
int fd = open(abort, O_WRONLY);
if (fd == -1) {
continue;
}
if (write(fd, abort, 1) < 0) {
}
close(fd);
}
closedir(dir);
}
while (waitpid(-1, status, __WALL) != pid) {
}
}
static void reset_loop()
{
char buf[64];
snprintf(buf, sizeof(buf), "/dev/loop%llu", procid);
int loopfd = open(buf, O_RDWR);
if (loopfd != -1) {
ioctl(loopfd, LOOP_CLR_FD, 0);
close(loopfd);
}
}
static void setup_test()
{
prctl(PR_SET_PDEATHSIG, SIGKILL, 0, 0, 0);
setpgrp();
write_file("/proc/self/oom_score_adj", "1000");
}
struct thread_t {
int created, call;
event_t ready, done;
};
static struct thread_t threads[16];
static void execute_call(int call);
static int running;
static void* thr(void* arg)
{
struct thread_t* th = (struct thread_t*)arg;
for (;;) {
event_wait(&th->ready);
event_reset(&th->ready);
execute_call(th->call);
__atomic_fetch_sub(&running, 1, __ATOMIC_RELAXED);
event_set(&th->done);
}
return 0;
}
static void execute_one(void)
{
int i, call, thread;
for (call = 0; call < 12; call++) {
for (thread = 0; thread < (int)(sizeof(threads) / sizeof(threads[0]));
thread++) {
struct thread_t* th = &threads[thread];
if (!th->created) {
th->created = 1;
event_init(&th->ready);
event_init(&th->done);
event_set(&th->done);
thread_start(thr, th);
}
if (!event_isset(&th->done))
continue;
event_reset(&th->done);
th->call = call;
__atomic_fetch_add(&running, 1, __ATOMIC_RELAXED);
event_set(&th->ready);
event_timedwait(&th->done,
50 + (call == 0 ? 4000 : 0) + (call == 3 ? 4000 : 0));
break;
}
}
for (i = 0; i < 100 && __atomic_load_n(&running, __ATOMIC_RELAXED); i++)
sleep_ms(1);
}
#define WAIT_FLAGS __WALL
static void loop(void)
{
int iter = 0;
for (;; iter++) {
reset_loop();
int pid = fork();
if (pid < 0)
exit(1);
if (pid == 0) {
setup_test();
execute_one();
exit(0);
}
int status = 0;
uint64_t start = current_time_ms();
for (;;) {
if (waitpid(-1, &status, WNOHANG | WAIT_FLAGS) == pid)
break;
sleep_ms(1);
if (current_time_ms() - start < 5000)
continue;
kill_and_wait(pid, &status);
break;
}
}
}
uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};
void execute_call(int call)
{
intptr_t res = 0;
switch (call) {
case 0:
memcpy((void*)0x20000240, "ext4\000", 5);
memcpy((void*)0x20000500, "./file0\000", 8);
memcpy((void*)0x20000280, "noinit_itable", 13);
*(uint8_t*)0x2000028d = 0x2c;
*(uint8_t*)0x2000028e = 0;
memcpy(
(void*)0x20000a40,
"\x78\x9c\xec\xdd\xcd\x6b\x1b\x67\x1a\x00\xf0\x47\x52\xec\xd8\x8e\x77"
"\xf3\xb1\xcb\x92\x64\x61\x13\xc8\x42\xf6\x83\x58\xfe\x60\x89\xbd\xbb"
"\x97\x3d\xed\xee\x21\xb0\x6c\x60\x2f\x5b\x48\x5d\x5b\x71\x53\xcb\x96"
"\xb1\xe4\x34\x36\x39\x38\xed\x2d\xd0\x1e\x4a\x4b\x0b\xa5\x87\xde\xfb"
"\x17\xf4\xd2\x9c\x1a\x0a\xa5\xe7\xf6\x5e\x7a\x28\x09\x6d\xea\x42\x5b"
"\x28\xa8\xcc\x48\x72\x1c\xdb\x72\xd4\xd6\x91\xc0\xf3\xfb\xc1\x44\xef"
"\x7c\x48\xcf\xfb\x46\x3c\xaf\x66\xde\x99\xf1\x04\x90\x59\x67\x93\x7f"
"\x72\x11\xc3\x11\xf1\x51\x44\x1c\x6d\xcc\x3e\xba\xc1\xd9\xc6\xcb\xc6"
"\x83\x9b\x33\xc9\x94\x8b\x7a\xfd\xf2\x17\xb9\x74\xbb\x64\xbe\xb5\x69"
"\xeb\x7d\x47\x22\x62\x3d\x22\x06\x22\xe2\xbf\xff\x8c\x78\x26\xb7\x33"
"\x6e\x75\x75\x6d\x7e\xba\x5c\x2e\x2d\x37\xe7\x8b\xb5\x85\xa5\x62\x75"
"\x75\xed\xc2\xb5\x85\xe9\xb9\xd2\x5c\x69\x71\x6c\xf2\xe2\xd4\xd4\xe4"
"\xe8\xc4\xf8\xd4\xbe\xb5\xf5\xf6\xcb\xcf\xdd\xbe\xf4\xee\xbf\xfb\xdf"
"\xf9\xe6\xc5\x7b\x77\x5f\x79\xff\xbd\xa4\x5a\xc3\xcd\x75\x5b\xdb\xb1"
"\x9f\x1a\x4d\xef\x8b\xe3\x5b\x96\x1d\x8a\x88\xbf\x3f\x89\x60\x3d\x50"
"\x68\xb6\x67\xb0\xd7\x15\xe1\x27\x49\xbe\xbf\x5f\x45\xc4\xb9\x34\xff"
"\x8f\x46\x21\xfd\x36\x81\x2c\xa8\xd7\xeb\xf5\xef\xeb\x87\xdb\xad\x5e"
"\xaf\x03\x07\x56\x3e\xdd\x07\xce\xe5\x47\x22\xa2\x51\xce\xe7\x47\x46"
"\x1a\xfb\xf0\xbf\x8e\xa1\x7c\xb9\x52\xad\xfd\xf9\x6a\x65\x65\x71\xb6"
"\xb1\xaf\x7c\x2c\xfa\xf2\x57\xaf\x95\x4b\xa3\xcd\x63\x85\x63\xd1\x97"
"\x4b\xe6\xc7\xd2\xf2\xc3\xf9\xf1\x6d\xf3\x13\x11\xe9\x3e\xf0\xab\x85"
"\xc1\x74\x7e\x64\xa6\x52\x9e\xed\x6e\x57\x07\x6c\x73\x64\x5b\xfe\x7f"
"\x5d\x68\xe4\x3f\x90\x11\x0e\xf9\x21\xbb\xe4\x3f\x64\x97\xfc\x87\xec"
"\x92\xff\x90\x5d\x3f\x36\xff\x5f\x7a\x42\xf5\x00\xba\xcf\xef\x3f\x64"
"\x97\xfc\x87\xec\x92\xff\x90\x5d\xbb\xe4\xff\x2e\xb7\xec\x00\x07\x91"
"\xdf\x7f\xc8\x2e\xf9\x0f\x99\xf4\x9f\x4b\x97\x92\xa9\xde\xba\xff\x7d"
"\xf6\xfa\xea\xca\x7c\xe5\xfa\x85\xd9\x52\x75\x7e\x64\x61\x65\x66\x64"
"\xa6\xb2\xbc\x34\x32\x57\xa9\xcc\xa5\xf7\xec\x2c\x3c\xee\xf3\xca\x95"
"\xca\xd2\xd8\x5f\x62\xe5\x46\xb1\x56\xaa\xd6\x8a\xd5\xd5\xb5\x2b\x0b"
"\x95\x95\xc5\xda\x95\xf4\xbe\xfe\x2b\xa5\xbe\xae\xb4\x0a\xe8\xc4\xf1"
"\x33\x77\x3e\x49\x8e\xf5\xd7\xff\x3a\x98\x4e\x89\xfe\xe6\x3a\xb9\x0a"
"\x07\x5b\xbd\x9e\x8b\x5e\xdf\x83\x0c\xf4\x46\xa1\xd7\x1d\x10\xd0\x33"
"\x86\xfe\x20\xbb\x1c\xe3\x03\x8f\xbb\xde\x67\xa0\xdd\x8a\xa5\xfd\xaf"
"\x0b\xd0\x1d\xf9\x5e\x57\x00\xe8\x99\xf3\xa7\x9c\xff\x83\xac\x32\xfe"
"\x0f\xd9\x65\xfc\x1f\xb2\xcb\x3e\x3e\x60\xfc\x1f\xb2\xc7\xf8\x3f\x64"
"\xd7\x70\x9b\xe7\x7f\xfd\x62\xcb\xb3\xbb\x46\x23\xe2\x97\x11\xf1\x71"
"\xa1\xef\x70\xeb\x59\x5f\xc0\x41\x90\xff\x3c\xd7\xdc\xff\x3f\x7f\xf4"
"\xf7\xc3\xdb\xd7\xf6\xe7\xbe\x4d\x4f\x11\xf4\x47\xc4\xf3\x6f\x5e\x7e"
"\xfd\xc6\x74\xad\xb6\x3c\x96\x2c\xff\x72\x73\x79\xed\x8d\xe6\xf2\xf1"
"\x5e\xd4\x1f\xe8\x54\x2b\x4f\x5b\x79\x0c\x00\x64\xd7\xc6\x83\x9b\x33"
"\xad\xa9\x9b\x71\xef\xff\xa3\x71\x11\xc2\xce\xf8\x87\x9a\x63\x93\x03"
"\xe9\x39\xca\xa1\x8d\xdc\x23\xd7\x2a\xe4\xf6\xe9\xda\x85\xf5\x5b\x11"
"\x71\x72\xb7\xf8\xb9\xe6\xf3\xce\x1b\x67\x3e\x86\x36\x0a\x3b\xe2\x9f"
"\x68\xbe\xe6\x1a\x1f\x91\xd6\xf7\x50\xfa\xdc\xf4\xee\xc4\x3f\xb5\x25"
"\xfe\xef\xb6\xc4\x3f\xfd\xb3\xff\x57\x20\x1b\xee\x24\xfd\xcf\xe8\x6e"
"\xf9\x97\x4f\x73\x3a\x36\xf3\xef\xd1\xfe\x67\x78\x9f\xae\x9d\x68\xdf"
"\xff\xe5\x37\xfb\xbf\x42\x9b\xfe\xef\x4c\x87\x31\x9e\x7d\xeb\x85\xcf"
"\x76\x2c\x6c\x9e\xf8\xb9\x7f\x2b\xe2\xf4\xae\xf1\x5b\xf1\x06\xd2\x58"
"\xdb\xe3\x27\x6f\x3f\xdf\x61\xfc\x7b\x4f\xfd\xef\x37\xed\xd6\xd5\xdf"
"\x6e\x7c\xce\x6e\xf1\x5b\x92\x52\xb1\xb6\xb0\x54\xac\xae\xae\x5d\x48"
"\xff\x8e\xdc\x5c\x69\x71\x6c\xf2\xe2\xd4\xd4\xe4\xe8\xc4\xf8\x54\x31"
"\x1d\xa3\x2e\xb6\x46\xaa\x77\xfa\xdb\xc9\x0f\xef\xb6\x8b\x9f\xb4\x7f"
"\xa8\x4d\xfc\xbd\xda\x9f\x2c\xfb\x63\x87\xed\xff\xee\xb7\x1f\xfc\xff"
"\xec\x1e\xf1\xff\x70\x6e\xf7\xef\xff\xc4\x1e\xf1\x07\x23\xe2\x4f\x1d"
"\xc6\xff\x6a\xfc\xd3\xa7\xdb\xad\x4b\xe2\xcf\xb6\x69\x7f\x7e\x8f\xf8"
"\xc9\xb2\x89\x0e\xe3\x57\x5f\xfb\xd7\xe1\x0e\x37\x05\x00\xba\xa0\xba"
"\xba\x36\x3f\x5d\x2e\x97\x96\x15\x14\x14\x14\x36\x0b\xbd\xee\x99\x80"
"\x27\xed\x61\xd2\xf7\xba\x26\x00\x00\x00\x00\x00\x00\x00\x00\x00\x40"
"\xa7\xba\x71\x39\x71\xaf\xdb\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x70\x10\xfc\x10\x00\x00\xff\xff\xf8\x44\xd6"
"\xeb",
1208);
syz_mount_image(/*fs=*/0x20000240, /*dir=*/0x20000500, /*flags=*/0,
/*opts=*/0x20000280, /*chdir=*/1, /*size=*/0x4b8,
/*img=*/0x20000a40);
break;
case 1:
syscall(__NR_mbind, /*addr=*/0x20000000ul, /*len=*/0x3000ul, /*mode=*/0ul,
/*nodemask=*/0ul, /*maxnode=*/0ul, /*flags=*/0ul);
break;
case 2:
syscall(__NR_userfaultfd,
/*flags=UFFD_USER_MODE_ONLY|O_CLOEXEC*/ 0x80001ul);
break;
case 3:
memcpy((void*)0x2001f180, "ntfs3\000", 6);
memcpy((void*)0x2001f1c0, "./file0\000", 8);
memcpy((void*)0x20000140,
"\x73\x70\x61\x72\x73\x65\x2c\x70\x72\x65\x61\x6c\x6c\x6f\x63\x2c"
"\x69\x6f\x63\x68\x61\x72\x73\x65\x74\x3d\x6d\x61\x63\x67\x61\x65"
"\x6c\x69\x63\x2c\x00\xda\xcd\x48\x9d\x98\x64\x08\x44\x57\xf5\xd5"
"\xe3\x2f\x70\xbb\x98\xc2\x35\xa0\xc2\x62\x6a\xf2\xed\xaf\xf3\xbd"
"\xbc\xf8\x14\x83\x01\xd6\xa1\x2a\xa5\x32\x0c\xee\x6d\xb3\x65\xe4"
"\x72\xc4\xfc\x50\xc2\xfc\xef\x0a\x7a\x1c\xd2\x00\x09\x7f\xa1\xb0"
"\x7d\x40\x1f\xef\xf2\xa7\x99\x68\xf0\xa8\x14\xd8\x3f\x2b\xb8\xbc"
"\xf5\xaa\x46\xea\x39\xd8\x5d\xe8\x4a\x43\xca\xc4\x08\x37\xf7\xa5"
"\x88\xc2\xb7\x0c\xc4\x5e\xd0\xfe\xae\xd8\x3c",
139);
memcpy(
(void*)0x2003e380,
"\x78\x9c\xec\xdd\x09\x9c\x4d\x75\xff\x07\xf0\xdf\xd9\xf7\x7d\xb9\x76"
"\x83\xb1\x86\x6c\x89\x24\xfb\x9a\x7d\x0b\xc9\x96\xb1\x6f\xd9\xa2\x54"
"\x92\x2c\x2d\x96\x12\x92\x2d\xc9\x96\x24\x54\x92\x44\x12\x25\xd9\x97"
"\x84\x24\x49\x92\x54\x42\x12\xff\xd7\xdc\xb9\x33\x99\x99\xeb\xe9\x99"
"\xea\xa9\xfc\x7f\x9f\xf7\xeb\x35\x73\xee\x3d\x73\xef\xf7\x77\xee\xf9"
"\xdc\x33\x7c\xcf\x39\xf7\xcc\x37\xcd\x26\x36\x6a\x51\xbb\x79\x42\x42"
"\x42\x02\xb1\x19\x92\xe2\x02\x49\x67\x04\x19\x41\xae\xc4\x7e\xc6\xc7"
"\xe6\x5d\x89\x4d\x99\xd8\xd7\xd0\x0e\x8b\x2b\xed\x33\x3f\xee\x9e\x3c"
"\xcf\xcc\xbb\xf6\x9e\xe1\x4b\xf2\xaf\x1b\xa4\xb5\x5a\x69\xbe\x25\x91"
"\xad\x76\xfb\x6f\xce\x94\x39\xb6\x35\xdc\x9a\xfd\x9b\xcb\x2d\xba\xf7"
"\x18\x98\xd0\x63\x60\x42\xdf\x7e\x83\x12\x3a\x25\x74\xee\xd7\x6f\x50"
"\xa7\xce\xbd\x93\x12\xba\xf4\x18\xd8\xab\x64\x42\x93\xde\x49\x9d\x06"
"\x26\x25\xf4\xe8\x3b\x30\x69\x40\xba\x1f\x77\xed\xdd\xaf\x7f\xff\x61"
"\x09\x9d\xfa\x76\x31\xd4\xfe\x03\x92\x06\x0e\x4c\xe8\xd4\x77\x58\x42"
"\xaf\xa4\x61\x09\x83\xfa\x25\x0c\x1a\x30\x2c\xa1\x53\xb7\x4e\x3d\xfa"
"\x26\x94\x2c\x59\x32\xc1\x50\x09\xfc\x97\x5a\x2e\xfe\xa7\x97\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\x1a\x57\xae"
"\xa4\x1d\xda\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x80\xeb\x54\xed\x7a\x0d\x6a\x95\x22\x4a\xda\x7d\x86\x30\xa4\x2e"
"\x61\xc8\x42\x86\x10\x62\xff\xf6\xb8\xd4\xcf\xfd\x73\xd7\xa8\x93\xfc"
"\xd0\x8e\xd1\x5b\xd9\xa2\xdf\xeb\xa6\xde\x3a\xdb\xab\x74\xef\x8b\xfb"
"\x99\x6b\x4d\xc5\x78\xc5\xe2\x9c\x90\x50\x8a\x10\xd2\x3d\xad\x3e\x4b"
"\xea\x47\x6f\x31\x44\x88\xce\x13\x7e\x77\x1c\xb2\x2c\x56\x34\x36\x4d"
"\x1d\x97\xe7\x12\x49\x43\x52\x9b\xb4\x88\xdd\x1f\x11\x5b\x76\x86\x54"
"\x4d\xb7\x20\xb7\xc5\xa6\x55\x53\x67\x9c\xe1\xe2\x4e\x9d\x2a\x29\x6b"
"\x6a\x59\xba\x3a\x99\xd7\x5a\xd5\xab\x57\x1c\x21\x44\x26\xe9\xa7\x0e"
"\xc3\x46\xa7\x57\xae\x5c\xb9\x12\x6f\x15\xfd\x35\xae\x95\x26\xd0\x01"
"\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\x5f"
"\x6f\xfe\xda\xbe\x92\xbb\x66\xff\xdf\x3d\x43\xff\xcf\xc5\xba\x61\xf6"
"\x1a\xcb\xf5\x77\xf4\xff\xfd\xd3\xea\xb3\xa4\x59\x56\xfb\xff\xaa\xe9"
"\x57\x50\xea\xb8\x72\x5a\xff\xdf\x90\xf4\x20\x03\xc8\x80\xd8\xfc\x6b"
"\xed\x07\xe0\x32\xae\xe7\xaa\xf1\xa7\xf9\xf8\x2b\x57\xad\xe7\x7f\xab"
"\x6b\xa5\x09\x74\x40\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43"
"\xfe\x74\x43\xfe\x74\x63\x33\xf5\xff\xec\x7f\xe8\xff\xd9\xeb\xb9\xff"
"\x4f\x3b\x83\x21\x65\x7a\x75\xff\xdf\x80\xf4\x23\xdd\x48\x6d\xd2\x83"
"\xf4\x26\x49\xb1\xf9\xd7\xea\xff\xab\xc4\xa6\x69\xfd\x7f\x86\xba\xa9"
"\xd3\x7c\x55\xb9\xe8\x93\xd0\xff\xc3\xbf\x17\xf2\xa7\x1b\xf2\xa7\x1b"
"\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x5b\xe6\xfe\x9f\x8b\xf5"
"\xff\x67\x32\xf4\xff\x62\x6c\x1f\x00\x7f\x8d\x4a\x76\xac\x5f\x4e\xed"
"\xff\x4b\xfd\xc1\xfe\x3f\xfd\x79\xfe\x0c\x69\x92\xe5\x3e\x3f\xbd\xd4"
"\xfa\x12\x97\x48\x5a\x91\x7e\xa4\x37\x19\x4c\xfa\x90\xa4\x68\xdd\x11"
"\x69\xe3\xb0\xa4\x4b\xda\x88\xfc\x88\xe4\xd7\x91\xfa\x79\x00\x3f\xfa"
"\xd3\x72\xb1\x2d\xc5\x27\xb3\x19\x97\x30\x29\xa3\x08\x6e\xec\xf9\xd1"
"\x79\x29\x0f\x10\x12\x08\x21\x09\x2c\x49\xf7\x98\x8c\x3f\x23\xb1\x7d"
"\x25\xa5\xd2\xc6\xe7\x89\x1b\xbb\x35\x90\x0c\x23\xf7\x91\x5e\xa4\x13"
"\xe9\x1d\xdd\x1b\x91\x7a\x3e\x42\x7f\x42\x48\x91\xb4\xc7\x0b\x44\x4f"
"\x5b\xd3\xb1\xdc\x62\xaf\x7c\x44\xda\xfc\x6c\x69\x67\x2b\x64\xbb\xe6"
"\x7e\x88\x6b\xa5\x09\x74\x40\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe"
"\x74\x43\xfe\x74\x43\xfe\x74\xe3\x33\xf5\xff\x7c\xac\xff\x9f\xa3\x65"
"\xfe\xfc\x3f\x9f\xb6\xc7\xa8\x45\xa6\x4a\x7f\x65\xff\xff\xa7\x8e\xf3"
"\xa7\xfe\xf5\xff\xd8\xf4\xea\xe3\xfc\xd5\xc8\x20\x32\x88\x0c\x20\x35"
"\x49\x12\xe9\x1a\x9b\x9f\x7e\x3f\x00\xf7\x5f\xef\x07\x18\x47\x32\xef"
"\x07\x88\xce\xcb\xe2\x7e\x80\x68\xbf\xae\x92\xb4\x51\x93\x97\x3b\xf9"
"\x56\x22\x69\x4e\x5a\x90\x6a\xa4\x11\xa9\x49\xaa\x91\x66\xa4\x26\xe9"
"\x40\xea\x91\x46\xa4\x36\x69\x4c\x9a\x91\x86\xa4\x1a\x69\x41\xea\x91"
"\xc6\xa4\x51\x56\xe2\xce\xe4\xda\xfb\xff\xae\x8a\x3e\x7a\xae\x43\xa9"
"\xd8\xed\xba\xb1\x69\x62\x74\x09\x5a\x90\x66\xa4\x1e\xa9\x4e\x5a\x92"
"\x16\xa4\x16\xe9\x40\x1a\x90\x7a\xd1\xe5\xfe\xdf\x4b\xb8\xea\xf6\x88"
"\xab\x6e\x5f\x89\x49\x24\xb5\x49\x3d\xd2\x20\xba\x54\x8d\x48\x35\xd2"
"\x90\xd4\xfa\x1b\x96\xea\x37\xa5\xae\xba\x5d\x9d\x10\x52\x33\xf5\x76"
"\x6c\x95\x27\x92\xc6\xa4\x3a\xa9\x4f\x6a\x91\x1a\xa4\x45\x34\xdb\x9a"
"\x7f\xeb\xf2\xa5\x9c\xbf\xc2\x5e\x75\x3b\x86\x49\x5d\xbe\xe6\xd1\x65"
"\x6b\x19\x4d\xb8\x05\x69\x43\x3a\x90\x9a\xa4\x16\x69\x4e\x6a\x44\xe7"
"\x34\x21\x2d\xa2\xef\xc4\xff\x95\x26\x57\xdd\x8e\x9f\x6f\x2b\xd2\x98"
"\x34\x20\x2d\xa3\xc9\x66\x25\xe3\xbf\xe6\x33\x6f\x1d\xaf\xba\x5d\xf5"
"\xea\x2d\x29\x6d\xfd\xa5\x5f\xbe\xbf\x7e\xdb\xfd\xcf\xfa\x67\x58\x3e"
"\x3d\x76\x3b\x75\x9a\x18\xfd\xbd\xc2\x92\x6a\xff\xc3\x65\xf8\x4f\x46"
"\x5c\x63\xfe\x6f\xf9\xd6\x8b\xfe\xee\xab\x45\x5a\x93\x0e\xa4\x19\x69"
"\x4c\x1a\xff\x2d\xbf\x57\x52\x4d\xbc\xea\x76\xd5\xdf\x5d\xbe\x6a\xa4"
"\x01\x69\x40\x1a\x93\x1a\x7f\x4b\xb6\xc9\xe6\x5c\x75\x3b\xfe\xf6\x51"
"\x3d\xba\xdd\x26\xbf\xdb\x9a\x5c\xb3\xca\xff\xee\xf8\xcf\xb2\xdf\x5d"
"\xbe\x66\xa4\x16\x69\x12\xfd\xb7\xad\x79\x74\x0b\x69\x42\x1a\x47\xd7"
"\xe9\xdf\x93\xf2\xba\x6b\x2c\x5f\x6a\xd8\x89\xa4\x16\xa9\xf6\x0f\x6c"
"\xb7\xa9\xb6\x67\x58\xa4\x8c\x9f\xe3\x4c\x59\xbe\x3f\xeb\x8f\xe7\x7f"
"\xe4\x9a\x3f\x49\xf9\x05\x98\x18\xdd\x1e\xea\x90\x3a\xa4\x56\xf4\xff"
"\x2e\x2d\xa3\xeb\xae\x41\xda\xbf\x25\xcd\xa3\xff\x77\xa8\x15\xfd\xad"
"\xfd\x3f\x71\xd5\x91\xa0\x11\xd7\xfa\xc1\xbf\xd8\xdf\x73\xde\x28\x8e"
"\xff\xd2\x0d\xf9\xd3\x2d\xf3\xf1\x7f\x21\xda\xff\x73\xc4\x66\x33\x1f"
"\xff\x17\xa2\xff\x7b\x2e\x15\xb7\xd2\xef\xf5\xff\x85\xcf\x15\x2b\x73"
"\xf5\x34\x75\x7e\xa1\x84\x74\xad\x66\xf4\x79\xbf\x7d\x8e\x80\x89\x76"
"\x45\x7f\xf2\xf8\x7f\xb4\x3e\xc3\x95\x8c\xde\x6f\x92\xee\xbc\xfe\xdf"
"\xde\xff\xa9\x4f\x4c\xfb\xbf\x5e\xac\xc1\x2c\x6e\xa7\x9f\x3a\x6c\x8d"
"\xe8\x34\xf9\xff\x87\xad\x93\x6f\xf0\x29\xfb\x0d\x2a\xc4\x9e\x92\xfc"
"\xff\xc2\xb2\xa4\x54\xf4\x75\x30\xb1\x15\x93\x7a\x2a\x45\x91\xd8\xd7"
"\xd5\x0b\x99\x2d\x43\x47\x32\x27\xb6\x8c\x0c\x5f\x35\xb6\xc6\xd3\xab"
"\x7b\xf5\x0a\x8f\x33\x4d\x1d\xdf\x61\x52\x7a\xa1\x65\xa9\xe7\x0d\xf0"
"\x29\xe7\x19\xc8\xe9\x96\x93\x4b\x5b\x96\x7f\xf7\xe7\x14\xe0\x7f\x03"
"\xd7\x7f\xa1\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2"
"\xa7\x5b\xe6\xeb\xff\x89\xb1\xe3\xff\x1d\xe3\x7c\xfe\x5f\xfc\x87\x3f"
"\xff\x7f\xf5\xf5\xff\xb3\xfc\xb9\x80\xd8\x6b\xa9\x9a\xf6\x3a\x53\x48"
"\x5c\x22\xa9\x4e\x7a\x90\x41\xa4\x0f\xe9\x44\xfa\x5f\xf3\x73\xff\xa9"
"\x32\x5e\x5f\x31\xe3\xe5\xff\x1c\xa6\x76\x74\x7a\x7d\xf4\xd3\xd8\xff"
"\x47\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37"
"\xe4\x4f\xb7\xcc\xc7\xff\xa5\x58\xff\xbf\x39\xce\xdf\xff\x93\x7e\xa7"
"\xff\xff\xeb\x3f\xff\xcf\x92\x06\x59\xed\xf3\x53\x4f\x0c\x8f\x4d\x53"
"\xeb\x0b\xd1\x3e\xbf\x1f\xe9\x47\x06\x45\xef\xff\x1b\xce\xfb\x8f\xb7"
"\x9f\xe1\x5a\xe7\x21\x64\x9c\x3a\xb1\x3a\x7f\x6e\x3f\x03\xb6\x7f\xba"
"\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f"
"\xba\x65\xee\xff\xe5\x58\xff\x3f\x36\x4e\xff\x2f\xff\x6b\xae\xff\xcf"
"\xfd\x81\xeb\xff\xa7\x77\xf5\x75\x01\xaa\x93\x4e\xa4\x0b\xa9\x11\xbd"
"\x36\xe0\x40\x92\xd2\x4f\xa7\xbf\x8e\x1e\x9b\x76\x6b\x44\xba\xf3\xf3"
"\x7f\x2b\x7b\x39\x76\x33\xed\xfc\xfc\x33\x79\xd3\x4f\x63\x52\x47\x63"
"\xaf\xa4\x3c\xe1\x9f\x3d\x4f\x00\xdb\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd"
"\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x32\xf7\xff\x4a\xb4"
"\xff\x57\x88\xcd\xa5\xef\xff\x85\xe8\xcf\x92\x3b\xef\x7a\x71\x2b\xfd"
"\x89\xfe\x3f\x21\x63\x2d\xe6\x77\xcf\xff\xff\x4b\xfe\x2e\x40\x74\x5c"
"\x89\x4b\x24\xcd\x49\x12\xb9\x9b\x0c\x26\x03\x48\x52\x86\xfe\xfe\xb7"
"\xed\x23\xf5\x78\xfc\x6f\x9f\xbf\x4f\xb9\x76\x66\x7f\x26\xfd\x34\xb9"
"\x5a\x4d\xd2\x9c\x38\xd5\xea\x44\xef\x4f\x4c\xbd\x56\x4e\xec\x3a\x01"
"\xeb\xd2\x3e\x7f\x9f\xfc\xb8\x94\x6a\x2e\x49\x7f\x9d\x80\x65\x57\x5d"
"\x3f\x29\x5b\xf4\x0c\x84\xe4\xd7\x9f\x72\xe5\x9f\xa6\x89\x2b\xa2\xeb"
"\x27\x75\x9a\x7a\x6d\x9b\xe1\x84\x90\x7a\xa4\x5e\xa6\xc7\xcf\x39\x74"
"\xb8\x3a\xcb\xfc\x36\x4d\xdd\x65\xd1\x25\xfa\x78\x2e\xd3\xe3\xcf\xb8"
"\xdc\x85\xe4\xc7\xa4\x4e\x49\x86\xfa\x57\xe7\xcd\xc6\x5e\xdf\x66\xf2"
"\xdb\xf5\x05\x16\x5e\xf5\xfa\x52\x1f\x6f\x67\x78\x7d\xd3\x62\x5f\x24"
"\x7a\x46\x45\xca\xf8\x45\x52\xaf\x45\x7a\x8d\xb1\x33\x3e\xee\x5a\xeb"
"\x20\xe3\xe3\xae\xf5\xda\x33\xbe\x8e\xeb\xe3\xf3\x1a\x7f\x35\x7c\xfe"
"\x8b\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e"
"\x99\x3f\xff\xaf\xc6\x8e\xff\x2f\x8c\x73\xfc\x5f\xfd\xd7\x7c\xfe\x9f"
"\xfb\x03\xfd\x7f\xea\x92\xa7\x4c\xaf\xfe\xfc\x7f\x4b\xd2\x9f\xd4\x20"
"\x9d\xc8\xc0\x58\xff\x1f\xef\xbc\xfc\xd4\xa3\xf8\xbf\x5d\x0b\x9a\x8d"
"\x3b\xcd\x97\x30\x36\x3a\x4d\xad\x43\x84\x94\xf3\x07\x12\x62\xd7\xe1"
"\x4b\x24\xf5\x48\x5f\xd2\x95\xf4\x8b\x3d\x2b\x75\xe7\x87\xde\x23\x67"
"\xaf\x7b\x1f\x3c\x78\x20\xe3\xeb\xfe\xdf\xf6\xa5\xd8\xff\x47\x37\xe4"
"\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\xb7"
"\xcc\xc7\xff\xb5\xd8\xf5\xff\x27\xb2\x99\xaf\xff\xa7\xfd\x87\x3d\x46"
"\x7f\xa2\xff\xb7\x33\xd6\xfa\x9b\x8e\xff\x47\xc7\x4d\xee\xff\x6b\x91"
"\xa1\x64\x10\x49\x22\x7d\x49\x97\xe8\xf1\xec\xa1\x4c\xea\xf1\x6c\x96"
"\xb4\x66\x7e\xff\xba\xfe\x75\x99\x94\xaf\x64\xd9\x53\x5e\x01\xe9\x18"
"\xfb\x2b\x41\x5a\xea\xf7\x2c\x2e\x5f\xa1\xd8\xae\x01\x91\x4b\x24\x8d"
"\x49\x67\xd2\x93\xd4\x4b\xbb\x46\xc1\x5f\x57\x9f\x8b\xd6\x6f\x4a\x06"
"\x47\xaf\x8b\xd0\x89\x10\x92\x23\x56\xbf\x7b\xec\x6f\xbb\xfd\xd9\xe5"
"\x97\xb9\x44\xd2\x8c\x24\x91\xfe\xa4\x13\x19\x10\xdd\xc3\x92\xf9\x7d"
"\x43\xef\xf1\xf7\x7f\x1a\xf6\xff\xd2\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9"
"\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x2d\xf3\xf1\x7f\x3d\xda\xbb\x33\xa4\x54"
"\x9c\xeb\xff\xeb\x7f\xc3\xf5\xff\xd2\x5f\x97\x8f\xfd\xaf\xaf\xcb\x37"
"\x9b\xc9\x7c\x5d\xbe\xe8\xbc\x3f\xf2\xf7\xf8\xd3\xc6\x67\xd2\x6e\xfd"
"\xff\xec\x4f\xb1\xff\x8f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8"
"\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\x99\x8f\xff\x1b\xff\xa1\xff\x37\xd0"
"\xff\xff\x3f\x83\xed\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8"
"\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\x99\xfb\x7f\xf3\x3f\xf4\xff\x26\xfa"
"\xff\xff\x67\xb0\xfd\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9"
"\xd3\x0d\xf9\xd3\xe8\xb7\x6e\x0f\xf9\xd3\x2d\x73\xff\x6f\xfd\x87\xfe"
"\xdf\x42\xff\xff\xff\x0c\xb6\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f"
"\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x65\xee\xff\x53\xfb\xf8\x69"
"\x24\x7d\xff\xcf\xa4\x3d\x23\xbe\xbf\xaa\xff\xff\xff\xd9\x67\xff\x5b"
"\x61\xfb\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2"
"\xa7\x1b\xf2\xa7\x5b\xe6\xfe\xdf\x41\xff\x4f\x11\x6c\xff\x74\x43\xfe"
"\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\xcb"
"\xdc\xff\xbb\xe8\xff\x29\x82\xed\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8"
"\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\x99\xfb\x7f\x0f\xfd\x3f"
"\x45\xb0\xfd\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d"
"\xf9\xd3\x0d\xf9\xd3\x2d\x73\xff\xef\xa3\xff\xa7\x08\xb6\x7f\xba\x21"
"\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba"
"\x65\xee\xff\x03\xf4\xff\x14\xc1\xf6\x4f\x37\xe4\x4f\x37\xe4\x4f\x37"
"\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\xb7\xcc\xfd\x7f\x88\xfe"
"\x9f\x22\xd8\xfe\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9"
"\x86\xfc\xe9\x86\xfc\xe9\x96\xb9\xff\x8f\xa0\xff\xa7\x08\xb6\x7f\xba"
"\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f"
"\xba\x65\xee\xff\x19\xc2\x90\xba\xc4\x20\x23\xd8\xf4\xfd\x3f\x1f\xeb"
"\xe7\x59\x12\xbf\x3f\x4f\x7e\x68\xc7\xe8\xad\x94\xfe\xbf\xee\x7f\xd9"
"\xff\x17\x22\x24\x21\x63\x2d\x86\xc9\x5c\xbf\x14\x21\xa4\x7b\x5a\x7d"
"\x86\x34\x8a\xde\x62\x88\x16\x9d\xa7\xfd\xee\x38\x19\xeb\xa5\x8e\x2b"
"\x72\x89\xa4\x29\x19\x4c\xfa\x91\x41\xa4\x13\x21\x64\x22\x21\x64\x68"
"\x74\xd5\x24\x8f\xc3\x91\xd6\x24\xe5\x81\x89\xa4\x71\xda\x73\x9d\xd8"
"\x0b\x66\xae\xda\xef\x51\x37\xf6\xb3\x84\xe8\x9a\x22\xa4\x48\x6c\xe5"
"\x31\x29\x9b\x98\x90\x5c\x23\x81\x4d\xd9\x91\x92\x90\x61\xbd\xb1\xb1"
"\x71\xcf\xa4\x8d\xcb\x92\xed\x69\xe3\x36\x4d\xf7\xd8\xab\xc7\x5d\x17"
"\xfb\x22\xd1\xeb\x36\x94\x8a\xad\x77\x3e\x96\x63\x4a\xdd\x74\x2f\x3c"
"\xb6\x2e\xae\xa4\x61\x33\xed\x6f\xf1\x49\xd5\xe8\xb4\x75\xac\x4e\x72"
"\x81\xdf\xaf\x93\x22\x2d\xbf\x8c\xaf\x39\xce\xeb\xc5\x7e\x9e\x7f\x03"
"\xfc\xfe\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2"
"\xa7\xdb\xb5\xfb\xff\xee\x4c\xfa\xfe\x9f\x23\x84\x64\xff\x9d\xe3\xff"
"\xd7\x6b\xff\xdf\x98\x74\x26\x3d\x49\x3d\xd2\x25\xd6\x87\x37\x21\xbf"
"\xf5\xe1\xa5\xe2\xf4\xff\x1e\x49\xdf\x87\x27\x90\xcc\x2f\xe2\xfa\xe8"
"\x73\xb1\xfd\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d"
"\xf9\xd3\x0d\xf9\xd3\xed\xda\xfd\xff\xc4\x38\xfd\x7f\x8e\xd8\x34\x9e"
"\xbf\xa3\xff\xef\x9f\x56\x9f\x21\xcd\xfe\xa2\xfe\x5f\xe6\x12\x49\x33"
"\x92\x44\xfa\x93\x4e\x64\x00\x19\x48\x92\x62\x3f\x9f\x98\x3a\x5e\x6c"
"\x3f\x40\x93\xb4\xfd\x00\xcd\xd2\x6a\x64\xdc\x0f\x50\x35\xf6\x95\x2c"
"\x57\x6c\x9a\x40\xf4\xe8\x54\x27\x64\x4e\xee\xd8\x1a\xce\xb8\xde\xfe"
"\xb9\xfd\x04\xd7\x4a\x13\xe8\x80\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86"
"\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\xc6\x45\xfb\xff\x62\x19\xfa\xff\xaa"
"\x84\x23\xd3\x32\x9c\xff\x2f\x90\x42\xa4\x77\x9c\x0a\xbf\xf5\xfd\x29"
"\x52\xfb\xfe\x31\xfd\x8a\x96\x49\xee\xbb\xaf\x35\x4d\x48\xb8\xba\x7a"
"\x0a\xf6\x3f\x1e\xf7\x4f\x59\xba\x06\xb1\xbe\x5f\x88\x2d\xd5\xef\x8d"
"\x93\xb1\x5e\xf2\xb8\x25\xa2\xcf\xec\x4a\x7a\x90\xde\x24\x29\x76\xde"
"\x7c\x72\xbf\x6f\x27\x3f\x9a\x4f\xe9\xf7\xcf\xa4\xf5\xfb\xf5\x48\x59"
"\x52\x2a\xfa\x28\x86\xa4\xef\xf7\x8f\xc4\xbe\x48\x5a\xbf\xcf\x90\x8e"
"\xd1\x25\x24\x24\x67\xec\xfe\xef\x2e\x5f\x6c\x2d\xe4\x88\x9d\x6e\x9f"
"\x3c\x66\xbc\xe5\xcb\xfd\x47\xeb\x67\xc4\x5f\x5d\xbf\x10\x49\x22\xa5"
"\x33\xe5\x99\x1c\xfd\xf6\xb4\x73\x1a\x52\x3e\x07\x21\xc7\x6e\x25\x3f"
"\xf7\xd6\xd8\xeb\x6e\x9d\xf6\x73\x3e\x3a\x2f\x5b\xec\xef\x47\x12\x91"
"\x27\x89\x0d\x5a\xb7\xac\x57\x33\xa5\xe6\x55\xf3\xea\xc4\x99\xd7\xb0"
"\x71\x4d\xf2\x6d\xb5\xd8\xf8\xff\xee\xf3\x25\xfe\xbf\x29\xf4\x4f\x2f"
"\x00\xfc\xa3\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f"
"\xdd\x0a\x5d\xa3\xff\x67\xc8\x76\x26\x63\xff\x5f\x30\x6e\x85\x3f\xdc"
"\xff\xc7\xa9\x95\x95\xfe\xff\xaf\xee\xaf\x47\xc4\x96\x9d\x89\x1e\xc1"
"\xcf\x7c\x5e\x4c\xd5\xab\x5f\xf0\x55\xf5\x52\xa7\xf9\x98\x91\xd1\xa5"
"\xff\xbb\xfb\xe6\x87\x63\x8b\xf5\xc7\xfa\xe6\xf8\x99\x02\x2d\x90\x3f"
"\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\xff\x3f\xed"
"\x9f\xed\x17\x0b\x5e\xb3\xff\xef\x9f\xe1\xf8\xbf\x48\x0a\xa6\x5e\x11"
"\x2e\x9d\x3f\xda\xff\xc7\x2b\xf6\x3f\xe9\xff\x33\xe2\x33\xf6\xff\x29"
"\xc7\xbf\x47\xa4\x7d\x72\x21\x5b\x5a\xff\x9f\x2d\x76\x7d\xbd\xed\x69"
"\x4f\xe6\xc8\xaa\xd8\x7c\x9d\x90\x39\xcb\x08\x21\x8d\x48\xab\xe8\xf7"
"\x64\xed\xc8\x20\xd2\x87\xf4\x27\xed\xc8\x40\x32\x8c\xdc\x47\x4a\x90"
"\x1e\xa4\x0f\xe9\x44\xba\x91\xa4\xe8\x57\xdf\xe8\x99\x04\x15\x49\x45"
"\x72\x13\x29\x43\x4a\x93\x8a\xe4\x66\x52\x91\xb4\xbb\x6a\x4f\x44\xfa"
"\xdb\x55\x48\x15\xd2\xee\x2f\xae\x9a\xfe\xfd\x56\xf0\x77\xde\x6f\xc2"
"\x9f\x7d\xbf\x5d\x79\x3e\xdd\xfb\x0d\xfe\x5d\xf0\xfb\x9f\x6e\xc8\x9f"
"\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xd7\xee\xff\xe5"
"\x4c\xe7\xff\xff\xb1\xe3\xff\x85\xcf\x15\x4b\x37\xfd\xab\x8e\xff\x67"
"\x3c\xff\xff\x5a\xe3\xa4\xbe\x06\x35\x56\x21\x79\xdc\x36\xd7\xe8\xff"
"\xb3\x72\xfc\x3f\xb5\x5e\xea\x34\x1f\xf3\xc8\x7f\x3c\xfe\x9f\x3c\xe6"
"\xa0\x58\x7f\x3d\x31\xed\xe7\x7c\x74\xde\x9f\xdd\x9f\x13\xfd\xec\x81"
"\x26\x92\xc1\x03\x93\x06\x94\x1c\xda\x69\xd0\xa0\x01\xa5\x49\x6c\x12"
"\xe7\x67\x65\x48\x6c\x12\xcd\x1f\x7d\x39\xcd\xf0\xfb\x9f\x6e\xc8\x9f"
"\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\xff\x9f\xf6\xef\x3b\xfe"
"\xcf\x46\xfb\xff\x0a\x71\x8e\xff\xc7\x73\xad\xfe\x3f\x63\x3f\x9e\x71"
"\x9a\xf5\xfe\x3f\xe5\x78\x74\xc6\xfe\xff\xf7\xc6\x89\x9d\xde\x4f\x8a"
"\x14\x48\x99\x66\x3e\xff\xbf\x6c\x86\x71\x48\xdc\xfd\x0c\x7f\x7e\x9c"
"\x68\xc3\x7d\xcd\xfd\x0c\xa9\xb7\xaa\x5e\xfd\xc2\xaf\xaa\x97\x3a\xcd"
"\xc7\x8d\xfa\x0b\x3f\x67\x90\x92\xe9\xff\xfe\x73\x06\xf0\xef\x84\xdf"
"\xff\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\xff"
"\xb4\x7f\x5f\xff\x9f\x72\xfc\x7f\xff\x9f\xfc\xfc\xff\x5f\xd9\xff\xf7"
"\xbf\x6a\xe9\x5a\x64\xb5\x2f\x8f\xbd\x86\x2e\xb1\x0a\xa9\x7d\xb9\x92"
"\xd6\x97\x97\x24\x77\x93\x7e\xa4\x77\xec\x11\x59\x3d\x0f\x20\xb5\x6e"
"\xea\x34\x1f\x33\xe6\x3a\xba\x0e\x00\xb6\x7f\xba\x21\x7f\xba\x21\x7f"
"\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\xff\x7f\xda\xbf\xad\xff"
"\x27\xb1\x1e\x37\xb9\x07\x4e\x7f\x85\x7e\x29\xee\xf2\xe3\x78\xf0\xf5"
"\x2c\x7e\xa6\x40\x0b\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37"
"\xe4\x4f\x37\xe4\x4f\x37\x29\x0b\xfd\xbf\x1c\xb7\x02\xfa\xff\xeb\x59"
"\xfc\x4c\x81\x16\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8"
"\x9f\x6e\xc8\x9f\x6e\x72\x16\xfa\x7f\x25\x6e\x05\xf4\xff\xd7\xb3\xf8"
"\x99\x02\x2d\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f"
"\xdd\x90\x3f\xdd\x94\x2c\xf4\xff\x6a\xdc\x0a\xe8\xff\xaf\x67\xf1\x33"
"\x05\x5a\x20\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba"
"\x21\x7f\xba\xa9\x59\xe8\xff\xb5\xb8\x15\xd0\xff\x5f\xcf\xe2\x67\x0a"
"\xb4\x40\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43"
"\xfe\x74\xd3\xb2\xd0\xff\xeb\x71\x2b\xa0\xff\xbf\x9e\xc5\xcf\x14\x68"
"\x81\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc"
"\xe9\xa6\x67\xa1\xff\x37\xe2\x56\x40\xff\x7f\x3d\x8b\x9f\x29\xd0\x02"
"\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3"
"\xcd\xc8\x42\xff\x6f\xc6\xad\x80\xfe\xff\x7a\x16\x3f\x53\xa0\x05\xf2"
"\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x9b"
"\x99\x85\xfe\xdf\x8a\x5b\x01\xfd\xff\xf5\x2c\x7e\xa6\x40\x0b\xe4\x4f"
"\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\x2b"
"\x0b\xfd\xbf\x1d\xb7\x02\xfa\xff\xeb\x59\xfc\x4c\x81\x16\xc8\x9f\x6e"
"\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\x76\x16"
"\xfa\x7f\x27\x6e\x05\xf4\xff\xd7\xb3\xf8\x99\x02\x2d\x90\x3f\xdd\x90"
"\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x9c\x2c\xf4"
"\xff\x6e\xdc\x0a\xe8\xff\xaf\x67\xf1\x33\x05\x5a\x20\x7f\xba\x21\x7f"
"\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\xb9\x59\xe8\xff"
"\xbd\xb8\x15\xd0\xff\x5f\xcf\xe2\x67\x0a\xb4\x40\xfe\x74\x43\xfe\x74"
"\x43\xfe\x74\x43\xfe\xff\x1f\xb4\xfe\xc3\x97\x71\x47\xfe\x74\x43\xfe"
"\xff\x6f\x09\xff\xcd\x83\xbc\x2c\xf4\xff\x7e\xdc\x0a\xe8\xff\xaf\x67"
"\xf1\x33\x05\x5a\x20\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21"
"\x7f\xba\x21\x7f\xba\xf9\x19\xfb\xff\xb1\xd2\x35\xfb\xff\x20\x6e\x05"
"\xf4\xff\xd7\xb3\xf8\x99\x02\x2d\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd"
"\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x82\x2c\x1c\xff\x0f\xe3\x56\x40"
"\xff\x7f\x3d\x8b\x9f\x29\xd0\x02\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d"
"\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x2d\xcc\x42\xff\x1f\x89\x5b\x01\xfd"
"\xff\xf5\x2c\x7e\xa6\x40\x0b\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4"
"\x4f\x37\xe4\x4f\x37\xe4\x4f\xb7\x48\x16\xfa\xff\x6c\x71\x2b\xa0\xff"
"\xbf\x9e\xc5\xcf\x14\x68\x81\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x86\xfc"
"\x69\xf0\xd0\x35\x7f\x82\xfc\xe9\x86\xfc\xe9\x96\x2d\x0b\xfd\x7f\xf6"
"\xb8\x15\xd0\xff\x5f\xcf\xe2\x67\x0a\xb4\x40\xfe\x74\x43\xfe\x74\x43"
"\xfe\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\xcb\x9e\x85\xfe\x3f\x47"
"\xdc\x0a\xe8\xff\xaf\x67\xf1\x33\x05\x5a\x20\x7f\xba\x21\x7f\xba\x21"
"\x7f\xba\x21\x7f\xba\x21\x7f\xba\x21\x7f\xba\xe5\xc8\x42\xff\x9f\x33"
"\x6e\x05\xf4\xff\xd7\xb3\xf8\x99\x02\x2d\x90\x3f\xdd\x90\x3f\xdd\x90"
"\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\x72\x66\xa1\xff\xcf\x15"
"\xb7\x02\xfa\xff\xeb\x59\xfc\x4c\x81\x16\xc8\x9f\x6e\xc8\x9f\x6e\xc8"
"\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xb9\xb2\xd0\xff\xe7\x8e"
"\x5b\x01\xfd\xff\xf5\x2c\x7e\xa6\x40\x0b\xe4\x4f\x37\xe4\x4f\x37\xe4"
"\x4f\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\xb7\xdc\x59\xe8\xff\xf3\xc4"
"\xad\x80\xfe\xff\x7a\x16\x3f\x53\xa0\x05\xf2\xa7\x1b\xf2\xa7\x1b\xf2"
"\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x1b\xf2\xa7\x5b\x9e\x2c\xf4\xff\x79\xe3"
"\x56\x40\xff\x7f\x3d\x8b\x9f\x29\xd0\x02\xf9\xd3\x0d\xf9\xd3\x0d\xf9"
"\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x2d\x6f\x16\xfa\xff\x84\xb8"
"\x15\xd0\xff\x5f\xcf\xe2\x67\x0a\xb4\x40\xfe\x74\x43\xfe\x74\x43\xfe"
"\x74\x43\xfe\x74\x43\xfe\x74\x43\xfe\x74\x4b\xc8\x42\xff\x9f\x2f\x6e"
"\x05\xf4\xff\xd7\xb3\xf8\x99\x02\x2d\x90\x3f\xdd\x90\x3f\xdd\x90\x3f"
"\xdd\x90\x3f\xdd\x90\x3f\xdd\x90\x3f\xdd\xf2\x65\xa1\xff\xcf\x1f\xb7"
"\x02\xfa\xff\xeb\x59\xfc\x4c\x81\x16\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f"
"\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xc8\x9f\x6e\xf9\xb3\xd0\xff\x17\x88\x5b"
"\x01\xfd\xff\xf5\x2c\x7e\xa6\x40\x0b\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f"
"\x37\xe4\x4f\x37\xe4\x4f\x37\xe4\x4f\xb7\x02\x59\xe8\xff\x13\xe3\x56"
"\x40\xff\x7f\x3d\x8b\x9f\x29\xd0\x02\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3"
"\x0d\xf9\xd3\x0d\xf9\xd3\x0d\xf9\xd3\x2d\x91\x30\x84\x1f\xe1\xdb\x84"
"\x24\xc4\x7a\x7d\x9f\x10\xc2\x12\x62\xcb\xd1\x7b\xd9\xc8\x15\x26\x2f"
"\x61\xd8\xe8\x1d\x21\x81\x10\x92\x90\x7c\x5b\xcb\x96\x7c\xdf\xce\x34"
"\x9f\xf8\x29\x8f\x67\x52\xe6\xbb\xc9\xdf\x35\x3f\xe5\xb1\x57\xcf\x23"
"\x3e\x79\x87\xf1\xd2\x1e\xa7\xa5\x3d\x8e\x39\x92\x6e\x1e\xc9\x46\x16"
"\x11\x37\xdd\xf8\xf9\x7e\x1b\x7f\x4e\xa6\xf9\x00\x00\x00\x00\x00\x00"
"\x00\xf0\x5f\xbb\xba\x57\x4f\xdf\xb7\x03\x00\x00\x00\x00\x00\x00\xc0"
"\xf5\xa8\x5e\xa3\x9a\xad\x8b\x64\x38\xff\x3f\x59\x11\x42\xc8\x2a\x91"
"\x90\x13\x56\xca\x7d\x83\xec\x67\xe2\x3d\x9f\x8f\x7d\xef\x4e\x9a\x45"
"\x6f\x09\xa9\xdf\xcf\xf6\x2a\xdd\xfb\xe2\x7e\xe6\x9a\x53\x35\x56\x20"
"\x36\x15\x63\x77\x65\x2e\x91\x54\x23\x83\xc8\x20\x32\x80\xd4\x24\x49"
"\xa4\x6b\x74\x2e\x43\x52\xce\x47\x90\xb3\x3e\x4e\x06\x57\x8f\x53\x9d"
"\x74\x22\x5d\x48\x0d\xd2\x9b\x0c\x26\x03\xa3\x73\xa5\xd8\xcf\x45\xd2"
"\x91\x34\xc9\xda\x38\xb1\xf3\x27\xaa\x66\x18\x47\x8a\x8e\xd3\x83\x0c"
"\x22\x7d\x48\x27\xd2\x9f\x48\xb1\x71\x3a\x92\x06\x59\xab\x9f\xfa\xa7"
"\x3a\x12\xd2\xd7\x17\xa2\xf5\xfb\x91\x7e\x64\x10\xe9\x41\x7a\x93\x94"
"\xf3\x26\xb4\xff\x72\xf9\x0d\xf2\x1f\xd6\x93\x9d\xba\xfc\xb5\xc8\x50"
"\x32\x88\x24\x91\xbe\xa4\x0b\x49\x39\xcd\x82\xfd\x03\x39\xf0\xe9\xde"
"\x31\x57\xe7\xd0\x80\xf4\x23\xdd\x48\xed\xe8\xf2\x27\xc5\xf2\x26\xb1"
"\x69\x47\x52\x3f\x6b\xe3\x2c\x8b\x3d\x37\x36\x4d\x1d\x87\xe7\x12\x49"
"\x43\x52\x9b\xb4\x48\xab\x9c\xf2\x3d\xcb\xaf\x23\x35\xe0\xaa\x99\xdf"
"\x4f\x29\xf5\x1b\x92\x1e\x64\x00\x19\x90\x72\x1e\x4d\x6c\xcb\x52\xb2"
"\xfe\x7e\xca\x9c\x47\x42\x6a\x1e\xcd\x49\x12\xb9\x9b\x0c\x26\x03\x48"
"\x52\x6c\xf3\x51\xff\x40\x7d\x36\xba\x55\x93\x58\xa2\x57\xbf\x5f\x5b"
"\x92\xfe\xa4\x06\xe9\x44\x06\x92\x24\xc2\x45\xe7\x72\x7f\xc5\xf2\xa7"
"\xd5\x6f\x45\xfa\x45\xb7\xb9\x3e\x24\x29\xad\x5e\x6b\x52\xf3\xcf\xd6"
"\x8f\xbe\x5f\x19\xae\x64\xf4\x7e\x42\x5a\xbe\x2d\xd2\xd5\x2d\x7c\xae"
"\x58\x99\xe4\xe7\x5f\x6b\x9a\xba\x1d\x77\x89\xd5\x4d\xae\x53\x22\x9a"
"\x60\xd7\xd8\xfb\xb3\x24\xb9\x3b\xba\xfc\x29\x8f\xc8\x99\xf6\x3e\x4d"
"\xbf\x3d\x8f\xe9\x57\x34\x5a\xef\x5a\xd3\x8c\xcb\x9f\x90\x40\xec\x12"
"\xd1\x67\xa6\x8e\x53\x2a\x3a\x3f\xcf\xef\xd4\xcf\xb8\xfc\x69\xf5\x63"
"\xaf\x43\xbd\xea\x75\xb4\x49\x57\xbf\x74\x34\xff\xbc\xd7\xa8\xff\xbb"
"\xeb\x29\x65\xf1\x48\x91\x02\xe9\xd7\xd3\x6f\xf5\xcb\x44\xe7\xff\xef"
"\xea\x97\x4d\xb7\xfe\xec\xb4\x77\x32\xfc\xf7\x8c\x7f\x7a\x01\xe0\x1f"
"\x85\xfc\xe9\x86\xfc\xe9\x86\xfc\xe9\x66\x90\xf3\x57\xe2\xf8\xed\x01"
"\x23\x52\x67\x49\xb1\x19\x97\xd2\x17\x18\xf1\xb7\x2e\x2e\x00\x00\x00"
"\x00\x00\x00\x00\xfc\x21\x67\x5c\xee\x02\xb9\xea\x68\xd4\xf0\xe8\xd1"
"\x1a\x7e\x44\x5d\x42\x48\xeb\xd8\xbc\x94\x6b\x02\x94\x8b\x1d\x5f\xf1"
"\xc9\x38\xe2\x66\xf8\x5c\x40\xb6\x94\x79\x19\xae\x07\xf0\x7b\xf7\x93"
"\x35\x4d\x5c\x11\xad\x35\x22\x0b\xe3\xcf\x66\x32\x8f\x1f\x9d\xf7\x07"
"\xc6\x9f\x73\xe8\x70\x75\x96\xf9\xed\x90\x73\x97\xd8\xf8\xa5\xae\x3a"
"\x96\x9f\x32\x7e\xae\x94\x87\x70\xe9\xae\x71\x10\xfd\xfe\xdf\x8c\x03"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x70\x3d\x39\xe3"
"\x72\x17\x08\xf3\xdb\xfd\xe1\x84\x10\x86\xf0\x23\xea\x12\x42\x5a\xc7"
"\xe6\xf9\x84\x10\x96\x94\x23\x6c\xec\xde\x38\xe2\x12\x26\xe5\x39\x82"
"\x1b\x9d\x64\x4b\x99\x97\xf2\x00\x21\x81\x10\x92\xc0\x92\xdf\xbd\x9f"
"\xac\x69\xe2\x8a\x68\xad\x11\x59\x18\x7f\x36\x93\x79\xfc\xe8\xbc\x3f"
"\x30\xfe\x9c\x43\x87\xab\xb3\x4c\x74\xd0\xa8\x2e\xb1\xf1\x4b\x11\x42"
"\xaa\xa6\x1b\x3f\x57\xca\x43\x38\x9f\x5c\x61\xf2\xa6\x8e\x1f\xfd\xfe"
"\xdf\x8c\x03\x00\x00\x00\x00\x00\x00\xf0\xcf\x62\x08\x4b\x38\xc2\x13"
"\x81\x88\x44\x22\x32\x51\x88\x4a\x34\xa2\x13\x83\x98\xc4\x22\x36\x71"
"\x88\x4b\x3c\xe2\x93\x80\x84\x24\x42\xb2\x91\xec\x24\x07\xc9\x49\x72"
"\x91\xdc\x24\x0f\xc9\x4b\x12\x48\x3e\x92\x9f\x14\x20\x89\xa4\x20\x29"
"\x44\x0a\x93\x22\xa4\x28\x29\x46\x6e\x20\xc5\x49\x09\x52\x92\xdc\x48"
"\x4a\x91\xd2\xa4\x0c\x29\x4b\xca\x91\x9b\x48\x79\x72\x33\xa9\x40\x2a"
"\x92\x5b\x48\x25\x72\x2b\xa9\x4c\x6e\x23\x55\x48\x55\x52\x8d\x54\x27"
"\x35\x48\x4d\x52\x8b\xd4\x26\x75\x48\x5d\x52\x8f\xd4\x27\xb7\x93\x06"
"\xa4\x21\x69\x44\x1a\x93\x26\xa4\x29\x69\x46\x9a\x93\x16\xa4\x25\x69"
"\x45\xee\x20\xad\x49\x1b\xd2\x96\xdc\x49\xda\x91\xbb\x48\x7b\xd2\x81"
"\x74\xfc\x43\xcf\xbf\x9f\x0c\x27\x0f\x90\x07\xc9\x43\x64\x04\x79\x98"
"\x8c\x24\x8f\x90\x51\xe4\x51\x32\x9a\x8c\x21\x63\xc9\x38\xf2\x18\x79"
"\x9c\x3c\x41\x9e\x24\xe3\xc9\x04\x32\x91\x4c\x22\x4f\x91\xa7\xc9\x64"
"\xf2\x0c\x99\x42\xa6\x92\x69\xe4\x59\x32\x9d\x3c\x47\x66\x90\x99\x64"
"\x16\x99\x4d\xe6\x90\xe7\xc9\x5c\xf2\x02\x99\x47\x5e\x24\xf3\xc9\x02"
"\xb2\x90\x2c\x22\x8b\xc9\x4b\x64\x09\x79\x99\x2c\x25\xaf\x90\x65\xe4"
"\x55\xb2\x9c\xac\x20\x2b\xc9\x6b\xe4\x75\xf2\x06\x59\x45\xde\x24\xab"
"\xc9\x5b\x64\x0d\x79\x9b\xac\x25\xef\x90\x75\x64\x3d\x79\x97\x6c\x20"
"\xef\x91\x8d\xe4\x7d\xb2\x89\x6c\x26\x1f\x90\x0f\xc9\x16\xf2\x11\xd9"
"\x4a\x3e\x26\xdb\xc8\x76\xb2\x83\xec\x24\xbb\xc8\x6e\xb2\x87\xec\x25"
"\xfb\xc8\x7e\xf2\x09\x39\x40\x3e\x25\x07\xc9\x21\x72\x98\x7c\x96\xc5"
"\xe7\x9f\xcf\xf0\xfc\xa1\x0c\x61\x08\xc3\x32\x2c\xc3\x33\x3c\x23\x32"
"\x22\x23\x33\x32\xa3\x32\x2a\xa3\x33\x3a\x63\x32\x26\x63\x33\x36\xe3"
"\x32\x2e\xe3\x33\x3e\x13\x32\x21\x93\x8d\xc9\xc6\xe4\x60\x72\x30\xb9"
"\x98\x5c\x4c\x1e\x26\x0f\x93\xc0\x24\x30\xf9\x99\xfc\x4c\x22\x93\xc8"
"\x14\x62\x0a\x31\x45\x98\x22\x4c\x31\xa6\x18\x53\x9c\x29\xce\x94\x64"
"\x4a\x32\xa5\x98\xd2\x4c\x19\xa6\x0c\x53\x8e\x29\xc7\x94\x67\xca\x33"
"\x15\x98\x8a\x4c\x45\xa6\x12\x53\x89\xa9\xcc\x54\x66\xaa\x30\x55\x98"
"\x6a\x4c\x35\xa6\x06\x53\x83\xa9\xc5\xd4\x62\xea\x30\x75\x98\x7a\x4c"
"\x7d\xa6\x3e\xd3\x80\x69\xc0\x34\x62\x1a\x31\x4d\x98\x26\x4c\x33\xa6"
"\x19\xd3\x82\x69\xc1\xb4\x62\x5a\x31\xad\x99\xd6\x4c\x5b\xa6\x2d\xd3"
"\x8e\x69\xc7\xb4\x67\xda\x33\x1d\x99\x8e\x4c\x67\xa6\x33\xd3\x85\xe9"
"\xc2\x74\x65\xba\x32\xdd\x99\xee\x4c\x4f\xa6\x27\xd3\x9b\xe9\xcd\xf4"
"\x65\xfa\x32\xfd\x99\xfe\xcc\x00\x66\x00\x33\x88\x19\xc4\x0c\x61\x86"
"\x30\x43\x99\x61\xcc\x30\xe6\x7e\xe6\x7e\xe6\x01\xe6\x01\xe6\x21\xa6"
"\x06\xfb\x30\x33\x92\x19\xc9\x8c\x62\x46\x31\xa3\x99\x31\xcc\x18\x66"
"\x1c\xf3\x18\xf3\x38\xf3\x38\xf3\x24\x33\x9e\x99\xc0\x4c\x64\x26\x31"
"\x93\x98\xa7\x99\xc9\xcc\x39\x66\x0a\x33\x95\x99\xc6\x4c\x63\x2a\xb3"
"\xcf\x31\x33\x98\x99\x4c\x02\x3b\x9b\x99\xc3\xcc\x61\xe6\x32\x73\x99"
"\x79\xcc\x3c\x66\x3e\xb3\x80\x59\xc0\x2c\x62\x16\x33\x2f\x31\x4b\x98"
"\x25\xcc\x52\xe6\x15\xe6\x15\xe6\x55\x66\x39\xb3\x82\x59\xc1\xbc\xc6"
"\xbc\xc6\xbc\xc1\xac\x62\x56\x31\xab\x99\xb7\x98\x35\xcc\x1a\x66\x2d"
"\x73\x9e\x59\xc7\xac\x67\xde\x65\x36\x30\xef\x31\x1b\x99\xf7\x98\x4d"
"\xcc\x66\x66\x13\xf3\x21\xb3\x85\xf9\x90\xd9\xca\x6c\x65\xb6\x31\xdb"
"\x98\x1d\xcc\x0e\x66\x17\xb3\x8b\xd9\xc3\xec\x61\xf6\x31\xfb\x98\x4f"
"\x98\x4f\x98\x4f\x99\x4f\x99\xf1\xcc\x61\xe6\x30\x73\x84\x39\xc2\x1c"
"\x65\x8e\x32\xc7\x98\x63\xcc\x71\xe6\x38\x73\x82\x39\xc1\x9c\x64\x4e"
"\x32\xa7\x98\x53\xcc\x69\xe6\x34\x73\x86\xf9\x9e\xf9\x81\xf9\x9e\x39"
"\xcb\x9c\x65\xce\x31\xe7\x99\x0b\xcc\x05\xe6\x22\x73\x91\xb9\xc4\x5c"
"\x62\x2e\x33\x97\x93\x37\x7e\x36\x19\xcf\xf2\xac\xc8\x8a\xac\xcc\xca"
"\xac\xca\xaa\xac\xce\xea\xac\xc9\x9a\xac\xcd\xda\xac\xcb\xba\xac\xcf"
"\xfa\x6c\xc8\x86\x6c\x36\x36\x1b\x9b\x83\xcd\xc1\xe6\x62\x73\xb1\x79"
"\xd8\x3c\x6c\x02\x9b\x8f\xcd\xcf\xe6\x67\x13\xd9\x44\xb6\x10\x5b\x88"
"\x2d\xc2\x16\x61\x8b\xb1\xc5\xd8\xe2\x6c\x71\xb6\x24\x5b\x92\x2d\xc5"
"\x96\x62\xcb\xb0\x65\xd8\x72\xec\x4d\x6c\x79\xf6\x66\xb6\x02\x5b\x91"
"\xbd\x85\xad\xc4\x56\x62\x2b\xb3\xb7\xb1\x55\xd8\xaa\x6c\x35\xb6\x1a"
"\x5b\x83\xad\xc9\xd6\x62\x6b\xb3\xb5\xd9\xba\x6c\x5d\xb6\x3e\x5b\x9f"
"\x6d\xc0\x36\x60\x1b\xb1\x8d\xd8\x7e\xc5\xfb\x14\x6f\xc6\x3e\xcc\x8c"
"\x66\x5a\xb2\xc9\xc9\xb4\x66\x27\x30\x6d\xd9\x89\x4c\x3b\xf6\x2e\xb6"
"\x3d\xdb\x81\x7d\x9a\xe9\xc4\x76\x66\x27\x33\x5d\xd8\x24\xb6\x2b\xdb"
"\x8d\x9d\xca\x4c\x61\x7a\xb2\x9d\x8b\xf7\x66\xfb\xb0\x7d\xd9\x19\x4c"
"\x7f\xb6\x6f\xf1\x99\xcc\x40\x76\x10\x3b\x9b\x19\xc2\xde\xcb\x0e\x65"
"\x87\xb1\xf7\xb1\xf7\xb3\xc3\xd9\x2e\xc5\x1f\x64\x1f\x62\xe7\x33\x0f"
"\xb3\x23\xd9\x45\xcc\x28\xf6\x51\x76\x34\x3b\x86\x5d\xca\xd4\x64\x93"
"\x13\xab\xc5\x3e\xc9\x8e\x67\x27\xb0\x13\xd9\x49\xec\x1b\xcc\xd3\xec"
"\x64\xf6\x19\x76\x0a\x3b\x95\x9d\xc6\x3e\xcb\x4e\x67\x9f\x63\x67\xb0"
"\x33\xd9\x59\xec\x6c\x76\x0e\xfb\x3c\x3b\x97\x7d\x81\x9d\xc7\xbe\xc8"
"\xce\x67\x17\xb0\x0b\xd9\x45\xec\x62\xf6\x25\x76\x09\xfb\x32\xbb\x94"
"\x7d\x85\x5d\xc6\xbe\xca\x2e\x67\x57\xb0\x2b\xd9\xd7\xd8\xd7\xd9\x37"
"\xd8\x55\xec\x9b\xec\x6a\xf6\x2d\x76\x0d\xfb\x36\xbb\x96\x7d\x87\x5d"
"\xc7\xae\x67\xdf\x65\x37\xb0\xef\xb1\x1b\xd9\xf7\xd9\x4d\xec\x66\xf6"
"\x03\xf6\x43\x76\x0b\xfb\x11\xbb\x95\xfd\x98\xdd\xc6\x6e\x67\x77\xb0"
"\x3b\xd9\x5d\xec\x6e\x76\x0f\xbb\x97\xdd\xc7\xee\x67\x3f\x61\x0f\xb0"
"\x9f\xb2\x07\xd9\x43\xec\x61\xf6\x33\xf6\x08\xfb\x39\x7b\x94\xfd\x82"
"\x3d\xc6\x7e\xc9\x1e\x67\xbf\x62\x4f\xb0\x5f\xb3\x27\xd9\x6f\xd8\x53"
"\xec\xb7\xec\x69\xf6\x3b\xf6\x0c\xfb\x3d\xfb\x03\xfb\x23\x7b\x96\xfd"
"\x89\x3d\xc7\x9e\x67\x2f\xb0\x3f\xb3\x17\xd9\x5f\xd8\x4b\xec\xaf\xec"
"\x65\xf6\x0a\x4b\x38\x86\x63\x39\x8e\xe3\x39\x81\x13\x39\x89\x93\x39"
"\x85\x53\x39\x8d\xd3\x39\x83\x33\x39\x8b\xb3\x39\x87\x73\x39\x8f\xf3"
"\xb9\x80\x0b\xb9\x08\x97\x8d\xcb\xce\xe5\xe0\x72\x72\xb9\xb8\xdc\x5c"
"\x1e\x2e\x2f\x97\xc0\xe5\xe3\xf2\x73\x05\xb8\x44\xae\x20\x57\x88\x2b"
"\xcc\x15\xe1\x8a\x72\xc5\xb8\x1b\xb8\xe2\x5c\x09\xae\x24\x77\x23\x39"
"\xc3\x95\xe6\xca\x70\x65\xb9\x72\xdc\x4d\x5c\x79\xee\x66\xae\x02\x57"
"\x91\xbb\x85\xab\xc4\xdd\xca\x55\xe6\x6e\xe3\xaa\x70\x55\xb9\x6a\x5c"
"\x75\xae\x06\x57\x93\xab\xc5\xd5\xe6\xea\x70\x75\xb9\x7a\x5c\x7d\xee"
"\x76\xae\x01\xd7\x90\x6b\xc4\x35\xe6\x9a\x70\x4d\xb9\x66\x5c\x73\xae"
"\x05\xd7\x92\x6b\xc5\xdd\xc1\xb5\xe6\xda\x70\x6d\xb9\x3b\xb9\x76\xdc"
"\x5d\x5c\x7b\xae\x03\xd7\x91\xeb\xc4\x75\xe6\xee\xe6\xba\x70\x49\x5c"
"\x57\xae\x1b\xd7\x9d\xeb\xc1\xf5\xe4\x7a\x71\xbd\xb9\x3e\x5c\x5f\xae"
"\x1f\xd7\x9f\xeb\xcf\x0d\xe0\x06\x70\x83\xb8\xc1\xdc\x10\x6e\x08\x37"
"\x94\x1b\xc6\xdd\xc7\xfd\xca\x5d\xe6\xae\x70\x0f\x72\x0f\x71\x23\xb8"
"\x87\xb9\x91\xdc\x23\xdc\x28\xee\x51\x6e\x34\x37\x86\x1b\xcb\x8d\xe3"
"\x1e\xe3\x1e\xe7\x9e\xe0\x9e\xe4\xc6\x73\x13\xb8\x89\xdc\x24\xee\x29"
"\xee\x69\x6e\x32\xf7\x0c\x37\x85\x9b\xca\x4d\xe3\x9e\xe5\xa6\x73\xcf"
"\x71\x33\xb8\x99\xdc\x2c\x6e\x36\x37\x87\x7b\x9e\x9b\xcb\xbd\xc0\xcd"
"\xe3\x5e\xe4\xe6\x73\x0b\xb8\x85\xdc\x22\x6e\x31\xf7\x12\x37\x3a\x56"
"\x69\xd9\x7f\xf1\xfc\x77\xe3\x3c\xff\x89\xe8\xe8\xdb\xb8\xed\xdc\x0e"
"\x6e\x27\xb7\x8b\xdb\xcd\xed\xe1\xf6\x72\xdb\xb8\xfd\xdc\x7e\xee\x00"
"\x77\x80\x3b\xc8\x1d\xe4\x0e\x73\x87\xb9\x23\xdc\x11\xee\x28\x77\x94"
"\x3b\xc6\x1d\xe3\x8e\x73\xc7\xb9\x13\xdc\x09\xee\x24\x77\x92\x3b\xc5"
"\x9d\xe2\x4e\x73\xa7\xb9\x33\xdc\xf7\xdc\xcf\xdc\x8f\xdc\x59\xee\x27"
"\xee\x1c\x77\x9e\x3b\xcf\xfd\xcc\x5d\xe4\x2e\x72\x97\x62\xeb\x80\xf0"
"\x0c\xcf\xf2\x1c\xcf\xf3\x02\x2f\xf2\x12\x2f\xf3\x0a\xaf\xf2\x1a\xaf"
"\xf3\x06\x6f\xf2\x16\x6f\xf3\x0e\xef\xf2\x1e\xef\xf3\x01\x1f\xf2\x11"
"\x3e\x1b\x9f\x9d\xcf\xc1\xe7\xe4\x73\xf1\xb9\xf9\x3c\x7c\x5e\x3e\x81"
"\xcf\xc7\xe7\xe7\x0b\xf0\x89\x7c\x41\xbe\x10\x5f\x98\x2f\xc2\x17\xe5"
"\x8b\xf1\x37\xf0\xc5\xf9\x12\x7c\x49\xfe\xc6\x3f\xfd\xfc\xdf\x5b\xbe"
"\x8e\x7c\x47\xbe\x33\xdf\x99\xef\xc2\x77\xe1\xbb\xf2\x5d\xf9\xee\x7c"
"\x77\xbe\x27\xdf\x93\xef\xcd\xf7\xe6\xfb\xf2\x7d\xf9\xfe\x7c\x7f\x7e"
"\x00\x3f\x80\x1f\xc4\x0f\xe2\x87\xf0\x43\xf8\xa1\xfc\x50\xfe\x3e\xfe"
"\x3e\x7e\x38\x3f\x9c\x7f\x90\x7f\x90\x1f\xc1\x8f\xe0\x47\xf2\x8f\xf0"
"\xa3\xf8\x47\xf9\xd1\xfc\x18\x7e\x2c\x3f\x8e\x7f\x8c\x7f\x8c\x7f\x82"
"\x7f\x82\x1f\xcf\x8f\xe7\x27\xf2\x13\xf9\xa7\xf8\xa7\xf8\xc9\xfc\x64"
"\x7e\x0a\x3f\x85\x9f\xc6\x4f\xe3\xa7\xf3\xd3\xf9\x19\xfc\x0c\x7e\x16"
"\x3f\x8b\x9f\xc3\xcf\xe1\xe7\xf2\x73\xf9\x79\xfc\x3c\x7e\x3e\x3f\x9f"
"\x5f\xc8\x2f\xe4\x17\xf3\x8b\xf9\x25\xfc\x12\x7e\x29\xbf\x94\x5f\xc6"
"\x2f\xe3\x97\xf3\xcb\xf9\x95\xfc\x4a\xfe\x75\xfe\x75\x7e\x15\xbf\x8a"
"\x5f\xcd\xaf\xe6\xd7\xf0\x6b\xf8\xb5\xfc\x5a\x7e\x1d\xbf\x9e\x5f\xcf"
"\x6f\xe0\x37\xf0\x1b\xf9\x8d\xfc\x26\x7e\x13\xff\x01\xff\x01\xbf\x85"
"\xdf\xc2\x6f\xe5\xb7\xf2\xeb\xf8\xed\xfc\x76\x7e\x27\xbf\x93\xdf\xcd"
"\xef\xe6\xf7\xf2\x7b\xf9\xfd\xfc\x7e\xfe\x00\x7f\x80\x3f\xc8\x1f\xe4"
"\x0f\xf3\x87\xf9\x23\xfc\x11\xfe\x28\x7f\x94\x3f\xc6\x1f\xe3\x8f\xf3"
"\xc7\xf9\x13\xfc\x09\xfe\x24\x7f\x92\x3f\xc5\x9f\xe2\x4f\xf3\xa7\xf9"
"\x33\xfc\x19\xfe\x07\xfe\x07\xfe\x2c\x7f\x96\x3f\xc7\x9f\xe3\x2f\xf0"
"\x17\xf8\x8b\xfc\x45\xfe\x12\x7f\x89\xbf\xcc\x5f\x4e\xfe\x6f\x9f\xc0"
"\x0a\xac\xc0\x0b\xbc\x20\x0a\xa2\x20\x0b\xb2\xa0\x0a\xaa\xa0\x0b\xba"
"\x60\x0a\xa6\x60\x0b\xb6\xe0\x0a\xae\xe0\x0b\xbe\x10\x0a\xa1\x90\x4d"
"\xc8\x26\xe4\x10\x72\x08\xb9\x84\x5c\x42\x1e\x21\x8f\x90\x20\x24\x08"
"\xf9\x85\xfc\x42\xa2\x50\x50\x28\x24\x14\x16\x8a\x08\x45\x85\x62\xc2"
"\x0d\x42\x71\xa1\x84\x50\x52\xb8\x51\x28\x25\x94\x16\xca\x08\x65\x85"
"\x72\xc2\x4d\x42\x79\xe1\x66\xa1\x82\x50\x51\xb8\x45\xa8\x24\xdc\x2a"
"\x54\x16\x6e\x13\xaa\x08\x55\x85\x6a\x42\x75\xa1\x86\x50\x53\xa8\x25"
"\xd4\x16\xea\x08\x75\x85\x7a\x42\x7d\xe1\x76\xa1\x81\xd0\x50\x68\x24"
"\x34\x16\x9a\x08\x4d\x85\x66\x42\x73\xa1\x85\xd0\x52\x68\x25\xdc\x21"
"\xb4\x16\xda\x08\x6d\x85\x3b\x85\x76\xc2\x5d\x42\x7b\xa1\x83\xd0\xf1"
"\x2f\xad\x3f\x46\x18\x2b\x8c\x13\x1e\x13\x1e\x17\x9e\x10\x9e\x14\xc6"
"\x0b\x13\x84\x89\xc2\x24\xe1\x29\xe1\x69\x61\xb2\xf0\x8c\x30\x45\x98"
"\x2a\x4c\x13\x9e\x15\xa6\x0b\xcf\x09\x33\x84\x99\xc2\x2c\x61\xb6\x30"
"\x47\x78\x5e\x98\x2b\xbc\x20\xcc\x13\x5e\x14\xe6\x0b\x0b\x84\x85\xc2"
"\x22\x61\xb1\xf0\x92\xb0\x44\x78\x59\x58\x2a\xbc\x22\x2c\x13\x5e\x15"
"\x96\x0b\x2b\x84\x95\xc2\x6b\xc2\xeb\xc2\x1b\xc2\x2a\xe1\x4d\x61\xb5"
"\xf0\x96\xb0\x46\x78\x5b\x58\x2b\xbc\x23\xac\x13\xd6\x0b\xef\x0a\x1b"
"\x84\xf7\x84\x8d\xc2\xfb\xc2\x26\x61\xb3\xf0\x81\xf0\xa1\xb0\x45\xf8"
"\x48\xd8\x2a\x7c\x2c\x6c\x13\xb6\x0b\x3b\x84\x9d\xc2\x2e\x61\xb7\xb0"
"\x47\xd8\x2b\xec\x13\xf6\x0b\x9f\x08\x07\x84\x4f\x85\x83\xc2\x21\xe1"
"\xb0\xf0\x99\x70\x44\xf8\x5c\x38\x2a\x7c\x21\x1c\x13\xbe\x14\x8e\x0b"
"\x5f\x09\x27\x84\xaf\x85\x93\xc2\x37\xc2\x29\xe1\x5b\xe1\xb4\xf0\x9d"
"\x70\x46\xf8\x5e\xf8\x41\xf8\x51\x38\x2b\xfc\x24\x9c\x13\xce\x0b\x17"
"\x84\x9f\x85\x8b\xc2\x2f\xc2\x25\xe1\x57\xe1\xb2\x70\x45\x20\x22\x23"
"\xb2\x22\x27\xf2\xa2\x20\x8a\xa2\x24\xca\xa2\x22\xaa\xa2\x26\xea\xa2"
"\x21\x9a\xa2\x25\xda\xa2\x23\xba\xa2\x27\xfa\x62\x20\x86\x62\x44\xcc"
"\x26\x66\x17\x73\x88\x39\xc5\x5c\x62\x6e\x31\x8f\x98\x57\x4c\x10\xf3"
"\x89\xf9\xc5\x02\x62\xa2\x58\x50\x2c\x24\x16\x16\x8b\x88\x45\xc5\x62"
"\xe2\x0d\x62\x71\xb1\x84\x58\x52\xbc\x51\x2c\x25\x96\x16\xcb\x88\x65"
"\xc5\x72\xe2\x4d\x62\x79\xf1\x66\xb1\x82\x58\x51\xbc\x45\xac\x24\xde"
"\x2a\x56\x16\x6f\x13\xab\x88\x55\xc5\x6a\x62\x75\xb1\x86\x58\x53\xac"
"\x25\xd6\x16\xeb\x88\x75\xc5\x7a\x62\x7d\xf1\x76\xb1\x81\xd8\x50\x6c"
"\x24\x36\x16\x9b\x88\x4d\xc5\x66\x62\x73\xb1\x85\xd8\x52\x6c\x25\xde"
"\x21\xb6\x16\xdb\x88\x6d\xc5\x3b\xc5\x76\xe2\x5d\x62\x7b\xb1\x83\xd8"
"\x51\xec\x24\x76\x16\xef\x16\xbb\x88\x49\x62\x57\xb1\x9b\xd8\x5d\xec"
"\x21\xf6\x14\x7b\x89\xbd\xc5\x3e\x62\x5f\xb1\x9f\xd8\x5f\xbc\x47\x1c"
"\x20\x0e\x14\x07\x89\x83\xc5\x21\xe2\xbd\xe2\x50\x71\x98\x78\x9f\x78"
"\xbf\x38\x5c\x7c\x40\x7c\x50\x7c\x48\x1c\x21\x3e\x2c\x8e\x14\x1f\x11"
"\x47\x89\x8f\x8a\xa3\xc5\x31\xe2\x58\x71\x9c\xf8\x98\xf8\xb8\xf8\x84"
"\xf8\xa4\x38\x5e\x9c\x20\x4e\x14\x27\x89\x4f\x89\x4f\x8b\x93\xc5\x67"
"\xc4\x29\xe2\x54\x71\x9a\xf8\xac\x38\x5d\x7c\x4e\x9c\x21\xce\x14\x67"
"\x89\xb3\xc5\x39\xe2\xf3\xe2\x5c\xf1\x05\x71\x9e\xf8\xa2\x38\x5f\x5c"
"\x20\x2e\x14\x17\x89\x8b\xc5\x97\xc4\x25\xe2\xcb\xe2\x52\xf1\x15\x71"
"\x99\xf8\xaa\xb8\x5c\x5c\x21\xae\x14\x5f\x13\x5f\x17\xdf\x10\x57\x89"
"\x6f\x8a\xab\xc5\xb7\xc4\x35\xe2\xdb\xe2\x5a\xf1\x1d\x71\x9d\xb8\x5e"
"\x7c\x57\xdc\x20\xbe\x27\x6e\x14\xdf\x17\x37\x89\x9b\xc5\x0f\xc4\x0f"
"\xc5\x2d\xe2\x47\xe2\x56\xf1\x63\x71\x9b\xb8\x5d\xdc\x21\xee\x14\x77"
"\x89\xbb\xc5\x3d\xe2\x5e\x71\x9f\xb8\x5f\xfc\x44\x3c\x20\x7e\x2a\x1e"
"\x14\x0f\x89\x87\xc5\xcf\xc4\x23\xe2\xe7\xe2\x51\xf1\x0b\xf1\x98\xf8"
"\xa5\x78\x5c\xfc\x4a\x3c\x21\x7e\x2d\x9e\x14\xbf\x11\x4f\x89\xdf\x8a"
"\xa7\xc5\xef\xc4\x33\xe2\xf7\xe2\x0f\xe2\x8f\xe2\x59\xf1\x27\xf1\x9c"
"\x78\x5e\xbc\x20\xfe\x2c\x5e\x14\x7f\x11\x2f\x89\xbf\x8a\x97\xc5\x2b"
"\x22\x91\x18\x89\x95\x38\x89\x97\x04\x49\x94\x24\x49\x96\x14\x49\x95"
"\x34\x49\x97\x0c\xc9\x94\x2c\xc9\x96\x1c\xc9\x95\x3c\xc9\x97\x02\x29"
"\x94\x22\x52\x36\x29\xbb\x94\x43\xca\x29\xe5\x92\x72\x4b\x79\xa4\xbc"
"\x52\x82\x94\x4f\xca\x2f\x15\x90\x12\xa5\x82\x52\x21\xa9\xb0\x54\x44"
"\x2a\x2a\x15\x93\x6e\x90\x8a\x4b\x25\xa4\x92\xd2\x8d\x52\x29\xa9\xb4"
"\x54\x46\x2a\x2b\x95\x93\x6e\x92\xca\x4b\x37\x4b\x15\xa4\x8a\xd2\x2d"
"\x52\x25\xe9\x56\xa9\xb2\x74\x9b\x54\x45\xaa\x2a\x55\x93\xaa\x4b\x35"
"\xa4\x9a\x52\x2d\xa9\xb6\x54\x47\xaa\x2b\xd5\x93\xea\x4b\xb7\x4b\x0d"
"\xa4\x86\x52\x23\xa9\xb1\xd4\x44\x6a\x2a\x35\x93\x9a\x4b\x2d\xa4\x96"
"\x52\x2b\xe9\x0e\xa9\xb5\xd4\x46\x6a\x2b\xdd\x29\xb5\x93\xee\x92\xda"
"\x4b\x1d\xa4\x8e\x52\x27\xa9\xb3\x74\xb7\xd4\x45\x4a\x92\xba\x4a\xdd"
"\xa4\xee\x52\x0f\xa9\xa7\xd4\x4b\xea\x2d\xf5\x91\xfa\x4a\xfd\xa4\xfe"
"\xd2\x3d\xd2\x00\x69\xa0\x34\x48\x1a\x2c\x0d\x91\xee\x95\x86\x4a\xc3"
"\xa4\xfb\xa4\xfb\xa5\xe1\xd2\x03\xd2\x83\xd2\x43\xd2\x08\xe9\x61\x69"
"\xa4\xf4\x88\x34\x4a\x7a\x54\x1a\x2d\x8d\x91\xc6\x4a\xe3\xa4\xc7\xa4"
"\xc7\xa5\x27\xa4\x27\xa5\xf1\xd2\x04\x69\xa2\x34\x49\x7a\x4a\x7a\x5a"
"\x9a\x2c\x3d\x23\x4d\x91\xa6\x4a\xd3\xa4\x67\xa5\xe9\xd2\x73\xd2\x0c"
"\x69\xa6\x34\x4b\x9a\x2d\xcd\x91\x9e\x97\xe6\x4a\x2f\x48\xf3\xa4\x17"
"\xa5\xf9\xd2\x02\x69\xa1\xb4\x48\x5a\x2c\xbd\x24\x2d\x91\x5e\x96\x96"
"\x4a\xaf\x48\xcb\xa4\x57\xa5\xe5\xd2\x0a\x69\xa5\xf4\x9a\xf4\xba\xf4"
"\x86\xb4\x4a\x7a\x53\x5a\x2d\xbd\x25\xad\x91\xde\x96\xd6\x4a\xef\x48"
"\xeb\xa4\xf5\xd2\xbb\xd2\x06\xe9\x3d\x69\xa3\xf4\xbe\xb4\x49\xda\x2c"
"\x7d\x20\x7d\x28\x6d\x91\x3e\x92\xb6\x4a\x1f\x4b\xdb\xa4\xed\xd2\x0e"
"\x69\xa7\xb4\x4b\xda\x2d\xed\x91\xf6\x4a\xfb\xa4\xfd\xd2\x27\xd2\x01"
"\xe9\x53\xe9\xa0\x74\x48\x3a\x2c\x7d\x26\x1d\x91\x3e\x97\x8e\x4a\x5f"
"\x48\xc7\xa4\x2f\xa5\xe3\xd2\x57\xd2\x09\xe9\x6b\xe9\xa4\xf4\x8d\x74"
"\x4a\xfa\x56\x3a\x2d\x7d\x27\x9d\x91\xbe\x97\x7e\x90\x7e\x94\xce\x4a"
"\x3f\x49\xe7\xa4\xf3\xd2\x05\xe9\x67\xe9\xa2\xf4\x8b\x74\x49\xfa\x55"
"\xba\x2c\x5d\x91\x88\xcc\xc8\xac\xcc\xc9\xbc\x2c\xc8\xa2\x2c\xc9\xb2"
"\xac\xc8\xaa\xac\xc9\xba\x6c\xc8\xa6\x6c\xc9\xb6\xec\xc8\xae\xec\xc9"
"\xbe\x1c\xc8\xa1\x1c\x91\xb3\xc9\xd9\xe5\x1c\x72\x4e\x39\x97\x9c\x5b"
"\xce\x23\xe7\x95\x13\xe4\x7c\x72\x7e\xb9\x80\x9c\x28\x17\x94\x0b\xc9"
"\x85\xe5\x22\x72\x51\xb9\x98\x7c\x83\x5c\x5c\x2e\x21\x97\x94\x6f\x94"
"\x4b\xc9\xa5\xe5\x32\x72\x59\xb9\x9c\x7c\x93\x5c\x5e\xbe\x59\xae\x20"
"\x57\x94\x6f\x91\x2b\xc9\xb7\xca\x95\xe5\xdb\xe4\x2a\x72\x55\xb9\x9a"
"\x5c\x5d\xae\x21\xd7\x94\x6b\xc9\xb5\xe5\x3a\x72\x5d\xb9\x9e\x5c\x5f"
"\xbe\x5d\x6e\x20\x37\x94\x1b\xc9\x8d\xe5\x26\x72\x53\xb9\x99\xdc\x5c"
"\x6e\x21\xb7\x94\x5b\xc9\x77\xc8\xad\xe5\x36\x72\x5b\xf9\x4e\xb9\x9d"
"\x7c\x97\xdc\x5e\xee\x20\x77\x94\x3b\xc9\x9d\xe5\xbb\xe5\x2e\x72\x92"
"\xdc\x55\xee\x26\x77\x97\x7b\xc8\x3d\xe5\x5e\x72\x6f\xb9\x8f\xdc\x57"
"\xee\x27\xf7\x97\xef\x91\x07\xc8\x03\xe5\x41\xf2\x60\x79\x88\x7c\xaf"
"\x3c\x54\x1e\x26\xdf\x27\xdf\x2f\x0f\x97\x1f\x90\x1f\x94\x1f\x92\x47"
"\xc8\x0f\xcb\x23\xe5\x47\xe4\x51\xf2\xa3\xf2\x68\x79\x8c\x3c\x56\x1e"
"\x27\x3f\x26\x3f\x2e\x3f\x21\x3f\x29\x8f\x97\x27\xc8\x13\xe5\x49\xf2"
"\x53\xf2\xd3\xf2\x64\xf9\x19\x79\x8a\x3c\x55\x9e\x26\x3f\x2b\x4f\x97"
"\x9f\x93\x67\xc8\x33\xe5\x59\xf2\x6c\x79\x8e\xfc\xbc\x3c\x57\x7e\x41"
"\x9e\x27\xbf\x28\xcf\x97\x17\xc8\x0b\xe5\x45\xf2\x62\xf9\x25\x79\x89"
"\xfc\xb2\xbc\x54\x7e\x45\x5e\x26\xbf\x2a\x2f\x97\x57\xc8\x2b\xe5\xd7"
"\xe4\xd7\xe5\x37\xe4\x55\xf2\x9b\xf2\x6a\xf9\x2d\x79\x8d\xfc\xb6\xbc"
"\x56\x7e\x47\x5e\x27\xaf\x97\xdf\x95\x37\xc8\xef\xc9\x1b\xe5\xf7\xe5"
"\x4d\xf2\x66\xf9\x03\xf9\x43\x79\x8b\xfc\x91\xbc\x55\xfe\x58\xde\x26"
"\x6f\x97\x77\xc8\x3b\xe5\x5d\xf2\x6e\x79\x8f\xbc\x57\xde\x27\xef\x97"
"\x3f\x91\x0f\xc8\x9f\xca\x07\xe5\x43\xf2\x61\xf9\x33\xf9\x88\xfc\xb9"
"\x7c\x54\xfe\x42\x3e\x26\x7f\x29\x1f\x97\xbf\x92\x4f\xc8\x5f\xcb\x27"
"\xe5\x6f\xe4\x53\xf2\xb7\xf2\x69\xf9\x3b\xf9\x8c\xfc\xbd\xfc\x83\xfc"
"\xa3\x7c\x56\xfe\x49\x3e\x27\x9f\x97\x2f\xc8\x3f\xcb\x17\xe5\x5f\xe4"
"\x4b\xf2\xaf\xf2\x65\xf9\x8a\x4c\x14\x46\x61\x15\x4e\xe1\x15\x41\x11"
"\x15\x49\x91\x15\x45\x51\x15\x4d\xd1\x15\x43\x31\x15\x4b\xb1\x15\x47"
"\x71\x15\x4f\xf1\x95\x40\x09\x95\x88\x92\x4d\xc9\xae\xe4\x50\x72\x2a"
"\xb9\x94\xdc\x4a\x1e\x25\xaf\x92\xa0\xe4\x53\xf2\x2b\x05\x94\x44\xa5"
"\xa0\x52\x48\x29\xac\x14\x51\x8a\x2a\xc5\x94\x1b\x94\xe2\x4a\x09\xa5"
"\xa4\x72\xa3\x52\x4a\x29\xad\x94\x51\xca\x2a\xe5\x94\x9b\x94\xf2\xca"
"\xcd\x4a\x05\xa5\xa2\x72\x8b\x52\x49\xb9\x55\xa9\xac\xdc\xa6\x54\x51"
"\xaa\x2a\xd5\x94\xea\x4a\x0d\xa5\xa6\x52\x4b\xa9\xad\xd4\x51\xea\x2a"
"\xf5\x94\xfa\xca\xed\x4a\x03\xa5\xa1\xd2\x48\x69\xac\x34\x51\x9a\x2a"
"\xcd\x94\xe6\x4a\x0b\xa5\xa5\xd2\x4a\xb9\x43\x69\xad\xb4\x51\xda\x2a"
"\x77\x2a\xed\x94\xbb\x94\xf6\x4a\x07\xa5\xa3\xd2\x49\xe9\xac\xdc\xad"
"\x74\x51\x92\x94\xae\x4a\x37\xa5\xbb\xd2\x43\xe9\xa9\xf4\x52\x7a\x2b"
"\x7d\x94\xbe\x4a\x3f\xa5\xbf\x72\x8f\x32\x40\x19\xa8\x0c\x52\x06\x2b"
"\x43\x94\x7b\x95\xa1\xca\x30\xe5\x3e\xe5\x7e\x65\xb8\xf2\x80\xf2\xa0"
"\xf2\x90\x32\x42\x79\x58\x19\xa9\x3c\xa2\x8c\x52\x1e\x55\x46\x2b\x63"
"\x94\xb1\xca\x38\xe5\x31\xe5\x71\xe5\x09\xe5\x49\x65\xbc\x32\x41\x99"
"\xa8\x4c\x52\x9e\x52\x9e\x56\x26\x2b\xcf\x28\x53\x94\xa9\xca\x34\xe5"
"\x59\x65\xba\xf2\x9c\x32\x43\x99\xa9\xcc\x52\x66\x2b\x73\x94\xe7\x95"
"\xb9\xca\x0b\xca\x3c\xe5\x45\x65\xbe\xb2\x40\x59\xa8\x2c\x52\x16\x2b"
"\x2f\x29\x4b\x94\x97\x95\xa5\xca\x2b\xca\x32\xe5\x55\x65\xb9\xb2\x42"
"\x59\xa9\xbc\xa6\xbc\xae\xbc\xa1\xac\x52\xde\x54\x56\x2b\x6f\x29\x6b"
"\x94\xb7\x95\xb5\xca\x3b\xca\x3a\x65\xbd\xf2\xae\xb2\x41\x79\x4f\xd9"
"\xa8\xbc\xaf\x6c\x52\x36\x2b\x1f\x28\x1f\x2a\x5b\x94\x8f\x94\xad\xca"
"\xc7\xca\x36\x65\xbb\xb2\x43\xd9\xa9\xec\x52\x76\x2b\x7b\x94\xbd\xca"
"\x3e\x65\xbf\xf2\x89\x72\x40\xf9\x54\x39\xa8\x1c\x52\x0e\x2b\x9f\x29"
"\x47\x94\xcf\x95\xa3\xca\x17\xca\x31\xe5\x4b\xe5\xb8\xf2\x95\x72\x42"
"\xf9\x5a\x39\xa9\x7c\xa3\x9c\x52\xbe\x55\x4e\x2b\xdf\x29\x67\x94\xef"
"\x95\x1f\x94\x1f\x95\xb3\xca\x4f\xca\x39\xe5\xbc\x72\x41\xf9\x59\xb9"
"\xa8\xfc\xa2\x5c\x52\x7e\x55\x2e\x2b\x57\x14\xa2\x32\x2a\xab\x72\x2a"
"\xaf\x0a\xaa\xa8\x4a\xaa\xac\x2a\xaa\xaa\x6a\xaa\xae\x1a\xaa\xa9\x5a"
"\xaa\xad\x3a\xaa\xab\x7a\xaa\xaf\x06\x6a\xa8\x46\xd4\x6c\x6a\x76\x35"
"\x87\x9a\x53\xcd\xa5\xe6\x56\xf3\xa8\x79\xd5\x04\x35\x9f\x9a\x5f\x2d"
"\xa0\x26\xaa\x05\xd5\x42\x6a\x61\xb5\x88\x5a\x54\x2d\xa6\xde\xa0\x16"
"\x57\x4b\xa8\x25\xd5\x1b\xd5\x52\x6a\x69\xb5\x8c\x5a\x56\x2d\xa7\xde"
"\xa4\x96\x57\x6f\x56\x2b\xa8\x15\xd5\x5b\xd4\x4a\xea\xad\x6a\x65\xf5"
"\x36\xb5\x8a\x5a\x55\xad\xa6\x56\x57\x6b\xa8\x35\xd5\x5a\x6a\x6d\xb5"
"\x8e\x5a\x57\xad\xa7\xd6\x57\x6f\x57\x1b\xa8\x0d\xd5\x46\x6a\x63\xb5"
"\x89\xda\x54\x6d\xa6\x36\x57\x5b\xa8\x2d\xd5\x56\xea\x1d\x6a\x6b\xb5"
"\x8d\xda\x56\xbd\x53\x6d\xa7\xde\xa5\xb6\x57\x3b\xa8\x1d\xd5\x4e\x6a"
"\x67\xf5\x6e\xb5\x8b\x9a\xa4\x76\x55\xbb\xa9\xdd\xd5\x1e\x6a\x4f\xb5"
"\x97\xda\x5b\xed\xa3\xf6\x55\xfb\xa9\xfd\xd5\x7b\xd4\x01\xea\x40\x75"
"\x90\x3a\x58\x1d\xa2\xde\xab\x0e\x55\x87\xa9\xf7\xa9\xf7\xab\xc3\xd5"
"\x07\xd4\x07\xd5\x87\xd4\x11\xea\xc3\xea\x48\xf5\x11\x75\x94\xfa\xa8"
"\x3a\x5a\x1d\xa3\x8e\x55\xc7\xa9\x8f\xa9\x8f\xab\x4f\xa8\x4f\xaa\xe3"
"\xd5\x09\xea\x44\x75\x92\xfa\x94\xfa\xb4\x3a\x59\x7d\x46\x9d\xa2\x4e"
"\x55\xa7\xa9\xcf\xaa\xd3\xd5\xe7\xd4\x19\xea\x4c\x75\x96\x3a\x5b\x9d"
"\xa3\x3e\xaf\xce\x55\x5f\x50\xe7\xa9\x2f\xaa\xf3\xd5\x05\xea\x42\x75"
"\x91\x9a\xa0\xbe\xa4\x2e\x51\x5f\x56\x97\xaa\xaf\xa8\xcb\xd4\x57\xd5"
"\xe5\xea\x0a\x75\xa5\xfa\x9a\xfa\xba\xfa\x86\xba\x4a\x7d\x53\x5d\xad"
"\xbe\xa5\xae\x51\xdf\x56\xd7\xaa\xef\xa8\xeb\xd4\xf5\xea\xbb\xea\x06"
"\xf5\x3d\x75\xa3\xfa\xbe\xba\x49\xdd\xac\x7e\xa0\x7e\xa8\x6e\x51\x3f"
"\x52\xb7\xaa\x1f\xab\xdb\xd4\xed\xea\x0e\x75\xa7\xba\x4b\xdd\xad\xee"
"\x51\xf7\xaa\xfb\xd4\xfd\xea\x27\xea\x01\xf5\x53\xf5\xa0\x7a\x48\x3d"
"\xac\x7e\xa6\x1e\x51\x3f\x57\x8f\xaa\x5f\xa8\xc7\xd4\x2f\xd5\xe3\xea"
"\x57\xea\x09\xf5\x6b\xf5\xa4\xfa\x8d\x7a\x4a\xfd\x56\x3d\xad\x7e\xa7"
"\x9e\x51\xbf\x57\x7f\x50\x7f\x54\xcf\xaa\x3f\xa9\xe7\xd4\xf3\xea\x05"
"\xf5\x67\xf5\xa2\xfa\x8b\x7a\x49\xfd\x55\xbd\xac\x5e\x51\x89\xc6\x68"
"\xac\xc6\x69\xbc\x26\x68\xa2\x26\x69\xb2\xa6\x68\xaa\xa6\x69\xba\x66"
"\x68\xa6\x66\x69\xb6\xe6\x68\xae\xe6\x69\xbe\x16\x68\xa1\x16\xd1\xb2"
"\x69\xd9\xb5\x1c\x5a\x4e\x2d\x97\x96\x5b\xcb\xa3\xe5\xd5\x12\xb4\x7c"
"\x5a\x7e\xad\x80\x96\xa8\x15\xd4\x0a\x69\x85\xb5\x22\x5a\x51\xad\x98"
"\x76\x83\x56\x5c\x2b\xa1\x95\xd4\x6e\xd4\x4a\x69\xa5\xb5\x32\x5a\x59"
"\xad\x9c\x76\x93\x56\x5e\xbb\x59\xab\xa0\x55\xd4\x6e\xd1\x2a\x69\xb7"
"\x6a\x95\xb5\xdb\xb4\x2a\x5a\x55\xad\x9a\x56\x5d\xab\xa1\xd5\xd4\x6a"
"\x69\xb5\xb5\x3a\x5a\x5d\xad\x9e\x56\x5f\xbb\x5d\x6b\xa0\x35\xd4\x1a"
"\x69\x8d\xb5\x26\x5a\x53\xad\x99\xd6\x5c\x6b\xa1\xb5\xd4\x5a\x69\x77"
"\x68\xad\xb5\x36\x5a\x5b\xed\x4e\xad\x9d\x76\x97\xd6\x5e\xeb\xa0\x75"
"\xd4\x3a\x69\x9d\xb5\xbb\xb5\x2e\x5a\x92\xd6\x55\xeb\xa6\x75\xd7\x7a"
"\x68\x3d\xb5\x5e\x5a\x6f\xad\x8f\xd6\x57\xeb\xa7\xf5\xd7\xee\xd1\x06"
"\x68\x03\xb5\x41\xda\x60\x6d\x88\x76\xaf\x36\x54\x1b\xa6\xdd\xa7\xdd"
"\xaf\x0d\xd7\x1e\xd0\x1e\xd4\x1e\xd2\x46\x68\x0f\x6b\x23\xb5\x47\xb4"
"\x51\xda\xa3\xda\x68\x6d\x8c\x36\x56\x1b\xa7\x3d\xa6\x3d\xae\x3d\xa1"
"\x3d\xa9\x8d\xd7\x26\x68\x13\xb5\x49\xda\x53\xda\xd3\xda\x64\xed\x19"
"\x6d\x8a\x36\x55\x9b\xa6\x3d\xab\x4d\xd7\x9e\xd3\x66\x68\x33\xb5\x59"
"\xda\x6c\x6d\x8e\xf6\xbc\x36\x57\x7b\x41\x9b\xa7\xbd\xa8\xcd\xd7\x16"
"\x68\x0b\xb5\x45\xda\x62\xed\x25\x6d\x89\xf6\xb2\xb6\x54\x7b\x45\x5b"
"\xa6\xbd\xaa\x2d\xd7\x56\x68\x2b\xb5\xd7\xb4\xd7\xb5\x37\xb4\x55\xda"
"\x9b\xda\x6a\xed\x2d\x6d\x8d\xf6\xb6\xb6\x56\x7b\x47\x5b\xa7\xad\xd7"
"\xde\xd5\x36\x68\xef\x69\x1b\xb5\xf7\xb5\x4d\xda\x66\xed\x03\xed\x43"
"\x6d\x8b\xf6\x91\xb6\x55\xfb\x58\xdb\xa6\x6d\xd7\x76\x68\x3b\xb5\x5d"
"\xda\x6e\x6d\x8f\xb6\x57\xdb\xa7\xed\xd7\x3e\xd1\x0e\x68\x9f\x6a\x07"
"\xb5\x43\xda\x61\xed\x33\xed\x88\xf6\xb9\x76\x54\xfb\x42\x3b\xa6\x7d"
"\xa9\x1d\xd7\xbe\xd2\x4e\x68\x5f\x6b\x27\xb5\x6f\xb4\x53\xda\xb7\xda"
"\x69\xed\x3b\xed\x8c\xf6\xbd\xf6\x83\xf6\xa3\x76\x56\xfb\x49\x3b\xa7"
"\x9d\xd7\x2e\x68\x3f\x6b\x17\xb5\x5f\xb4\x4b\xda\xaf\xda\x65\xed\x8a"
"\x46\x74\x46\x67\x75\x4e\xe7\x75\x41\x17\x75\x49\x97\x75\x45\x57\x75"
"\x4d\xd7\x75\x43\x37\x75\x4b\xb7\x75\x47\x77\x75\x4f\xf7\xf5\x40\x0f"
"\xf5\x88\x9e\x4d\xcf\xae\xe7\xd0\x73\xea\xb9\xf4\xdc\x7a\x1e\x3d\xaf"
"\x9e\xa0\xe7\xd3\xf3\xeb\x05\xf4\x44\xbd\xa0\x5e\x48\x2f\xac\x17\xd1"
"\x8b\xea\xc5\xf4\x1b\xf4\xe2\x7a\x09\xbd\xa4\x7e\xa3\x5e\x4a\x2f\xad"
"\x97\xd1\xcb\xea\xe5\xf4\x9b\xf4\xf2\xfa\xcd\x7a\x05\xbd\xa2\x7e\x8b"
"\x5e\x49\xbf\x55\xaf\xac\xdf\xa6\x57\xd1\xab\xea\xd5\xf4\xea\x7a\x0d"
"\xbd\xa6\x5e\x4b\xaf\xad\xd7\xd1\xeb\xea\xf5\xf4\xfa\xfa\xed\x7a\x03"
"\xbd\xa1\xde\x48\x6f\xac\x37\xd1\x9b\xea\xcd\xf4\xe6\x7a\x0b\xbd\xa5"
"\xde\x4a\xbf\x43\x6f\xad\xb7\xd1\xdb\xea\x77\xea\xed\xf4\xbb\xf4\xf6"
"\x7a\x07\xbd\xa3\xde\x49\xef\xac\xdf\xad\x77\xd1\x93\xf4\xae\x7a\x37"
"\xbd\xbb\xde\x43\xef\xa9\xf7\xd2\x7b\xeb\x7d\xf4\xbe\x7a\x3f\xbd\xbf"
"\x7e\x8f\x3e\x40\x1f\xa8\x0f\xd2\x07\xeb\x43\xf4\x7b\xf5\xa1\xfa\x30"
"\xfd\x3e\xfd\x7e\x7d\xb8\xfe\x80\xfe\xa0\xfe\x90\x3e\x42\x7f\x58\x1f"
"\xa9\x3f\xa2\x8f\xd2\x1f\xd5\x47\xeb\x63\xf4\xb1\xfa\x38\xfd\x31\xfd"
"\x71\xfd\x09\xfd\x49\x7d\xbc\x3e\x41\x9f\xa8\x4f\xd2\x9f\xd2\x9f\xd6"
"\x27\xeb\xcf\xe8\x53\xf4\xa9\xfa\x34\xfd\x59\x7d\xba\xfe\x9c\x3e\x43"
"\x9f\xa9\xcf\xd2\x67\xeb\x73\xf4\xe7\xf5\xb9\xfa\x0b\xfa\x3c\xfd\x45"
"\x7d\xbe\xbe\x40\x5f\xa8\x2f\xd2\x17\xeb\x2f\xe9\x4b\xf4\x97\xf5\xa5"
"\xfa\x2b\xfa\x32\xfd\x55\x7d\xb9\xbe\x42\x5f\xa9\xbf\xa6\xbf\xae\xbf"
"\xa1\xaf\xd2\xdf\xd4\x57\xeb\x6f\xe9\x6b\xf4\xb7\xf5\xb5\xfa\x3b\xfa"
"\x3a\x7d\xbd\xfe\xae\xbe\x41\x7f\x4f\xdf\xa8\xbf\xaf\x6f\xd2\x37\xeb"
"\x1f\xe8\x1f\xea\x5b\xf4\x8f\xf4\xad\xfa\xc7\xfa\x36\x7d\xbb\xbe\x43"
"\xdf\xa9\xef\xd2\x77\xeb\x7b\xf4\xbd\xfa\x3e\x7d\xbf\xfe\x89\x7e\x40"
"\xff\x54\x3f\xa8\x1f\xd2\x0f\xeb\x9f\xe9\x47\xf4\xcf\xf5\xa3\xfa\x17"
"\xfa\x31\xfd\x4b\xfd\xb8\xfe\x95\x7e\x42\xff\x5a\x3f\xa9\x7f\xa3\x9f"
"\xd2\xbf\xd5\x4f\xeb\xdf\xe9\x67\xf4\xef\xf5\x1f\xf4\x1f\xf5\xb3\xfa"
"\x4f\xfa\x39\xfd\xbc\x7e\x41\xff\x59\xbf\xa8\xff\xa2\x5f\xd2\x7f\xd5"
"\x2f\xeb\x57\x74\x62\x30\x06\x6b\x70\x06\x6f\x08\x86\x68\x48\x86\x6c"
"\x28\x86\x6a\x68\x86\x6e\x18\x86\x69\x58\x86\x6d\x38\x86\x6b\x78\x86"
"\x6f\x04\x46\x68\x44\x8c\x6c\x46\x76\x23\x87\x91\xd3\xc8\x65\xe4\x36"
"\xf2\x18\x79\x8d\x04\x23\x9f\x91\xdf\x28\x60\x24\x1a\x05\x8d\x42\x46"
"\x61\xa3\x88\x51\xd4\x28\x66\xdc\x60\x14\x37\x4a\x18\x25\x8d\x1b\x8d"
"\x52\x46\x69\xa3\x8c\x51\xd6\x28\x67\xdc\x64\x94\x37\x6e\x36\x2a\x18"
"\x15\x8d\x5b\x8c\x4a\xc6\xad\x46\x65\xe3\x36\xa3\x8a\x51\xd5\xa8\x66"
"\x54\x37\x6a\x18\x35\x8d\x5a\x46\x6d\xa3\x8e\x51\xd7\xa8\x67\xd4\x37"
"\x6e\x37\x1a\x18\x0d\x8d\x46\x46\x63\xa3\x89\xd1\xd4\x68\x66\x34\x37"
"\x5a\x18\x2d\x8d\x56\xc6\x1d\x46\x6b\xa3\x8d\xd1\xd6\xb8\xd3\x68\x67"
"\xdc\x65\xb4\x37\x3a\x18\x1d\x8d\x4e\x46\x67\xe3\x6e\xa3\x8b\x91\x64"
"\x74\x35\xba\x19\xdd\x8d\x1e\x46\x4f\xa3\x97\xd1\xdb\xe8\x63\xf4\x35"
"\xfa\x19\xfd\x8d\x7b\x8c\x01\xc6\x40\x63\x90\x31\xd8\x18\x62\xdc\x6b"
"\x0c\x35\x86\x19\xf7\x19\xf7\x1b\xc3\x8d\x07\x8c\x07\x8d\x87\x8c\x11"
"\xc6\xc3\xc6\x48\xe3\x11\x63\x94\xf1\xa8\x31\xda\x18\x63\x8c\x35\xc6"
"\x19\x8f\x19\x8f\x1b\x4f\x18\x4f\x1a\xe3\x8d\x09\xc6\x44\x63\x92\xf1"
"\x54\x89\xd8\x49\x9c\xc6\x54\x63\x9a\xf1\xac\x31\xdd\x78\xce\x98\x61"
"\xcc\x34\x66\x19\xb3\x8d\x39\xc6\xf3\xc6\x5c\xe3\x05\x63\x9e\xf1\xa2"
"\x31\xdf\x58\x60\x2c\x34\x16\x19\x8b\x8d\x97\x8c\x25\xc6\xcb\xc6\x52"
"\xe3\x15\x63\x99\xf1\xaa\xb1\xdc\x58\x61\xac\x34\x5e\x33\x5e\x37\xde"
"\x30\x56\x19\x6f\x1a\xab\x8d\xb7\x8c\x35\xc6\xdb\xc6\x5a\xe3\x1d\x63"
"\x9d\xb1\xde\x78\xd7\xd8\x60\xbc\x67\x6c\x34\xde\x37\x36\x19\x9b\x8d"
"\x0f\x8c\x0f\x8d\x2d\xc6\x47\xc6\x56\xe3\x63\x63\x9b\xb1\xdd\xd8\x61"
"\xec\x34\x76\x19\xbb\x8d\x3d\xc6\x5e\x63\x9f\xb1\xdf\xf8\xc4\x38\x60"
"\x7c\x6a\x1c\x34\x0e\x19\x87\x8d\xcf\x8c\x23\xc6\xe7\xc6\x51\xe3\x0b"
"\xe3\x98\xf1\xa5\x71\xdc\xf8\xca\x38\x61\x7c\x6d\x9c\x34\xbe\x31\x4e"
"\x19\xdf\x1a\xa7\x8d\xef\x8c\x33\xc6\xf7\xc6\x0f\xc6\x8f\xc6\x59\xe3"
"\x27\xe3\x9c\x71\xde\xb8\x60\xfc\x6c\x5c\x34\x7e\x31\x2e\x19\xbf\x1a"
"\x97\x8d\x2b\x06\x31\x19\x93\x35\x39\x93\x37\x05\x53\x34\x25\x53\x36"
"\x15\x53\x35\x35\x53\x37\x0d\xd3\x34\x2d\xd3\x36\x1d\xd3\x35\x3d\xd3"
"\x37\x03\x33\x34\x23\x66\x36\x33\xbb\x99\xc3\xcc\x69\xe6\x32\x73\x9b"
"\x79\xcc\xbc\x66\x82\x99\xcf\xcc\x6f\x16\x30\x13\xcd\x82\x66\x21\xb3"
"\xb0\x59\xc4\x2c\x6a\x16\x33\x6f\x30\x8b\x9b\x25\xcc\x92\xe6\x8d\x66"
"\x29\xb3\xb4\x59\xc6\x2c\x6b\x96\x33\x6f\x32\xcb\x9b\x37\x9b\x15\xcc"
"\x8a\xe6\x2d\x66\x25\xf3\x56\xb3\xb2\x79\x9b\x59\xc5\xac\x6a\x56\x33"
"\xab\x9b\x35\xcc\x9a\x66\x2d\xb3\xb6\x59\xc7\xac\x6b\xd6\x33\xeb\x9b"
"\xb7\x9b\x0d\xcc\x86\x66\x23\xb3\xb1\xd9\xc4\x6c\x6a\x36\x33\x9b\x9b"
"\x2d\xcc\x96\x66\x2b\xf3\x0e\xb3\xb5\xd9\xc6\x6c\x6b\xde\x69\xb6\x33"
"\xef\x32\xdb\x9b\x1d\xcc\x8e\x66\x27\xb3\xb3\x79\xb7\xd9\xc5\x4c\x32"
"\xbb\x9a\xdd\xcc\xee\x66\x0f\xb3\xa7\xd9\xcb\xec\x6d\xf6\x31\xfb\x9a"
"\xfd\xcc\xfe\xe6\x3d\xe6\x00\x73\xa0\x39\xc8\x1c\x6c\x0e\x31\xef\x35"
"\x87\x9a\xc3\xcc\xfb\xcc\xfb\xcd\xe1\xe6\x03\xe6\x83\xe6\x43\xe6\x08"
"\xf3\x61\x73\xa4\xf9\x88\x39\xca\x7c\xd4\x1c\x6d\x8e\x31\xc7\x9a\xe3"
"\xcc\xc7\xcc\xc7\xcd\x27\xcc\x27\xcd\xf1\xe6\x04\x73\xa2\x39\xc9\x7c"
"\xca\x7c\xda\x9c\x6c\x3e\x63\x4e\x31\xa7\x9a\xd3\xcc\x67\xcd\xe9\xe6"
"\x73\xe6\x0c\x73\xa6\x39\xcb\x9c\x6d\xce\x31\x9f\x37\xe7\x9a\x2f\x98"
"\xf3\xcc\x17\xcd\xf9\xe6\x02\x73\xa1\xb9\xc8\x5c\x6c\xbe\x64\x2e\x31"
"\x5f\x36\x97\x9a\xaf\x98\xcb\xcc\x57\xcd\xe5\xe6\x0a\x73\xa5\xf9\x9a"
"\xf9\xba\xf9\x86\xb9\xca\x7c\xd3\x5c\x6d\xbe\x65\xae\x31\xdf\x36\xd7"
"\x9a\xef\x98\xeb\xcc\xf5\xe6\xbb\xe6\x06\xf3\x3d\x73\xa3\xf9\xbe\xb9"
"\xc9\xdc\x6c\x7e\x60\x7e\x68\x6e\x31\x3f\x32\xb7\x9a\x1f\x9b\xdb\xcc"
"\xed\xe6\x0e\x73\xa7\xb9\xcb\xdc\x6d\xee\x31\xf7\x9a\xfb\xcc\xfd\xe6"
"\x27\xe6\x01\xf3\x53\xf3\xa0\x79\xc8\x3c\x6c\x7e\x66\x1e\x31\x3f\x37"
"\x8f\x9a\x5f\x98\xc7\xcc\x2f\xcd\xe3\xe6\x57\xe6\x09\xf3\x6b\xf3\xa4"
"\xf9\x8d\x79\xca\xfc\xd6\x3c\x6d\x7e\x67\x9e\x31\xbf\x37\x7f\x30\x7f"
"\x34\xcf\x9a\x3f\x99\xe7\xcc\xf3\xe6\x05\xf3\x67\xf3\xa2\xf9\x8b\x79"
"\xc9\xfc\xd5\xbc\x6c\x5e\x31\x89\xc5\x58\xac\xc5\x59\xbc\x25\x58\xa2"
"\x25\x59\xb2\xa5\x58\xaa\xa5\x59\xba\x65\x58\xa6\x65\x59\xb6\xe5\x58"
"\xae\xe5\x59\xbe\x15\x58\xa1\x15\xb1\xb2\x59\xd9\xad\x1c\x56\x4e\x2b"
"\x97\x95\xdb\xca\x63\xe5\xb5\x12\xac\x7c\x56\x7e\xab\x80\x95\x68\x15"
"\xb4\x0a\x59\x85\xad\x22\x56\x51\xab\x98\x75\x83\x55\xdc\x2a\x61\x95"
"\xb4\x6e\xb4\x4a\x59\xa5\xad\x32\x56\x59\xab\x9c\x75\x93\x55\xde\xba"
"\xd9\xaa\x60\x55\xb4\x6e\xb1\x2a\x59\xb7\x5a\x95\xad\xdb\xac\x2a\x56"
"\x55\xab\x9a\x55\xdd\xaa\x61\xd5\xb4\x6a\x59\xb5\xad\x3a\x56\x5d\xab"
"\x9e\x55\xdf\xba\xdd\x6a\x60\x35\xb4\x1a\x59\x8d\xad\x26\x56\x53\xab"
"\x99\xd5\xdc\x6a\x61\xb5\xb4\x5a\x59\x77\x58\xad\xad\x36\x56\x5b\xeb"
"\x4e\xab\x9d\x75\x97\xd5\xde\xea\x60\x75\xb4\x3a\x59\x9d\xad\xbb\xad"
"\x2e\x56\x92\xd5\xd5\xea\x66\x75\xb7\x7a\x58\x3d\xad\x5e\x56\x6f\xab"
"\x8f\xd5\xd7\xea\x67\xf5\xb7\xee\xb1\x06\x58\x03\xad\x41\xd6\x60\x6b"
"\x88\x75\xaf\x35\xd4\x1a\x66\xdd\x67\xdd\x6f\x0d\xb7\x1e\xb0\x1e\xb4"
"\x1e\xb2\x46\x58\x0f\x5b\x23\xad\x47\xac\x51\xd6\xa3\xd6\x68\x6b\x8c"
"\x35\xd6\x1a\x67\x3d\x66\x3d\x6e\x3d\x61\x3d\x69\x8d\xb7\x26\x58\x13"
"\xad\x49\xd6\x53\xd6\xd3\xd6\x64\xeb\x19\x6b\x8a\x35\xd5\x9a\x66\x3d"
"\x6b\x4d\xb7\x9e\xb3\x66\x58\x33\xad\x59\xd6\x6c\x6b\x8e\xf5\xbc\x35"
"\xd7\x7a\xc1\x9a\x67\xbd\x68\xcd\xb7\x16\x58\x0b\xad\x45\xd6\x62\xeb"
"\x25\x6b\x89\xf5\xb2\xb5\xd4\x7a\xc5\x5a\x66\xbd\x6a\x2d\xb7\x56\x58"
"\x2b\xad\xd7\xac\xd7\xad\x37\xac\x55\xd6\x9b\xd6\x6a\xeb\x2d\x6b\x8d"
"\xf5\xb6\xb5\xd6\x7a\xc7\x5a\x67\xad\xb7\xde\xb5\x36\x58\xef\x59\x1b"
"\xad\xf7\xad\x4d\xd6\x66\xeb\x03\xeb\x43\x6b\x8b\xf5\x91\xb5\xd5\xfa"
"\xd8\xda\x66\x6d\xb7\x76\x58\x3b\xad\x5d\xd6\x6e\x6b\x8f\xb5\xd7\xda"
"\x67\xed\xb7\x3e\xb1\x0e\x58\x9f\x5a\x07\xad\x43\xd6\x61\xeb\x33\xeb"
"\x88\xf5\xb9\x75\xd4\xfa\xc2\x3a\x66\x7d\x69\x1d\xb7\xbe\xb2\x4e\x58"
"\x5f\x5b\x27\xad\x6f\xac\x53\xd6\xb7\xd6\x69\xeb\x3b\xeb\x8c\xf5\xbd"
"\xf5\x83\xf5\xa3\x75\xd6\xfa\xc9\x3a\x67\x9d\xb7\x2e\x58\x3f\x5b\x17"
"\xad\x5f\xac\x4b\xd6\xaf\xd6\x65\xeb\x8a\x45\x6c\xc6\x66\x6d\xce\xe6"
"\x6d\xc1\x16\x6d\xc9\x96\x6d\xc5\x56\x6d\xcd\xd6\x6d\xc3\x36\x6d\xcb"
"\xb6\x6d\xc7\x76\x6d\xcf\xf6\xed\xc0\x0e\xed\x88\x9d\xcd\xce\x6e\xe7"
"\xb0\x73\xda\xb9\xec\xdc\x76\x1e\x3b\xaf\x9d\x60\xe7\xb3\xf3\xdb\x05"
"\xec\x44\xbb\xa0\x5d\xc8\x2e\x6c\x17\xb1\x8b\xda\xc5\xec\x1b\xec\xe2"
"\x76\x09\xbb\xa4\x7d\xa3\x5d\xca\x2e\x6d\x97\xb1\xcb\xda\xe5\xec\x9b"
"\xec\xf2\xf6\xcd\x76\x05\xbb\xa2\x7d\x8b\x5d\xc9\xbe\xd5\xae\x6c\xdf"
"\x66\x57\xb1\xab\xda\xd5\xec\xea\x76\x0d\xbb\xa6\x5d\xcb\xae\x6d\xd7"
"\xb1\xeb\xda\xf5\xec\xfa\xf6\xed\x76\x03\xbb\xa1\xdd\xc8\x6e\x6c\x37"
"\xb1\x9b\xda\xcd\xec\xe6\x76\x0b\xbb\xa5\xdd\xca\xbe\xc3\x6e\x6d\xb7"
"\xb1\xdb\xda\x77\xda\xed\xec\xbb\xec\xf6\x76\x07\xbb\xa3\xdd\xc9\xee"
"\x6c\xdf\x6d\x77\xb1\x93\xec\xae\x76\x37\xbb\xbb\xdd\xc3\xee\x69\xf7"
"\xb2\x7b\xdb\x7d\xec\xbe\x76\x3f\xbb\xbf\x7d\x8f\x3d\xc0\x1e\x68\x0f"
"\xb2\x07\xdb\x43\xec\x7b\xed\xa1\xf6\x30\xfb\x3e\xfb\x7e\x7b\xb8\xfd"
"\x80\xfd\xa0\xfd\x90\x3d\xc2\x7e\xd8\x1e\x69\x3f\x62\x8f\xb2\x1f\xb5"
"\x47\xdb\x63\xec\xb1\xf6\x38\xfb\x31\xfb\x71\xfb\x09\xfb\x49\x7b\xbc"
"\x3d\xc1\x9e\x68\x4f\xb2\x9f\xb2\x9f\xb6\x27\xdb\xcf\xd8\x53\xec\xa9"
"\xf6\x34\xfb\x59\x7b\xba\xfd\x9c\x3d\xc3\x9e\x69\xcf\xb2\x67\xdb\x73"
"\xec\xe7\xed\xb9\xf6\x0b\xf6\x3c\xfb\x45\x7b\xbe\xbd\xc0\x5e\x68\x2f"
"\xb2\x17\xdb\x2f\xd9\x4b\xec\x97\xed\xa5\xf6\x2b\xf6\x32\xfb\x55\x7b"
"\xb9\xbd\xc2\x5e\x69\xbf\x66\xbf\x6e\xbf\x61\xaf\xb2\xdf\xb4\x57\xdb"
"\x6f\xd9\x6b\xec\xb7\xed\xb5\xf6\x3b\xf6\x3a\x7b\xbd\xfd\xae\xbd\xc1"
"\x7e\xcf\xde\x68\xbf\x6f\x6f\xb2\x37\xdb\x1f\xd8\x1f\xda\x5b\xec\x8f"
"\xec\xad\xf6\xc7\xf6\x36\x7b\xbb\xbd\xc3\xde\x69\xef\xb2\x77\xdb\x7b"
"\xec\xbd\xf6\x3e\x7b\xbf\xfd\x89\x7d\xc0\xfe\xd4\x3e\x68\x1f\xb2\x0f"
"\xdb\x9f\xd9\x47\xec\xcf\xed\xa3\xf6\x17\xf6\x31\xfb\x4b\xfb\xb8\xfd"
"\x95\x7d\xc2\xfe\xda\x3e\x69\x7f\x63\x9f\xb2\xbf\xb5\x4f\xdb\xdf\xd9"
"\x67\xec\xef\xed\x1f\xec\x1f\xed\xb3\xf6\x4f\xf6\x39\xfb\xbc\x7d\xc1"
"\xfe\xd9\xbe\x68\xff\x62\x5f\xb2\x7f\xb5\x2f\xdb\x57\x6c\xe2\x30\x0e"
"\xeb\x70\x0e\xef\x08\x8e\xe8\x48\x8e\xec\x28\x8e\xea\x68\x8e\xee\x18"
"\x8e\xe9\x58\x8e\xed\x38\x8e\xeb\x78\x8e\xef\x04\x4e\xe8\x44\x9c\x6c"
"\x4e\x76\x27\x87\x93\xd3\xc9\xe5\xe4\x76\xf2\x38\x79\x9d\x04\x27\x9f"
"\x93\xdf\x29\xe0\x24\x3a\x05\x9d\x42\x4e\x61\xa7\x88\x53\xd4\x29\xe6"
"\xdc\xe0\x14\x77\x4a\x38\x25\x9d\x1b\x9d\x52\x4e\x69\xa7\x8c\x53\xd6"
"\x29\xe7\xdc\xe4\x94\x77\x6e\x76\x2a\x38\x15\x9d\x5b\x9c\x4a\xce\xad"
"\x4e\x65\xe7\x36\xa7\x8a\x53\xd5\xa9\xe6\x54\x77\x6a\x38\x35\x9d\x5a"
"\x4e\x6d\xa7\x8e\x53\xd7\xa9\xe7\xd4\x77\x6e\x77\x1a\x38\x0d\x9d\x46"
"\x4e\x63\xa7\x89\xd3\xd4\x69\xe6\x34\x77\x5a\x38\x2d\x9d\x56\xce\x1d"
"\x4e\x6b\xa7\x8d\xd3\xd6\xb9\xd3\x69\xe7\xdc\xe5\xb4\x77\x3a\x38\x1d"
"\x9d\x4e\x4e\x67\xe7\x6e\xa7\x8b\x93\xe4\x74\x75\xba\x39\xdd\x9d\x1e"
"\x4e\x4f\xa7\x97\xd3\xdb\xe9\xe3\xf4\x75\xfa\x39\xfd\x9d\x7b\x9c\x01"
"\xce\x40\x67\x90\x33\xd8\x19\xe2\xdc\xeb\x0c\x75\x86\x39\xf7\x39\xf7"
"\x3b\xc3\x9d\x07\x9c\x07\x9d\x87\x9c\x11\xce\xc3\xce\x48\xe7\x11\x67"
"\x94\xf3\xa8\x33\xda\x19\xe3\x8c\x75\xc6\x39\x8f\x39\x8f\x3b\x4f\x38"
"\x4f\x3a\xe3\x9d\x09\xce\x44\x67\x92\xf3\x94\xf3\xb4\x33\xd9\x79\xc6"
"\x99\xe2\x4c\x75\xa6\x39\xcf\x3a\xd3\x9d\xe7\x9c\x19\xce\x4c\x67\x96"
"\x33\xdb\x99\xe3\x3c\xef\xcc\x75\x5e\x70\xe6\x39\x2f\x3a\xf3\x9d\x05"
"\xce\x42\x67\x91\xb3\xd8\x79\xc9\x59\xe2\xbc\xec\x2c\x75\x5e\x71\x96"
"\x39\xaf\x3a\xcb\x9d\x15\xce\x4a\xe7\x35\xe7\x75\xe7\x0d\x67\x95\xf3"
"\xa6\xb3\xda\x79\xcb\x59\xe3\xbc\xed\xac\x75\xde\x71\xd6\x39\xeb\x9d"
"\x77\x9d\x0d\xce\x7b\xce\x46\xe7\x7d\x67\x93\xb3\xd9\xf9\xc0\xf9\xd0"
"\xd9\xe2\x7c\xe4\x6c\x75\x3e\x76\xb6\x39\xdb\x9d\x1d\xce\x4e\x67\x97"
"\xb3\xdb\xd9\xe3\xec\x75\xf6\x39\xfb\x9d\x4f\x9c\x03\xce\xa7\xce\x41"
"\xe7\x90\x73\xd8\xf9\xcc\x39\xe2\x7c\xee\x1c\x75\xbe\x70\x8e\x39\x5f"
"\x3a\xc7\x9d\xaf\x9c\x13\xce\xd7\xce\x49\xe7\x1b\xe7\x94\xf3\xad\x73"
"\xda\xf9\xce\x39\xe3\x7c\xef\xfc\xe0\xfc\xe8\x9c\x75\x7e\x72\xce\x39"
"\xe7\x9d\x0b\xce\xcf\xce\x45\xe7\x17\xe7\x92\xf3\xab\x73\xd9\xb9\xe2"
"\x10\x97\x71\x59\x97\x73\x79\x57\x70\x45\x57\x72\x65\x57\x71\x55\x57"
"\x73\x75\xd7\x70\x4d\xd7\x72\x6d\xd7\x71\x5d\xd7\x73\x7d\x37\x70\x43"
"\x37\xe2\x66\x73\xb3\xbb\x39\xdc\x9c\x6e\x2e\x37\xb7\x9b\xc7\xcd\xeb"
"\x26\xb8\xf9\xdc\xfc\x6e\x01\x37\xd1\x2d\xe8\x16\x72\x0b\xbb\x45\xdc"
"\xa2\x6e\x31\xf7\x06\xb7\xb8\x5b\xc2\x2d\xe9\xde\xe8\x96\x72\x4b\xbb"
"\x65\xdc\xb2\x6e\x39\xf7\x26\xb7\xbc\x7b\xb3\x5b\xc1\xad\xe8\xde\xe2"
"\x56\x72\x6f\x75\x2b\xbb\xb7\xb9\x55\xdc\xaa\x6e\x35\xb7\xba\x5b\xc3"
"\xad\xe9\xd6\x72\x6b\xbb\x75\xdc\xba\x6e\x3d\xb7\xbe\x7b\xbb\xdb\xc0"
"\x6d\xe8\x36\x72\x1b\xbb\x4d\xdc\xa6\x6e\x33\xb7\xb9\xdb\xc2\x6d\xe9"
"\xb6\x72\xef\x70\x5b\xbb\x6d\xdc\xb6\xee\x9d\x6e\x3b\xf7\x2e\xb7\xbd"
"\xdb\xc1\xed\xe8\x76\x72\x3b\xbb\x77\xbb\x5d\xdc\x24\xb7\xab\xdb\xcd"
"\xed\xee\xf6\x70\x7b\xba\xbd\xdc\xde\x6e\x1f\xb7\xaf\xdb\xcf\xed\xef"
"\xde\xe3\x0e\x70\x07\xba\x83\xdc\xc1\xee\x10\xf7\x5e\x77\xa8\x3b\xcc"
"\xbd\xcf\xbd\xdf\x1d\xee\x3e\xe0\x3e\xe8\x3e\xe4\x8e\x70\x1f\x76\x47"
"\xba\x8f\xb8\xa3\xdc\x47\xdd\xd1\xee\x18\x77\xac\x3b\xce\x7d\xcc\x7d"
"\xdc\x7d\xc2\x7d\xd2\x1d\xef\x4e\x70\x27\xba\x93\xdc\xa7\xdc\xa7\xdd"
"\xc9\xee\x33\xee\x14\x77\xaa\x3b\xcd\x7d\xd6\x9d\xee\x3e\xe7\xce\x70"
"\x67\xba\xb3\xdc\xd9\xee\x1c\xf7\x79\x77\xae\xfb\x82\x3b\xcf\x7d\xd1"
"\x9d\xef\x2e\x70\x17\xba\x8b\xdc\xc5\xee\x4b\xee\x12\xf7\x65\x77\xa9"
"\xfb\x8a\xbb\xcc\x7d\xd5\x5d\xee\xae\x70\x57\xba\xaf\xb9\xaf\xbb\x6f"
"\xb8\xab\xdc\x37\xdd\xd5\xee\x5b\xee\x1a\xf7\x6d\x77\xad\xfb\x8e\xbb"
"\xce\x5d\xef\xbe\xeb\x6e\x70\xdf\x73\x37\xba\xef\xbb\x9b\xdc\xcd\xee"
"\x07\xee\x87\xee\x16\xf7\x23\x77\xab\xfb\xb1\xbb\xcd\xdd\xee\xee\x70"
"\x77\xba\xbb\xdc\xdd\xee\x1e\x77\xaf\xbb\xcf\xdd\xef\x7e\xe2\x1e\x70"
"\x3f\x75\x0f\xba\x87\xdc\xc3\xee\x67\xee\x11\xf7\x73\xf7\xa8\xfb\x85"
"\x7b\xcc\xfd\xd2\x3d\xee\x7e\xe5\x9e\x70\xbf\x76\x4f\xba\xdf\xb8\xa7"
"\xdc\x6f\xdd\xd3\xee\x77\xee\x19\xf7\x7b\xf7\x07\xf7\x47\xf7\xac\xfb"
"\x93\x7b\xce\x3d\xef\x5e\x70\x7f\x76\x2f\xba\xbf\xb8\x97\xdc\x5f\xdd"
"\xcb\xee\x15\x97\x78\x8c\xc7\x7a\x9c\xc7\x7b\x82\x27\x7a\x92\x27\x7b"
"\x8a\xa7\x7a\x9a\xa7\x7b\x86\x67\x7a\x96\x67\x7b\x8e\xe7\x7a\x9e\xe7"
"\x7b\x81\x17\x7a\x11\x2f\x9b\x97\xdd\xcb\xe1\xe5\xf4\x72\x79\xb9\xbd"
"\x3c\x5e\x5e\x2f\xc1\xcb\xe7\xe5\xf7\x0a\x78\x89\x5e\x41\xaf\x90\x57"
"\xd8\x2b\xe2\x15\xf5\x8a\x79\x37\x78\xc5\xbd\x12\x5e\x49\xef\x46\xaf"
"\x94\x57\xda\x2b\xe3\x95\xf5\xca\x79\x37\x79\xe5\xbd\x9b\xbd\x0a\x5e"
"\x45\xef\x16\xaf\x92\x77\xab\x57\xd9\xbb\xcd\xab\xe2\x55\xf5\xaa\x79"
"\xd5\xbd\x1a\x5e\x4d\xaf\x96\x57\xdb\xab\xe3\xd5\xf5\xea\x79\xf5\xbd"
"\xdb\xbd\x06\x5e\x43\xaf\x91\xd7\xd8\x6b\xe2\x35\xf5\x9a\x79\xcd\xbd"
"\x16\x5e\x4b\xaf\x95\x77\x87\xd7\xda\x6b\xe3\xb5\xf5\xee\xf4\xda\x79"
"\x77\x79\xed\xbd\x0e\x5e\x47\xaf\x93\xd7\xd9\xbb\xdb\xeb\xe2\x25\x79"
"\x5d\xbd\x6e\x5e\x77\xaf\x87\xd7\xd3\xeb\xe5\xf5\xf6\xfa\x78\x7d\xbd"
"\x7e\x5e\x7f\xef\x1e\x6f\x80\x37\xd0\x1b\xe4\x0d\xf6\x86\x78\xf7\x7a"
"\x43\xbd\x61\xde\x7d\xde\xfd\xde\x70\xef\x01\xef\x41\xef\x21\x6f\x84"
"\xf7\xb0\x37\xd2\x7b\xc4\x1b\xe5\x3d\xea\x8d\xf6\xc6\x78\x63\xbd\x71"
"\xde\x63\xde\xe3\xde\x13\xde\x93\xde\x78\x6f\x82\x37\xd1\x9b\xe4\x3d"
"\xe5\x3d\xed\x4d\xf6\x9e\xf1\xa6\x78\x53\xbd\x69\xde\xb3\xde\x74\xef"
"\x39\x6f\x86\x37\xd3\x9b\xe5\xcd\xf6\xe6\x78\xcf\x7b\x73\xbd\x17\xbc"
"\x79\xde\x8b\xde\x7c\x6f\x81\xb7\xd0\x5b\xe4\x2d\xf6\x5e\xf2\x96\x78"
"\x2f\x7b\x4b\xbd\x57\xbc\x65\xde\xab\xde\x72\x6f\x85\xb7\xd2\x7b\xcd"
"\x7b\xdd\x7b\xc3\x5b\xe5\xbd\xe9\xad\xf6\xde\xf2\xd6\x78\x6f\x7b\x6b"
"\xbd\x77\xbc\x75\xde\x7a\xef\x5d\x6f\x83\xf7\x9e\xb7\xd1\x7b\xdf\xdb"
"\xe4\x6d\xf6\x3e\xf0\x3e\xf4\xb6\x78\x1f\x79\x5b\xbd\x8f\xbd\x6d\xde"
"\x76\x6f\x87\xb7\xd3\xdb\xe5\xed\xf6\xf6\x78\x7b\xbd\x7d\xde\x7e\xef"
"\x13\xef\x80\xf7\xa9\x77\xd0\x3b\xe4\x1d\xf6\x3e\xf3\x8e\x78\x9f\x7b"
"\x47\xbd\x2f\xbc\x63\xde\x97\xde\x71\xef\x2b\xef\x84\xf7\xb5\x77\xd2"
"\xfb\xc6\x3b\xe5\x7d\xeb\x9d\xf6\xbe\xf3\xce\x78\xdf\x7b\x3f\x78\x3f"
"\x7a\x67\xbd\x9f\xbc\x73\xde\x79\xef\x82\xf7\xb3\x77\xd1\xfb\xc5\xbb"
"\xe4\xfd\xea\x5d\xf6\xae\x78\xc4\x67\x7c\xd6\xe7\x7c\xde\x17\x7c\xd1"
"\x97\x7c\xd9\x57\x7c\xd5\xd7\x7c\xdd\x37\x7c\xd3\xb7\x7c\xdb\x77\x7c"
"\xd7\xf7\x7c\xdf\x0f\xfc\xd0\x8f\xf8\xd9\xfc\xec\x7e\x0e\x3f\xa7\x9f"
"\xcb\xcf\xed\xe7\xf1\xf3\xfa\x09\x7e\x3e\x3f\xbf\x5f\xc0\x4f\xf4\x0b"
"\xfa\x85\xfc\xc2\x7e\x11\xbf\xa8\x5f\xcc\xbf\xc1\x2f\xee\x97\xf0\x4b"
"\xfa\x37\xfa\xa5\xfc\xd2\x7e\x19\xbf\xac\x5f\xce\xbf\xc9\x2f\xef\xdf"
"\xec\x57\xf0\x2b\xfa\xb7\xf8\x95\xfc\x5b\xfd\xca\xfe\x6d\x7e\x15\xbf"
"\xaa\x5f\xcd\xaf\xee\xd7\xf0\x6b\xfa\xb5\xfc\xda\x7e\x1d\xbf\xae\x5f"
"\xcf\xaf\xef\xdf\xee\x37\xf0\x1b\xfa\x8d\xfc\xc6\x7e\x13\xbf\xa9\xdf"
"\xcc\x6f\xee\xb7\xf0\x5b\xfa\xad\xfc\x3b\xfc\xd6\x7e\x1b\xbf\xad\x7f"
"\xa7\xdf\xce\xbf\xcb\x6f\xef\x77\xf0\x3b\xfa\x9d\xfc\xce\xfe\xdd\x7e"
"\x17\x3f\xc9\xef\xea\x77\xf3\xbb\xfb\x3d\xfc\x9e\x7e\x2f\xbf\xb7\xdf"
"\xc7\xef\xeb\xf7\xf3\xfb\xfb\xf7\xf8\x03\xfc\x81\xfe\x20\x7f\xb0\x3f"
"\xc4\xbf\xd7\x1f\xea\x0f\xf3\xef\xf3\xef\xf7\x87\xfb\x0f\xf8\x0f\xfa"
"\x0f\xf9\x23\xfc\x87\xfd\x91\xfe\x23\xfe\x28\xff\x51\x7f\xb4\x3f\xc6"
"\x1f\xeb\x8f\xf3\x1f\xf3\x1f\xf7\x9f\xf0\x9f\xf4\xc7\xfb\x13\xfc\x89"
"\xfe\x24\xff\x29\xff\x69\x7f\xb2\xff\x8c\x3f\xc5\x9f\xea\x4f\xf3\x9f"
"\xf5\xa7\xfb\xcf\xf9\x33\xfc\x99\xfe\x2c\x7f\xb6\x3f\xc7\x7f\xde\x9f"
"\xeb\xbf\xe0\xcf\xf3\x5f\xf4\xe7\xfb\x0b\xfc\x85\xfe\x22\x7f\xb1\xff"
"\x92\xbf\xc4\x7f\xd9\x5f\xea\xbf\xe2\x2f\xf3\x5f\xf5\x97\xfb\x2b\xfc"
"\x95\xfe\x6b\xfe\xeb\xfe\x1b\xfe\x2a\xff\x4d\x7f\xb5\xff\x96\xbf\xc6"
"\x7f\xdb\x5f\xeb\xbf\xe3\xaf\xf3\xd7\xfb\xef\xfa\x1b\xfc\xf7\xfc\x8d"
"\xfe\xfb\xfe\x26\x7f\xb3\xff\x81\xff\xa1\xbf\xc5\xff\xc8\xdf\xea\x7f"
"\xec\x6f\xf3\xb7\xfb\x3b\xfc\x9d\xfe\x2e\x7f\xb7\xbf\xc7\xdf\xeb\xef"
"\xf3\xf7\xfb\x9f\xf8\x07\xfc\x4f\xfd\x83\xfe\x21\xff\xb0\xff\x99\x7f"
"\xc4\xff\xdc\x3f\xea\x7f\xe1\x1f\xf3\xbf\xf4\x8f\xfb\x5f\xf9\x27\xfc"
"\xaf\xfd\x93\xfe\x37\xfe\x29\xff\x5b\xff\xb4\xff\x9d\x7f\xc6\xff\xde"
"\xff\xc1\xff\xd1\x3f\xeb\xff\xe4\x9f\xf3\xcf\xfb\x17\xfc\x9f\xfd\x8b"
"\xfe\x2f\xfe\x25\xff\x57\xff\xb2\x7f\xc5\x27\x01\x13\xb0\x01\x17\xf0"
"\x81\x10\x88\x81\x14\xc8\x81\x12\xa8\x81\x16\xe8\x81\x11\x98\x81\x15"
"\xd8\x81\x13\xb8\x81\x17\xf8\x41\x10\x84\x41\x24\xc8\x16\x64\x0f\x72"
"\x04\x39\x83\x5c\x41\xee\x20\x4f\x90\x37\x48\x08\xf2\x05\xf9\x83\x02"
"\x41\x62\x50\x30\x28\x14\x14\x0e\x8a\x04\x45\x83\x62\xc1\x0d\x41\xf1"
"\xa0\x44\x50\x32\xb8\x31\x28\x15\x94\x0e\xca\x04\x65\x83\x72\xc1\x4d"
"\x41\xf9\xe0\xe6\xa0\x42\x50\x31\xb8\x25\xa8\x14\xdc\x1a\x54\x0e\x6e"
"\x0b\xaa\x04\x55\x83\x6a\x41\xf5\xa0\x46\x50\x33\xa8\x15\xd4\x0e\xea"
"\x04\x75\x83\x7a\x41\xfd\xe0\xf6\xa0\x41\xd0\x30\x68\x14\x34\x0e\x9a"
"\x04\x4d\x83\x66\x41\xf3\xa0\x45\xd0\x32\x68\x15\xdc\x11\xb4\x0e\xda"
"\x04\x6d\x83\x3b\x83\x76\xc1\x5d\x41\xfb\xa0\x43\xd0\x31\xe8\x14\x74"
"\x0e\xee\x0e\xba\x04\x49\x41\xd7\xa0\x5b\xd0\x3d\xe8\x11\xf4\x0c\x7a"
"\x05\xbd\x83\x3e\x41\xdf\xa0\x5f\xd0\x3f\xb8\x27\x18\x10\x0c\x0c\x06"
"\x05\x83\x83\x21\xc1\xbd\xc1\xd0\x60\x58\x70\x5f\x70\x7f\x30\x3c\x78"
"\x20\x78\x30\x78\x28\x18\x11\x3c\x1c\x8c\x0c\x1e\x09\x46\x05\x8f\x06"
"\xa3\x83\x31\xc1\xd8\x60\x5c\xf0\x58\xf0\x78\xf0\x44\xf0\x64\x30\x3e"
"\x98\x10\x4c\x0c\x26\x05\x4f\x05\x4f\x07\x93\x83\x67\x82\x29\xc1\xd4"
"\x60\x5a\xf0\x6c\x30\x3d\x78\x2e\x98\x11\xcc\x0c\x66\x05\xb3\x83\x39"
"\xc1\xf3\xc1\xdc\xe0\x85\x60\x5e\xf0\x62\x30\x3f\x58\x10\x2c\x0c\x16"
"\x05\x8b\x83\x97\x82\x25\xc1\xcb\xc1\xd2\xe0\x95\x60\x59\xf0\x6a\xb0"
"\x3c\x58\x11\xac\x0c\x5e\x0b\x5e\x0f\xde\x08\x56\x05\x6f\x06\xab\x83"
"\xb7\x82\x35\xc1\xdb\xc1\xda\xe0\x9d\x60\x5d\xb0\x3e\x78\x37\xd8\x10"
"\xbc\x17\x6c\x0c\xde\x0f\x36\x05\x9b\x83\x0f\x82\x0f\x83\x2d\xc1\x47"
"\xc1\xd6\xe0\xe3\x60\x5b\xb0\x3d\xd8\x11\xec\x0c\x76\x05\xbb\x83\x3d"
"\xc1\xde\x60\x5f\xb0\x3f\xf8\x24\x38\x10\x7c\x1a\x1c\x0c\x0e\x05\x87"
"\x83\xcf\x82\x23\xc1\xe7\xc1\xd1\xe0\x8b\xe0\x58\xf0\x65\x70\x3c\xf8"
"\x2a\x38\x11\x7c\x1d\x9c\x0c\xbe\x09\x4e\x05\xdf\x06\xa7\x83\xef\x82"
"\x33\xc1\xf7\xc1\x0f\xc1\x8f\xc1\xd9\xe0\xa7\xe0\x5c\x70\x3e\xb8\x10"
"\xfc\x1c\x5c\x0c\x7e\x09\x2e\x05\xbf\x06\x97\x83\x2b\x01\x09\x99\x90"
"\x0d\xb9\x90\x0f\x85\x50\x0c\xa5\x50\x0e\x95\x50\x0d\xb5\x50\x0f\x8d"
"\xd0\x0c\xad\xd0\x0e\x9d\xd0\x0d\xbd\xd0\x0f\x83\x30\x0c\x23\x61\xb6"
"\x30\x7b\x98\x23\xcc\x19\xe6\x0a\x73\x87\x79\xc2\xbc\x61\x42\x98\x2f"
"\xcc\x1f\x16\x08\x13\xc3\x82\x61\xa1\xb0\x70\x58\x24\x2c\x1a\x16\x0b"
"\x6f\x08\x8b\x87\x25\xc2\x92\xe1\x8d\x61\xa9\xb0\x74\x58\x26\x2c\x1b"
"\x96\x0b\x6f\x0a\xcb\x87\x37\x87\x15\xc2\x8a\xe1\x2d\x61\xa5\xf0\xd6"
"\xb0\x72\x78\x5b\x58\x25\xac\x1a\x56\x0b\xab\x87\x35\xc2\x9a\x61\xad"
"\xb0\x76\x58\x27\xac\x1b\xd6\x0b\xeb\x87\xb7\x87\x0d\xc2\x86\x61\xa3"
"\xb0\x71\xd8\x24\x6c\x1a\x36\x0b\x9b\x87\x2d\xc2\x96\x61\xab\xf0\x8e"
"\xb0\x75\xd8\x26\x6c\x1b\xde\x19\xb6\x0b\xef\x0a\xdb\x87\x1d\xc2\x8e"
"\x61\xa7\xb0\x73\x78\x77\xd8\x25\x4c\x0a\xbb\x86\xdd\xc2\xee\x61\x8f"
"\xb0\x67\xd8\x2b\xec\x1d\xf6\x09\xfb\x86\xfd\xc2\xfe\xe1\x3d\xe1\x80"
"\x70\x60\x38\x28\x1c\x1c\x0e\x09\xef\x0d\x87\x86\xc3\xc2\xfb\xc2\xfb"
"\xc3\xe1\xe1\x03\xe1\x83\xe1\x43\xe1\x88\xf0\xe1\x70\x64\xf8\x48\x38"
"\x2a\x7c\x34\x1c\x1d\x8e\x09\xc7\x86\xe3\xc2\xc7\xc2\xc7\xc3\x27\xc2"
"\x27\xc3\xf1\xe1\x84\x70\x62\x38\x29\x7c\x2a\x7c\x3a\x9c\x1c\x3e\x13"
"\x4e\x09\xa7\x86\xd3\xc2\x67\xc3\xe9\xe1\x73\xe1\x8c\x70\x66\x38\x2b"
"\x9c\x1d\xce\x09\x9f\x0f\xe7\x86\x2f\x84\xf3\xc2\x17\xc3\xf9\xe1\x82"
"\x70\x61\xb8\x28\x5c\x1c\xbe\x14\x2e\x09\x5f\x0e\x97\x86\xaf\x84\xcb"
"\xc2\x57\xc3\xe5\xe1\x8a\x70\x65\xf8\x5a\xf8\x7a\xf8\x46\xb8\x2a\x7c"
"\x33\x5c\x1d\xbe\x15\xae\x09\xdf\x0e\xd7\x86\xef\x84\xeb\xc2\xf5\xe1"
"\xbb\xe1\x86\xf0\xbd\x70\x63\xf8\x7e\xb8\x29\xdc\x1c\x7e\x10\x7e\x18"
"\x6e\x09\x3f\x0a\xb7\x86\x1f\x87\xdb\xc2\xed\xe1\x8e\x70\x67\xb8\x2b"
"\xdc\x1d\xee\x09\xf7\x86\xfb\xc2\xfd\xe1\x27\xe1\x81\xf0\xd3\xf0\x60"
"\x78\x28\x3c\x1c\x7e\x16\x1e\x09\x3f\x0f\x8f\x86\x5f\x84\xc7\xc2\x2f"
"\xc3\xe3\xe1\x57\xe1\x89\xf0\xeb\xf0\x64\xf8\x4d\x78\x2a\xfc\x36\x3c"
"\x1d\x7e\x17\x9e\x09\xbf\x0f\x7f\x08\x7f\x0c\xcf\x86\x3f\x85\xe7\xc2"
"\xf3\xe1\x85\xf0\xe7\xf0\x62\xf8\x4b\x78\x29\xfc\x35\xbc\x1c\x5e\x09"
"\x49\x84\x89\xb0\x11\x2e\xc2\x47\x84\x88\x18\x91\x22\x72\x44\x89\xa8"
"\x11\x2d\xa2\x47\x8c\x88\x19\xb1\x22\x76\xc4\x89\xb8\x11\x2f\xe2\x47"
"\x82\x48\x18\x89\x44\xb2\x45\xb2\x47\x72\x44\x72\x46\x72\x45\x72\x47"
"\xf2\x44\xf2\x46\x12\x22\xf9\x22\xf9\x23\x05\x22\x89\x91\x82\x91\x42"
"\x91\xc2\x91\x22\x91\xa2\x91\x62\x91\x1b\x22\xc5\x23\x25\x22\x25\x23"
"\x37\x46\x4a\x45\x4a\x47\xca\x44\xca\x46\xca\x45\x6e\x8a\x94\x8f\xdc"
"\x1c\xa9\x10\xa9\x18\xb9\x25\x52\x29\x72\x6b\xa4\x72\xe4\xb6\x48\x95"
"\x48\xd5\x48\xb5\x48\xf5\x48\x8d\x48\xcd\x48\xad\x48\xed\x48\x9d\x48"
"\xdd\x48\xbd\x48\xfd\xc8\xed\x91\x06\x91\x86\x91\x46\x91\xc6\x91\x26"
"\x91\xa6\x91\x66\x91\xe6\x91\x16\x91\x96\x91\x56\x91\x3b\x22\xad\x23"
"\x6d\x22\x6d\x23\x77\x46\xda\x45\xee\x8a\xb4\x8f\x74\x88\x74\x8c\x74"
"\x8a\x74\x8e\xdc\x1d\xe9\x12\x49\x8a\x74\x8d\x74\x8b\x74\x8f\xfc\x1f"
"\xed\xf6\xf8\xec\x47\xd0\x37\x7e\x3e\xb6\x35\xb6\x7b\x66\x7a\x3c\xf3"
"\x8d\x6d\xdb\xb6\x6d\xdb\xb6\x6d\x5b\x27\x76\x72\x72\x62\xdb\x3a\xb1"
"\x93\x93\x64\xeb\xba\x1f\x6c\xd5\xbd\xd7\x83\xad\xda\xda\x5f\x3f\x7b"
"\x55\x57\x77\x7d\xde\x7f\xc0\xa7\x2d\xd2\x0e\x69\x8f\x74\x40\x3a\x22"
"\x9d\x90\xce\x48\x17\xa4\x2b\xd2\x0d\xe9\x8e\xf4\x40\x7a\x22\xbd\x90"
"\xde\x48\x1f\xa4\x2f\xd2\x0f\xe9\x8f\x0c\x40\x06\x22\x83\x90\xc1\xc8"
"\x10\x64\x28\x32\x0c\x19\x8e\x8c\x40\x46\x22\xa3\x90\xd1\xc8\x18\x64"
"\x2c\x32\x0e\x19\x8f\x4c\x40\x26\x22\x93\x90\xc9\xc8\x14\x64\x2a\x32"
"\x0d\x99\x8e\xcc\x40\x66\x22\xb3\x90\xd9\xc8\x1c\x64\x2e\x32\x0f\x99"
"\x8f\x2c\x40\x16\x22\x8b\x90\xc5\xc8\x12\x64\x29\xb2\x0c\x59\x8e\xac"
"\x40\x56\x22\xab\x90\xd5\xc8\x1a\x64\x2d\xb2\x0e\x59\x8f\x6c\x40\x36"
"\x22\x9b\x90\xcd\xc8\x16\x64\x2b\xb2\x0d\xd9\x8e\xec\x40\x76\x22\xbb"
"\x90\xdd\xc8\x1e\x64\x2f\xb2\x0f\xd9\x8f\xc4\x21\x07\x90\x83\xc8\x21"
"\xe4\x30\x72\x04\x39\x8a\x1c\x43\x8e\x23\x27\x90\x93\xc8\x29\xe4\x34"
"\x72\x06\x39\x8b\x9c\x43\xe2\x91\xf3\x48\x02\x72\x01\xb9\x88\x5c\x42"
"\x2e\x23\x57\x90\xab\xc8\x35\xe4\x3a\x72\x03\xb9\x89\xdc\x42\x6e\x23"
"\x77\x90\xbb\xc8\x3d\xe4\x3e\xf2\x00\x79\x88\x3c\x42\x1e\x23\x4f\x90"
"\xa7\xc8\x33\xe4\x39\xf2\x02\x79\x89\xbc\x42\x5e\x23\x6f\x90\x44\xe4"
"\x2d\xf2\x0e\x79\x8f\x7c\x40\x3e\x22\x9f\x90\xcf\xc8\x17\xe4\x2b\xf2"
"\x0d\xf9\x8e\xfc\x40\x7e\x22\xbf\x90\xdf\x48\x12\xf2\x07\xf9\x8b\xfc"
"\x43\x92\xa1\xc9\xd1\x14\x68\x4a\x34\x15\x9a\x1a\x4d\x83\xa6\x45\xd3"
"\xa1\xe9\xd1\x0c\x68\x46\x34\x13\x9a\x19\xcd\x82\x66\x45\xb3\xa1\xd9"
"\xd1\x1c\x68\x4e\x34\x17\x9a\x1b\xcd\x83\x22\x28\x8a\x62\x28\x8e\x12"
"\x28\x89\x52\x28\x8d\x32\x28\x8b\x72\x28\x8f\x0a\xa8\x88\x4a\xa8\x8c"
"\x2a\xa8\x8a\x6a\x28\x40\x75\xd4\x40\x4d\x14\xa2\x16\x6a\xa3\x0e\xea"
"\xa2\x1e\xea\xa3\x01\x1a\xa2\x11\x1a\x43\xf3\xa2\xf9\xd0\xfc\x68\x01"
"\xb4\x20\x5a\x08\x2d\x8c\x16\x41\x8b\xa2\xc5\xd0\xe2\x68\x09\xb4\x24"
"\x5a\x0a\x2d\x8d\x96\x41\xcb\xa2\xe5\xd0\xf2\x68\x05\xb4\x22\x5a\x09"
"\xad\x8c\x56\x41\xab\xa2\xd5\xd0\xea\x68\x0d\xb4\x26\x5a\x0b\xad\x8d"
"\xd6\x41\xeb\xa2\xf5\xd0\xfa\x68\x03\xb4\x21\xda\x08\x6d\x8c\x36\x41"
"\x9b\xa2\xcd\xd0\xe6\x68\x0b\xb4\x25\xda\x0a\x6d\x8d\xb6\x41\xdb\xa2"
"\xed\xd0\xf6\x68\x07\xb4\x23\xda\x09\xed\x8c\x76\x41\xbb\xa2\xdd\xd0"
"\xee\x68\x0f\xb4\x27\xda\x0b\xed\x8d\xf6\x41\xfb\xa2\xfd\xd0\xfe\xe8"
"\x00\x74\x20\x3a\x08\x1d\x8c\x0e\x41\x87\xa2\xc3\xd0\xe1\xe8\x08\x74"
"\x24\x3a\x0a\x1d\x8d\x8e\x41\xc7\xa2\xe3\xd0\xf1\xe8\x04\x74\x22\x3a"
"\x09\x9d\x8c\x4e\x41\xa7\xa2\xd3\xd0\xe9\xe8\x0c\x74\x26\x3a\x0b\x9d"
"\x8d\xce\x41\xe7\xa2\xf3\xd0\xf9\xe8\x02\x74\x21\xba\x08\x5d\x8c\x2e"
"\x41\x97\xa2\xcb\xd0\xe5\xe8\x0a\x74\x25\xba\x0a\x5d\x8d\xae\x41\xd7"
"\xa2\xeb\xd0\xf5\xe8\x06\x74\x23\xba\x09\xdd\x8c\x6e\x41\xb7\xa2\xdb"
"\xd0\xed\xe8\x0e\x74\x27\xba\x0b\xdd\x8d\xee\x41\xf7\xa2\xfb\xd0\xfd"
"\x68\x1c\x7a\x00\x3d\x88\x1e\x42\x0f\xa3\x47\xd0\xa3\xe8\x31\xf4\x38"
"\x7a\x02\x3d\x89\x9e\x42\x4f\xa3\x67\xd0\xb3\xe8\x39\x34\x1e\x3d\x8f"
"\x26\xa0\x17\xd0\x8b\xe8\x25\xf4\x32\x7a\x05\xbd\x8a\x5e\x43\xaf\xa3"
"\x37\xd0\x9b\xe8\x2d\xf4\x36\x7a\x07\xbd\x8b\xde\x43\xef\xa3\x0f\xd0"
"\x87\xe8\x23\xf4\x31\xfa\x04\x7d\x8a\x3e\x43\x9f\xa3\x2f\xd0\x97\xe8"
"\x2b\xf4\x35\xfa\x06\x4d\x44\xdf\xa2\xef\xd0\xf7\xe8\x07\xf4\x23\xfa"
"\x09\xfd\x8c\x7e\x41\xbf\xa2\xdf\xd0\xef\xe8\x0f\xf4\x27\xfa\x0b\xfd"
"\x8d\x26\xa1\x7f\xd0\xbf\xe8\x3f\x34\x19\x96\x1c\x4b\x81\xa5\xc4\x52"
"\x61\xa9\xb1\x34\x58\x5a\x2c\x1d\x96\x1e\xcb\x80\x65\xc4\x32\x61\x99"
"\xb1\x2c\x58\x56\x2c\x1b\x96\x1d\xcb\x81\xe5\xc4\x72\x61\xb9\xb1\x3c"
"\x18\x82\xa1\x18\x86\xe1\x18\x81\x91\x18\x85\xd1\x18\x83\xb1\x18\x87"
"\xf1\x98\x80\x89\x98\x84\xc9\x98\x82\xa9\x98\x86\x01\x4c\xc7\x0c\xcc"
"\xc4\x20\x66\x61\x36\xe6\x60\x2e\xe6\x61\x3e\x16\x60\x21\x16\x61\x31"
"\x2c\x2f\x96\x0f\xcb\x8f\x15\xc0\x0a\x62\x85\xb0\xc2\x58\x11\xac\x28"
"\x56\x0c\x2b\x8e\x95\xc0\x4a\x62\xa5\xb0\xd2\x58\x19\xac\x2c\x56\x0e"
"\x2b\x8f\x55\xc0\x2a\x62\x95\xb0\xca\x58\x15\xac\x2a\x56\x0d\xab\x8e"
"\xd5\xc0\x6a\x62\xb5\xb0\xda\x58\x1d\xac\x2e\x56\x0f\xab\x8f\x35\xc0"
"\x1a\x62\x8d\xb0\xc6\x58\x13\xac\x29\xd6\x0c\x6b\x8e\xb5\xc0\x5a\x62"
"\xad\xb0\xd6\x58\x1b\xac\x2d\xd6\x0e\x6b\x8f\x75\xc0\x3a\x62\x9d\xb0"
"\xce\x58\x17\xac\x2b\xd6\x0d\xeb\x8e\xf5\xc0\x7a\x62\xbd\xb0\xde\x58"
"\x1f\xac\x2f\xd6\x0f\xeb\x8f\x0d\xc0\x06\x62\x83\xb0\xc1\xd8\x10\x6c"
"\x28\x36\x0c\x1b\x8e\x8d\xc0\x46\x62\xa3\xb0\xd1\xd8\x18\x6c\x2c\x36"
"\x0e\x1b\x8f\x4d\xc0\x26\x62\x93\xb0\xc9\xd8\x14\x6c\x2a\x36\x0d\x9b"
"\x8e\xcd\xc0\x66\x62\xb3\xb0\xd9\xd8\x1c\x6c\x2e\x36\x0f\x9b\x8f\x2d"
"\xc0\x16\x62\x8b\xb0\xc5\xd8\x12\x6c\x29\xb6\x0c\x5b\x8e\xad\xc0\x56"
"\x62\xab\xb0\xd5\xd8\x1a\x6c\x2d\xb6\x0e\x5b\x8f\x6d\xc0\x36\x62\x9b"
"\xb0\xcd\xd8\x16\x6c\x2b\xb6\x0d\xdb\x8e\xed\xc0\x76\x62\xbb\xb0\xdd"
"\xd8\x1e\x6c\x2f\xb6\x0f\xdb\x8f\xc5\x61\x07\xb0\x83\xd8\x21\xec\x30"
"\x76\x04\x3b\x8a\x1d\xc3\x8e\x63\x27\xb0\x93\xd8\x29\xec\x34\x76\x06"
"\x3b\x8b\x9d\xc3\xe2\xb1\xf3\x58\x02\x76\x01\xbb\x88\x5d\xc2\x2e\x63"
"\x57\xb0\xab\xd8\x35\xec\x3a\x76\x03\xbb\x89\xdd\xc2\x6e\x63\x77\xb0"
"\xbb\xd8\x3d\xec\x3e\xf6\x00\x7b\x88\x3d\xc2\x1e\x63\x4f\xb0\xa7\xd8"
"\x33\xec\x39\xf6\x02\x7b\x89\xbd\xc2\x5e\x63\x6f\xb0\x44\xec\x2d\xf6"
"\x0e\x7b\x8f\x7d\xc0\x3e\x62\x9f\xb0\xcf\xd8\x17\xec\x2b\xf6\x0d\xfb"
"\x8e\xfd\xc0\x7e\x62\xbf\xb0\xdf\x58\x12\xf6\x07\xfb\x8b\xfd\xc3\x92"
"\xe1\xc9\xf1\x14\x78\x4a\x3c\x15\x9e\x1a\x4f\x83\xa7\xc5\xd3\xe1\xe9"
"\xf1\x0c\x78\x46\x3c\x13\x9e\x19\xcf\x82\x67\xc5\xb3\xe1\xd9\xf1\x1c"
"\x78\x4e\x3c\x17\x9e\x1b\xcf\x83\x23\x38\x8a\x63\x38\x8e\x13\x38\x89"
"\x53\x38\x8d\x33\x38\x8b\x73\x38\x8f\x0b\xb8\x88\x4b\xb8\x8c\x2b\xb8"
"\x8a\x6b\x38\xc0\x75\xdc\xc0\x4d\x1c\xe2\x16\x6e\xe3\x0e\xee\xe2\x1e"
"\xee\xe3\x01\x1e\xe2\x11\x1e\xc3\xf3\xe2\xf9\xf0\xfc\x78\x01\xbc\x20"
"\x5e\x08\x2f\x8c\x17\xc1\x8b\xe2\xc5\xf0\xe2\x78\x09\xbc\x24\x5e\x0a"
"\x2f\x8d\x97\xc1\xcb\xe2\xe5\xf0\xf2\x78\x05\xbc\x22\x5e\x09\xaf\x8c"
"\x57\xc1\xab\xe2\xd5\xf0\xea\x78\x0d\xbc\x26\x5e\x0b\xaf\x8d\xd7\xc1"
"\xeb\xe2\xf5\xf0\xfa\x78\x03\xbc\x21\xde\x08\x6f\x8c\x37\xc1\x9b\xe2"
"\xcd\xf0\xe6\x78\x0b\xbc\x25\xde\x0a\x6f\x8d\xb7\xc1\xdb\xe2\xed\xf0"
"\xf6\x78\x07\xbc\x23\xde\x09\xef\x8c\x77\xc1\xbb\xe2\xdd\xf0\xee\x78"
"\x0f\xbc\x27\xde\x0b\xef\x8d\xf7\xc1\xfb\xe2\xfd\xf0\xfe\xf8\x00\x7c"
"\x20\x3e\x08\x1f\x8c\x0f\xc1\x87\xe2\xc3\xf0\xe1\xf8\x08\x7c\x24\x3e"
"\x0a\x1f\x8d\x8f\xc1\xc7\xe2\xe3\xf0\xf1\xf8\x04\x7c\x22\x3e\x09\x9f"
"\x8c\x4f\xc1\xa7\xe2\xd3\xf0\xe9\xf8\x0c\x7c\x26\x3e\x0b\x9f\x8d\xcf"
"\xc1\xe7\xe2\xf3\xf0\xf9\xf8\x02\x7c\x21\xbe\x08\x5f\x8c\x2f\xc1\x97"
"\xe2\xcb\xf0\xe5\xf8\x0a\x7c\x25\xbe\x0a\x5f\x8d\xaf\xc1\xd7\xe2\xeb"
"\xf0\xf5\xf8\x06\x7c\x23\xbe\x09\xdf\x8c\x6f\xc1\xb7\xe2\xdb\xf0\xed"
"\xf8\x0e\x7c\x27\xbe\x0b\xdf\x8d\xef\xc1\xf7\xe2\xfb\xf0\xfd\x78\x1c"
"\x7e\x00\x3f\x88\x1f\xc2\x0f\xe3\x47\xf0\xa3\xf8\x31\xfc\x38\x7e\x02"
"\x3f\x89\x9f\xc2\x4f\xe3\x67\xf0\xb3\xf8\x39\x3c\x1e\x3f\x8f\x27\xe0"
"\x17\xf0\x8b\xf8\x25\xfc\x32\x7e\x05\xbf\x8a\x5f\xc3\xaf\xe3\x37\xf0"
"\x9b\xf8\x2d\xfc\x36\x7e\x07\xbf\x8b\xdf\xc3\xef\xe3\x0f\xf0\x87\xf8"
"\x23\xfc\x31\xfe\x04\x7f\x8a\x3f\xc3\x9f\xe3\x2f\xf0\x97\xf8\x2b\xfc"
"\x35\xfe\x06\x4f\xc4\xdf\xe2\xef\xf0\xf7\xf8\x07\xfc\x23\xfe\x09\xff"
"\x8c\x7f\xc1\xbf\xe2\xdf\xf0\xef\xf8\x0f\xfc\x27\xfe\x0b\xff\x8d\x27"
"\xe1\x7f\xf0\xbf\xf8\x3f\x3c\x19\x91\x9c\x48\x41\xa4\x24\x52\x11\xa9"
"\x89\x34\x44\x5a\x22\x1d\x91\x9e\xc8\x40\x64\x24\x32\x11\x99\x89\x2c"
"\x44\x56\x22\x1b\x91\x9d\xc8\x41\xe4\x24\x72\x11\xb9\x89\x3c\x04\x42"
"\xa0\x04\x46\xe0\x04\x41\x90\x04\x45\xd0\x04\x43\xb0\x04\x47\xf0\x84"
"\x40\x88\x84\x44\xc8\x84\x42\xa8\x84\x46\x00\x42\x27\x0c\xc2\x24\x20"
"\x61\x11\x36\xe1\x10\x2e\xe1\x11\x3e\x11\x10\x21\x11\x11\x31\x22\x2f"
"\x91\x8f\xc8\x4f\x14\x20\x0a\x12\x85\x88\xc2\x44\x11\xa2\x28\x51\x8c"
"\x28\x4e\x94\x20\x4a\x12\xa5\x88\xd2\x44\x19\xa2\x2c\x51\x8e\x28\x4f"
"\x54\x20\x2a\x12\x95\x88\xca\x44\x15\xa2\x2a\x51\x8d\xa8\x4e\xd4\x20"
"\x6a\x12\xb5\x88\xda\x44\x1d\xa2\x2e\x51\x8f\xa8\x4f\x34\x20\x1a\x12"
"\x8d\x88\xc6\x44\x13\xa2\x29\xd1\x8c\x68\x4e\xb4\x20\x5a\x12\xad\x88"
"\xd6\x44\x1b\xa2\x2d\xd1\x8e\x68\x4f\x74\x20\x3a\x12\x9d\x88\xce\x44"
"\x17\xa2\x2b\xd1\x8d\xe8\x4e\xf4\x20\x7a\x12\xbd\x88\xde\x44\x1f\xa2"
"\x2f\xd1\x8f\xe8\x4f\x0c\x20\x06\x12\x83\x88\xc1\xc4\x10\x62\x28\x31"
"\x8c\x18\x4e\x8c\x20\x46\x12\xa3\x88\xd1\xc4\x18\x62\x2c\x31\x8e\x18"
"\x4f\x4c\x20\x26\x12\x93\x88\xc9\xc4\x14\x62\x2a\x31\x8d\x98\x4e\xcc"
"\x20\x66\x12\xb3\x88\xd9\xc4\x1c\x62\x2e\x31\x8f\x98\x4f\x2c\x20\x16"
"\x12\x8b\x88\xc5\xc4\x12\x62\x29\xb1\x8c\x58\x4e\xac\x20\x56\x12\xab"
"\x88\xd5\xc4\x1a\x62\x2d\xb1\x8e\x58\x4f\x6c\x20\x36\x12\x9b\x88\xcd"
"\xc4\x16\x62\x2b\xb1\x8d\xd8\x4e\xec\x20\x76\x12\xbb\x88\xdd\xc4\x1e"
"\x62\x2f\xb1\x8f\xd8\x4f\xc4\x11\x07\x88\x83\xc4\x21\xe2\x30\x71\x84"
"\x38\x4a\x1c\x23\x8e\x13\x27\x88\x93\xc4\x29\xe2\x34\x71\x86\x38\x4b"
"\x9c\x23\xe2\x89\xf3\x44\x02\x71\x81\xb8\x48\x5c\x22\x2e\x13\x57\x88"
"\xab\xc4\x35\xe2\x3a\x71\x83\xb8\x49\xdc\x22\x6e\x13\x77\x88\xbb\xc4"
"\x3d\xe2\x3e\xf1\x80\x78\x48\x3c\x22\x1e\x13\x4f\x88\xa7\xc4\x33\xe2"
"\x39\xf1\x82\x78\x49\xbc\x22\x5e\x13\x6f\x88\x44\xe2\x2d\xf1\x8e\x78"
"\x4f\x7c\x20\x3e\x12\x9f\x88\xcf\xc4\x17\xe2\x2b\xf1\x8d\xf8\x4e\xfc"
"\x20\x7e\x12\xbf\x88\xdf\x44\x12\xf1\x87\xf8\x4b\xfc\x23\x92\x91\xc9"
"\xc9\x14\x64\x4a\x32\x15\x99\x9a\x4c\x43\xa6\x25\xd3\x91\xe9\xc9\x0c"
"\x64\x46\x32\x13\x99\x99\xcc\x42\x66\x25\xb3\x91\xd9\xc9\x1c\x64\x4e"
"\x32\x17\x99\x9b\xcc\x43\x22\x24\x4a\x62\x24\x4e\x12\x24\x49\x52\x24"
"\x4d\x32\x24\x4b\x72\x24\x4f\x0a\xa4\x48\x4a\xa4\x4c\x2a\xa4\x4a\x6a"
"\x24\x20\x75\xd2\x20\x4d\x12\x92\x16\x69\x93\x0e\xe9\x92\x1e\xe9\x93"
"\x01\x19\x92\x11\x19\x23\xf3\x92\xf9\xc8\xfc\x64\x01\xb2\x20\x59\x88"
"\x2c\x4c\x16\x21\x8b\x92\xc5\xc8\xe2\x64\x09\xb2\x24\x59\x8a\x2c\x4d"
"\x96\x21\xcb\x92\xe5\xc8\xf2\x64\x05\xb2\x22\x59\x89\xac\x4c\x56\x21"
"\xab\x92\xd5\xc8\xea\x64\x0d\xb2\x26\x59\x8b\xac\x4d\xd6\x21\xeb\x92"
"\xf5\xc8\xfa\x64\x03\xb2\x21\xd9\x88\x6c\x4c\x36\x21\x9b\x92\xcd\xc8"
"\xe6\x64\x0b\xb2\x25\xd9\x8a\x6c\x4d\xb6\x21\xdb\x92\xed\xc8\xf6\x64"
"\x07\xb2\x23\xd9\x89\xec\x4c\x76\x21\xbb\x92\xdd\xc8\xee\x64\x0f\xb2"
"\x27\xd9\x8b\xec\x4d\xf6\x21\xfb\x92\xfd\xc8\xfe\xe4\x00\x72\x20\x39"
"\x88\x1c\x4c\x0e\x21\x87\x92\xc3\xc8\xe1\xe4\x08\x72\x24\x39\x8a\x1c"
"\x4d\x8e\x21\xc7\x92\xe3\xc8\xf1\xe4\x04\x72\x22\x39\x89\x9c\x4c\x4e"
"\x21\xa7\x92\xd3\xc8\xe9\xe4\x0c\x72\x26\x39\x8b\x9c\x4d\xce\x21\xe7"
"\x92\xf3\xc8\xf9\xe4\x02\x72\x21\xb9\x88\x5c\x4c\x2e\x21\x97\x92\xcb"
"\xc8\xe5\xe4\x0a\x72\x25\xb9\x8a\x5c\x4d\xae\x21\xd7\x92\xeb\xc8\xf5"
"\xe4\x06\x72\x23\xb9\x89\xdc\x4c\x6e\x21\xb7\x92\xdb\xc8\xed\xe4\x0e"
"\x72\x27\xb9\x8b\xdc\x4d\xee\x21\xf7\x92\xfb\xc8\xfd\x64\x1c\x79\x80"
"\x3c\x48\x1e\x22\x0f\x93\x47\xc8\xa3\xe4\x31\xf2\x38\x79\x82\x3c\x49"
"\x9e\x22\x4f\x93\x67\xc8\xb3\xe4\x39\x32\x9e\x3c\x4f\x26\x90\x17\xc8"
"\x8b\xe4\x25\xf2\x32\x79\x85\xbc\x4a\x5e\x23\xaf\x93\x37\xc8\x9b\xe4"
"\x2d\xf2\x36\x79\x87\xbc\x4b\xde\x23\xef\x93\x0f\xc8\x87\xe4\x23\xf2"
"\x31\xf9\x84\x7c\x4a\x3e\x23\x9f\x93\x2f\xc8\x97\xe4\x2b\xf2\x35\xf9"
"\x86\x4c\x24\xdf\x92\xef\xc8\xf7\xe4\x07\xf2\x23\xf9\x89\xfc\x4c\x7e"
"\x21\xbf\x92\xdf\xc8\xef\xe4\x0f\xf2\x27\xf9\x8b\xfc\x4d\x26\x91\x7f"
"\xc8\xbf\xe4\x3f\x32\x19\x95\x9c\x4a\x41\xa5\xa4\x52\x51\xa9\xa9\x34"
"\x54\x5a\x2a\x1d\x95\x9e\xca\x40\x65\xa4\x32\x51\x99\xa9\x2c\x54\x56"
"\x2a\x1b\x95\x9d\xca\x41\xe5\xa4\x72\x51\xb9\xa9\x3c\x14\x42\xa1\x14"
"\x46\xe1\x14\x41\x91\x14\x45\xd1\x14\x43\xb1\x14\x47\xf1\x94\x40\x89"
"\x94\x44\xc9\x94\x42\xa9\x94\x46\x01\x4a\xa7\x0c\xca\xa4\x20\x65\x51"
"\x36\xe5\x50\x2e\xe5\x51\x3e\x15\x50\x21\x15\x51\x31\x2a\x2f\x95\x8f"
"\xca\x4f\x15\xa0\x0a\x52\x85\xa8\xc2\x54\x11\xaa\x28\x55\x8c\x2a\x4e"
"\x95\xa0\x4a\x52\xa5\xa8\xd2\x54\x19\xaa\x2c\x55\x8e\x2a\x4f\x55\xa0"
"\x2a\x52\x95\xa8\xca\x54\x15\xaa\x2a\x55\x8d\xaa\x4e\xd5\xa0\x6a\x52"
"\xb5\xa8\xda\x54\x1d\xaa\x2e\x55\x8f\xaa\x4f\x35\xa0\x1a\x52\x8d\xa8"
"\xc6\x54\x13\xaa\x29\xd5\x8c\x6a\x4e\xb5\xa0\x5a\x52\xad\xa8\xd6\x54"
"\x1b\xaa\x2d\xd5\x8e\x6a\x4f\x75\xa0\x3a\x52\x9d\xa8\xce\x54\x17\xaa"
"\x2b\xd5\x8d\xea\x4e\xf5\xa0\x7a\x52\xbd\xa8\xde\x54\x1f\x6a\xe0\xea"
"\x7e\x54\x7f\x6a\x00\xd5\xdc\x18\x44\x0d\xa6\x86\x50\x43\xa9\x61\xd4"
"\x70\x6a\x04\x35\x92\x1a\x45\x8d\xa6\xc6\x50\x63\xa9\x71\xd4\x78\x6a"
"\x02\x35\x91\x9a\x44\x4d\xa6\xa6\x50\x53\xa9\x69\xd4\x74\x6a\x06\x35"
"\x93\x9a\x45\xcd\xa6\xe6\x50\x73\xa9\x79\xd4\x7c\x6a\x01\xb5\x90\x5a"
"\x44\x2d\xa6\x96\x50\x4b\xa9\x65\xd4\x72\x6a\x05\xb5\x92\x5a\x45\xad"
"\xa6\xd6\x50\x6b\xa9\x75\xd4\x7a\x6a\x03\xb5\x91\xda\x44\x6d\xa6\xb6"
"\x50\x5b\xa9\x6d\xd4\x76\x6a\x07\xb5\x93\xda\x45\xed\xa6\xf6\x50\x7b"
"\xa9\x7d\xd4\x7e\x2a\x8e\x3a\x40\x1d\xa4\x0e\x51\x87\xa9\x23\xd4\x51"
"\xea\x18\x75\x9c\x3a\x41\x9d\xa4\x4e\x51\xa7\xa9\x33\xd4\x59\xea\x1c"
"\x15\x4f\x9d\xa7\x12\xa8\x0b\xd4\x45\xea\x12\x75\x99\xba\x42\x5d\xa5"
"\xae\x51\xd7\xa9\x1b\xd4\x4d\xea\x16\x75\x9b\xba\x43\xdd\xa5\xee\x51"
"\xf7\xa9\x07\xd4\x43\xea\x11\xf5\x98\x7a\x42\x3d\xa5\x9e\x51\xcf\xa9"
"\x17\xd4\x4b\xea\x15\xf5\x9a\x7a\x43\x25\x52\x6f\xa9\x77\xd4\x7b\xea"
"\x03\xf5\x91\xfa\x44\x7d\xa6\xbe\x50\x5f\xa9\x6f\xd4\x77\xea\x07\xf5"
"\x93\xfa\x45\xfd\xa6\x92\xa8\x3f\xd4\x5f\xea\x1f\x95\x8c\x4e\x46\xa7"
"\xa0\x53\xd0\xa9\xe8\x54\x74\x1a\x3a\x0d\x9d\x8e\x4e\x47\x67\xa0\x33"
"\xd0\x99\xe8\x4c\x74\x16\x3a\x0b\x9d\x8d\xce\x46\xe7\xa0\x73\xd0\xb9"
"\xe8\x5c\x74\x1e\x3a\x0f\x8d\xd2\x28\x8d\xd3\x38\x4d\xd2\x24\xfd\x9f"
"\xc3\xd2\x2c\xcd\xd3\x3c\x2d\xd2\x22\x2d\xd3\x32\xad\xd2\x2a\x0d\x68"
"\x40\x1b\xb4\x41\x43\x1a\xd2\x36\x6d\xd3\x2e\xed\xd2\x3e\xed\xd3\x21"
"\x1d\xd2\x31\x3a\x46\xe7\xa3\xf3\xd1\x05\xe8\x02\x74\x21\xba\x10\x5d"
"\x84\x2e\x42\x17\xa3\x8b\xd1\x25\xe8\x12\x74\x29\xba\x14\x5d\x86\x2e"
"\x43\x97\xa3\xcb\xd1\x15\xe8\x0a\x74\x25\xba\x12\x5d\x85\xae\x42\x57"
"\xa3\xab\xd1\x35\xe8\x1a\x74\x2d\xba\x16\x5d\x87\xae\x43\xd7\xa3\xeb"
"\xd1\x0d\xe8\x06\x74\x23\xba\x11\xdd\x84\x6e\x42\x37\xa3\x9b\xd1\x2d"
"\xe8\x16\x74\x2b\xba\x15\xdd\x86\x6e\x43\xb7\xa3\xdb\xd1\x1d\xe8\x0e"
"\x74\x27\xba\x13\xdd\x85\xee\x42\x77\xa3\xbb\xd1\x3d\xe8\x1e\x74\x2f"
"\xba\x17\xdd\x87\xee\x43\xf7\xa3\xfb\xd1\x03\xe8\x01\xf4\x20\x7a\x10"
"\x3d\x84\x1e\x42\x0f\xa3\x87\xd1\x23\xe8\x11\xf4\x28\x7a\x14\x3d\x86"
"\x1e\x43\x8f\xa3\xc7\xd1\x13\xe8\x09\xf4\x24\x7a\x12\x3d\x85\x9e\x42"
"\x4f\xa3\xa7\xd1\x33\xe8\x19\xf4\x2c\x7a\x36\x3d\x87\x9e\x4b\xcf\xa3"
"\xe7\xd3\x0b\xe8\x85\xf4\x22\x7a\x31\xbd\x84\x5e\x42\x2f\xa3\x97\xd1"
"\x2b\xe8\x15\xf4\x2a\x7a\x15\xbd\x86\x5e\x43\xaf\xa3\xd7\xd1\x1b\xe8"
"\x0d\xf4\x26\x7a\x13\xbd\x85\xde\x42\x6f\xa3\xb7\xd1\x3b\xe8\x1d\xf4"
"\x2e\x7a\x17\xbd\x87\xde\x43\xef\xa3\xf7\xd1\x71\x74\x1c\x7d\x90\x3e"
"\x48\x1f\xa6\x0f\xd3\x47\xe9\xa3\xf4\x71\xfa\x38\x7d\x92\x3e\x49\x9f"
"\xa6\x4f\xd3\x67\xe9\xb3\x74\x3c\x1d\x4f\x27\xd0\x09\xf4\x45\xfa\x22"
"\x7d\x99\xbe\x4c\x5f\xa5\xaf\xd2\xd7\xe9\xeb\xf4\x4d\xfa\x26\x7d\x9b"
"\xbe\x4d\xdf\xa5\xef\xd2\xf7\xe9\xfb\xf4\x43\xfa\x21\xfd\x98\x7e\x4c"
"\x3f\xa5\x9f\xd2\xcf\xe9\xe7\xf4\x4b\xfa\x25\xfd\x9a\x7e\x4d\x27\xd2"
"\x89\xf4\x3b\xfa\x1d\xfd\x81\xfe\x40\x7f\xa2\x3f\xd1\x5f\xe8\x2f\xf4"
"\x37\xfa\x1b\xfd\x83\xfe\x41\xff\xa2\x7f\xd1\x49\x74\x12\xfd\x97\xfe"
"\x4b\xa7\x63\xd2\x33\x19\x98\x8c\x4c\x26\x26\x33\x93\x85\xc9\xca\xfc"
"\x3f\x8d\x32\x18\x83\x33\x04\x43\x32\x14\x93\x87\x41\xfe\x97\x69\x86"
"\x61\x54\x46\x63\x00\xa3\x33\x06\x63\x32\x90\xb1\xfe\xcb\x31\x26\x2f"
"\x93\x8f\xc9\xcf\x14\x60\x0a\x32\x85\x98\xc2\xff\xe5\x32\x4c\x59\xa6"
"\x1c\x53\x9e\xa9\xc0\x54\x64\x4a\x31\xa5\xff\x97\x2b\x31\x95\x99\x2a"
"\x4c\x5d\xa6\x1a\x53\x9f\xa9\xc1\x34\x64\x6a\x31\x8d\x99\x3a\x4c\x5d"
"\xa6\x1e\x53\x9f\x69\xc0\x34\x64\x1a\x31\x8d\x99\x36\x4c\x5b\xa6\x1d"
"\xd3\x9e\xe9\xc0\x74\x64\x3a\x31\x9d\xff\xcb\xfb\x98\xfd\xcc\x69\xe6"
"\x0c\x73\x96\x39\xc7\xdc\x66\xee\x30\x3f\x98\x9f\xcc\x6b\xe6\x0d\xf3"
"\x8b\xf9\xcd\x0c\x62\x06\x33\x63\x98\xb1\xcc\x38\x66\x3c\x33\x81\x99"
"\xc8\x4c\x62\x26\xff\x97\xe7\x30\x73\x99\x79\xcc\x7c\x66\x01\xb3\x90"
"\x59\xc4\x2c\xfe\x2f\xaf\x61\xd6\x32\xeb\x98\xf5\xcc\x06\x66\x23\xb3"
"\x89\xd9\xfc\x5f\xde\xc3\xec\x65\xb6\x31\x71\xcc\x0e\x66\x27\xb3\x8b"
"\xd9\xfd\x3f\xfe\xcf\x4c\x71\xcc\x01\xe6\x20\x73\x88\x39\xcc\x1c\x61"
"\x8e\x32\xf1\xcc\x71\xe6\x04\x73\x92\x39\xf5\x7f\xcf\x1a\xcf\x9c\x67"
"\x12\x98\x0b\xcc\x4d\xe6\x16\x73\x99\xb9\xc2\x5c\x65\xae\x31\xd7\x99"
"\x1b\xff\xe3\xff\x74\xdc\x65\xee\x31\xf7\x99\x07\xcc\x4b\xe6\x15\xf3"
"\x98\x79\xc2\x3c\x65\x12\x99\xe7\xcc\x8b\xff\xf1\x7f\xfa\x12\x99\xb7"
"\xcc\x3b\xe6\x3d\xf3\x81\xf9\xc8\x7c\x62\x92\x98\x2f\xcc\x57\xe6\x1b"
"\xf3\xfd\x7f\xfa\xff\xd3\x9e\xc4\xfc\x61\xfe\x32\xff\x98\x64\x6c\x72"
"\x36\x05\x9b\x92\x4d\xc5\xa6\x66\xd3\xb0\x69\xd9\x74\x6c\x7a\x36\x03"
"\x9b\x91\xcd\xc4\x66\x66\xb3\xb0\x59\xd9\x6c\x6c\x76\x36\x07\x9b\x93"
"\xcd\xc5\xe6\x66\xf3\xb0\x08\x8b\xb2\x18\x8b\xb3\x04\x4b\xb2\x14\x4b"
"\xb3\x0c\xcb\xb2\x1c\xcb\xb3\x02\x2b\xb2\x12\x2b\xb3\x0a\xab\xb2\x1a"
"\x0b\x58\x9d\x35\x58\x93\x85\xac\xc5\xda\xac\xc3\xba\xac\xc7\xfa\x6c"
"\xc0\x86\x6c\xc4\xc6\xd8\xbc\x6c\x3e\x36\x3f\x5b\x80\x2d\xc8\x16\x62"
"\x0b\xb3\x45\xd8\xa2\x6c\x31\xb6\x38\x5b\x82\x2d\xc9\x96\x62\x4b\xb3"
"\x65\xd8\xb2\x6c\x39\xb6\x3c\x5b\x81\xad\xc8\x56\x62\x2b\xb3\x55\xd8"
"\xaa\x6c\x35\xb6\x3a\x5b\x83\xad\xc9\xd6\x62\x6b\xb3\x75\xd8\xba\x6c"
"\x3d\xb6\x3e\xdb\x80\x6d\xc8\x36\x62\x1b\xb3\x4d\xd8\xa6\x6c\x33\xb6"
"\x39\xdb\x82\x6d\xc9\xb6\x62\x5b\xb3\x6d\xd8\xb6\x6c\x3b\xb6\x3d\xdb"
"\x81\xed\xc8\x76\x62\x3b\xb3\x5d\xd8\xae\x6c\x37\xb6\x3b\xdb\x83\xed"
"\xc9\xf6\x62\x7b\xb3\x7d\xd8\xbe\x6c\x3f\xb6\x3f\x3b\x80\x1d\xc8\x0e"
"\x62\x07\xb3\x43\xd8\xa1\xec\x30\x76\x38\x3b\x82\x1d\xc9\x8e\x62\x47"
"\xb3\x63\xd8\xb1\xec\x38\x76\x3c\x3b\x81\x9d\xc8\x4e\x62\x27\xb3\x53"
"\xd8\xa9\xec\x34\x76\x3a\x3b\x83\x9d\xc9\xce\x62\x67\xb3\x73\xd8\xb9"
"\xec\x3c\x76\x3e\xbb\x80\x5d\xc8\x2e\x62\x17\xb3\x4b\xd8\xa5\xec\x32"
"\x76\x39\xbb\x82\x5d\xc9\xae\x62\x57\xb3\x6b\xd8\xb5\xec\x3a\x76\x3d"
"\xbb\x81\xdd\xc8\x6e\x62\x37\xb3\x5b\xd8\xad\xec\x36\x76\x3b\xbb\x83"
"\xdd\xc9\xee\x62\x77\xb3\x7b\xd8\xbd\xec\x3e\x76\x3f\x1b\xc7\x1e\x60"
"\x0f\xb2\x87\xd8\xc3\xec\x11\xf6\x28\x7b\x8c\x3d\xce\x9e\x60\x4f\xb2"
"\xa7\xd8\xd3\xec\x19\xf6\x2c\x7b\x8e\x8d\x67\xcf\xb3\x09\xec\x05\xf6"
"\x22\x7b\x89\xbd\xcc\x5e\x61\xaf\xb2\xd7\xd8\xeb\xec\x0d\xf6\x26\x7b"
"\x8b\xbd\xcd\xde\x61\xef\xb2\xf7\xd8\xfb\xec\x03\xf6\x21\xfb\x88\x7d"
"\xcc\x3e\x61\x9f\xb2\xcf\xd8\xe7\xec\x0b\xf6\x25\xfb\x8a\x7d\xcd\xbe"
"\x61\x13\xd9\xb7\xec\x3b\xf6\x3d\xfb\x81\xfd\xc8\x7e\x62\x3f\xb3\x5f"
"\xd8\xaf\xec\x37\xf6\x3b\xfb\x83\xfd\xc9\xfe\x62\x7f\xb3\x49\xec\x1f"
"\xf6\x2f\xfb\x8f\x4d\xc6\x25\xe7\x52\x70\x29\xb9\x54\x5c\x6a\x2e\x0d"
"\x97\x96\x4b\xc7\xa5\xe7\x32\x70\x19\xb9\x4c\x5c\x66\x2e\x0b\x97\x95"
"\xcb\xc6\x65\xe7\x72\x70\x39\xb9\x5c\x5c\x6e\x2e\x0f\x87\x70\x28\x87"
"\x71\x38\x47\x70\x24\x47\x71\x34\xc7\x70\x2c\xc7\x71\x3c\x27\x70\x22"
"\x27\x71\x32\xa7\x70\x2a\xa7\x71\x80\xd3\x39\x83\x33\x39\xc8\x59\x9c"
"\xcd\x39\x9c\xcb\x79\x9c\xcf\x05\x5c\xc8\x45\x5c\x8c\xcb\xcb\xe5\xe3"
"\xf2\x73\x05\xb8\x82\x5c\x21\xae\x30\x57\x84\x2b\xca\x15\xe3\x8a\x73"
"\x25\xb8\x92\x5c\x29\xae\x34\x57\x86\x2b\xcb\x95\xe3\xca\x73\x15\xb8"
"\x8a\x9c\xcb\x55\xe6\xaa\x70\x55\xb9\x6a\x5c\x75\xae\x06\x57\x93\xab"
"\xc5\xd5\xe6\xea\x70\x75\xb9\x7a\x5c\x7d\xae\x01\xd7\x90\x6b\xc4\x35"
"\xe6\x9a\x70\x4d\xb9\x66\x5c\x73\xae\x05\xd7\x92\x6b\xc5\xb5\xe6\xda"
"\x70\x6d\xb9\x76\x5c\x7b\xae\x03\xd7\x91\xeb\xc4\x75\xfe\x7f\xbd\x1f"
"\xc2\x0d\xe5\x86\x71\xc3\xb9\xe1\xdc\x48\x6e\x14\x37\x9a\x1b\xc3\x8d"
"\xe5\xc6\x71\xe3\xb9\x09\xdc\x44\x6e\x12\x37\x99\x9b\xc2\x4d\xe5\xa6"
"\x71\xd3\xb9\x19\xdc\x4c\x6e\x16\x37\x9b\x9b\xc3\xcd\xe5\xe6\x71\xf3"
"\xb9\x05\xdc\x42\x6e\x11\xb7\x98\x5b\xc2\x2d\xe5\x96\x71\xcb\xb9\x15"
"\xdc\x4a\x6e\x15\xb7\x9a\x5b\xc3\xad\xe5\xd6\x71\xeb\xb9\x0d\xdc\x46"
"\x6e\x13\xb7\x99\xdb\xc2\x6d\xe5\xb6\x71\xdb\xb9\x1d\xdc\x4e\x6e\x17"
"\xb7\x9b\xdb\xc3\xed\xe5\xf6\x71\xfb\xb9\x38\xee\x00\x77\x90\x3b\xc4"
"\x1d\xe6\x8e\x70\x47\xb9\x63\xdc\x71\xee\x04\x77\x92\x3b\xc5\x9d\xe6"
"\xce\x70\x67\xb9\x73\x5c\x3c\x77\x9e\x4b\xe0\x2e\x70\x17\xb9\x4b\xdc"
"\x65\xee\x0a\x77\x95\xbb\xc6\x5d\xe7\x6e\x70\x37\xb9\x5b\xdc\x6d\xee"
"\x0e\x77\x97\xbb\xc7\xdd\xe7\x1e\x70\x0f\xb9\x47\xdc\x63\xee\x09\xf7"
"\x94\x7b\xc6\x3d\xe7\x5e\x70\x2f\xb9\x57\xdc\x6b\xee\x0d\x97\xc8\xbd"
"\xe5\xde\x71\xef\xb9\x0f\xdc\x47\xee\x13\xf7\x99\xfb\xc2\x7d\xe5\xbe"
"\x71\xdf\xb9\x1f\xdc\x4f\xee\x17\xf7\x9b\x4b\xe2\xfe\x70\x7f\xb9\x7f"
"\x5c\x32\x3e\x39\x9f\x82\x4f\xc9\xa7\xe2\x53\xf3\x69\xf8\xb4\x7c\x3a"
"\x3e\x3d\x9f\x81\xcf\xc8\x67\xe2\x33\xf3\x59\xf8\xac\x7c\x36\x3e\x3b"
"\x9f\x83\xcf\xc9\xe7\xe2\x73\xf3\x79\x78\x84\x47\x79\x8c\xc7\x79\x82"
"\x27\x79\x8a\xa7\x79\x86\x67\x79\x8e\xe7\x79\x81\x17\x79\x89\x97\x79"
"\x85\x57\x79\x8d\x07\xbc\xce\x1b\xbc\xc9\x43\xde\xe2\x6d\xde\xe1\x5d"
"\xde\xe3\x7d\x3e\xe0\x43\x3e\xe2\x63\x7c\x5e\x3e\x1f\x9f\x9f\x2f\xc0"
"\x17\xe4\x0b\xf1\x85\xf9\x22\x7c\x51\xbe\x18\x5f\x9c\x2f\xc1\x97\xe4"
"\x4b\xf1\xa5\xf9\x32\x7c\x59\xbe\x1c\x5f\x9e\xaf\xc0\x57\xe4\x2b\xf1"
"\x95\xf9\x2a\x7c\x55\xbe\x1a\x5f\x9d\xaf\xc1\xd7\xe4\x6b\xf1\xb5\xf9"
"\x3a\x7c\x5d\xbe\x1e\x5f\x9f\x6f\xc0\x37\xe4\x1b\xf1\x8d\xf9\x26\x7c"
"\x53\xbe\x19\xdf\x9c\x6f\xc1\xb7\xe4\x5b\xf1\xad\xf9\x36\x7c\x5b\xbe"
"\x1d\xdf\x9e\xef\xc0\x77\xe4\x3b\xf1\x9d\xf9\x2e\x7c\x57\xbe\x1b\xdf"
"\x9d\xef\xc1\xf7\xe4\x7b\xf1\xbd\xf9\x3e\x7c\x5f\xbe\x1f\xdf\x9f\x1f"
"\xc0\x0f\xe4\x07\xf1\x83\xf9\x21\xfc\x50\x7e\x18\x3f\x9c\x1f\xc1\x8f"
"\xe4\x47\xf1\xa3\xf9\x31\xfc\x58\x7e\x1c\x3f\x9e\x9f\xc0\x4f\xe4\x27"
"\xf1\x93\xf9\x29\xfc\x54\x7e\x1a\x3f\x9d\x9f\xc1\xcf\xe4\x67\xf1\xb3"
"\xf9\x39\xfc\x5c\x7e\x1e\x3f\x9f\x5f\xc0\x2f\xe4\x17\xf1\x8b\xf9\x25"
"\xfc\x52\x7e\x19\xbf\x9c\x5f\xc1\xaf\xe4\x57\xf1\xab\xf9\x35\xfc\x5a"
"\x7e\x1d\xbf\x9e\xdf\xc0\x6f\xe4\x37\xf1\x9b\xf9\x2d\xfc\x56\x7e\x1b"
"\xbf\x9d\xdf\xc1\xef\xe4\x77\xf1\xbb\xf9\x3d\xfc\x5e\x7e\x1f\xbf\x9f"
"\x8f\xe3\x0f\xf0\x07\xf9\x43\xfc\x61\xfe\x08\x7f\x94\x3f\xc6\x1f\xe7"
"\x4f\xf0\x27\xf9\x53\xfc\x69\xfe\x0c\x7f\x96\x3f\xc7\xc7\xf3\xe7\xf9"
"\x04\xfe\x02\x7f\x91\xbf\xc4\x5f\xe6\xaf\xf0\x57\xf9\x6b\xfc\x75\xfe"
"\x06\x7f\x93\xbf\xc5\xdf\xe6\xef\xf0\x77\xf9\x7b\xfc\x7d\xfe\x01\xff"
"\x90\x7f\xc4\x3f\xe6\x9f\xf0\x4f\xf9\x67\xfc\x73\xfe\x05\xff\x92\x7f"
"\xc5\xbf\xe6\xdf\xf0\x89\xfc\x5b\xfe\x1d\xff\x9e\xff\xc0\x7f\xe4\x3f"
"\xf1\x9f\xf9\x2f\xfc\x57\xfe\x1b\xff\x9d\xff\xc1\xff\xe4\x7f\xf1\xbf"
"\xf9\x24\xfe\x0f\xff\x97\xff\xc7\x27\x13\x92\x0b\x29\x84\x94\x42\x2a"
"\x21\xb5\x90\x46\x48\x2b\xa4\x13\xd2\x0b\x19\x84\x8c\x42\x26\x21\xb3"
"\x90\x45\xc8\x2a\x64\x13\xb2\x0b\x39\x84\x9c\x42\x2e\x21\xb7\x90\x47"
"\x40\x04\x54\xc0\x04\x5c\x20\x04\x52\xa0\x04\x5a\x60\x04\x56\xe0\x04"
"\x5e\x10\x04\x51\x90\x04\x59\x50\x04\x55\xd0\x04\x20\xe8\x82\x21\x98"
"\x02\x14\x2c\xc1\x16\x1c\xc1\x15\x3c\xc1\x17\x02\x21\x14\x22\x21\x26"
"\xe4\x15\xf2\x09\xf9\x85\x02\x42\x41\xa1\x90\x50\x58\x28\x22\x14\x15"
"\x8a\x09\xc5\x85\x12\x42\x49\xa1\x94\x50\x5a\x28\x23\x94\x15\xca\x09"
"\xe5\x85\x0a\x42\x45\xa1\x92\x50\x59\xa8\x22\x54\x15\xaa\x09\xd5\x85"
"\x1a\x42\x4d\xa1\x96\x50\x5b\xa8\x23\xd4\x15\xea\x09\xf5\x85\x06\x42"
"\x43\xa1\x91\xd0\x58\x68\x22\x34\x15\x9a\x09\xcd\x85\x16\x42\x4b\xa1"
"\x95\xd0\x5a\x68\x23\xb4\x15\xda\x09\xed\x85\x0e\x42\x47\xa1\x93\xd0"
"\x59\xe8\x22\x74\x15\xba\x09\xdd\x85\x1e\x42\x4f\xa1\x97\xd0\x5b\xe8"
"\x23\xf4\x15\xfa\x09\xfd\x85\x01\xc2\x40\x61\x90\x30\x58\x18\x22\x0c"
"\x15\x86\x09\xc3\x85\x11\xc2\x48\x61\x94\x30\x5a\x18\x23\x8c\x15\xc6"
"\x09\xe3\x85\x09\xc2\x44\x61\x92\x30\x59\x98\x22\x4c\x15\xa6\x09\xd3"
"\x85\x19\xc2\x4c\x61\x96\x30\x5b\x98\x23\xcc\x15\xe6\x09\xf3\x85\x05"
"\xc2\x42\x61\x91\xb0\x58\x58\x22\x2c\x15\x96\x09\xcb\x85\x15\xc2\x4a"
"\x61\x95\xb0\x5a\x58\x23\xac\x15\xd6\x09\xeb\x85\x0d\xc2\x46\x61\x93"
"\xb0\x59\xd8\x22\x6c\x15\xb6\x09\xdb\x85\x1d\xc2\x4e\x61\x97\xb0\x5b"
"\xd8\x23\xec\x15\xf6\x09\xfb\x85\x38\xe1\x80\x70\x50\x38\x24\x1c\x16"
"\x8e\x08\x47\x85\x63\xc2\x71\xe1\x84\x70\x52\x38\x25\x9c\x16\xce\x08"
"\x67\x85\x73\x42\xbc\x70\x5e\x48\x10\x2e\x08\x17\x85\x4b\xc2\x65\xe1"
"\x8a\x70\x55\xb8\x26\x5c\x17\x6e\x08\x37\x85\x5b\xc2\x6d\xe1\x8e\x70"
"\x57\xb8\x27\xdc\x17\x1e\x08\x0f\x85\x47\xc2\x63\xe1\x89\xf0\x54\x78"
"\x26\x3c\x17\x5e\x08\x2f\x85\x57\xc2\x6b\xe1\x8d\x90\x28\xbc\x15\xde"
"\x09\xef\x85\x0f\xc2\x47\xe1\x93\xf0\x59\xf8\x22\x7c\x15\xbe\x09\xdf"
"\x85\x1f\xc2\x4f\xe1\x97\xf0\x5b\x48\x12\xfe\x08\x7f\x85\x7f\x42\x32"
"\x31\xb9\x98\x42\x4c\x29\xa6\x12\x53\x8b\x69\xc4\xb4\x62\x3a\x31\xbd"
"\x98\x41\xcc\x28\x66\x12\x33\x8b\x59\xc4\xac\x62\x36\x31\xbb\x98\x43"
"\xcc\x29\xe6\x12\x73\x8b\x79\x44\x44\x44\x45\x4c\xc4\x45\x42\x24\x45"
"\x4a\xa4\x45\x46\x64\x45\x4e\xe4\x45\x41\x14\x45\x49\x94\x45\x45\x54"
"\x45\x4d\x04\xa2\x2e\x1a\xa2\x29\x42\xd1\x12\x6d\xd1\x11\x5d\xd1\x13"
"\x7d\x31\x10\x43\x31\x12\x63\x62\x5e\x31\x9f\x98\x5f\x2c\x20\x16\x14"
"\x0b\x89\x85\xc5\x22\x62\x51\xb1\x98\x58\x5c\x2c\x21\x96\x14\x4b\x89"
"\xa5\xc5\x32\x62\x59\xb1\x9c\x58\x5e\xac\x20\x56\x14\x2b\x89\x95\xc5"
"\x2a\x62\x55\xb1\x9a\x58\x5d\xac\x21\xd6\x14\x6b\x89\xb5\xc5\x3a\x62"
"\x5d\xb1\x9e\x58\x5f\x6c\x20\x36\x14\x1b\x89\x8d\xc5\x26\x62\x53\xb1"
"\x99\xd8\x5c\x6c\x21\xb6\x14\x5b\x89\xad\xc5\x36\x62\x5b\xb1\x9d\xd8"
"\x5e\xec\x20\x76\x14\x3b\x89\x9d\xc5\x2e\x62\x57\xb1\x9b\xd8\x5d\xec"
"\x21\xf6\x14\x7b\x89\xbd\xc5\x3e\x62\x5f\xb1\x9f\xd8\x5f\x1c\x20\x0e"
"\x14\x07\x89\x83\xc5\x21\xe2\x50\x71\x98\x38\x5c\x1c\x21\x8e\x14\x47"
"\x89\xa3\xc5\x31\xe2\x58\x71\x9c\x38\x5e\x9c\x20\x4e\x14\x27\x89\x93"
"\xc5\x29\xe2\x54\x71\x9a\x38\x5d\x9c\x21\xce\x14\x67\x89\xb3\xc5\x39"
"\xe2\x5c\x71\x9e\x38\x5f\x5c\x20\x2e\x14\x17\x89\x8b\xc5\x25\xe2\x52"
"\x71\x99\xb8\x5c\x5c\x21\xae\x14\x57\x89\xab\xc5\x35\xe2\x5a\x71\x9d"
"\xb8\x5e\xdc\x20\x6e\x14\x37\x89\x9b\xc5\x2d\xe2\x56\x71\x9b\xb8\x5d"
"\xdc\x21\xee\x14\x77\x89\xbb\xc5\x3d\xe2\x5e\x71\x9f\xb8\x5f\x8c\x13"
"\x0f\x88\x07\xc5\x43\xe2\x61\xf1\x88\x78\x54\x3c\x26\x1e\x17\x4f\x88"
"\x27\xc5\x53\xe2\x69\xf1\x8c\x78\x56\x3c\x27\xc6\x8b\xe7\xc5\x04\xf1"
"\xc2\xff\xa7\x37\xaf\xc5\x37\x62\xa2\xf8\x56\x7c\x27\xbe\x17\x3f\x88"
"\x1f\xc5\x4f\xe2\x67\xf1\x8b\xf8\x55\xfc\x26\x7e\x17\x7f\x88\x3f\xc5"
"\x5f\xe2\x6f\x31\x49\xfc\x23\xfe\x15\xff\x89\xc9\xa4\xe4\x52\x0a\x29"
"\xa5\x94\x4a\x4a\x2d\xa5\x91\xd2\x4a\xe9\xa4\xf4\x52\x06\x29\xa3\x94"
"\x49\xca\x2c\x65\x91\xb2\x4a\xd9\xa4\xec\x52\x0e\x29\xa7\x94\x4b\xca"
"\x2d\xe5\x91\x10\x09\x95\x30\x09\x97\x08\x89\x94\x28\x89\x96\x18\x89"
"\x95\x38\x89\x97\x04\x49\x94\x24\x49\x96\x14\x49\x95\x34\x09\x48\xba"
"\x64\x48\xa6\x04\x25\x4b\xb2\x25\x47\x72\x25\x4f\xf2\xa5\x40\x0a\xa5"
"\x48\x8a\x49\x79\xa5\x7c\x52\x7e\xa9\x80\x54\x50\x2a\x24\x15\x96\x8a"
"\x48\x45\xa5\x62\x52\x71\xa9\x84\x54\x52\x2a\x25\x95\x96\xca\x48\x65"
"\xa5\x72\x52\x79\xa9\x82\x54\x51\xaa\x24\x55\x96\xaa\x48\x55\xa5\x6a"
"\x52\x75\xa9\x86\x54\x53\xaa\x25\xd5\x96\xea\x48\x75\xa5\x7a\x52\x7d"
"\xa9\x81\xd4\x50\x6a\x24\x35\x96\x9a\x48\x4d\xa5\x66\x52\x73\xa9\x85"
"\xd4\x52\x6a\x25\xb5\x96\xda\x48\x6d\xa5\x76\x52\x7b\xa9\x83\xd4\x51"
"\xea\x24\x75\x96\xba\x48\x5d\xa5\x6e\x52\x77\xa9\x87\xd4\x53\xea\x25"
"\xf5\x96\xfa\x48\x7d\xa5\x7e\x52\x7f\x69\x80\x34\x50\x1a\x24\x0d\x96"
"\x86\x48\x43\xa5\x61\xd2\x70\x69\x84\x34\x52\x1a\x25\x8d\x96\xc6\x48"
"\x63\xa5\x71\xd2\x78\x69\x82\x34\x51\x9a\x24\x4d\x96\xa6\x48\x53\xa5"
"\x69\xd2\x74\x69\x86\x34\x53\x9a\x25\xcd\x96\xe6\x48\x73\xa5\x79\xd2"
"\x7c\x69\x81\xb4\x50\x5a\x24\x2d\x96\x96\x48\x4b\xa5\x65\xd2\x72\x69"
"\x85\xb4\x52\x5a\x25\xad\x96\xd6\x48\x6b\xa5\x75\xd2\x7a\x69\x83\xb4"
"\x51\xda\x24\x6d\x96\xb6\x48\x5b\xa5\x6d\xd2\x76\x69\x87\xb4\x53\xda"
"\x25\xed\x96\xf6\x48\x7b\xa5\x7d\xd2\x7e\x29\x4e\x3a\x20\x1d\x94\x0e"
"\x49\x87\xa5\x23\xd2\x51\xe9\x98\x74\x5c\x3a\x21\x9d\x94\x4e\x49\xa7"
"\xa5\x33\xd2\x59\xe9\x9c\x14\x2f\x9d\x97\x12\xa4\x0b\xd2\x45\xe9\x92"
"\x74\x59\xba\x22\x5d\x95\xae\x49\xd7\xa5\x1b\xd2\x4d\xe9\x96\x74\x5b"
"\xba\x23\xdd\x95\xee\x49\xf7\xa5\x07\xd2\x43\xe9\x91\xf4\x58\x7a\x22"
"\x3d\x95\x9e\x49\xcf\xa5\x17\xd2\x4b\xe9\x95\xf4\x5a\x7a\x23\x25\x4a"
"\x6f\xa5\x77\xd2\x7b\xe9\x83\xf4\x51\xfa\x24\x7d\x96\xbe\x48\x5f\xa5"
"\x6f\xd2\x77\xe9\x87\xf4\x53\xfa\x25\xfd\x96\x92\xa4\x3f\xd2\x5f\xe9"
"\x9f\x94\x4c\x4e\x2e\xa7\x90\x53\xca\xa9\xe4\xd4\x72\x1a\x39\xad\x9c"
"\x4e\x4e\x2f\x67\x90\x33\xca\x99\xe4\xcc\x72\x16\x39\xab\x9c\x4d\xce"
"\x2e\xe7\x90\x73\xca\xb9\xe4\xdc\x72\x1e\x19\x91\x51\x19\x93\x71\x99"
"\x90\x49\x99\x92\x69\x99\x91\x59\x99\x93\x79\x59\x90\x45\x59\x92\x65"
"\x59\x91\x55\x59\x93\x81\xac\xcb\x86\x6c\xca\x50\xb6\x64\x5b\x76\x64"
"\x57\xf6\x64\x5f\x0e\xe4\x50\x8e\xe4\x98\x9c\x57\xce\x27\xe7\x97\x0b"
"\xc8\x05\xe5\x42\x72\x61\xb9\x88\x5c\x54\x2e\x26\x17\x97\x4b\xc8\x25"
"\xe5\x52\x72\x69\xb9\x8c\x5c\x56\x2e\x27\x97\x97\x2b\xc8\x15\xe5\x4a"
"\x72\x65\xb9\x8a\x5c\x55\xae\x26\x57\x97\x6b\xc8\x35\xe5\x5a\x72\x6d"
"\xb9\x8e\x5c\x57\xae\x27\xd7\x97\x1b\xc8\x0d\xe5\x46\x72\x63\xb9\x89"
"\xdc\x54\x6e\x26\x37\x97\x5b\xc8\x2d\xe5\x56\x72\x6b\xb9\x8d\xdc\x56"
"\x6e\x27\xb7\x97\x3b\xc8\x1d\xe5\x4e\x72\x67\xb9\x8b\xdc\x55\xee\x26"
"\x77\x97\x7b\xc8\x3d\xe5\x5e\x72\x6f\xb9\x8f\xdc\x57\xee\x27\xf7\x97"
"\x07\xc8\x03\xe5\x41\xf2\x60\x79\x88\x3c\x54\x1e\x26\x0f\x97\x47\xc8"
"\x23\xe5\x51\xf2\x68\x79\x8c\x3c\x56\x1e\x27\x8f\x97\x27\xc8\x13\xe5"
"\x49\xf2\x64\x79\x8a\x3c\x55\x9e\x26\x4f\x97\x67\xc8\x33\xe5\x59\xf2"
"\x6c\x79\x8e\x3c\x57\x9e\x27\xcf\x97\x17\xc8\x0b\xe5\x45\xf2\x62\x79"
"\x89\xbc\x54\x5e\x26\x2f\x97\x57\xc8\x2b\xe5\x55\xf2\x6a\x79\x8d\xbc"
"\x56\x5e\x27\xaf\x97\x37\xc8\x1b\xe5\x4d\xf2\x66\x79\x8b\xbc\x55\xde"
"\x26\x6f\x97\x77\xc8\x3b\xe5\x5d\xf2\x6e\x79\x8f\xbc\x57\xde\x27\xef"
"\x97\xe3\xe4\x03\xf2\x41\xf9\x90\x7c\x58\x3e\x22\x1f\x95\x8f\xc9\xc7"
"\xe5\x13\xf2\x49\xf9\x94\x7c\x5a\x3e\x23\x9f\x95\xcf\xc9\xf1\xf2\x79"
"\x39\x41\xbe\x20\x5f\x94\x2f\xc9\x97\xe5\x2b\xf2\x55\xf9\x9a\x7c\x5d"
"\xbe\x21\xdf\x94\x6f\xc9\xb7\xe5\x3b\xf2\x5d\xf9\x9e\x7c\x5f\x7e\x20"
"\x3f\x94\x1f\xc9\x8f\xe5\x27\xf2\x53\xf9\x99\xfc\x5c\x7e\x21\xbf\x94"
"\x5f\xc9\xaf\xe5\x37\x72\xa2\xfc\x56\x7e\x27\xbf\x97\x3f\xc8\x1f\xe5"
"\x4f\xf2\x67\xf9\x8b\xfc\x55\xfe\x26\x7f\x97\x7f\xc8\x3f\xe5\x5f\xf2"
"\x6f\x39\x49\xfe\x23\xff\x95\xff\xc9\xc9\x94\xe4\x4a\x0a\x25\xa5\x92"
"\x4a\x49\xad\xa4\x51\xd2\x2a\xe9\x94\xf4\x4a\x06\x25\xa3\x92\x49\xc9"
"\xac\x64\x51\xb2\x2a\xd9\x94\xec\x4a\x0e\x25\xa7\x92\x4b\xc9\xad\xe4"
"\x51\x10\x05\x55\x30\x05\x57\x08\x85\x54\x28\x85\x56\x18\x85\x55\x38"
"\x85\x57\x04\x45\x54\x24\x45\x56\x14\x45\x55\x34\x05\x28\xba\x62\x28"
"\xa6\x02\x15\x4b\xb1\x15\x47\x71\x15\x4f\xf1\x95\x40\x09\x95\x48\x89"
"\x29\x79\x95\x7c\x4a\x7e\xa5\x80\x52\x50\x29\xa4\x14\x56\x8a\x28\x45"
"\x95\x62\x4a\x71\xa5\x84\x52\x52\x29\xa5\x94\x56\xca\x28\x65\x95\x72"
"\x4a\x79\xa5\x82\x52\x51\xa9\xa4\x54\x56\xaa\x28\x55\x95\x6a\x4a\x75"
"\xa5\x86\x52\x53\xa9\xa5\xd4\x56\xea\x28\x75\x95\x7a\x4a\x7d\xa5\x81"
"\xd2\x50\x69\xa4\x34\x56\x9a\x28\x4d\x95\x66\x4a\x73\xa5\x85\xd2\x52"
"\x69\xa5\xb4\x56\xda\x28\x6d\x95\x76\x4a\x7b\xa5\x83\xd2\x51\xe9\xa4"
"\x74\x56\xba\x28\x5d\x95\x6e\x4a\x77\xa5\x87\xd2\x53\xe9\xa5\xf4\x56"
"\xfa\x28\x7d\x95\x7e\x4a\x7f\x65\x80\x32\x50\x19\xa4\x0c\x56\x86\x28"
"\x43\x95\x61\xca\x70\x65\x84\x32\x52\x19\xa5\x8c\x56\xc6\x28\x63\x95"
"\x71\xca\x78\x65\x82\x32\x51\x99\xa4\x4c\x56\xa6\x28\x53\x95\x69\xca"
"\x74\x65\x86\x32\x53\x99\xa5\xcc\x56\xe6\x28\x73\x95\x79\xca\x7c\x65"
"\x81\xb2\x50\x59\xa4\x2c\x56\x96\x28\x4b\x95\x65\xca\x72\x65\x85\xb2"
"\x52\x59\xa5\xac\x56\xd6\x28\x6b\x95\x75\xca\x7a\x65\x83\xb2\x51\xd9"
"\xa4\x6c\x56\xb6\x28\x5b\x95\x6d\xca\x76\x65\x87\xb2\x53\xd9\xa5\xec"
"\x56\xf6\x28\x7b\x95\x7d\xca\x7e\x25\x4e\x39\xa0\x1c\x54\x0e\x29\x87"
"\x95\x23\xca\x51\xe5\x98\x72\x5c\x39\xa1\x9c\x54\x4e\x29\xa7\x95\x33"
"\xca\x59\xe5\x9c\x12\xaf\x9c\x57\x12\x94\x0b\xca\x45\xe5\x92\x72\x59"
"\xb9\xa2\x5c\x55\xae\x29\xd7\x95\x1b\xca\x4d\xe5\x96\x72\x5b\xb9\xa3"
"\xdc\x55\xee\x29\xf7\x95\x07\xca\x43\xe5\x91\xf2\x58\x79\xa2\x3c\x55"
"\x9e\x29\xcf\x95\x17\xca\x4b\xe5\x95\xf2\x5a\x79\xa3\x24\x2a\x6f\x95"
"\x77\xca\x7b\xe5\x83\xf2\x51\xf9\xa4\x7c\x56\xbe\x28\x5f\x95\x6f\xca"
"\x77\xe5\x87\xf2\x53\xf9\xa5\xfc\x56\x92\x94\x3f\xca\x5f\xe5\x9f\x92"
"\x4c\x4d\xae\xa6\x50\x53\xaa\xa9\xd4\xd4\x6a\x1a\x35\xad\x9a\x4e\x4d"
"\xaf\x66\x50\x33\xaa\x99\xd4\xcc\x6a\x16\x35\xab\x9a\x4d\xcd\xae\xe6"
"\x50\x73\xaa\xb9\xd4\xdc\x6a\x1e\x15\x51\x51\x15\x53\x71\x95\x50\x49"
"\x95\x52\x69\x95\x51\x59\x95\x53\x79\x55\x50\x45\x55\x52\x65\x55\x51"
"\x55\x55\x53\x81\xaa\xab\x86\x6a\xaa\x50\xb5\x54\x5b\x75\x54\x57\xf5"
"\x54\x5f\x0d\xd4\x50\x8d\xd4\x98\x9a\x57\xcd\xa7\xe6\x57\x0b\xa8\x05"
"\xd5\x42\x6a\x61\xb5\x88\x5a\x54\x2d\xa6\x16\x57\x4b\xa8\x25\xd5\x52"
"\x6a\x69\xb5\x8c\x5a\x56\x2d\xa7\x96\x57\x2b\xa8\x15\xd5\x4a\x6a\x65"
"\xb5\x8a\x5a\x55\xad\xa6\x56\x57\x6b\xa8\x35\xd5\x5a\x6a\x6d\xb5\x8e"
"\x5a\x57\xad\xa7\xd6\x57\x1b\xa8\x0d\xd5\x46\x6a\x63\xb5\x89\xda\x54"
"\x6d\xa6\x36\x57\x5b\xa8\x2d\xd5\x56\x6a\x6b\xb5\x8d\xda\x56\x6d\xa7"
"\xb6\x57\x3b\xa8\x1d\xd5\x4e\x6a\x67\xb5\x8b\xda\x55\xed\xa6\x76\x57"
"\x7b\xa8\x3d\xd5\x5e\x6a\x6f\xb5\x8f\xda\x57\xed\xa7\xf6\x57\x07\xa8"
"\x03\xd5\x41\xea\x60\x75\x88\x3a\x54\x1d\xa6\x0e\x57\x47\xa8\x23\xd5"
"\x51\xea\x68\x75\x8c\x3a\x56\x1d\xa7\x8e\x57\x27\xa8\x13\xd5\x49\xea"
"\x64\x75\x8a\x3a\x55\x9d\xa6\x4e\x57\x67\xa8\x33\xd5\x59\xea\x6c\x75"
"\x8e\x3a\x57\x9d\xa7\xce\x57\x17\xa8\x0b\xd5\x45\xea\x62\x75\x89\xba"
"\x54\x5d\xa6\x2e\x57\x57\xa8\x2b\xd5\x55\xea\x6a\x75\x8d\xba\x56\x5d"
"\xa7\xae\x57\x37\xa8\x1b\xd5\x4d\xea\x66\x75\x8b\xba\x55\xdd\xa6\x6e"
"\x57\x77\xa8\x3b\xd5\x5d\xea\x6e\x75\x8f\xba\x57\xdd\xa7\xee\x57\xe3"
"\xd4\x03\xea\x41\xf5\x90\x7a\x58\x3d\xa2\x1e\x55\x8f\xa9\xc7\xd5\x13"
"\xea\x49\xf5\x94\x7a\x5a\x3d\xa3\x9e\x55\xcf\xa9\xf1\xea\x79\x35\x41"
"\xbd\xa0\x5e\x54\x2f\xa9\x97\xd5\x2b\xea\x55\xf5\x9a\x7a\x5d\xbd\xa1"
"\xde\x54\x6f\xa9\xb7\xd5\x3b\xea\x5d\xf5\x9e\x7a\x5f\x7d\xa0\x3e\x54"
"\x1f\xa9\x8f\xd5\x27\xea\x53\xf5\x99\xfa\x5c\x7d\xa1\xbe\x54\x5f\xa9"
"\xaf\xd5\x37\x6a\xa2\xfa\x56\x7d\xa7\xbe\x57\x3f\xa8\x1f\xd5\x4f\xea"
"\x67\xf5\x8b\xfa\x55\xfd\xa6\x7e\x57\x7f\xa8\x3f\xd5\x5f\xea\x6f\x35"
"\x49\xfd\xa3\xfe\x55\xff\xa9\xc9\xb4\xe4\x5a\x0a\x2d\xa5\x96\x4a\x4b"
"\xad\xa5\xd1\xd2\x6a\xe9\xb4\xf4\x5a\x06\x2d\xa3\x96\x49\xcb\xac\x65"
"\xd1\xb2\x6a\xd9\xb4\xec\x5a\x0e\x2d\xa7\x96\x4b\xcb\xad\xe5\xd1\x10"
"\x0d\xd5\x30\x0d\xd7\x08\x8d\xd4\x28\x8d\xd6\x18\x8d\xd5\x38\x8d\xd7"
"\x04\x4d\xd4\x24\x4d\xd6\x14\x4d\xd5\x34\x0d\x68\xba\x66\x68\xa6\x06"
"\x35\x4b\xb3\x35\x47\x73\x35\x4f\xf3\xb5\x40\x0b\xb5\x48\x8b\x69\x79"
"\xb5\x7c\x5a\x7e\xad\x80\x56\x50\x2b\xa4\x15\xd6\x8a\x68\x45\xb5\x62"
"\x5a\x71\xad\x84\x56\x52\x2b\xa5\x95\xd6\xca\x68\x65\xb5\x72\x5a\x79"
"\xad\x82\x56\x51\xab\xa4\x55\xd6\xaa\x68\x55\xb5\x6a\x5a\x75\xad\x86"
"\x56\x53\xab\xa5\xd5\xd6\xea\x68\x75\xb5\x7a\x5a\x7d\xad\x81\xd6\x50"
"\x6b\xa4\x35\xd6\x9a\x68\x4d\xb5\x66\x5a\x73\xad\x85\xd6\x52\x6b\xa5"
"\xb5\xd6\xda\x68\x6d\xb5\x76\x5a\x7b\xad\x83\xd6\x51\xeb\xa4\x75\xd6"
"\xba\x68\x5d\xb5\x6e\x5a\x77\xad\x87\xd6\x53\xeb\xa5\xf5\xd6\xfa\x68"
"\x7d\xb5\x7e\x5a\x7f\x6d\x80\x36\x50\x1b\xa4\x0d\xd6\x86\x68\x43\xb5"
"\x61\xda\x70\x6d\x84\x36\x52\x1b\xa5\x8d\xd6\xc6\x68\x63\xb5\x71\xda"
"\x78\x6d\x82\x36\x51\x9b\xa4\x4d\xd6\xa6\x68\x53\xb5\x69\xda\x74\x6d"
"\x86\x36\x53\x9b\xa5\xcd\xd6\xe6\x68\x73\xb5\x79\xda\x7c\x6d\x81\xb6"
"\x50\x5b\xa4\x2d\xd6\x96\x68\x4b\xb5\x65\xda\x72\x6d\x85\xb6\x52\x5b"
"\xa5\xad\xd6\xd6\x68\x6b\xb5\x75\xda\x7a\x6d\x83\xb6\x51\xdb\xa4\x6d"
"\xd6\xb6\x68\x5b\xb5\x6d\xda\x76\x6d\x87\xb6\x53\xdb\xa5\xed\xd6\xf6"
"\x68\x7b\xb5\x7d\xda\x7e\x2d\x4e\x3b\xa0\x1d\xd4\x0e\x69\x87\xb5\x23"
"\xda\x51\xed\x98\x76\x5c\x3b\xa1\x9d\xd4\x4e\x69\xa7\xb5\x33\xda\x59"
"\xed\x9c\x16\xaf\x9d\xd7\x12\xb4\x0b\xda\x45\xed\x92\x76\x59\xbb\xa2"
"\x5d\xd5\xae\x69\xd7\xb5\x1b\xda\x4d\xed\x96\x76\x5b\xbb\xa3\xdd\xd5"
"\xee\x69\xf7\xb5\x07\xda\x43\xed\x91\xf6\x58\x7b\xa2\x3d\xd5\x9e\x69"
"\xcf\xb5\x17\xda\x4b\xed\x95\xf6\x5a\x7b\xa3\x25\x6a\x6f\xb5\x77\xda"
"\x7b\xed\x83\xf6\x51\xfb\xa4\x7d\xd6\xbe\x68\x5f\xb5\x6f\xda\x77\xed"
"\x87\xf6\x53\xfb\xa5\xfd\xd6\x92\xb4\x3f\xda\x5f\xed\x9f\x96\x0c\x24"
"\x07\x29\x40\x4a\x90\x0a\xa4\x06\x69\x40\x5a\x90\x0e\xa4\x07\x19\x40"
"\x46\x90\x09\x64\x06\x59\x40\x56\x90\x0d\x64\x07\x39\x40\x4e\x90\x0b"
"\xe4\x06\x79\x00\x02\x50\x80\x01\x1c\x10\x80\x04\x14\xa0\x01\x03\x58"
"\xc0\x01\x1e\x08\x40\x04\x12\x90\x81\x02\x54\xa0\x01\x00\x74\x60\x00"
"\x13\x40\x60\x01\x1b\x38\xc0\x05\x1e\xf0\x41\x00\x42\x10\x81\x18\xc8"
"\x0b\xf2\x81\xfc\xa0\x00\x28\x08\x0a\x81\xc2\xa0\x08\x28\x0a\x8a\x81"
"\xe2\xa0\x04\x28\x09\x4a\x81\xd2\xa0\x0c\x28\x0b\xca\x81\xf2\xa0\x02"
"\xa8\x08\x2a\x81\xca\xa0\x0a\xa8\x0a\xaa\x81\xea\xa0\x06\xa8\x09\x6a"
"\x81\xda\xa0\x0e\xa8\x0b\xea\x81\xfa\xa0\x01\x68\x08\x1a\x81\xc6\xa0"
"\x09\x68\x0a\x9a\x81\xe6\xa0\x05\x68\x09\x5a\x81\xd6\xa0\x0d\x68\x0b"
"\xda\x81\xf6\xa0\x03\xe8\x08\x3a\x81\xce\xa0\x0b\xe8\x0a\xba\x81\xee"
"\xa0\x07\xe8\x09\x7a\x81\xde\xa0\x0f\xe8\x0b\xfa\x81\xfe\x60\x00\x18"
"\x08\x06\x81\xc1\x60\x08\x18\x0a\x86\x81\xe1\x60\x04\x18\x09\x46\x81"
"\xd1\x60\x0c\x18\x0b\xc6\x81\xf1\x60\x02\x98\x08\x26\x81\xc9\x60\x0a"
"\x98\x0a\xa6\x81\xe9\x60\x06\x98\x09\x66\x81\xd9\x60\x0e\x98\x0b\xe6"
"\x81\xf9\x60\x01\x58\x08\x16\x81\xc5\x60\x09\x58\x0a\x96\x81\xe5\x60"
"\x05\x58\x09\x56\x81\xd5\x60\x0d\x58\x0b\xd6\x81\xf5\x60\x03\xd8\x08"
"\x36\x81\xcd\x60\x0b\xd8\x0a\xb6\x81\xed\x60\x07\xd8\x09\x76\x81\xdd"
"\x60\x0f\xd8\x0b\xf6\x81\xfd\x20\x0e\x1c\x00\x07\xc1\x21\x70\x18\x1c"
"\x01\x47\xc1\x31\x70\x1c\x9c\x00\x27\xc1\x29\x70\x1a\x9c\x01\x67\xc1"
"\x39\x10\x0f\xce\x83\x04\x70\x01\x5c\x04\x97\xc0\x65\x70\x05\x5c\x05"
"\xd7\xc0\x75\x70\x03\xdc\x04\xb7\xc0\x6d\x70\x07\xdc\x05\xf7\xc0\x7d"
"\xf0\x00\x3c\x04\x8f\xc0\x63\xf0\x04\x3c\x05\xcf\xc0\x73\xf0\x02\xbc"
"\x04\xaf\xc0\x6b\xf0\x06\x24\x82\xb7\xe0\x1d\x78\x0f\x3e\x80\x8f\xe0"
"\x13\xf8\x0c\xbe\x80\xaf\xe0\x1b\xf8\x0e\x7e\x80\x9f\xe0\x17\xf8\x0d"
"\x92\xc0\x1f\xf0\x17\xfc\x03\xc9\xf4\xe4\x7a\x0a\x3d\xa5\x9e\x4a\x4f"
"\xad\xa7\xd1\xd3\xea\xe9\xf4\xf4\x7a\x06\x3d\xa3\x9e\x49\xcf\xac\x67"
"\xd1\xb3\xea\xd9\xf4\xec\x7a\x0e\x3d\xa7\x9e\x4b\xcf\xad\xe7\xd1\x11"
"\x1d\xd5\x31\x1d\xd7\x09\x9d\xd4\x29\x9d\xd6\x19\x9d\xd5\x39\x9d\xd7"
"\x05\x5d\xd4\x25\x5d\xd6\x15\x5d\xd5\x35\x1d\xe8\xba\x6e\xe8\xa6\x0e"
"\x75\x4b\xb7\x75\x47\x77\x75\x4f\xf7\xf5\x40\x0f\xf5\x48\x8f\xe9\x79"
"\xf5\x7c\x7a\x7e\xbd\x80\x5e\x50\x2f\xa4\x17\xd6\x8b\xe8\x45\xf5\x62"
"\x7a\x71\xbd\x84\x5e\x52\x2f\xa5\x97\xd6\xcb\xe8\x65\xf5\x72\x7a\x79"
"\xbd\x82\x5e\x51\xaf\xa4\x57\xd6\xab\xe8\x55\xf5\x6a\x7a\x75\xbd\x86"
"\x5e\x53\xaf\xa5\xd7\xd6\xeb\xe8\x75\xf5\x7a\x7a\x7d\xbd\x81\xde\x50"
"\x6f\xa4\x37\xd6\x9b\xe8\x4d\xf5\x66\x7a\x73\xbd\x85\xde\x52\x6f\xa5"
"\xb7\xd6\xdb\xe8\x6d\xf5\x76\x7a\x7b\xbd\x83\xde\x51\xef\xa4\x77\xd6"
"\xbb\xe8\x5d\xf5\x6e\x7a\x77\xbd\x87\xde\x53\xef\xa5\xf7\xd6\xfb\xe8"
"\x7d\xf5\x7e\x7a\x7f\x7d\x80\x3e\x50\x1f\xa4\x0f\xd6\x87\xe8\x43\xf5"
"\x99\x46\x4c\x63\xa6\x09\xd3\x94\x69\xc6\x34\x67\x5a\x30\x2d\x99\x56"
"\x4c\x6b\xa6\x0d\xd3\x96\x69\xc7\xb4\x67\x3a\x30\x1d\x99\x4e\x4c\x67"
"\xa6\x0b\xd3\x95\xe9\xc6\x74\x67\x7a\x30\x3d\x99\x5e\x4c\x6f\xa6\x0f"
"\xd3\x97\xe9\xc7\xf4\x67\x06\x30\x03\x99\x41\xcc\x60\x66\x08\x33\x94"
"\x19\xc6\x0c\x67\x46\x30\x23\x99\x51\xcc\x68\x66\x0c\x33\x96\x19\xc7"
"\x8c\x67\x26\x30\x13\x99\x49\xcc\x64\x66\x0a\x33\x95\x99\xc6\x4c\x67"
"\x66\x30\x33\x99\x59\xcc\x6c\x66\x0e\x33\x97\x99\xc7\xcc\x67\x16\x30"
"\x0b\x99\x45\xcc\x62\x66\x09\xb3\x94\x59\xc6\x2c\x67\x56\x30\x2b\x99"
"\x55\xcc\x6a\x66\x0d\xb3\x96\x59\xc7\xac\x67\x36\x30\x1b\x99\x4d\xcc"
"\x66\x66\x0b\xb3\x95\xd9\xc6\x6c\x67\x76\x30\x3b\x99\x5d\xcc\x6e\x66"
"\x0f\xb3\x97\xd9\xc7\xec\x67\x0e\x30\x07\x99\x43\xcc\x61\xe6\x08\x73"
"\x94\x39\xc6\x1c\x67\x4e\x30\x27\x99\x53\xcc\x69\xe6\x0c\x73\x96\x39"
"\xc7\x9c\x67\x2e\x30\x17\x99\x4b\xcc\x65\xe6\x0a\x73\x95\xb9\xc6\x5c"
"\x67\x6e\x30\x37\x99\x5b\xcc\x6d\xe6\x0e\x73\x97\xb9\xc7\xdc\x67\x1e"
"\x30\x0f\x99\x47\xcc\x63\xe6\x09\xf3\x94\x79\xc6\x3c\x67\x5e\x30\x2f"
"\x99\x57\xcc\x6b\xe6\x0d\xf3\x96\x79\xc7\xbc\x67\x3e\x30\x1f\x99\x4f"
"\xcc\x67\xe6\x0b\xf3\x95\xf9\xc6\x7c\x67\x7e\x30\x3f\x99\x5f\xcc\x6f"
"\xe6\x0f\xf3\x97\xf9\xc7\xc4\x31\xf1\x4c\x02\x93\xc8\x24\x61\x93\xb2"
"\xc9\xd8\xe4\x6c\x0a\x36\x25\x9b\x8a\x4d\xcd\xa6\x61\xd3\xb2\xe9\xd8"
"\xf4\x6c\x06\x36\x23\x9b\x89\xcd\xcc\x66\x61\xb3\xb2\xd9\xd8\xec\x6c"
"\x0e\x36\x27\x9b\x8b\xcd\xcd\xe6\x61\xf3\xb2\xf9\xd8\xfc\x6c\x01\xb6"
"\x20\x5b\x88\x2d\xcc\x16\x61\x8b\xb2\xc5\xd8\xe2\x6c\x09\xb6\x24\x5b"
"\x8a\x2d\xcd\x96\x61\xcb\xb2\xe5\xd8\xf2\x6c\x05\xb6\x22\x5b\x89\xad"
"\xcc\x56\x61\xab\xb2\xd5\xd8\xea\x6c\x0d\xb6\x26\x5b\x8b\xad\xcd\x62"
"\x2c\xce\x12\x2c\xc9\x52\x2c\xcd\x32\x2c\xcb\x72\x2c\xcf\x0a\xac\xc8"
"\x4a\xac\xcc\x2a\xac\xca\x6a\xac\xce\x1a\xac\xc9\x5a\xac\xcd\x3a\xac"
"\xcb\x7a\xac\xcf\x06\x6c\xc8\x46\x2c\x60\x21\x8b\xd8\x18\x5b\x87\xad"
"\xcb\xd6\x63\xeb\xb3\x0d\xd8\x86\x6c\x23\xb6\x31\xdb\x84\x6d\xca\x36"
"\x63\x9b\xb3\x2d\xd8\x96\x6c\x2b\xb6\x35\xdb\x86\x6d\xcb\xb6\x63\xdb"
"\xb3\x1d\xd8\x8e\x6c\x27\xb6\x33\xdb\x85\xed\xca\x76\x63\xbb\xb3\x3d"
"\xd8\x9e\x6c\x2f\xb6\x37\xdb\x87\xed\xcb\xf6\x63\xfb\xb3\x03\xd8\x81"
"\xec\x20\x76\x30\x3b\x84\x1d\xca\x0e\x63\x87\xb3\x23\xd8\x91\xec\x28"
"\x76\x34\x3b\x86\x1d\xcb\x8e\x63\xc7\xb3\x13\xd8\x89\xec\x24\x76\x32"
"\x3b\x85\x9d\xca\x4e\x63\xa7\xb3\x33\xd8\x99\xec\x2c\x76\x36\x3b\x87"
"\x9d\xcb\xce\x63\xe7\xb3\x0b\xd8\x85\xec\x22\x76\x31\xbb\x84\x5d\xca"
"\x2e\x63\x97\xb3\x2b\xd8\x95\xec\x2a\x76\x35\xbb\x86\x5d\xcb\xae\x63"
"\xd7\xb3\x1b\xd8\x8d\xec\x26\x76\x33\xbb\x85\xdd\xca\x6e\x63\xb7\xb3"
"\x3b\xd8\x9d\xec\x2e\x76\x37\xbb\x87\xdd\xcb\xee\x63\xf7\xb3\x07\xd8"
"\x83\xec\x21\xf6\x30\x7b\x84\x3d\xca\x1e\x63\x8f\xb3\x27\xd8\x93\xec"
"\x29\xf6\x34\x7b\x86\x3d\xcb\x9e\x63\xcf\xb3\x17\xd8\x8b\xec\x25\xf6"
"\x32\x7b\x85\xbd\xca\x5e\x63\xaf\xb3\x37\xd8\x9b\xec\x2d\xf6\x36\x7b"
"\x87\xbd\xcb\xde\x63\xef\xb3\x0f\xd8\x87\xec\x23\xf6\x31\xfb\x84\x7d"
"\xca\x3e\x63\x9f\xb3\x2f\xd8\x97\xec\x2b\xf6\x35\xfb\x86\x7d\xcb\xbe"
"\x63\xdf\xb3\x1f\xd8\x8f\xec\x27\xf6\x33\xfb\x85\xfd\xca\x7e\x63\xbf"
"\xb3\x3f\xd8\x9f\xec\x2f\xf6\x37\xfb\x87\xfd\xcb\xfe\x63\xe3\xd8\x78"
"\x36\x81\x4d\x64\x93\x70\x49\xb9\x64\x5c\x72\x2e\x05\x97\x92\x4b\xc5"
"\xa5\xe6\xd2\x70\x69\xb9\x74\x5c\x7a\x2e\x03\x97\x91\xcb\xc4\x65\xe6"
"\xb2\x70\x59\xb9\x6c\x5c\x76\x2e\x07\x97\x93\xcb\xc5\xe5\xe6\xf2\x70"
"\x79\xb9\x7c\x5c\x7e\xae\x00\x57\x90\x2b\xc4\x15\xe6\x8a\x70\x45\xb9"
"\x62\x5c\x71\xae\x04\x57\x92\x2b\xc5\x95\xe6\xca\x70\x65\xb9\x72\x5c"
"\x79\xae\x02\x57\x91\xab\xc4\x55\xe6\xaa\x70\x55\xb9\x6a\x5c\x75\xae"
"\x06\x57\x93\xab\xc5\xd5\xe6\x30\x0e\xe7\x08\x8e\xe4\x28\x8e\xe6\x18"
"\x8e\xe5\x38\x8e\xe7\x04\x4e\xe4\x24\x4e\xe6\x14\x4e\xe5\x34\x4e\xe7"
"\x0c\xce\xe4\x2c\xce\xe6\x1c\xce\xe5\x3c\xce\xe7\x02\x2e\xe4\x22\x0e"
"\x70\x90\x43\x5c\x8c\xab\xc3\xd5\xe5\xea\x71\xf5\xb9\x06\x5c\x43\xae"
"\x11\xd7\x98\x6b\xc2\x35\xe5\x9a\x71\xcd\xb9\x16\x5c\x4b\xae\x15\xd7"
"\x9a\x6b\xc3\xb5\xe5\xda\x71\xed\xb9\x0e\x5c\x47\xae\x13\xd7\x99\xeb"
"\xc2\x75\xe5\xba\x71\xdd\xb9\x1e\x5c\x4f\xae\x17\xd7\x9b\xeb\xc3\xf5"
"\xe5\xfa\x71\xfd\xb9\x01\xdc\x40\x6e\x10\x37\x98\x1b\xc2\x0d\xe5\x86"
"\x71\xc3\xb9\x11\xdc\x48\x6e\x14\x37\x9a\x1b\xc3\x8d\xe5\xc6\x71\xe3"
"\xb9\x09\xdc\x44\x6e\x12\x37\x99\x9b\xc2\x4d\xe5\xa6\x71\xd3\xb9\x19"
"\xdc\x4c\x6e\x16\x37\x9b\x9b\xc3\xcd\xe5\xe6\x71\xf3\xb9\x05\xdc\x42"
"\x6e\x11\xb7\x98\x5b\xc2\x2d\xe5\x96\x71\xcb\xb9\x15\xdc\x4a\x6e\x15"
"\xb7\x9a\x5b\xc3\xad\xe5\xd6\x71\xeb\xb9\x0d\xdc\x46\x6e\x13\xb7\x99"
"\xdb\xc2\x6d\xe5\xb6\x71\xdb\xb9\x1d\xdc\x4e\x6e\x17\xb7\x9b\xdb\xc3"
"\xed\xe5\xf6\x71\xfb\xb9\x03\xdc\x41\xee\x10\x77\x98\x3b\xc2\x1d\xe5"
"\x8e\x71\xc7\xb9\x13\xdc\x49\xee\x14\x77\x9a\x3b\xc3\x9d\xe5\xce\x71"
"\xe7\xb9\x0b\xdc\x45\xee\x12\x77\x99\xbb\xc2\x5d\xe5\xae\x71\xd7\xb9"
"\x1b\xdc\x4d\xee\x16\x77\x9b\xbb\xc3\xdd\xe5\xee\x71\xf7\xb9\x07\xdc"
"\x43\xee\x11\xf7\x98\x7b\xc2\x3d\xe5\x9e\x71\xcf\xb9\x17\xdc\x4b\xee"
"\x15\xf7\x9a\x7b\xc3\xbd\xe5\xde\x71\xef\xb9\x0f\xdc\x47\xee\x13\xf7"
"\x99\xfb\xc2\x7d\xe5\xbe\x71\xdf\xb9\x1f\xdc\x4f\xee\x17\xf7\x9b\xfb"
"\xc3\xfd\xe5\xfe\x71\x71\x5c\x3c\x97\xc0\x25\x72\x49\xf8\xa4\x7c\x32"
"\x3e\x39\x9f\x82\x4f\xc9\xa7\xe2\x53\xf3\x69\xf8\xb4\x7c\x3a\x3e\x3d"
"\x9f\x81\xcf\xc8\x67\xe2\x33\xf3\x59\xf8\xac\x7c\x36\x3e\x3b\x9f\x83"
"\xcf\xc9\xe7\xe2\x73\xf3\x79\xf8\xbc\x7c\x3e\x3e\x3f\x5f\x80\x2f\xc8"
"\x17\xe2\x0b\xf3\x45\xf8\xa2\x7c\x31\xbe\x38\x5f\x82\x2f\xc9\x97\xe2"
"\x4b\xf3\x65\xf8\xb2\x7c\x39\xbe\x3c\x5f\x81\xaf\xc8\x57\xe2\x2b\xf3"
"\x55\xf8\xaa\x7c\x35\xbe\x3a\x5f\x83\xaf\xc9\xd7\xe2\x6b\xf3\x18\x8f"
"\xf3\x04\x4f\xf2\x14\x4f\xf3\x0c\xcf\xf2\x1c\xcf\xf3\x02\x2f\xf2\x12"
"\x2f\xf3\x0a\xaf\xf2\x1a\xaf\xf3\x06\x6f\xf2\x16\x6f\xf3\x0e\xef\xf2"
"\x1e\xef\xf3\x01\x1f\xf2\x11\x0f\x78\xc8\x23\x3e\xc6\xd7\xe1\xeb\xf2"
"\xf5\xf8\xfa\x7c\x03\xbe\x21\xdf\x88\x6f\xcc\x37\xe1\x9b\xf2\xcd\xf8"
"\xe6\x7c\x0b\xbe\x25\xdf\x8a\x6f\xcd\xb7\xe1\xdb\xf2\xed\xf8\xf6\x7c"
"\x07\xbe\x23\xdf\x89\xef\xcc\x77\xe1\xbb\xf2\xdd\xf8\xee\x7c\x0f\xbe"
"\x27\xdf\x8b\xef\xcd\xf7\xe1\xfb\xf2\xfd\xf8\xfe\xfc\x00\x7e\x20\x3f"
"\x88\x1f\xcc\x0f\xe1\x87\xf2\xc3\xf8\xe1\xfc\x08\x7e\x24\x3f\x8a\x1f"
"\xcd\x8f\xe1\xc7\xf2\xe3\xf8\xf1\xfc\x04\x7e\x22\x3f\x89\x9f\xcc\x4f"
"\xe1\xa7\xf2\xd3\xf8\xe9\xfc\x0c\x7e\x26\x3f\x8b\x9f\xcd\xcf\xe1\xe7"
"\xf2\xf3\xf8\xf9\xfc\x02\x7e\x21\xbf\x88\x5f\xcc\x2f\xe1\x97\xf2\xcb"
"\xf8\xe5\xfc\x0a\x7e\x25\xbf\x8a\x5f\xcd\xaf\xe1\xd7\xf2\xeb\xf8\xf5"
"\xfc\x06\x7e\x23\xbf\x89\xdf\xcc\x6f\xe1\xb7\xf2\xdb\xf8\xed\xfc\x0e"
"\x7e\x27\xbf\x8b\xdf\xcd\xef\xe1\xf7\xf2\xfb\xf8\xfd\xfc\x01\xfe\x20"
"\x7f\x88\x3f\xcc\x1f\xe1\x8f\xf2\xc7\xf8\xe3\xfc\x09\xfe\x24\x7f\x8a"
"\x3f\xcd\x9f\xe1\xcf\xf2\xe7\xf8\xf3\xfc\x05\xfe\x22\x7f\x89\xbf\xcc"
"\x5f\xe1\xaf\xf2\xd7\xf8\xeb\xfc\x0d\xfe\x26\x7f\x8b\xbf\xcd\xdf\xe1"
"\xef\xf2\xf7\xf8\xfb\xfc\x03\xfe\x21\xff\x88\x7f\xcc\x3f\xe1\x9f\xf2"
"\xcf\xf8\xe7\xfc\x0b\xfe\x25\xff\x8a\x7f\xcd\xbf\xe1\xdf\xf2\xef\xf8"
"\xf7\xfc\x07\xfe\x23\xff\x89\xff\xcc\x7f\xe1\xbf\xf2\xdf\xf8\xef\xfc"
"\x0f\xfe\x27\xff\x8b\xff\xcd\xff\xe1\xff\xf2\xff\xf8\x38\x3e\x9e\x4f"
"\xe0\x13\xf9\x24\x42\x52\x21\x99\x90\x5c\x48\x21\xa4\x14\x52\x09\xa9"
"\x85\x34\x42\x5a\x21\x9d\x90\x5e\xc8\x20\x64\x14\x32\x09\x99\x85\x2c"
"\x42\x56\x21\x9b\x90\x5d\xc8\x21\xe4\x14\x72\x09\xb9\x85\x3c\x42\x5e"
"\x21\x9f\x90\x5f\x28\x20\x14\x14\x0a\x09\x85\x85\x22\x42\x51\xa1\x98"
"\x50\x5c\x28\x21\x94\x14\x4a\x09\xa5\x85\x32\x42\x59\xa1\x9c\x50\x5e"
"\xa8\x20\x54\x14\x2a\x09\x95\x85\x2a\x42\x55\xa1\x9a\x50\x5d\xa8\x21"
"\xd4\x14\x6a\x09\xb5\x05\x4c\xc0\x05\x42\x20\x05\x4a\xa0\x05\x46\x60"
"\x05\x4e\xe0\x05\x41\x10\x05\x49\x90\x05\x45\x50\x05\x4d\xd0\x05\x43"
"\x30\x05\x4b\xb0\x05\x47\x70\x05\x4f\xf0\x85\x40\x08\x85\x48\x00\x02"
"\x14\x90\x10\x13\xea\x08\x75\x85\x7a\x42\x7d\xa1\x81\xd0\x50\x68\x24"
"\x34\x16\x9a\x08\x4d\x85\x66\x42\x73\xa1\x85\xd0\x52\x68\x25\xb4\x16"
"\xda\x08\x6d\x85\x76\x42\x7b\xa1\x83\xd0\x51\xe8\x24\x74\x16\xba\x08"
"\x5d\x85\x6e\x42\x77\xa1\x87\xd0\x53\xe8\x25\xf4\x16\xfa\x08\x7d\x85"
"\x7e\x42\x7f\x61\x80\x30\x50\x18\x24\x0c\x16\x86\x08\x43\x85\x61\xc2"
"\x70\x61\x84\x30\x52\x18\x25\x8c\x16\xc6\x08\x63\x85\x71\xc2\x78\x61"
"\x82\x30\x51\x98\x24\x4c\x16\xa6\x08\x53\x85\x69\xc2\x74\x61\x86\x30"
"\x53\x98\x25\xcc\x16\xe6\x08\x73\x85\x79\xc2\x7c\x61\x81\xb0\x50\x58"
"\x24\x2c\x16\x96\x08\x4b\x85\x65\xc2\x72\x61\x85\xb0\x52\x58\x25\xac"
"\x16\xd6\x08\x6b\x85\x75\xc2\x7a\x61\x83\xb0\x51\xd8\x24\x6c\x16\xb6"
"\x08\x5b\x85\x6d\xc2\x76\x61\x87\xb0\x53\xd8\x25\xec\x16\xf6\x08\x7b"
"\x85\x7d\xc2\x7e\xe1\x80\x70\x50\x38\x24\x1c\x16\x8e\x08\x47\x85\x63"
"\xc2\x71\xe1\x84\x70\x52\x38\x25\x9c\x16\xce\x08\x67\x85\x73\xc2\x79"
"\xe1\x82\x70\x51\xb8\x24\x5c\x16\xae\x08\x57\x85\x6b\xc2\x75\xe1\x86"
"\x70\x53\xb8\x25\xdc\x16\xee\x08\x77\x85\x7b\xc2\x7d\xe1\x81\xf0\x50"
"\x78\x24\x3c\x16\x9e\x08\x4f\x85\x67\xc2\x73\xe1\x85\xf0\x52\x78\x25"
"\xbc\x16\xde\x08\x6f\x85\x77\xc2\x7b\xe1\x83\xf0\x51\xf8\x24\x7c\x16"
"\xbe\x08\x5f\x85\x6f\xc2\x77\xe1\x87\xf0\x53\xf8\x25\xfc\x16\xfe\x08"
"\x7f\x85\x7f\x42\x9c\x10\x2f\x24\x08\x89\x42\x12\x31\xa9\x98\x4c\x4c"
"\x2e\xa6\x10\x53\x8a\xa9\xc4\xd4\x62\x1a\x31\xad\x98\x4e\x4c\x2f\x66"
"\x10\x33\x8a\x99\xc4\xcc\x62\x16\x31\xab\x98\x4d\xcc\x2e\xe6\x10\x73"
"\x8a\xb9\xc4\xdc\x62\x1e\x31\xaf\x98\x4f\xcc\x2f\x16\x10\x0b\x8a\x85"
"\xc4\xc2\x62\x11\xb1\xa8\x58\x4c\x2c\x2e\x96\x10\x4b\x8a\xa5\xc4\xd2"
"\x62\x19\xb1\xac\x58\x4e\x2c\x2f\x56\x10\x2b\x8a\x95\xc4\xca\x62\x15"
"\xb1\xaa\x58\x4d\xac\x2e\xd6\x10\x6b\x8a\xb5\xc4\xda\x22\x26\xe2\x22"
"\x21\x92\x22\x25\xd2\x22\x23\xb2\x22\x27\xf2\xa2\x20\x8a\xa2\x24\xca"
"\xa2\x22\xaa\xa2\x26\xea\xa2\x21\x9a\xa2\x25\xda\xa2\x23\xba\xa2\x27"
"\xfa\x62\x20\x86\x62\x24\x02\x11\x8a\x48\x8c\x89\x75\xc4\xba\x62\x3d"
"\xb1\xbe\xd8\x40\x6c\x28\x36\x12\x1b\x8b\x4d\xc4\xa6\x62\x33\xb1\xb9"
"\xd8\x42\x6c\x29\xb6\x12\x5b\x8b\x6d\xc4\xb6\x62\x3b\xb1\xbd\xd8\x41"
"\xec\x28\x76\x12\x3b\x8b\x5d\xc4\xae\x62\x37\xb1\xbb\xd8\x43\xec\x29"
"\xf6\x12\x7b\x8b\x7d\xc4\xbe\x62\x3f\xb1\xbf\x38\x40\x1c\x28\x0e\x12"
"\x07\x8b\x43\xc4\xa1\xe2\x30\x71\xb8\x38\x42\x1c\x29\x8e\x12\x47\x8b"
"\x63\xc4\xb1\xe2\x38\x71\xbc\x38\x41\x9c\x28\x4e\x12\x27\x8b\x53\xc4"
"\xa9\xe2\x34\x71\xba\x38\x43\x9c\x29\xce\x12\x67\x8b\x73\xc4\xb9\xe2"
"\x3c\x71\xbe\xb8\x40\x5c\x28\x2e\x12\x17\x8b\x4b\xc4\xa5\xe2\x32\x71"
"\xb9\xb8\x42\x5c\x29\xae\x12\x57\x8b\x6b\xc4\xb5\xe2\x3a\x71\xbd\xb8"
"\x41\xdc\x28\x6e\x12\x37\x8b\x5b\xc4\xad\xe2\x36\x71\xbb\xb8\x43\xdc"
"\x29\xee\x12\x77\x8b\x7b\xc4\xbd\xe2\x3e\x71\xbf\x78\x40\x3c\x28\x1e"
"\x12\x0f\x8b\x47\xc4\xa3\xe2\x31\xf1\xb8\x78\x42\x3c\x29\x9e\x12\x4f"
"\x8b\x67\xc4\xb3\xe2\x39\xf1\xbc\x78\x41\xbc\x28\x5e\x12\x2f\x8b\x57"
"\xc4\xab\xe2\x35\xf1\xba\x78\x43\xbc\x29\xde\x12\x6f\x8b\x77\xc4\xbb"
"\xe2\x3d\xf1\xbe\xf8\x40\x7c\x28\x3e\x12\x1f\x8b\x4f\xc4\xa7\xe2\x33"
"\xf1\xb9\xf8\x42\x7c\x29\xbe\x12\x5f\x8b\x6f\xc4\xb7\xe2\x3b\xf1\xbd"
"\xf8\x41\xfc\x28\x7e\x12\x3f\x8b\x5f\xc4\xaf\xe2\x37\xf1\xbb\xf8\x43"
"\xfc\x29\xfe\x12\x7f\x8b\x7f\xc4\xbf\xe2\x3f\x31\x4e\x8c\x17\x13\xc4"
"\x44\x31\x89\x94\x54\x4a\x26\x25\x97\x52\x48\x29\xa5\x54\x52\x6a\x29"
"\x8d\x94\x56\x4a\x27\xa5\x97\x32\x48\x19\xa5\x4c\x52\x66\x29\x8b\x94"
"\x55\xca\x26\x65\x97\x72\x48\x39\xa5\x5c\x52\x6e\x29\x8f\x94\x57\xca"
"\x27\xe5\x97\x0a\x48\x05\xa5\x42\x52\x61\xa9\x88\x54\x54\x2a\x26\x15"
"\x97\x4a\x48\x25\xa5\x52\x52\x69\xa9\x8c\x54\x56\x2a\x27\x95\x97\x2a"
"\x48\x15\xa5\x4a\x52\x65\xa9\x8a\x54\x55\xaa\x26\x55\x97\x6a\x48\x35"
"\xa5\x5a\x52\x6d\x09\x93\x70\x89\x90\x48\x89\x92\x68\x89\x91\x58\x89"
"\x93\x78\x49\x90\x44\x49\x92\x64\x49\x91\x54\x49\x93\x74\xc9\x90\x4c"
"\xc9\x92\x6c\xc9\x91\x5c\xc9\x93\x7c\x29\x90\x42\x29\x92\x80\x04\x25"
"\x24\xc5\xa4\x3a\x52\x5d\xa9\x9e\x54\x5f\x6a\x20\x35\x94\x1a\x49\x8d"
"\xa5\x26\x52\x53\xa9\x99\xd4\x5c\x6a\x21\xb5\x94\x5a\x49\xad\xa5\x36"
"\x52\x5b\xa9\x9d\xd4\x5e\xea\x20\x75\x94\x3a\x49\x9d\xa5\x2e\x52\x57"
"\xa9\x9b\xd4\x5d\xea\x21\xf5\x94\x7a\x49\xbd\xa5\x3e\x52\x5f\xa9\x9f"
"\xd4\x5f\x1a\x20\x0d\x94\x06\x49\x83\xa5\x21\xd2\x50\x69\x98\x34\x5c"
"\x1a\x21\x8d\x94\x46\x49\xa3\xa5\x31\xd2\x58\x69\x9c\x34\x5e\x9a\x20"
"\x4d\x94\x26\x49\x93\xa5\x29\xd2\x54\x69\x9a\x34\x5d\x9a\x21\xcd\x94"
"\x66\x49\xb3\xa5\x39\xd2\x5c\x69\x9e\x34\x5f\x5a\x20\x2d\x94\x16\x49"
"\x8b\xa5\x25\xd2\x52\x69\x99\xb4\x5c\x5a\x21\xad\x94\x56\x49\xab\xa5"
"\x35\xd2\x5a\x69\x9d\xb4\x5e\xda\x20\x6d\x94\x36\x49\x9b\xa5\x2d\xd2"
"\x56\x69\x9b\xb4\x5d\xda\x21\xed\x94\x76\x49\xbb\xa5\x3d\xd2\x5e\x69"
"\x9f\xb4\x5f\x3a\x20\x1d\x94\x0e\x49\x87\xa5\x23\xd2\x51\xe9\x98\x74"
"\x5c\x3a\x21\x9d\x94\x4e\x49\xa7\xa5\x33\xd2\x59\xe9\x9c\x74\x5e\xba"
"\x20\x5d\x94\x2e\x49\x97\xa5\x2b\xd2\x55\xe9\x9a\x74\x5d\xba\x21\xdd"
"\x94\x6e\x49\xb7\xa5\x3b\xd2\x5d\xe9\x9e\x74\x5f\x7a\x20\x3d\x94\x1e"
"\x49\x8f\xa5\x27\xd2\x53\xe9\x99\xf4\x5c\x7a\x21\xbd\x94\x5e\x49\xaf"
"\xa5\x37\xd2\x5b\xe9\x9d\xf4\x5e\xfa\x20\x7d\x94\x3e\x49\x9f\xa5\x2f"
"\xd2\x57\xe9\x9b\xf4\x5d\xfa\x21\xfd\x94\x7e\x49\xbf\xa5\x3f\xd2\x5f"
"\xe9\x9f\x14\x27\xc5\x4b\x09\x52\xa2\x94\x44\x4e\x2a\x27\x93\x93\xcb"
"\x29\xe4\x94\x72\x2a\x39\xb5\x9c\x46\x4e\x2b\xa7\x93\xd3\xcb\x19\xe4"
"\x8c\x72\x26\x39\xb3\x9c\x45\xce\x2a\x67\x93\xb3\xcb\x39\xe4\x9c\x72"
"\x2e\x39\xb7\x9c\x47\xce\x2b\xe7\x93\xf3\xcb\x05\xe4\x82\x72\x21\xb9"
"\xb0\x5c\x44\x2e\x2a\x17\x93\x8b\xcb\x25\xe4\x92\x72\x29\xb9\xb4\x5c"
"\x46\x2e\x2b\x97\x93\xcb\xcb\x15\xe4\x8a\x72\x25\xb9\xb2\x5c\x45\xae"
"\x2a\x57\x93\xab\xcb\x35\xe4\x9a\x72\x2d\xb9\xb6\x8c\xc9\xb8\x4c\xc8"
"\xa4\x4c\xc9\xb4\xcc\xc8\xac\xcc\xc9\xbc\x2c\xc8\xa2\x2c\xc9\xb2\xac"
"\xc8\xaa\xac\xc9\xba\x6c\xc8\xa6\x6c\xc9\xb6\xec\xc8\xae\xec\xc9\xbe"
"\x1c\xc8\xa1\x1c\xc9\x40\x86\x32\x92\x63\x72\x1d\xb9\xae\x5c\x4f\xae"
"\x2f\x37\x90\x1b\xca\x8d\xe4\xc6\x72\x13\xb9\xa9\xdc\x4c\x6e\x2e\xb7"
"\x90\x5b\xca\xad\xe4\xd6\x72\x1b\xb9\xad\xdc\x4e\x6e\x2f\x77\x90\x3b"
"\xca\x9d\xe4\xce\x72\x17\xb9\xab\xdc\x4d\xee\x2e\xf7\x90\x7b\xca\xbd"
"\xe4\xde\x72\x1f\xb9\xaf\xdc\x4f\xee\x2f\x0f\x90\x07\xca\x83\xe4\xc1"
"\xf2\x10\x79\xa8\x3c\x4c\x1e\x2e\x8f\x90\x47\xca\xa3\xe4\xd1\xf2\x18"
"\x79\xac\x3c\x4e\x1e\x2f\x4f\x90\x27\xca\x93\xe4\xc9\xf2\x14\x79\xaa"
"\x3c\x4d\x9e\x2e\xcf\x90\x67\xca\xb3\xe4\xd9\xf2\x1c\x79\xae\x3c\x4f"
"\x9e\x2f\x2f\x90\x17\xca\x8b\xe4\xc5\xf2\x12\x79\xa9\xbc\x4c\x5e\x2e"
"\xaf\x90\x57\xca\xab\xe4\xd5\xf2\x1a\x79\xad\xbc\x4e\x5e\x2f\x6f\x90"
"\x37\xca\x9b\xe4\xcd\xf2\x16\x79\xab\xbc\x4d\xde\x2e\xef\x90\x77\xca"
"\xbb\xe4\xdd\xf2\x1e\x79\xaf\xbc\x4f\xde\x2f\x1f\x90\x0f\xca\x87\xe4"
"\xc3\xf2\x11\xf9\xa8\x7c\x4c\x3e\x2e\x9f\x90\x4f\xca\xa7\xe4\xd3\xf2"
"\x19\xf9\xac\x7c\x4e\x3e\x2f\x5f\x90\x2f\xca\x97\xe4\xcb\xf2\x15\xf9"
"\xaa\x7c\x4d\xbe\x2e\xdf\x90\x6f\xca\xb7\xe4\xdb\xf2\x1d\xf9\xae\x7c"
"\x4f\xbe\x2f\x3f\x90\x1f\xca\x8f\xe4\xc7\xf2\x13\xf9\xa9\xfc\x4c\x7e"
"\x2e\xbf\x90\x5f\xca\xaf\xe4\xd7\xf2\x1b\xf9\xad\xfc\x4e\x7e\x2f\x7f"
"\x90\x3f\xca\x9f\xe4\xcf\xf2\x17\xf9\xab\xfc\x4d\xfe\x2e\xff\x90\x7f"
"\xca\xbf\xe4\xdf\xf2\x1f\xf9\xaf\xfc\x4f\x8e\x93\xe3\xe5\x04\x39\x51"
"\x4e\xa2\x24\x55\x92\x29\xc9\x95\x14\x4a\x4a\x25\x95\x92\x5a\x49\xa3"
"\xa4\x55\xd2\x29\xe9\x95\x0c\x4a\x46\x25\x93\x92\x59\xc9\xa2\x64\x55"
"\xb2\x29\xd9\x95\x1c\x4a\x4e\x25\x97\x92\x5b\xc9\xa3\xe4\x55\xf2\x29"
"\xf9\x95\x02\x4a\x41\xa5\x90\x52\x58\x29\xa2\x14\x55\x8a\x29\xc5\x95"
"\x12\x4a\x49\xa5\x94\x52\x5a\x29\xa3\x94\x55\xca\x29\xe5\x95\x0a\x4a"
"\x45\xa5\x92\x52\x59\xa9\xa2\x54\x55\xaa\x29\xd5\x95\x1a\x4a\x4d\xa5"
"\x96\x52\x5b\xc1\x14\x5c\x21\x14\x52\xa1\x14\x5a\x61\x14\x56\xe1\x14"
"\x5e\x11\x14\x51\x91\x14\x59\x51\x14\x55\xd1\x14\x5d\x31\x14\x53\xb1"
"\x14\x5b\x71\x14\x57\xf1\x14\x5f\x09\x94\x50\x89\x14\xa0\x40\x05\x29"
"\x31\xa5\x8e\x52\x57\xa9\xa7\xd4\x57\x1a\x28\x0d\x95\x46\x4a\x63\xa5"
"\x89\xd2\x54\x69\xa6\x34\x57\x5a\x28\x2d\x95\x56\x4a\x6b\xa5\x8d\xd2"
"\x56\x69\xa7\xb4\x57\x3a\x28\x1d\x95\x4e\x4a\x67\xa5\x8b\xd2\x55\xe9"
"\xa6\x74\x57\x7a\x28\x3d\x95\x5e\x4a\x6f\xa5\x8f\xd2\x57\xe9\xa7\xf4"
"\x57\x06\x28\x03\x95\x41\xca\x60\x65\x88\x32\x54\x19\xa6\x0c\x57\x46"
"\x28\x23\x95\x51\xca\x68\x65\x8c\x32\x56\x19\xa7\x8c\x57\x26\x28\x13"
"\x95\x49\xca\x64\x65\x8a\x32\x55\x99\xa6\x4c\x57\x66\x28\x33\x95\x59"
"\xca\x6c\x65\x8e\x32\x57\x99\xa7\xcc\x57\x16\x28\x0b\x95\x45\xca\x62"
"\x65\x89\xb2\x54\x59\xa6\x2c\x57\x56\x28\x2b\x95\x55\xca\x6a\x65\x8d"
"\xb2\x56\x59\xa7\xac\x57\x36\x28\x1b\x95\x4d\xca\x66\x65\x8b\xb2\x55"
"\xd9\xa6\x6c\x57\x76\x28\x3b\x95\x5d\xca\x6e\x65\x8f\xb2\x57\xd9\xa7"
"\xec\x57\x0e\x28\x07\x95\x43\xca\x61\xe5\x88\x72\x54\x39\xa6\x1c\x57"
"\x4e\x28\x27\x95\x53\xca\x69\xe5\x8c\x72\x56\x39\xa7\x9c\x57\x2e\x28"
"\x17\x95\x4b\xca\x65\xe5\x8a\x72\x55\xb9\xa6\x5c\x57\x6e\x28\x37\x95"
"\x5b\xca\x6d\xe5\x8e\x72\x57\xb9\xa7\xdc\x57\x1e\x28\x0f\x95\x47\xca"
"\x63\xe5\x89\xf2\x54\x79\xa6\x3c\x57\x5e\x28\x2f\x95\x57\xca\x6b\xe5"
"\x8d\xf2\x56\x79\xa7\xbc\x57\x3e\x28\x1f\x95\x4f\xca\x67\xe5\x8b\xf2"
"\x55\xf9\xa6\x7c\x57\x7e\x28\x3f\x95\x5f\xca\x6f\xe5\x8f\xf2\x57\xf9"
"\xa7\xc4\x29\xf1\x4a\x82\x92\xa8\x24\x51\x93\xaa\xc9\xd4\xe4\x6a\x0a"
"\x35\xa5\x9a\x4a\x4d\xad\xa6\x51\xd3\xaa\xe9\xd4\xf4\x6a\x06\x35\xa3"
"\x9a\x49\xcd\xac\x66\x51\xb3\xaa\xd9\xd4\xec\x6a\x0e\x35\xa7\x9a\x4b"
"\xcd\xad\xe6\x51\xf3\xaa\xf9\xd4\xfc\x6a\x01\xb5\xa0\x5a\x48\x2d\xac"
"\x16\x51\x8b\xaa\xc5\xd4\xe2\x6a\x09\xb5\xa4\x5a\x4a\x2d\xad\x96\x51"
"\xcb\xaa\xe5\xd4\xf2\x6a\x05\xb5\xa2\x5a\x49\xad\xac\x56\x51\xab\xaa"
"\xd5\xd4\xea\x6a\x0d\xb5\xa6\x5a\x4b\xad\xad\x62\x2a\xae\x12\x2a\xa9"
"\x52\x2a\xad\x32\x2a\xab\x72\x2a\xaf\x0a\xaa\xa8\x4a\xaa\xac\x2a\xaa"
"\xaa\x6a\xaa\xae\x1a\xaa\xa9\x5a\xaa\xad\x3a\xaa\xab\x7a\xaa\xaf\x06"
"\x6a\xa8\x46\x2a\x50\xa1\x8a\xd4\x98\x5a\x47\xad\xab\xd6\x53\xeb\xab"
"\x0d\xd4\x86\x6a\x23\xb5\xb1\xda\x44\x6d\xaa\x36\x53\x9b\xab\x2d\xd4"
"\x96\x6a\x2b\xb5\xb5\xda\x46\x6d\xab\xb6\x53\xdb\xab\x1d\xd4\x8e\x6a"
"\x27\xb5\xb3\xda\x45\xed\xaa\x76\x53\xbb\xab\x3d\xd4\x9e\x6a\x2f\xb5"
"\xb7\xda\x47\xed\xab\xf6\x53\xfb\xab\x03\xd4\x81\xea\x20\x75\xb0\x3a"
"\x44\x1d\xaa\x0e\x53\x87\xab\x23\xd4\x91\xea\x28\x75\xb4\x3a\x46\x1d"
"\xab\x8e\x53\xc7\xab\x13\xd4\x89\xea\x24\x75\xb2\x3a\x45\x9d\xaa\x4e"
"\x53\xa7\xab\x33\xd4\x99\xea\x2c\x75\xb6\x3a\x47\x9d\xab\xce\x53\xe7"
"\xab\x0b\xd4\x85\xea\x22\x75\xb1\xba\x44\x5d\xaa\x2e\x53\x97\xab\x2b"
"\xd4\x95\xea\x2a\x75\xb5\xba\x46\x5d\xab\xae\x53\xd7\xab\x1b\xd4\x8d"
"\xea\x26\x75\xb3\xba\x45\xdd\xaa\x6e\x53\xb7\xab\x3b\xd4\x9d\xea\x2e"
"\x75\xb7\xba\x47\xdd\xab\xee\x53\xf7\xab\x07\xd4\x83\xea\x21\xf5\xb0"
"\x7a\x44\x3d\xaa\x1e\x53\x8f\xab\x27\xd4\x93\xea\x29\xf5\xb4\x7a\x46"
"\x3d\xab\x9e\x53\xcf\xab\x17\xd4\x8b\xea\x25\xf5\xb2\x7a\x45\xbd\xaa"
"\x5e\x53\xaf\xab\x37\xd4\x9b\xea\x2d\xf5\xb6\x7a\x47\xbd\xab\xde\x53"
"\xef\xab\x0f\xd4\x87\xea\x23\xf5\xb1\xfa\x44\x7d\xaa\x3e\x53\x9f\xab"
"\x2f\xd4\x97\xea\x2b\xf5\xb5\xfa\x46\x7d\xab\xbe\x53\xdf\xab\x1f\xd4"
"\x8f\xea\x27\xf5\xb3\xfa\x45\xfd\xaa\x7e\x53\xbf\xab\x3f\xd4\x9f\xea"
"\x2f\xf5\xb7\xfa\x47\xfd\xab\xfe\x53\xe3\xd4\x78\x35\x41\x4d\x54\x93"
"\x68\x49\xb5\x64\x5a\x72\x2d\x85\x96\x52\x4b\xa5\xa5\xd6\xd2\x68\x69"
"\xb5\x74\x5a\x7a\x2d\x83\x96\x51\xcb\xa4\x65\xd6\xb2\x68\x59\xb5\x6c"
"\x5a\x76\x2d\x87\x96\x53\xcb\xa5\xe5\xd6\xf2\x68\x79\xb5\x7c\x5a\x7e"
"\xad\x80\x56\x50\x2b\xa4\x15\xd6\x8a\x68\x45\xb5\x62\x5a\x71\xad\x84"
"\x56\x52\x2b\xa5\x95\xd6\xca\x68\x65\xb5\x72\x5a\x79\xad\x82\x56\x51"
"\xab\xa4\x55\xd6\xaa\x68\x55\xb5\x6a\x5a\x75\xad\x86\x56\x53\xab\xa5"
"\xd5\xd6\x30\x0d\xd7\x08\x8d\xd4\x28\x8d\xd6\x18\x8d\xd5\x38\x8d\xd7"
"\x04\x4d\xd4\x24\x4d\xd6\x14\x4d\xd5\x34\x4d\xd7\x0c\xcd\xd4\x2c\xcd"
"\xd6\x1c\xcd\xd5\x3c\xcd\xd7\x02\x2d\xd4\x22\x0d\x68\x50\x43\x5a\x4c"
"\xab\xa3\xd5\xd5\xea\x69\xf5\xb5\x06\x5a\x43\xad\x91\xd6\x58\x6b\xa2"
"\x35\xd5\x9a\x69\xcd\xb5\x16\x5a\x4b\xad\x95\xd6\x5a\x6b\xa3\xb5\xd5"
"\xda\x69\xed\xb5\x0e\x5a\x47\xad\x93\xd6\x59\xeb\xa2\x75\xd5\xba\x69"
"\xdd\xb5\x1e\x5a\x4f\xad\x97\xd6\x5b\xeb\xa3\xf5\xd5\xfa\x69\xfd\xb5"
"\x01\xda\x40\x6d\x90\x36\x58\x1b\xa2\x0d\xd5\x86\x69\xc3\xb5\x11\xda"
"\x48\x6d\x94\x36\x5a\x1b\xa3\x8d\xd5\xc6\x69\xe3\xb5\x09\xda\x44\x6d"
"\x92\x36\x59\x9b\xa2\x4d\xd5\xa6\x69\xd3\xb5\x19\xda\x4c\x6d\x96\x36"
"\x5b\x9b\xa3\xcd\xd5\xe6\x69\xf3\xb5\x05\xda\x42\x6d\x91\xb6\x58\x5b"
"\xa2\x2d\xd5\x96\x69\xcb\xb5\x15\xda\x4a\x6d\x95\xb6\x5a\x5b\xa3\xad"
"\xd5\xd6\x69\xeb\xb5\x0d\xda\x46\x6d\x93\xb6\x59\xdb\xa2\x6d\xd5\xb6"
"\x69\xdb\xb5\x1d\xda\x4e\x6d\x97\xb6\x5b\xdb\xa3\xed\xd5\xf6\x69\xfb"
"\xb5\x03\xda\x41\xed\x90\x76\x58\x3b\xa2\x1d\xd5\x8e\x69\xc7\xb5\x13"
"\xda\x49\xed\x94\x76\x5a\x3b\xa3\x9d\xd5\xce\x69\xe7\xb5\x0b\xda\x45"
"\xed\x92\x76\x59\xbb\xa2\x5d\xd5\xae\x69\xd7\xb5\x1b\xda\x4d\xed\x96"
"\x76\x5b\xbb\xa3\xdd\xd5\xee\x69\xf7\xb5\x07\xda\x43\xed\x91\xf6\x58"
"\x7b\xa2\x3d\xd5\x9e\x69\xcf\xb5\x17\xda\x4b\xed\x95\xf6\x5a\x7b\xa3"
"\xbd\xd5\xde\x69\xef\xb5\x0f\xda\x47\xed\x93\xf6\x59\xfb\xa2\x7d\xd5"
"\xbe\x69\xdf\xb5\x1f\xda\x4f\xed\x97\xf6\x5b\xfb\xa3\xfd\xd5\xfe\x69"
"\x71\x5a\xbc\x96\xa0\x25\x6a\x49\xf4\xa4\x7a\x32\x3d\xb9\x9e\x42\x4f"
"\xa9\xa7\xd2\x53\xeb\x69\xf4\xb4\x7a\x3a\x3d\xbd\x9e\x41\xcf\xa8\x67"
"\xd2\x33\xeb\x59\xf4\xac\x7a\x36\x3d\xbb\x9e\x43\xcf\xa9\xe7\xd2\x73"
"\xeb\x79\xf4\xbc\x7a\x3e\x3d\xbf\x5e\x40\x2f\xa8\x17\xd2\x0b\xeb\x45"
"\xf4\xa2\x7a\x31\xbd\xb8\x5e\x42\x2f\xa9\x97\xd2\x4b\xeb\x65\xf4\xb2"
"\x7a\x39\xbd\xbc\x5e\x41\xaf\xa8\x57\xd2\x2b\xeb\x55\xf4\xaa\x7a\x35"
"\xbd\xba\x5e\x43\xaf\xa9\xd7\xd2\x6b\xeb\x98\x8e\xeb\x84\x4e\xea\x94"
"\x4e\xeb\x8c\xce\xea\x9c\xce\xeb\x82\x2e\xea\x92\x2e\xeb\x8a\xae\xea"
"\x9a\xae\xeb\x86\x6e\xea\x96\x6e\xeb\x8e\xee\xea\x9e\xee\xeb\x81\x1e"
"\xea\x91\x0e\x74\xa8\x23\x3d\xa6\xd7\xd1\xeb\xea\xf5\xf4\xfa\x7a\x03"
"\xbd\xa1\xde\x48\x6f\xac\x37\xd1\x9b\xea\xcd\xf4\xe6\x7a\x0b\xbd\xa5"
"\xde\x4a\x6f\xad\xb7\xd1\xdb\xea\xed\xf4\xf6\x7a\x07\xbd\xa3\xde\x49"
"\xef\xac\x77\xd1\xbb\xea\xdd\xf4\xee\x7a\x0f\xbd\xa7\xde\x4b\xef\xad"
"\xf7\xd1\xfb\xea\xfd\xf4\xfe\xfa\x00\x7d\xa0\x3e\x48\x1f\xac\x0f\xd1"
"\x87\xea\xc3\xf4\xe1\xfa\x08\x7d\xa4\x3e\x4a\x1f\xad\x8f\xd1\xc7\xea"
"\xe3\xf4\xf1\xfa\x04\x7d\xa2\x3e\x49\x9f\xac\x4f\xd1\xa7\xea\xd3\xf4"
"\xe9\xfa\x0c\x7d\xa6\x3e\x4b\x9f\xad\xcf\xd1\xe7\xea\xf3\xf4\xf9\xfa"
"\x02\x7d\xa1\xbe\x48\x5f\xac\x2f\xd1\x97\xea\xcb\xf4\xe5\xfa\x0a\x7d"
"\xa5\xbe\x4a\x5f\xad\xaf\xd1\xd7\xea\xeb\xf4\xf5\xfa\x06\x7d\xa3\xbe"
"\x49\xdf\xac\x6f\xd1\xb7\xea\xdb\xf4\xed\xfa\x0e\x7d\xa7\xbe\x4b\xdf"
"\xad\xef\xd1\xf7\xea\xfb\xf4\xfd\xfa\x01\xfd\xa0\x7e\x48\x3f\xac\x1f"
"\xd1\x8f\xea\xc7\xf4\xe3\xfa\x09\xfd\xa4\x7e\x4a\x3f\xad\x9f\xd1\xcf"
"\xea\xe7\xf4\xf3\xfa\x05\xfd\xa2\x7e\x49\xbf\xac\x5f\xd1\xaf\xea\xd7"
"\xf4\xeb\xfa\x0d\xfd\xa6\x7e\x4b\xbf\xad\xdf\xd1\xef\xea\xf7\xf4\xfb"
"\xfa\x03\xfd\xa1\xfe\x48\x7f\xac\x3f\xd1\x9f\xea\xcf\xf4\xe7\xfa\x0b"
"\xfd\xa5\xfe\x4a\x7f\xad\xbf\xd1\xdf\xea\xef\xf4\xf7\xfa\x07\xfd\xa3"
"\xfe\x49\xff\xac\x7f\xd1\xbf\xea\xdf\xf4\xef\xfa\x0f\xfd\xa7\xfe\x4b"
"\xff\xad\xff\xd1\xff\xea\xff\xf4\x38\x3d\x5e\x4f\xd0\x13\xf5\x24\x46"
"\x52\x23\x99\x91\xdc\x48\x61\xa4\x34\x52\x19\xa9\x8d\x34\x46\x5a\x23"
"\x9d\x91\xde\xc8\x60\x64\x34\x32\x19\x99\x8d\x2c\x46\x56\x23\x9b\x91"
"\xdd\xc8\x61\xe4\x34\x72\x19\xb9\x8d\x3c\x46\x5e\x23\x9f\x91\xdf\x28"
"\x60\x14\x34\x0a\x19\x85\x8d\x22\x46\x51\xa3\x98\x51\xdc\x28\x61\x94"
"\x34\x4a\x19\xa5\x8d\x32\x46\x59\xa3\x9c\x51\xde\xa8\x60\x54\x34\x2a"
"\x19\x95\x8d\x2a\x46\x55\xa3\x9a\x51\xdd\xa8\x61\xd4\x34\x6a\x19\xb5"
"\x0d\xcc\xc0\x0d\xc2\x20\x0d\xca\xa0\x0d\xc6\x60\x0d\xce\xe0\x0d\xc1"
"\x10\x0d\xc9\x90\x0d\xc5\x50\x0d\xcd\xd0\x0d\xc3\x30\x0d\xcb\xb0\x0d"
"\xc7\x70\x0d\xcf\xf0\x8d\xc0\x08\x8d\xc8\x00\x06\x34\x90\x11\x33\xea"
"\x18\x75\x8d\x7a\x46\x7d\xa3\x81\xd1\xd0\x68\x64\x34\x36\x9a\x18\x4d"
"\x8d\x66\x46\x73\xa3\x85\xd1\xd2\x68\x65\xb4\x36\xda\x18\x6d\x8d\x76"
"\x46\x7b\xa3\x83\xd1\xd1\xe8\x64\x74\x36\xba\x18\x5d\x8d\x6e\x46\x77"
"\xa3\x87\xd1\xd3\xe8\x65\xf4\x36\xfa\x18\x7d\x8d\x7e\x46\x7f\x63\x80"
"\x31\xd0\x18\x64\x0c\x36\x86\x18\x43\x8d\x61\xc6\x70\x63\x84\x31\xd2"
"\x18\x65\x8c\x36\xc6\x18\x63\x8d\x71\xc6\x78\x63\x82\x31\xd1\x98\x64"
"\x4c\x36\xa6\x18\x53\x8d\x69\xc6\x74\x63\x86\x31\xd3\x98\x65\xcc\x36"
"\xe6\x18\x73\x8d\x79\xc6\x7c\x63\x81\xb1\xd0\x58\x64\x2c\x36\x96\x18"
"\x4b\x8d\x65\xc6\x72\x63\x85\xb1\xd2\x58\x65\xac\x36\xd6\x18\x6b\x8d"
"\x75\xc6\x7a\x63\x83\xb1\xd1\xd8\x64\x6c\x36\xb6\x18\x5b\x8d\x6d\xc6"
"\x76\x63\x87\xb1\xd3\xd8\x65\xec\x36\xf6\x18\x7b\x8d\x7d\xc6\x7e\xe3"
"\x80\x71\xd0\x38\x64\x1c\x36\x8e\x18\x47\x8d\x63\xc6\x71\xe3\x84\x71"
"\xd2\x38\x65\x9c\x36\xce\x18\x67\x8d\x73\xc6\x79\xe3\x82\x71\xd1\xb8"
"\x64\x5c\x36\xae\x18\x57\x8d\x6b\xc6\x75\xe3\x86\x71\xd3\xb8\x65\xdc"
"\x36\xee\x18\x77\x8d\x7b\xc6\x7d\xe3\x81\xf1\xd0\x78\x64\x3c\x36\x9e"
"\x18\x4f\x8d\x67\xc6\x73\xe3\x85\xf1\xd2\x78\x65\xbc\x36\xde\x18\x6f"
"\x8d\x77\xc6\x7b\xe3\x83\xf1\xd1\xf8\x64\x7c\x36\xbe\x18\x5f\x8d\x6f"
"\xc6\x77\xe3\x87\xf1\xd3\xf8\x65\xfc\x36\xfe\x18\x7f\x8d\x7f\x46\x9c"
"\x11\x6f\x24\x18\x89\x46\x12\x33\xa9\x99\xcc\x4c\x6e\xa6\x30\x53\x9a"
"\xa9\xcc\xd4\x66\x1a\x33\xad\x99\xce\x4c\x6f\x66\x30\x33\x9a\x99\xcc"
"\xcc\x66\x16\x33\xab\x99\xcd\xcc\x6e\xe6\x30\x73\x9a\xb9\xcc\xdc\x66"
"\x1e\x33\xaf\x99\xcf\xcc\x6f\x16\x30\x0b\x9a\x85\xcc\xc2\x66\x11\xb3"
"\xa8\x59\xcc\x2c\x6e\x96\x30\x4b\x9a\xa5\xcc\xd2\x66\x19\xb3\xac\x59"
"\xce\x2c\x6f\x56\x30\x2b\x9a\x95\xcc\xca\x66\x15\xb3\xaa\x59\xcd\xac"
"\x6e\xd6\x30\x6b\x9a\xb5\xcc\xda\x26\x66\xe2\x26\x61\x92\x26\x65\xd2"
"\x26\x63\xb2\x26\x67\xf2\xa6\x60\x8a\xa6\x64\xca\xa6\x62\xaa\xa6\x66"
"\xea\xa6\x61\x9a\xa6\x65\xda\xa6\x63\xba\xa6\x67\xfa\x66\x60\x86\x66"
"\x64\x02\x13\x9a\xc8\x8c\x99\x75\xcc\xba\x66\x3d\xb3\xbe\xd9\xc0\x6c"
"\x68\x36\x32\x1b\x9b\x4d\xcc\xa6\x66\x33\xb3\xb9\xd9\xc2\x6c\x69\xb6"
"\x32\x5b\x9b\x6d\xcc\xb6\x66\x3b\xb3\xbd\xd9\xc1\xec\x68\x76\x32\x3b"
"\x9b\x5d\xcc\xae\x66\x37\xb3\xbb\xd9\xc3\xec\x69\xf6\x32\x7b\x9b\x7d"
"\xcc\xbe\x66\x3f\xb3\xbf\x39\xc0\x1c\x68\x0e\x32\x07\x9b\x43\xcc\xa1"
"\xe6\x30\x73\xb8\x39\xc2\x1c\x69\x8e\x32\x47\x9b\x63\xcc\xb1\xe6\x38"
"\x73\xbc\x39\xc1\x9c\x68\x4e\x32\x27\x9b\x53\xcc\xa9\xe6\x34\x73\xba"
"\x39\xc3\x9c\x69\xce\x32\x67\x9b\x73\xcc\xb9\xe6\x3c\x73\xbe\xb9\xc0"
"\x5c\x68\x2e\x32\x17\x9b\x4b\xcc\xa5\xe6\x32\x73\xb9\xb9\xc2\x5c\x69"
"\xae\x32\x57\x9b\x6b\xcc\xb5\xe6\x3a\x73\xbd\xb9\xc1\xdc\x68\x6e\x32"
"\x37\x9b\x5b\xcc\xad\xe6\x36\x73\xbb\xb9\xc3\xdc\x69\xee\x32\x77\x9b"
"\x7b\xcc\xbd\xe6\x3e\x73\xbf\x79\xc0\x3c\x68\x1e\x32\x0f\x9b\x47\xcc"
"\xa3\xe6\x31\xf3\xb8\x79\xc2\x3c\x69\x9e\x32\x4f\x9b\x67\xcc\xb3\xe6"
"\x39\xf3\xbc\x79\xc1\xbc\x68\x5e\x32\x2f\x9b\x57\xcc\xab\xe6\x35\xf3"
"\xba\x79\xc3\xbc\x69\xde\x32\x6f\x9b\x77\xcc\xbb\xe6\x3d\xf3\xbe\xf9"
"\xc0\x7c\x68\x3e\x32\x1f\x9b\x4f\xcc\xa7\xe6\x33\xf3\xb9\xf9\xc2\x7c"
"\x69\xbe\x32\x5f\x9b\x6f\xcc\xb7\xe6\x3b\xf3\xbd\xf9\xc1\xfc\x68\x7e"
"\x32\x3f\x9b\x5f\xcc\xaf\xe6\x37\xf3\xbb\xf9\xc3\xfc\x69\xfe\x32\x7f"
"\x9b\x7f\xcc\xbf\xe6\x3f\x33\xce\x8c\x37\x13\xcc\x44\x33\x89\x95\xd4"
"\x4a\x66\x25\xb7\x52\x58\x29\xad\x54\x56\x6a\x2b\x8d\x95\xd6\x4a\x67"
"\xa5\xb7\x32\x58\x19\xad\x4c\x56\x66\x2b\x8b\x95\xd5\xca\x66\x65\xb7"
"\x72\x58\x39\xad\x5c\x56\x6e\x2b\x8f\x95\xd7\xca\x67\xe5\xb7\x0a\x58"
"\x05\xad\x42\x56\x61\xab\x88\x55\xd4\x2a\x66\x15\xb7\x4a\x58\x25\xad"
"\x52\x56\x69\xab\x8c\x55\xd6\x2a\x67\x95\xb7\x2a\x58\x15\xad\x4a\x56"
"\x65\xab\x8a\x55\xd5\xaa\x66\x55\xb7\x6a\x58\x35\xad\x5a\x56\x6d\x0b"
"\xb3\x70\x8b\xb0\x48\x8b\xb2\x68\x8b\xb1\x58\x8b\xb3\x78\x4b\xb0\x44"
"\x4b\xb2\x64\x4b\xb1\x54\x4b\xb3\x74\xcb\xb0\x4c\xcb\xb2\x6c\xcb\xb1"
"\x5c\xcb\xb3\x7c\x2b\xb0\x42\x2b\xb2\x80\x05\x2d\x64\xc5\xac\x3a\x56"
"\x5d\xab\x9e\x55\xdf\x6a\x60\x35\xb4\x1a\x59\x8d\xad\x26\x56\x53\xab"
"\x99\xd5\xdc\x6a\x61\xb5\xb4\x5a\x59\xad\xad\x36\x56\x5b\xab\x9d\xd5"
"\xde\xea\x60\x75\xb4\x3a\x59\x9d\xad\x2e\x56\x57\xab\x9b\xd5\xdd\xea"
"\x61\xf5\xb4\x7a\x59\xbd\xad\x3e\x56\x5f\xab\x9f\xd5\xdf\x1a\x60\x0d"
"\xb4\x06\x59\x83\xad\x21\xd6\x50\x6b\x98\x35\xdc\x1a\x61\x8d\xb4\x46"
"\x59\xa3\xad\x31\xd6\x58\x6b\x9c\x35\xde\x9a\x60\x4d\xb4\x26\x59\x93"
"\xad\x29\xd6\x54\x6b\x9a\x35\xdd\x9a\x61\xcd\xb4\x66\x59\xb3\xad\x39"
"\xd6\x5c\x6b\x9e\x35\xdf\x5a\x60\x2d\xb4\x16\x59\x8b\xad\x25\xd6\x52"
"\x6b\x99\xb5\xdc\x5a\x61\xad\xb4\x56\x59\xab\xad\x35\xd6\x5a\x6b\x9d"
[non-text binary attachment data elided]
"\x66\xbb\xb0\x5d\xd9\x6e\x6c\x77\xb6\x07\xdb\x93\xed\xc5\xf6\x66\x31"
"\x16\x67\x09\x96\x64\x29\x96\x66\x19\x96\x65\x39\x96\x67\x05\x56\x64"
"\x25\x56\x66\x15\x56\x65\x35\x56\x67\x0d\xd6\x64\x2d\xd6\x66\x1d\xd6"
"\x65\x3d\xd6\x67\x03\x36\x64\x23\x16\xb0\x90\x45\x6c\x8c\xed\xc3\xf6"
"\x65\xfb\xb1\xfd\xd9\x01\xec\x40\x76\x10\x3b\x98\x1d\xc2\x0e\x65\x87"
"\xb1\xc3\xd9\x11\xec\x48\x76\x14\x3b\x9a\x1d\xc3\x8e\x65\xc7\xb1\xe3"
"\xd9\x09\xec\x44\x76\x12\x3b\x99\x9d\xc2\x4e\x65\xa7\xb1\xd3\xd9\x19"
"\xec\x4c\x76\x16\x3b\x9b\x9d\xc3\xce\x65\xe7\xb1\xf3\xd9\x05\xec\x42"
"\x76\x11\xbb\x98\x5d\xc2\x2e\x65\x97\xb1\xcb\xd9\x15\xec\x4a\x76\x15"
"\xbb\x9a\x5d\xc3\xae\x65\xd7\xb1\xeb\xd9\x0d\xec\x46\x76\x13\xbb\x99"
"\xdd\xc2\x6e\x65\xb7\xb1\xdb\xd9\x1d\xec\x4e\x76\x17\xbb\x9b\xdd\xc3"
"\xee\x65\xf7\xb1\xfb\xd9\x03\xec\x41\xf6\x10\x7b\x98\x3d\xc2\x1e\x65"
"\x8f\xb1\xc7\xd9\x13\xec\x49\xf6\x14\x7b\x9a\x3d\xc3\x9e\x65\xcf\xb1"
"\xe7\xd9\x0b\xec\x45\xf6\x12\x7b\x99\xbd\xc2\x5e\x65\xaf\xb1\xd7\xd9"
"\x1b\xec\x4d\xf6\x16\x7b\x9b\xbd\xc3\xde\x65\xef\xb1\xf7\xd9\x07\xec"
"\x43\xf6\x11\xfb\x98\x7d\xc2\x3e\x65\x9f\xb1\xcf\xd9\x17\xec\x4b\xf6"
"\x15\xfb\x9a\xfd\x8f\x7d\xc3\xbe\x65\xdf\xb1\xef\xd9\x0f\xec\x47\xf6"
"\x13\xfb\x99\xfd\xc2\x7e\x65\xbf\xb1\xdf\xd9\x1f\xec\x4f\xf6\x17\xfb"
"\x9b\xfd\xc3\xfe\x65\xff\xb1\x09\x6c\x22\x9b\xc4\xc6\x71\xc9\xb8\xe4"
"\x5c\x0a\x2e\x25\x97\x8a\x4b\xcd\xc5\x73\x69\xb8\xb4\x5c\x3a\x2e\x3d"
"\x97\x81\xcb\xc8\x65\xe2\x32\x73\x59\xb8\xac\x5c\x36\x2e\x3b\x97\x83"
"\xcb\xc9\xe5\xe2\x72\x73\x79\xb8\xbc\x5c\x3e\x2e\x3f\x57\x80\x2b\xc8"
"\x15\xe2\x0a\x73\x45\xb8\xa2\x5c\x31\xae\x38\x57\x82\x2b\xc9\x95\xe2"
"\x4a\x73\x65\xb8\xb2\x5c\x39\xae\x3c\x57\x81\xab\xc8\x55\xe2\x2a\x73"
"\x55\xb8\xaa\x5c\x35\xae\x3a\x57\x83\xab\xc9\xd5\xe2\x6a\x73\x75\xb8"
"\xba\x5c\x3d\xae\x3e\xd7\x80\x6b\xc8\x35\xe2\x1a\x73\x4d\xb8\xa6\x5c"
"\x33\xae\x39\xd7\x82\x6b\xc9\xb5\xe2\x5a\x73\x6d\xb8\xb6\x5c\x3b\xae"
"\x3d\xd7\x81\xeb\xc8\x75\xe2\x3a\x73\x5d\xb8\xae\x5c\x37\xae\x3b\xd7"
"\x83\xeb\xc9\xf5\xe2\x7a\x73\x18\x87\x73\x04\x47\x72\x14\x47\x73\x0c"
"\xc7\x72\x1c\xc7\x73\x02\x27\x72\x12\x27\x73\x0a\xa7\x72\x1a\xa7\x73"
"\x06\x67\x72\x16\x67\x73\x0e\xe7\x72\x1e\xe7\x73\x01\x17\x72\x11\x07"
"\x38\xc8\x21\x2e\xc6\xf5\xe1\xfa\x72\xfd\xb8\xfe\xdc\x00\x6e\x20\x37"
"\x88\x1b\xcc\x0d\xe1\x86\x72\xc3\xb8\xe1\xdc\x08\x6e\x24\x37\x8a\x1b"
"\xcd\x8d\xe1\xc6\x72\xe3\xb8\xf1\xdc\x04\x6e\x22\x37\x89\x9b\xcc\x4d"
"\xe1\xa6\x72\xd3\xb8\xe9\xdc\x0c\x6e\x26\x37\x8b\x9b\xcd\xcd\xe1\xe6"
"\x72\xf3\xb8\xf9\xdc\x02\x6e\x21\xb7\x88\x5b\xcc\x2d\xe1\x96\x72\xcb"
"\xb8\xe5\xdc\x0a\x6e\x25\xb7\x8a\x5b\xcd\xad\xe1\xd6\x72\xeb\xb8\xf5"
"\xdc\x06\x6e\x23\xb7\x89\xdb\xcc\x6d\xe1\xb6\x72\xdb\xb8\xed\xdc\x0e"
"\x6e\x27\xb7\x8b\xdb\xcd\xed\xe1\xf6\x72\xfb\xb8\xfd\xdc\x01\xee\x20"
"\x77\x88\x3b\xcc\x1d\xe1\x8e\x72\xc7\xb8\xe3\xdc\x09\xee\x24\x77\x8a"
"\x3b\xcd\x9d\xe1\xce\x72\xe7\xb8\xf3\xdc\x05\xee\x22\x77\x89\xbb\xcc"
"\x5d\xe1\xae\x72\xd7\xb8\xeb\xdc\x0d\xee\x26\x77\x8b\xbb\xcd\xdd\xe1"
"\xee\x72\xf7\xb8\xfb\xdc\x03\xee\x21\xf7\x88\x7b\xcc\x3d\xe1\x9e\x72"
"\xcf\xb8\xe7\xdc\x0b\xee\x25\xf7\x8a\x7b\xcd\xfd\xc7\xbd\xe1\xde\x72"
"\xef\xb8\xf7\xdc\x07\xee\x23\xf7\x89\xfb\xcc\x7d\xe1\xbe\x72\xdf\xb8"
"\xef\xdc\x0f\xee\x27\xf7\x8b\xfb\xcd\xfd\xe1\xfe\x72\xff\xb8\x04\x2e"
"\x91\x4b\xe2\xe2\xf8\x64\x7c\x72\x3e\x05\x9f\x92\x4f\xc5\xa7\xe6\xe3"
"\xf9\x34\x7c\x5a\x3e\x1d\x9f\x9e\xcf\xc0\x67\xe4\x33\xf1\x99\xf9\x2c"
"\x7c\x56\x3e\x1b\x9f\x9d\xcf\xc1\xe7\xe4\x73\xf1\xb9\xf9\x3c\x7c\x5e"
"\x3e\x1f\x9f\x9f\x2f\xc0\x17\xe4\x0b\xf1\x85\xf9\x22\x7c\x51\xbe\x18"
"\x5f\x9c\x2f\xc1\x97\xe4\x4b\xf1\xa5\xf9\x32\x7c\x59\xbe\x1c\x5f\x9e"
"\xaf\xc0\x57\xe4\x2b\xf1\x95\xf9\x2a\x7c\x55\xbe\x1a\x5f\x9d\xaf\xc1"
"\xd7\xe4\x6b\xf1\xb5\xf9\x3a\x7c\x5d\xbe\x1e\x5f\x9f\x6f\xc0\x37\xe4"
"\x1b\xf1\x8d\xf9\x26\x7c\x53\xbe\x19\xdf\x9c\x6f\xc1\xb7\xe4\x5b\xf1"
"\xad\xf9\x36\x7c\x5b\xbe\x1d\xdf\x9e\xef\xc0\x77\xe4\x3b\xf1\x9d\xf9"
"\x2e\x7c\x57\xbe\x1b\xdf\x9d\xef\xc1\xf7\xe4\x7b\xf1\xbd\x79\x8c\xc7"
"\x79\x82\x27\x79\x8a\xa7\x79\x86\x67\x79\x8e\xe7\x79\x81\x17\x79\x89"
"\x97\x79\x85\x57\x79\x8d\xd7\x79\x83\x37\x79\x8b\xb7\x79\x87\x77\x79"
"\x8f\xf7\xf9\x80\x0f\xf9\x88\x07\x3c\xe4\x11\x1f\xe3\xfb\xf0\x7d\xf9"
"\x7e\x7c\x7f\x7e\x00\x3f\x90\x1f\xc4\x0f\xe6\x87\xf0\x43\xf9\x61\xfc"
"\x70\x7e\x04\x3f\x92\x1f\xc5\x8f\xe6\xc7\xf0\x63\xf9\x71\xfc\x78\x7e"
"\x02\x3f\x91\x9f\xc4\x4f\xe6\xa7\xf0\x53\xf9\x69\xfc\x74\x7e\x06\x3f"
"\x93\x9f\xc5\xcf\xe6\xe7\xf0\x73\xf9\x79\xfc\x7c\x7e\x01\xbf\x90\x5f"
"\xc4\x2f\xe6\x97\xf0\x4b\xf9\x65\xfc\x72\x7e\x05\xbf\x92\x5f\xc5\xaf"
"\xe6\xd7\xf0\x6b\xf9\x75\xfc\x7a\x7e\x03\xbf\x91\xdf\xc4\x6f\xe6\xb7"
"\xf0\x5b\xf9\x6d\xfc\x76\x7e\x07\xbf\x93\xdf\xc5\xef\xe6\xf7\xf0\x7b"
"\xf9\x7d\xfc\x7e\xfe\x00\x7f\x90\x3f\xc4\x1f\xe6\x8f\xf0\x47\xf9\x63"
"\xfc\x71\xfe\x04\x7f\x92\x3f\xc5\x9f\xe6\xcf\xf0\x67\xf9\x73\xfc\x79"
"\xfe\x02\x7f\x91\xbf\xc4\x5f\xe6\xaf\xf0\x57\xf9\x6b\xfc\x75\xfe\x06"
"\x7f\x93\xbf\xc5\xdf\xe6\xef\xf0\x77\xf9\x7b\xfc\x7d\xfe\x01\xff\x90"
"\x7f\xc4\x3f\xe6\x9f\xf0\x4f\xf9\x67\xfc\x73\xfe\x05\xff\x92\x7f\xc5"
"\xbf\xe6\xff\xe3\xdf\xf0\x6f\xf9\x77\xfc\x7b\xfe\x03\xff\x91\xff\xc4"
"\x7f\xe6\xbf\xf0\x5f\xf9\x6f\xfc\x77\xfe\x07\xff\x93\xff\xc5\xff\xe6"
"\xff\xf0\x7f\xf9\x7f\x7c\x02\x9f\xc8\x27\xf1\x71\x42\x32\x21\xb9\x90"
"\x42\x48\x29\xa4\x12\x52\x0b\xf1\x42\x1a\x21\xad\x90\x4e\x48\x2f\x64"
"\x10\x32\x0a\x99\x84\xcc\x42\x16\x21\xab\x90\x4d\xc8\x2e\xe4\x10\x72"
"\x0a\xb9\x84\xdc\x42\x1e\x21\xaf\x90\x4f\xc8\x2f\x14\x10\x0a\x0a\x85"
"\x84\xc2\x42\x11\xa1\xa8\x50\x4c\x28\x2e\x94\x10\x4a\x0a\xa5\x84\xd2"
"\x42\x19\xa1\xac\x50\x4e\x28\x2f\x54\x10\x2a\x0a\x95\x84\xca\x42\x15"
"\xa1\xaa\x50\x4d\xa8\x2e\xd4\x10\x6a\x0a\xb5\x84\xda\x42\x1d\xa1\xae"
"\x50\x4f\xa8\x2f\x34\x10\x1a\x0a\x8d\x84\xc6\x42\x13\xa1\xa9\xd0\x4c"
"\x68\x2e\xb4\x10\x5a\x0a\xad\x84\xd6\x42\x1b\xa1\xad\xd0\x4e\x68\x2f"
"\x74\x10\x3a\x0a\x9d\x84\xce\x42\x17\xa1\xab\xd0\x4d\xe8\x2e\xf4\x10"
"\x7a\x0a\xbd\x84\xde\x02\x26\xe0\x02\x21\x90\x02\x25\xd0\x02\x23\xb0"
"\x02\x27\xf0\x82\x20\x88\x82\x24\xc8\x82\x22\xa8\x82\x26\xe8\x82\x21"
"\x98\x82\x25\xd8\x82\x23\xb8\x82\x27\xf8\x42\x20\x84\x42\x24\x00\x01"
"\x0a\x48\x88\x09\x7d\x84\xbe\x42\x3f\xa1\xbf\x30\x40\x18\x28\x0c\x12"
"\x06\x0b\x43\x84\xa1\xc2\x30\x61\xb8\x30\x42\x18\x29\x8c\x12\x46\x0b"
"\x63\x84\xb1\xc2\x38\x61\xbc\x30\x41\x98\x28\x4c\x12\x26\x0b\x53\x84"
"\xa9\xc2\x34\x61\xba\x30\x43\x98\x29\xcc\x12\x66\x0b\x73\x84\xb9\xc2"
"\x3c\x61\xbe\xb0\x40\x58\x28\x2c\x12\x16\x0b\x4b\x84\xa5\xc2\x32\x61"
"\xb9\xb0\x42\x58\x29\xac\x12\x56\x0b\x6b\x84\xb5\xc2\x3a\x61\xbd\xb0"
"\x41\xd8\x28\x6c\x12\x36\x0b\x5b\x84\xad\xc2\x36\x61\xbb\xb0\x43\xd8"
"\x29\xec\x12\x76\x0b\x7b\x84\xbd\xc2\x3e\x61\xbf\x70\x40\x38\x28\x1c"
"\x12\x0e\x0b\x47\x84\xa3\xc2\x31\xe1\xb8\x70\x42\x38\x29\x9c\x12\x4e"
"\x0b\x67\x84\xb3\xc2\x39\xe1\xbc\x70\x41\xb8\x28\x5c\x12\x2e\x0b\x57"
"\x84\xab\xc2\x35\xe1\xba\x70\x43\xb8\x29\xdc\x12\x6e\x0b\x77\x84\xbb"
"\xc2\x3d\xe1\xbe\xf0\x40\x78\x28\x3c\x12\x1e\x0b\x4f\x84\xa7\xc2\x33"
"\xe1\xb9\xf0\x42\x78\x29\xbc\x12\x5e\x0b\xff\x09\x6f\x84\xb7\xc2\x3b"
"\xe1\xbd\xf0\x41\xf8\x28\x7c\x12\x3e\x0b\x5f\x84\xaf\xc2\x37\xe1\xbb"
"\xf0\x43\xf8\x29\xfc\x12\x7e\x0b\x7f\x84\xbf\xc2\x3f\x21\x41\x48\x14"
"\x92\x84\x38\x31\x99\x98\x5c\x4c\x21\xa6\x14\x53\x89\xa9\xc5\x78\x31"
"\x8d\x98\x56\x4c\x27\xa6\x17\x33\x88\x19\xc5\x4c\x62\x66\x31\x8b\x98"
"\x55\xcc\x26\x66\x17\x73\x88\x39\xc5\x5c\x62\x6e\x31\x8f\x98\x57\xcc"
"\x27\xe6\x17\x0b\x88\x05\xc5\x42\x62\x61\xb1\x88\x58\x54\x2c\x26\x16"
"\x17\x4b\x88\x25\xc5\x52\x62\x69\xb1\x8c\x58\x56\x2c\x27\x96\x17\x2b"
"\x88\x15\xc5\x4a\x62\x65\xb1\x8a\x58\x55\xac\x26\x56\x17\x6b\x88\x35"
"\xc5\x5a\x62\x6d\xb1\x8e\x58\x57\xac\x27\xd6\x17\x1b\x88\x0d\xc5\x46"
"\x62\x63\xb1\x89\xd8\x54\x6c\x26\x36\x17\x5b\x88\x2d\xc5\x56\x62\x6b"
"\xb1\x8d\xd8\x56\x6c\x27\xb6\x17\x3b\x88\x1d\xc5\x4e\x62\x67\xb1\x8b"
"\xd8\x55\xec\x26\x76\x17\x7b\x88\x3d\xc5\x5e\x62\x6f\x11\x13\x71\x91"
"\x10\x49\x91\x12\x69\x91\x11\x59\x91\x13\x79\x51\x10\x45\x51\x12\x65"
"\x51\x11\x55\x51\x13\x75\xd1\x10\x4d\xd1\x12\x6d\xd1\x11\x5d\xd1\x13"
"\x7d\x31\x10\x43\x31\x12\x81\x08\x45\x24\xc6\xc4\x3e\x62\x5f\xb1\x9f"
"\xd8\x5f\x1c\x20\x0e\x14\x07\x89\x83\xc5\x21\xe2\x50\x71\x98\x38\x5c"
"\x1c\x21\x8e\x14\x47\x89\xa3\xc5\x31\xe2\x58\x71\x9c\x38\x5e\x9c\x20"
"\x4e\x14\x27\x89\x93\xc5\x29\xe2\x54\x71\x9a\x38\x5d\x9c\x21\xce\x14"
"\x67\x89\xb3\xc5\x39\xe2\x5c\x71\x9e\x38\x5f\x5c\x20\x2e\x14\x17\x89"
"\x8b\xc5\x25\xe2\x52\x71\x99\xb8\x5c\x5c\x21\xae\x14\x57\x89\xab\xc5"
"\x35\xe2\x5a\x71\x9d\xb8\x5e\xdc\x20\x6e\x14\x37\x89\x9b\xc5\x2d\xe2"
"\x56\x71\x9b\xb8\x5d\xdc\x21\xee\x14\x77\x89\xbb\xc5\x3d\xe2\x5e\x71"
"\x9f\xb8\x5f\x3c\x20\x1e\x14\x0f\x89\x87\xc5\x23\xe2\x51\xf1\x98\x78"
"\x5c\x3c\x21\x9e\x14\x4f\x89\xa7\xc5\x33\xe2\x59\xf1\x9c\x78\x5e\xbc"
"\x20\x5e\x14\x2f\x89\x97\xc5\x2b\xe2\x55\xf1\x9a\x78\x5d\xbc\x21\xde"
"\x14\x6f\x89\xb7\xc5\x3b\xe2\x5d\xf1\x9e\x78\x5f\x7c\x20\x3e\x14\x1f"
"\x89\x8f\xc5\x27\xe2\x53\xf1\x99\xf8\x5c\x7c\x21\xbe\x14\x5f\x89\xaf"
"\xc5\xff\xc4\x37\xe2\x5b\xf1\x9d\xf8\x5e\xfc\x20\x7e\x14\x3f\x89\x9f"
"\xc5\x2f\xe2\x57\xf1\x9b\xf8\x5d\xfc\x21\xfe\x14\x7f\x89\xbf\xc5\x3f"
"\xe2\x5f\xf1\x9f\x98\x20\x26\x8a\x49\x62\x9c\x94\x4c\x4a\x2e\xa5\x90"
"\x52\x4a\xa9\xa4\xd4\x52\xbc\x94\x46\x4a\x2b\xa5\x93\xd2\x4b\x19\xa4"
"\x8c\x52\x26\x29\xb3\x94\x45\xca\x2a\x65\x93\xb2\x4b\x39\xa4\x9c\x52"
"\x2e\x29\xb7\x94\x47\xca\x2b\xe5\x93\xf2\x4b\x05\xa4\x82\x52\x21\xa9"
"\xb0\x54\x44\x2a\x2a\x15\x93\x8a\x4b\x25\xa4\x92\x52\x29\xa9\xb4\x54"
"\x46\x2a\x2b\x95\x93\xca\x4b\x15\xa4\x8a\x52\x25\xa9\xb2\x54\x45\xaa"
"\x2a\x55\x93\xaa\x4b\x35\xa4\x9a\x52\x2d\xa9\xb6\x54\x47\xaa\x2b\xd5"
"\x93\xea\x4b\x0d\xa4\x86\x52\x23\xa9\xb1\xd4\x44\x6a\x2a\x35\x93\x9a"
"\x4b\x2d\xa4\x96\x52\x2b\xa9\xb5\xd4\x46\x6a\x2b\xb5\x93\xda\x4b\x1d"
"\xa4\x8e\x52\x27\xa9\xb3\xd4\x45\xea\x2a\x75\x93\xba\x4b\x3d\xa4\x9e"
"\x52\x2f\xa9\xb7\x84\x49\xb8\x44\x48\xa4\x44\x49\xb4\xc4\x48\xac\xc4"
"\x49\xbc\x24\x48\xa2\x24\x49\xb2\xa4\x48\xaa\xa4\x49\xba\x64\x48\xa6"
"\x64\x49\xb6\xe4\x48\xae\xe4\x49\xbe\x14\x48\xa1\x14\x49\x40\x82\x12"
"\x92\x62\x52\x1f\xa9\xaf\xd4\x4f\xea\x2f\x0d\x90\x06\x4a\x83\xa4\xc1"
"\xd2\x10\x69\xa8\x34\x4c\x1a\x2e\x8d\x90\x46\x4a\xa3\xa4\xd1\xd2\x18"
"\x69\xac\x34\x4e\x1a\x2f\x4d\x90\x26\x4a\x93\xa4\xc9\xd2\x14\x69\xaa"
"\x34\x4d\x9a\x2e\xcd\x90\x66\x4a\xb3\xa4\xd9\xd2\x1c\x69\xae\x34\x4f"
"\x9a\x2f\x2d\x90\x16\x4a\x8b\xa4\xc5\xd2\x12\x69\xa9\xb4\x4c\x5a\x2e"
"\xad\x90\x56\x4a\xab\xa4\xd5\xd2\x1a\x69\xad\xb4\x4e\x5a\x2f\x6d\x90"
"\x36\x4a\x9b\xa4\xcd\xd2\x16\x69\xab\xb4\x4d\xda\x2e\xed\x90\x76\x4a"
"\xbb\xa4\xdd\xd2\x1e\x69\xaf\xb4\x4f\xda\x2f\x1d\x90\x0e\x4a\x87\xa4"
"\xc3\xd2\x11\xe9\xa8\x74\x4c\x3a\x2e\x9d\x90\x4e\x4a\xa7\xa4\xd3\xd2"
"\x19\xe9\xac\x74\x4e\x3a\x2f\x5d\x90\x2e\x4a\x97\xa4\xcb\xd2\x15\xe9"
"\xaa\x74\x4d\xba\x2e\xdd\x90\x6e\x4a\xb7\xa4\xdb\xd2\x1d\xe9\xae\x74"
"\x4f\xba\x2f\x3d\x90\x1e\x4a\x8f\xa4\xc7\xd2\x13\xe9\xa9\xf4\x4c\x7a"
"\x2e\xbd\x90\x5e\x4a\xaf\xa4\xd7\xd2\x7f\xd2\x1b\xe9\xad\xf4\x4e\x7a"
"\x2f\x7d\x90\x3e\x4a\x9f\xa4\xcf\xd2\x17\xe9\xab\xf4\x4d\xfa\x2e\xfd"
"\x90\x7e\x4a\xbf\xa4\xdf\xd2\x1f\xe9\xaf\xf4\x4f\x4a\x90\x12\xa5\x24"
"\x29\x4e\x4e\x26\x27\x97\x53\xc8\x29\xe5\x54\x72\x6a\x39\x5e\x4e\x23"
"\xa7\x95\xd3\xc9\xe9\xe5\x0c\x72\x46\x39\x93\x9c\x59\xce\x22\x67\x95"
"\xb3\xc9\xd9\xe5\x1c\x72\x4e\x39\x97\x9c\x5b\xce\x23\xe7\x95\xf3\xc9"
"\xf9\xe5\x02\x72\x41\xb9\x90\x5c\x58\x2e\x22\x17\x95\x8b\xc9\xc5\xe5"
"\x12\x72\x49\xb9\x94\x5c\x5a\x2e\x23\x97\x95\xcb\xc9\xe5\xe5\x0a\x72"
"\x45\xb9\x92\x5c\x59\xae\x22\x57\x95\xab\xc9\xd5\xe5\x1a\x72\x4d\xb9"
"\x96\x5c\x5b\xae\x23\xd7\x95\xeb\xc9\xf5\xe5\x06\x72\x43\xb9\x91\xdc"
"\x58\x6e\x22\x37\x95\x9b\xc9\xcd\xe5\x16\x72\x4b\xb9\x95\xdc\x5a\x6e"
"\x23\xb7\x95\xdb\xc9\xed\xe5\x0e\x72\x47\xb9\x93\xdc\x59\xee\x22\x77"
"\x95\xbb\xc9\xdd\xe5\x1e\x72\x4f\xb9\x97\xdc\x5b\xc6\x64\x5c\x26\x64"
"\x52\xa6\x64\x5a\x66\x64\x56\xe6\x64\x5e\x16\x64\x51\x96\x64\x59\x56"
"\x64\x55\xd6\x64\x5d\x36\x64\x53\xb6\x64\x5b\x76\x64\x57\xf6\x64\x5f"
"\x0e\xe4\x50\x8e\x64\x20\x43\x19\xc9\x31\xb9\x8f\xdc\x57\xee\x27\xf7"
"\x97\x07\xc8\x03\xe5\x41\xf2\x60\x79\x88\x3c\x54\x1e\x26\x0f\x97\x47"
"\xc8\x23\xe5\x51\xf2\x68\x79\x8c\x3c\x56\x1e\x27\x8f\x97\x27\xc8\x13"
"\xe5\x49\xf2\x64\x79\x8a\x3c\x55\x9e\x26\x4f\x97\x67\xc8\x33\xe5\x59"
"\xf2\x6c\x79\x8e\x3c\x57\x9e\x27\xcf\x97\x17\xc8\x0b\xe5\x45\xf2\x62"
"\x79\x89\xbc\x54\x5e\x26\x2f\x97\x57\xc8\x2b\xe5\x55\xf2\x6a\x79\x8d"
"\xbc\x56\x5e\x27\xaf\x97\x37\xc8\x1b\xe5\x4d\xf2\x66\x79\x8b\xbc\x55"
"\xde\x26\x6f\x97\x77\xc8\x3b\xe5\x5d\xf2\x6e\x79\x8f\xbc\x57\xde\x27"
"\xef\x97\x0f\xc8\x07\xe5\x43\xf2\x61\xf9\x88\x7c\x54\x3e\x26\x1f\x97"
"\x4f\xc8\x27\xe5\x53\xf2\x69\xf9\x8c\x7c\x56\x3e\x27\x9f\x97\x2f\xc8"
"\x17\xe5\x4b\xf2\x65\xf9\x8a\x7c\x55\xbe\x26\x5f\x97\x6f\xc8\x37\xe5"
"\x5b\xf2\x6d\xf9\x8e\x7c\x57\xbe\x27\xdf\x97\x1f\xc8\x0f\xe5\x47\xf2"
"\x63\xf9\x89\xfc\x54\x7e\x26\x3f\x97\x5f\xc8\x2f\xe5\x57\xf2\x6b\xf9"
"\x3f\xf9\x8d\xfc\x56\x7e\x27\xbf\x97\x3f\xc8\x1f\xe5\x4f\xf2\x67\xf9"
"\x8b\xfc\x55\xfe\x26\x7f\x97\x7f\xc8\x3f\xe5\x5f\xf2\x6f\xf9\x8f\xfc"
"\x57\xfe\x27\x27\xc8\x89\x72\x92\x1c\xa7\x24\x53\x92\x2b\x29\x94\x94"
"\x4a\x2a\x25\xb5\x12\xaf\xa4\x51\xd2\x2a\xe9\x94\xf4\x4a\x06\x25\xa3"
"\x92\x49\xc9\xac\x64\x51\xb2\x2a\xd9\x94\xec\x4a\x0e\x25\xa7\x92\x4b"
"\xc9\xad\xe4\x51\xf2\x2a\xf9\x94\xfc\x4a\x01\xa5\xa0\x52\x48\x29\xac"
"\x14\x51\x8a\x2a\xc5\x94\xe2\x4a\x09\xa5\xa4\x52\x4a\x29\xad\x94\x51"
"\xca\x2a\xe5\x94\xf2\x4a\x05\xa5\xa2\x52\x49\xa9\xac\x54\x51\xaa\x2a"
"\xd5\x94\xea\x4a\x0d\xa5\xa6\x52\x4b\xa9\xad\xd4\x51\xea\x2a\xf5\x94"
"\xfa\x4a\x03\xa5\xa1\xd2\x48\x69\xac\x34\x51\x9a\x2a\xcd\x94\xe6\x4a"
"\x0b\xa5\xa5\xd2\x4a\x69\xad\xb4\x51\xda\x2a\xed\x94\xf6\x4a\x07\xa5"
"\xa3\xd2\x49\xe9\xac\x74\x51\xba\x2a\xdd\x94\xee\x4a\x0f\xa5\xa7\xd2"
"\x4b\xe9\xad\x60\x0a\xae\x10\x0a\xa9\x50\x0a\xad\x30\x0a\xab\x70\x0a"
"\xaf\x08\x8a\xa8\x48\x8a\xac\x28\x8a\xaa\x68\x8a\xae\x18\x8a\xa9\x58"
"\x8a\xad\x38\x8a\xab\x78\x8a\xaf\x04\x4a\xa8\x44\x0a\x50\xa0\x82\x94"
"\x98\xd2\x47\xe9\xab\xf4\x53\xfa\x2b\x03\x94\x81\xca\x20\x65\xb0\x32"
"\x44\x19\xaa\x0c\x53\x86\x2b\x23\x94\x91\xca\x28\x65\xb4\x32\x46\x19"
"\xab\x8c\x53\xc6\x2b\x13\x94\x89\xca\x24\x65\xb2\x32\x45\x99\xaa\x4c"
"\x53\xa6\x2b\x33\x94\x99\xca\x2c\x65\xb6\x32\x47\x99\xab\xcc\x53\xe6"
"\x2b\x0b\x94\x85\xca\x22\x65\xb1\xb2\x44\x59\xaa\x2c\x53\x96\x2b\x2b"
"\x94\x95\xca\x2a\x65\xb5\xb2\x46\x59\xab\xac\x53\xd6\x2b\x1b\x94\x8d"
"\xca\x26\x65\xb3\xb2\x45\xd9\xaa\x6c\x53\xb6\x2b\x3b\x94\x9d\xca\x2e"
"\x65\xb7\xb2\x47\xd9\xab\xec\x53\xf6\x2b\x07\x94\x83\xca\x21\xe5\xb0"
"\x72\x44\x39\xaa\x1c\x53\x8e\x2b\x27\x94\x93\xca\x29\xe5\xb4\x72\x46"
"\x39\xab\x9c\x53\xce\x2b\x17\x94\x8b\xca\x25\xe5\xb2\x72\x45\xb9\xaa"
"\x5c\x53\xae\x2b\x37\x94\x9b\xca\x2d\xe5\xb6\x72\x47\xb9\xab\xdc\x53"
"\xee\x2b\x0f\x94\x87\xca\x23\xe5\xb1\xf2\x44\x79\xaa\x3c\x53\x9e\x2b"
"\x2f\x94\x97\xca\x2b\xe5\xb5\xf2\x9f\xf2\x46\x79\xab\xbc\x53\xde\x2b"
"\x1f\x94\x8f\xca\x27\xe5\xb3\xf2\x45\xf9\xaa\x7c\x53\xbe\x2b\x3f\x94"
"\x9f\xca\x2f\xe5\xb7\xf2\x47\xf9\xab\xfc\x53\x12\x94\x44\x25\x49\x89"
"\x53\x93\xa9\xc9\xd5\x14\x6a\x4a\x35\x95\x9a\x5a\x8d\x57\xd3\xa8\x69"
"\xd5\x74\x6a\x7a\x35\x83\x9a\x51\xcd\xa4\x66\x56\xb3\xa8\x59\xd5\x6c"
"\x6a\x76\x35\x87\x9a\x53\xcd\xa5\xe6\x56\xf3\xa8\x79\xd5\x7c\x6a\x7e"
"\xb5\x80\x5a\x50\x2d\xa4\x16\x56\x8b\xa8\x45\xd5\x62\x6a\x71\xb5\x84"
"\x5a\x52\x2d\xa5\x96\x56\xcb\xa8\x65\xd5\x72\x6a\x79\xb5\x82\x5a\x51"
"\xad\xa4\x56\x56\xab\xa8\x55\xd5\x6a\x6a\x75\xb5\x86\x5a\x53\xad\xa5"
"\xd6\x56\xeb\xa8\x75\xd5\x7a\x6a\x7d\xb5\x81\xda\x50\x6d\xa4\x36\x56"
"\x9b\xa8\x4d\xd5\x66\x6a\x73\xb5\x85\xda\x52\x6d\xa5\xb6\x56\xdb\xa8"
"\x6d\xd5\x76\x6a\x7b\xb5\x83\xda\x51\xed\xa4\x76\x56\xbb\xa8\x5d\xd5"
"\x6e\x6a\x77\xb5\x87\xda\x53\xed\xa5\xf6\x56\x31\x15\x57\x09\x95\x54"
"\x29\x95\x56\x19\x95\x55\x39\x95\x57\x05\x55\x54\x25\x55\x56\x15\x55"
"\x55\x35\x55\x57\x0d\xd5\x54\x2d\xd5\x56\x1d\xd5\x55\x3d\xd5\x57\x03"
"\x35\x54\x23\x15\xa8\x50\x45\x6a\x4c\xed\xa3\xf6\x55\xfb\xa9\xfd\xd5"
"\x01\xea\x40\x75\x90\x3a\x58\x1d\xa2\x0e\x55\x87\xa9\xc3\xd5\x11\xea"
"\x48\x75\x94\x3a\x5a\x1d\xa3\x8e\x55\xc7\xa9\xe3\xd5\x09\xea\x44\x75"
"\x92\x3a\x59\x9d\xa2\x4e\x55\xa7\xa9\xd3\xd5\x19\xea\x4c\x75\x96\x3a"
"\x5b\x9d\xa3\xce\x55\xe7\xa9\xf3\xd5\x05\xea\x42\x75\x91\xba\x58\x5d"
"\xa2\x2e\x55\x97\xa9\xcb\xd5\x15\xea\x4a\x75\x95\xba\x5a\x5d\xa3\xae"
"\x55\xd7\xa9\xeb\xd5\x0d\xea\x46\x75\x93\xba\x59\xdd\xa2\x6e\x55\xb7"
"\xa9\xdb\xd5\x1d\xea\x4e\x75\x97\xba\x5b\xdd\xa3\xee\x55\xf7\xa9\xfb"
"\xd5\x03\xea\x41\xf5\x90\x7a\x58\x3d\xa2\x1e\x55\x8f\xa9\xc7\xd5\x13"
"\xea\x49\xf5\x94\x7a\x5a\x3d\xa3\x9e\x55\xcf\xa9\xe7\xd5\x0b\xea\x45"
"\xf5\x92\x7a\x59\xbd\xa2\x5e\x55\xaf\xa9\xd7\xd5\x1b\xea\x4d\xf5\x96"
"\x7a\x5b\xbd\xa3\xde\x55\xef\xa9\xf7\xd5\x07\xea\x43\xf5\x91\xfa\x58"
"\x7d\xa2\x3e\x55\x9f\xa9\xcf\xd5\x17\xea\x4b\xf5\x95\xfa\x5a\xfd\x4f"
"\x7d\xa3\xbe\x55\xdf\xa9\xef\xd5\x0f\xea\x47\xf5\x93\xfa\x59\xfd\xa2"
"\x7e\x55\xbf\xa9\xdf\xd5\x1f\xea\x4f\xf5\x97\xfa\x5b\xfd\xa3\xfe\x55"
"\xff\xa9\x09\x6a\xa2\x9a\xa4\xc6\x69\xc9\xb4\xe4\x5a\x0a\x2d\xa5\x96"
"\x4a\x4b\xad\xc5\x6b\x69\xb4\xb4\x5a\x3a\x2d\xbd\x96\x41\xcb\xa8\x65"
"\xd2\x32\x6b\x59\xb4\xac\x5a\x36\x2d\xbb\x96\x43\xcb\xa9\xe5\xd2\x72"
"\x6b\x79\xb4\xbc\x5a\x3e\x2d\xbf\x56\x40\x2b\xa8\x15\xd2\x0a\x6b\x45"
"\xb4\xa2\x5a\x31\xad\xb8\x56\x42\x2b\xa9\x95\xd2\x4a\x6b\x65\xb4\xb2"
"\x5a\x39\xad\xbc\x56\x41\xab\xa8\x55\xd2\x2a\x6b\x55\xb4\xaa\x5a\x35"
"\xad\xba\x56\x43\xab\xa9\xd5\xd2\x6a\x6b\x75\xb4\xba\x5a\x3d\xad\xbe"
"\xd6\x40\x6b\xa8\x35\xd2\x1a\x6b\x4d\xb4\xa6\x5a\x33\xad\xb9\xd6\x42"
"\x6b\xa9\xb5\xd2\x5a\x6b\x6d\xb4\xb6\x5a\x3b\xad\xbd\xd6\x41\xeb\xa8"
"\x75\xd2\x3a\x6b\x5d\xb4\xae\x5a\x37\xad\xbb\xd6\x43\xeb\xa9\xf5\xd2"
"\x7a\x6b\x98\x86\x6b\x84\x46\x6a\x94\x46\x6b\x8c\xc6\x6a\x9c\xc6\x6b"
"\x82\x26\x6a\x92\x26\x6b\x8a\xa6\x6a\x9a\xa6\x6b\x86\x66\x6a\x96\x66"
"\x6b\x8e\xe6\x6a\x9e\xe6\x6b\x81\x16\x6a\x91\x06\x34\xa8\x21\x2d\xa6"
"\xf5\xd1\xfa\x6a\xfd\xb4\xfe\xda\x00\x6d\xa0\x36\x48\x1b\xac\x0d\xd1"
"\x86\x6a\xc3\xb4\xe1\xda\x08\x6d\xa4\x36\x4a\x1b\xad\x8d\xd1\xc6\x6a"
"\xe3\xb4\xf1\xda\x04\x6d\xa2\x36\x49\x9b\xac\x4d\xd1\xa6\x6a\xd3\xb4"
"\xe9\xda\x0c\x6d\xa6\x36\x4b\x9b\xad\xcd\xd1\xe6\x6a\xf3\xb4\xf9\xda"
"\x02\x6d\xa1\xb6\x48\x5b\xac\x2d\xd1\x96\x6a\xcb\xb4\xe5\xda\x0a\x6d"
"\xa5\xb6\x4a\x5b\xad\xad\xd1\xd6\x6a\xeb\xb4\xf5\xda\x06\x6d\xa3\xb6"
"\x49\xdb\xac\x6d\xd1\xb6\x6a\xdb\xb4\xed\xda\x0e\x6d\xa7\xb6\x4b\xdb"
"\xad\xed\xd1\xf6\x6a\xfb\xb4\xfd\xda\x01\xed\xa0\x76\x48\x3b\xac\x1d"
"\xd1\x8e\x6a\xc7\xb4\xe3\xda\x09\xed\xa4\x76\x4a\x3b\xad\x9d\xd1\xce"
"\x6a\xe7\xb4\xf3\xda\x05\xed\xa2\x76\x49\xbb\xac\x5d\xd1\xae\x6a\xd7"
"\xb4\xeb\xda\x0d\xed\xa6\x76\x4b\xbb\xad\xdd\xd1\xee\x6a\xf7\xb4\xfb"
"\xda\x03\xed\xa1\xf6\x48\x7b\xac\x3d\xd1\x9e\x6a\xcf\xb4\xe7\xda\x0b"
"\xed\xa5\xf6\x4a\x7b\xad\xfd\xa7\xbd\xd1\xde\x6a\xef\xb4\xf7\xda\x07"
"\xed\xa3\xf6\x49\xfb\xac\x7d\xd1\xbe\x6a\xdf\xb4\xef\xda\x0f\xed\xa7"
"\xf6\x4b\xfb\xad\xfd\xd1\xfe\x6a\xff\xb4\x04\x2d\x51\x4b\xd2\xe2\xf4"
"\x64\x7a\x72\x3d\x85\x9e\x52\x4f\xa5\xa7\xd6\xe3\xf5\x34\x7a\x5a\x3d"
"\x9d\x9e\x5e\xcf\xa0\x67\xd4\x33\xe9\x99\xf5\x2c\x7a\x56\x3d\x9b\x9e"
"\x5d\xcf\xa1\xe7\xd4\x73\xe9\xb9\xf5\x3c\x7a\x5e\x3d\x9f\x9e\x5f\x2f"
"\xa0\x17\xd4\x0b\xe9\x85\xf5\x22\x7a\x51\xbd\x98\x5e\x5c\x2f\xa1\x97"
"\xd4\x4b\xe9\xa5\xf5\x32\x7a\x59\xbd\x9c\x5e\x5e\xaf\xa0\x57\xd4\x2b"
"\xe9\x95\xf5\x2a\x7a\x55\xbd\x9a\x5e\x5d\xaf\xa1\xd7\xd4\x6b\xe9\xb5"
"\xf5\x3a\x7a\x5d\xbd\x9e\x5e\x5f\x6f\xa0\x37\xd4\x1b\xe9\x8d\xf5\x26"
"\x7a\x53\xbd\x99\xde\x5c\x6f\xa1\xb7\xd4\x5b\xe9\xad\xf5\x36\x7a\x5b"
"\xbd\x9d\xde\x5e\xef\xa0\x77\xd4\x3b\xe9\x9d\xf5\x2e\x7a\x57\xbd\x9b"
"\xde\x5d\xef\xa1\xf7\xd4\x7b\xe9\xbd\x75\x4c\xc7\x75\x42\x27\x75\x4a"
"\xa7\x75\x46\x67\x75\x4e\xe7\x75\x41\x17\x75\x49\x97\x75\x45\x57\x75"
"\x4d\xd7\x75\x43\x37\x75\x4b\xb7\x75\x47\x77\x75\x4f\xf7\xf5\x40\x0f"
"\xf5\x48\x07\x3a\xd4\x91\x1e\xd3\xfb\xe8\x7d\xf5\x7e\x7a\x7f\x7d\x80"
"\x3e\x50\x1f\xa4\x0f\xd6\x87\xe8\x43\xf5\x61\xfa\x70\x7d\x84\x3e\x52"
"\x1f\xa5\x8f\xd6\xc7\xe8\x63\xf5\x71\xfa\x78\x7d\x82\x3e\x51\x9f\xa4"
"\x4f\xd6\xa7\xe8\x53\xf5\x69\xfa\x74\x7d\x86\x3e\x53\x9f\xa5\xcf\xd6"
"\xe7\xe8\x73\xf5\x79\xfa\x7c\x7d\x81\xbe\x50\x5f\xa4\x2f\xd6\x97\xe8"
"\x4b\xf5\x65\xfa\x72\x7d\x85\xbe\x52\x5f\xa5\xaf\xd6\xd7\xe8\x6b\xf5"
"\x75\xfa\x7a\x7d\x83\xbe\x51\xdf\xa4\x6f\xd6\xb7\xe8\x5b\xf5\x6d\xfa"
"\x76\x7d\x87\xbe\x53\xdf\xa5\xef\xd6\xf7\xe8\x7b\xf5\x7d\xfa\x7e\xfd"
"\x80\x7e\x50\x3f\xa4\x1f\xd6\x8f\xe8\x47\xf5\x63\xfa\x71\xfd\x84\x7e"
"\x52\x3f\xa5\x9f\xd6\xcf\xe8\x67\xf5\x73\xfa\x79\xfd\x82\x7e\x51\xbf"
"\xa4\x5f\xd6\xaf\xe8\x57\xf5\x6b\xfa\x75\xfd\x86\x7e\x53\xbf\xa5\xdf"
"\xd6\xef\xe8\x77\xf5\x7b\xfa\x7d\xfd\x81\xfe\x50\x7f\xa4\x3f\xd6\x9f"
"\xe8\x4f\xf5\x67\xfa\x73\xfd\x85\xfe\x52\x7f\xa5\xbf\xd6\xff\xd3\xdf"
"\xe8\x6f\xf5\x77\xfa\x7b\xfd\x83\xfe\x51\xff\xa4\x7f\xd6\xbf\xe8\x5f"
"\xf5\x6f\xfa\x77\xfd\x87\xfe\x53\xff\xa5\xff\xd6\xff\xe8\x7f\xf5\x7f"
"\x7a\x82\x9e\xa8\x27\xe9\x71\x46\x32\x23\xb9\x91\xc2\x48\x69\xa4\x32"
"\x52\x1b\xf1\x46\x1a\x23\xad\x91\xce\x48\x6f\x64\x30\x32\x1a\x99\x8c"
"\xcc\x46\x16\x23\xab\x91\xcd\xc8\x6e\xe4\x30\x72\x1a\xb9\x8c\xdc\x46"
"\x1e\x23\xaf\x91\xcf\xc8\x6f\x14\x30\x0a\x1a\x85\x8c\xc2\x46\x11\xa3"
"\xa8\x51\xcc\x28\x6e\x94\x30\x4a\x1a\xa5\x8c\xd2\x46\x19\xa3\xac\x51"
"\xce\x28\x6f\x54\x30\x2a\x1a\x95\x8c\xca\x46\x15\xa3\xaa\x51\xcd\xa8"
"\x6e\xd4\x30\x6a\x1a\xb5\x8c\xda\x46\x1d\xa3\xae\x51\xcf\xa8\x6f\x34"
"\x30\x1a\x1a\x8d\x8c\xc6\x46\x13\xa3\xa9\xd1\xcc\x68\x6e\xb4\x30\x5a"
"\x1a\xad\x8c\xd6\x46\x1b\xa3\xad\xd1\xce\x68\x6f\x74\x30\x3a\x1a\x9d"
"\x8c\xce\x46\x17\xa3\xab\xd1\xcd\xe8\x6e\xf4\x30\x7a\x1a\xbd\x8c\xde"
"\x06\x66\xe0\x06\x61\x90\x06\x65\xd0\x06\x63\xb0\x06\x67\xf0\x86\x60"
"\x88\x86\x64\xc8\x86\x62\xa8\x86\x66\xe8\x86\x61\x98\x86\x65\xd8\x86"
"\x63\xb8\x86\x67\xf8\x46\x60\x84\x46\x64\x00\x03\x1a\xc8\x88\x19\x7d"
"\x8c\xbe\x46\x3f\xa3\xbf\x31\xc0\x18\x68\x0c\x32\x06\x1b\x43\x8c\xa1"
"\xc6\x30\x63\xb8\x31\xc2\x18\x69\x8c\x32\x46\x1b\x63\x8c\xb1\xc6\x38"
"\x63\xbc\x31\xc1\x98\x68\x4c\x32\x26\x1b\x53\x8c\xa9\xc6\x34\x63\xba"
"\x31\xc3\x98\x69\xcc\x32\x66\x1b\x73\x8c\xb9\xc6\x3c\x63\xbe\xb1\xc0"
"\x58\x68\x2c\x32\x16\x1b\x4b\x8c\xa5\xc6\x32\x63\xb9\xb1\xc2\x58\x69"
"\xac\x32\x56\x1b\x6b\x8c\xb5\xc6\x3a\x63\xbd\xb1\xc1\xd8\x68\x6c\x32"
"\x36\x1b\x5b\x8c\xad\xc6\x36\x63\xbb\xb1\xc3\xd8\x69\xec\x32\x76\x1b"
"\x7b\x8c\xbd\xc6\x3e\x63\xbf\x71\xc0\x38\x68\x1c\x32\x0e\x1b\x47\x8c"
"\xa3\xc6\x31\xe3\xb8\x71\xc2\x38\x69\x9c\x32\x4e\x1b\x67\x8c\xb3\xc6"
"\x39\xe3\xbc\x71\xc1\xb8\x68\x5c\x32\x2e\x1b\x57\x8c\xab\xc6\x35\xe3"
"\xba\x71\xc3\xb8\x69\xdc\x32\x6e\x1b\x77\x8c\xbb\xc6\x3d\xe3\xbe\xf1"
"\xc0\x78\x68\x3c\x32\x1e\x1b\x4f\x8c\xa7\xc6\x33\xe3\xb9\xf1\xc2\x78"
"\x69\xbc\x32\x5e\x1b\xff\x19\x6f\x8c\xb7\xc6\x3b\xe3\xbd\xf1\xc1\xf8"
"\x68\x7c\x32\x3e\x1b\x5f\x8c\xaf\xc6\x37\xe3\xbb\xf1\xc3\xf8\x69\xfc"
"\x32\x7e\x1b\x7f\x8c\xbf\xc6\x3f\x23\xc1\x48\x34\x92\x8c\x38\x33\x99"
"\x99\xdc\x4c\x61\xa6\x34\x53\x99\xa9\xcd\x78\x33\x8d\x99\xd6\x4c\x67"
"\xa6\x37\x33\x98\x19\xcd\x4c\x66\x66\x33\x8b\x99\xd5\xcc\x66\x66\x37"
"\x73\x98\x39\xcd\x5c\x66\x6e\x33\x8f\x99\xd7\xcc\x67\xe6\x37\x0b\x98"
"\x05\xcd\x42\x66\x61\xb3\x88\x59\xd4\x2c\x66\x16\x37\x4b\x98\x25\xcd"
"\x52\x66\x69\xb3\x8c\x59\xd6\x2c\x67\x96\x37\x2b\x98\x15\xcd\x4a\x66"
"\x65\xb3\x8a\x59\xd5\xac\x66\x56\x37\x6b\x98\x35\xcd\x5a\x66\x6d\xb3"
"\x8e\x59\xd7\xac\x67\xd6\x37\x1b\x98\x0d\xcd\x46\x66\x63\xb3\x89\xd9"
"\xd4\x6c\x66\x36\x37\x5b\x98\x2d\xcd\x56\x66\x6b\xb3\x8d\xd9\xd6\x6c"
"\x67\xb6\x37\x3b\x98\x1d\xcd\x4e\x66\x67\xb3\x8b\xd9\xd5\xec\x66\x76"
"\x37\x7b\x98\x3d\xcd\x5e\x66\x6f\x13\x33\x71\x93\x30\x49\x93\x32\x69"
"\x93\x31\x59\x93\x33\x79\x53\x30\x45\x53\x32\x65\x53\x31\x55\x53\x33"
"\x75\xd3\x30\x4d\xd3\x32\x6d\xd3\x31\x5d\xd3\x33\x7d\x33\x30\x43\x33"
"\x32\x81\x09\x4d\x64\xc6\xcc\x3e\x66\x5f\xb3\x9f\xd9\xdf\x1c\x60\x0e"
"\x34\x07\x99\x83\xcd\x21\xe6\x50\x73\x98\x39\xdc\x1c\x61\x8e\x34\x47"
"\x99\xa3\xcd\x31\xe6\x58\x73\x9c\x39\xde\x9c\x60\x4e\x34\x27\x99\x93"
"\xcd\x29\xe6\x54\x73\x9a\x39\xdd\x9c\x61\xce\x34\x67\x99\xb3\xcd\x39"
"\xe6\x5c\x73\x9e\x39\xdf\x5c\x60\x2e\x34\x17\x99\x8b\xcd\x25\xe6\x52"
"\x73\x99\xb9\xdc\x5c\x61\xae\x34\x57\x99\xab\xcd\x35\xe6\x5a\x73\x9d"
"\xb9\xde\xdc\x60\x6e\x34\x37\x99\x9b\xcd\x2d\xe6\x56\x73\x9b\xb9\xdd"
"\xdc\x61\xee\x34\x77\x99\xbb\xcd\x3d\xe6\x5e\x73\x9f\xb9\xdf\x3c\x60"
"\x1e\x34\x0f\x99\x87\xcd\x23\xe6\x51\xf3\x98\x79\xdc\x3c\x61\x9e\x34"
"\x4f\x99\xa7\xcd\x33\xe6\x59\xf3\x9c\x79\xde\xbc\x60\x5e\x34\x2f\x99"
"\x97\xcd\x2b\xe6\x55\xf3\x9a\x79\xdd\xbc\x61\xde\x34\x6f\x99\xb7\xcd"
"\x3b\xe6\x5d\xf3\x9e\x79\xdf\x7c\x60\x3e\x34\x1f\x99\x8f\xcd\x27\xe6"
"\x53\xf3\x99\xf9\xdc\x7c\x61\xbe\x34\x5f\x99\xaf\xcd\xff\xcc\x37\xe6"
"\x5b\xf3\x9d\xf9\xde\xfc\x60\x7e\x34\x3f\x99\x9f\xcd\x2f\xe6\x57\xf3"
"\x9b\xf9\xdd\xfc\x61\xfe\x34\x7f\x99\xbf\xcd\x3f\xe6\x5f\xf3\x9f\x99"
"\x60\x26\x9a\x49\x66\x9c\x95\xcc\x4a\x6e\xa5\xb0\x52\x5a\xa9\xac\xd4"
"\x56\xbc\x95\xc6\x4a\x6b\xa5\xb3\xd2\x5b\x19\xac\x8c\x56\x26\x2b\xb3"
"\x95\xc5\xca\x6a\x65\xb3\xb2\x5b\x39\xac\x9c\x56\x2e\x2b\xb7\x95\xc7"
"\xca\x6b\xe5\xb3\xf2\x5b\x05\xac\x82\x56\x21\xab\xb0\x55\xc4\x2a\x6a"
"\x15\xb3\x8a\x5b\x25\xac\x92\x56\x29\xab\xb4\x55\xc6\x2a\x6b\x95\xb3"
"\xca\x5b\x15\xac\x8a\x56\x25\xab\xb2\x55\xc5\xaa\x6a\x55\xb3\xaa\x5b"
"\x35\xac\x9a\x56\x2d\xab\xb6\x55\xc7\xaa\x6b\xd5\xb3\xea\x5b\x0d\xac"
"\x86\x56\x23\xab\xb1\xd5\xc4\x6a\x6a\x35\xb3\x9a\x5b\x2d\xac\x96\x56"
"\x2b\xab\xb5\xd5\xc6\x6a\x6b\xb5\xb3\xda\x5b\x1d\xac\x8e\x56\x27\xab"
"\xb3\xd5\xc5\xea\x6a\x75\xb3\xba\x5b\x3d\xac\x9e\x56\x2f\xab\xb7\x85"
"\x59\xb8\x45\x58\xa4\x45\x59\xb4\xc5\x58\xac\xc5\x59\xbc\x25\x58\xa2"
"\x25\x59\xb2\xa5\x58\xaa\xa5\x59\xba\x65\x58\xa6\x65\x59\xb6\xe5\x58"
"\xae\xe5\x59\xbe\x15\x58\xa1\x15\x59\xc0\x82\x16\xb2\x62\x56\x1f\xab"
"\xaf\xd5\xcf\xea\x6f\x0d\xb0\x06\x5a\x83\xac\xc1\xd6\x10\x6b\xa8\x35"
"\xcc\x1a\x6e\x8d\xb0\x46\x5a\xa3\xac\xd1\xd6\x18\x6b\xac\x35\xce\x1a"
"\x6f\x4d\xb0\x26\x5a\x93\xac\xc9\xd6\x14\x6b\xaa\x35\xcd\x9a\x6e\xcd"
"\xb0\x66\x5a\xb3\xac\xd9\xd6\x1c\x6b\xae\x35\xcf\x9a\x6f\x2d\xb0\x16"
"\x5a\x8b\xac\xc5\xd6\x12\x6b\xa9\xb5\xcc\x5a\x6e\xad\xb0\x56\x5a\xab"
"\xac\xd5\xd6\x1a\x6b\xad\xb5\xce\x5a\x6f\x6d\xb0\x36\x5a\x9b\xac\xcd"
"\xd6\x16\x6b\xab\xb5\xcd\xda\x6e\xed\xb0\x76\x5a\xbb\xac\xdd\xd6\x1e"
"\x6b\xaf\xb5\xcf\xda\x6f\x1d\xb0\x0e\x5a\x87\xac\xc3\xd6\x11\xeb\xa8"
"\x75\xcc\x3a\x6e\x9d\xb0\x4e\x5a\xa7\xac\xd3\xd6\x19\xeb\xac\x75\xce"
"\x3a\x6f\x5d\xb0\x2e\x5a\x97\xac\xcb\xd6\x15\xeb\xaa\x75\xcd\xba\x6e"
"\xdd\xb0\x6e\x5a\xb7\xac\xdb\xd6\x1d\xeb\xae\x75\xcf\xba\x6f\x3d\xb0"
"\x1e\x5a\x8f\xac\xc7\xd6\x13\xeb\xa9\xf5\xcc\x7a\x6e\xbd\xb0\x5e\x5a"
"\xaf\xac\xd7\xd6\x7f\xd6\x1b\xeb\xad\xf5\xce\x7a\x6f\x7d\xb0\x3e\x5a"
"\x9f\xac\xcf\xd6\x17\xeb\xab\xf5\xcd\xfa\x6e\xfd\xb0\x7e\x5a\xbf\xac"
"\xdf\xd6\x1f\xeb\xaf\xf5\xcf\x4a\xb0\x12\xad\x24\x2b\xce\x4e\x66\x27"
"\xb7\x53\xd8\x29\xed\x54\x76\x6a\x3b\xde\x4e\x63\xa7\xb5\xd3\xd9\xe9"
"\xed\x0c\x76\x46\x3b\x93\x9d\xd9\xce\x62\x67\xb5\xb3\xd9\xd9\xed\x1c"
"\x76\x4e\x3b\x97\x9d\xdb\xce\x63\xe7\xb5\xf3\xd9\xf9\xed\x02\x76\x41"
"\xbb\x90\x5d\xd8\x2e\x62\x17\xb5\x8b\xd9\xc5\xed\x12\x76\x49\xbb\x94"
"\x5d\xda\x2e\x63\x97\xb5\xcb\xd9\xe5\xed\x0a\x76\x45\xbb\x92\x5d\xd9"
"\xae\x62\x57\xb5\xab\xd9\xd5\xed\x1a\x76\x4d\xbb\x96\x5d\xdb\xae\x63"
"\xd7\xb5\xeb\xd9\xf5\xed\x06\x76\x43\xbb\x91\xdd\xd8\x6e\x62\x37\xb5"
"\x9b\xd9\xcd\xed\x16\x76\x4b\xbb\x95\xdd\xda\x6e\x63\xb7\xb5\xdb\xd9"
"\xed\xed\x0e\x76\x47\xbb\x93\xdd\xd9\xee\x62\x77\xb5\xbb\xd9\xdd\xed"
"\x1e\x76\x4f\xbb\x97\xdd\xdb\xc6\x6c\xdc\x26\x6c\xd2\xa6\x6c\xda\x66"
"\x6c\xd6\xe6\x6c\xde\x16\x6c\xd1\x96\x6c\xd9\x56\x6c\xd5\xd6\x6c\xdd"
"\x36\x6c\xd3\xb6\x6c\xdb\x76\x6c\xd7\xf6\x6c\xdf\x0e\xec\xd0\x8e\x6c"
"\x60\x43\x1b\xd9\x31\xbb\x8f\xdd\xd7\xee\x67\xf7\xb7\x07\xd8\x03\xed"
"\x41\xf6\x60\x7b\x88\x3d\xd4\x1e\x66\x0f\xb7\x47\xd8\x23\xed\x51\xf6"
"\x68\x7b\x8c\x3d\xd6\x1e\x67\x8f\xb7\x27\xd8\x13\xed\x49\xf6\x64\x7b"
"\x8a\x3d\xd5\x9e\x66\x4f\xb7\x67\xd8\x33\xed\x59\xf6\x6c\x7b\x8e\x3d"
"\xd7\x9e\x67\xcf\xb7\x17\xd8\x0b\xed\x45\xf6\x62\x7b\x89\xbd\xd4\x5e"
"\xad\xb8\xd6\x5c\x1b\xae\x2d\xd7\x8e\x6b\xcf\x75\xe0\x3a\x72\x9d\xb8"
"\xce\x5c\x17\xae\x2b\xd7\x8d\xeb\xce\xf5\xe0\x7a\x72\xbd\xb8\xde\x5c"
"\x1f\xae\x2f\xd7\x8f\xeb\xcf\x0d\xe0\x06\x72\x83\xb8\xc1\xdc\x10\x6e"
"\x28\x37\x8c\x1b\xce\x8d\xe0\x46\x72\xa3\xb8\xd1\xdc\x18\x6e\x2c\x37"
"\x8e\x1b\xcf\x4d\xe0\x26\x72\x93\xb8\xc9\xdc\x14\x2e\x9e\x9b\xca\x4d"
"\xe3\xa6\x73\x33\xb8\x99\xdc\x2c\x6e\x36\x87\x71\x38\x47\x70\x24\x47"
"\x71\x34\xc7\x70\x2c\xc7\x71\x3c\x27\x70\x22\x27\x71\x32\xa7\x70\x2a"
"\xa7\x71\x3a\x67\x70\x26\x67\x71\x36\xe7\x70\x2e\xe7\x71\x3e\x17\x70"
"\x21\x17\x71\x80\x83\x1c\xe2\x62\xdc\x1c\x6e\x2e\xf7\x1f\x37\x8f\x9b"
"\xcf\x2d\xe0\x16\x72\x8b\xb8\xc5\xdc\x12\x6e\x29\xb7\x8c\x5b\xce\xad"
"\xe0\x12\xb8\x95\xdc\x2a\x6e\x35\xb7\x86\x5b\xcb\xad\xe3\xd6\x73\x1b"
"\xb8\x8d\xdc\x26\x6e\x33\xb7\x85\xdb\xca\x6d\xe3\xb6\x73\x3b\xb8\x9d"
"\xdc\x2e\x6e\x37\xb7\x87\xdb\xcb\xed\xe3\xf6\x73\x07\xb8\x83\xdc\x21"
"\xee\x30\x77\x84\x3b\xca\x1d\xe3\x8e\x73\x27\xb8\x93\xdc\x29\xee\x34"
"\x77\x86\x3b\xcb\x9d\xe3\xce\x73\x17\xb8\x8b\xdc\x25\xee\x32\x77\x85"
"\xbb\xca\x5d\xe3\xae\x73\x37\xb8\x9b\xdc\x2d\xee\x36\x77\x87\xbb\xcb"
"\xdd\xe3\xee\x73\x0f\xb8\x87\xdc\x23\xee\x31\xf7\x84\x7b\xca\x3d\xe3"
"\x9e\x73\x2f\xb8\x97\xdc\x2b\xee\x35\xf7\x86\x7b\xcb\xbd\xe3\xde\x73"
"\x1f\xb8\x8f\xdc\x27\xee\x33\xf7\x85\xfb\xca\x7d\xe3\xbe\x73\x3f\xb8"
"\x9f\xdc\x2f\xee\x37\xf7\x87\xfb\xcb\xfd\xe3\x12\xb9\x24\x2e\x19\x9f"
"\x9c\x4f\xc1\xc7\xf1\x29\xf9\x54\x7c\x6a\x3e\x0d\x9f\x96\x4f\xc7\xa7"
"\xe7\x33\xf0\x19\xf9\x4c\x7c\x66\x3e\x0b\x9f\x95\xcf\xc6\x67\xe7\x73"
"\xf0\x39\xf9\x5c\x7c\x6e\x3e\x0f\x9f\x97\xcf\xc7\xe7\xe7\x0b\xf0\x05"
"\xf9\x42\x7c\x61\xbe\x08\x5f\x94\x2f\xc6\x17\xe7\x4b\xf0\x25\xf9\x52"
"\x7c\x69\xbe\x0c\x5f\x96\x2f\xc7\x97\xe7\x2b\xf0\x15\xf9\x4a\x7c\x65"
"\xbe\x0a\x5f\x95\xaf\xc6\x57\xe7\x6b\xf0\x35\xf9\x5a\x7c\x6d\xbe\x0e"
"\x5f\x97\xaf\xc7\xd7\xe7\x1b\xf0\x0d\xf9\x46\x7c\x63\xbe\x09\xdf\x94"
"\x6f\xc6\x37\xe7\x5b\xf0\x2d\xf9\x56\x7c\x6b\xbe\x0d\xdf\x96\x6f\xc7"
"\xb7\xe7\x3b\xf0\x1d\xf9\x4e\x7c\x67\xbe\x0b\xdf\x95\xef\xc6\x77\xe7"
"\x7b\xf0\x3d\xf9\x5e\x7c\x6f\xbe\x0f\xdf\x97\xef\xc7\xf7\xe7\x07\xf0"
"\x03\xf9\x41\xfc\x60\x7e\x08\x3f\x94\x1f\xc6\x0f\xe7\x47\xf0\x23\xf9"
"\x51\xfc\x68\x7e\x0c\x3f\x96\x1f\xc7\x8f\xe7\x27\xf0\x13\xf9\x49\xfc"
"\x64\x7e\x0a\x1f\xcf\x4f\xe5\xa7\xf1\xd3\xf9\x19\xfc\x4c\x7e\x16\x3f"
"\x9b\xc7\x78\x9c\x27\x78\x92\xa7\x78\x9a\x67\x78\x96\xe7\x78\x9e\x17"
"\x78\x91\x97\x78\x99\x57\x78\x95\xd7\x78\x9d\x37\x78\x93\xb7\x78\x9b"
"\x77\x78\x97\xf7\x78\x9f\x0f\xf8\x90\x8f\x78\xc0\x43\x1e\xf1\x31\x7e"
"\x0e\x3f\x97\xff\x8f\x9f\xc7\xcf\xe7\x17\xf0\x0b\xf9\x45\xfc\x62\x7e"
"\x09\xbf\x94\x5f\xc6\x2f\xe7\x57\xf0\x09\xfc\x4a\x7e\x15\xbf\x9a\x5f"
"\xc3\xaf\xe5\xd7\xf1\xeb\xf9\x0d\xfc\x46\x7e\x13\xbf\x99\xdf\xc2\x6f"
"\xe5\xb7\xf1\xdb\xf9\x1d\xfc\x4e\x7e\x17\xbf\x9b\xdf\xc3\xef\xe5\xf7"
"\xf1\xfb\xf9\x03\xfc\x41\xfe\x10\x7f\x98\x3f\xc2\x1f\xe5\x8f\xf1\xc7"
"\xf9\x13\xfc\x49\xfe\x14\x7f\x9a\x3f\xc3\x9f\xe5\xcf\xf1\xe7\xf9\x0b"
"\xfc\x45\xfe\x12\x7f\x99\xbf\xc2\x5f\xe5\xaf\xf1\xd7\xf9\x1b\xfc\x4d"
"\xfe\x16\x7f\x9b\xbf\xc3\xdf\xe5\xef\xf1\xf7\xf9\x07\xfc\x43\xfe\x11"
"\xff\x98\x7f\xc2\x3f\xe5\x9f\xf1\xcf\xf9\x17\xfc\x4b\xfe\x15\xff\x9a"
"\x7f\xc3\xbf\xe5\xdf\xf1\xef\xf9\x0f\xfc\x47\xfe\x13\xff\x99\xff\xc2"
"\x7f\xe5\xbf\xf1\xdf\xf9\x1f\xfc\x4f\xfe\x17\xff\x9b\xff\xc3\xff\xe5"
"\xff\xf1\x89\x7c\x12\x9f\x4c\x48\x2e\xa4\x10\xe2\x84\x94\x42\x2a\x21"
"\xb5\x90\x46\x48\x2b\xa4\x13\xd2\x0b\x19\x84\x8c\x42\x26\x21\xb3\x90"
"\x45\xc8\x2a\x64\x13\xb2\x0b\x39\x84\x9c\x42\x2e\x21\xb7\x90\x47\xc8"
"\x2b\xe4\x13\xf2\x0b\x05\x84\x82\x42\x21\xa1\xb0\x50\x44\x28\x2a\x14"
"\x13\x8a\x0b\x25\x84\x92\x42\x29\xa1\xb4\x50\x46\x28\x2b\x94\x13\xca"
"\x0b\x15\x84\x8a\x42\x25\xa1\xb2\x50\x45\xa8\x2a\x54\x13\xaa\x0b\x35"
"\x84\x9a\x42\x2d\xa1\xb6\x50\x47\xa8\x2b\xd4\x13\xea\x0b\x0d\x84\x86"
"\x42\x23\xa1\xb1\xd0\x44\x68\x2a\x34\x13\x9a\x0b\x2d\x84\x96\x42\x2b"
"\xa1\xb5\xd0\x46\x68\x2b\xb4\x13\xda\x0b\x1d\x84\x8e\x42\x27\xa1\xb3"
"\xd0\x45\xe8\x2a\x74\x13\xba\x0b\x3d\x84\x9e\x42\x2f\xa1\xb7\xd0\x47"
"\xe8\x2b\xf4\x13\xfa\x0b\x03\x84\x81\xc2\x20\x61\xb0\x30\x44\x18\x2a"
"\x0c\x13\x86\x0b\x23\x84\x91\xc2\x28\x61\xb4\x30\x46\x18\x2b\x8c\x13"
"\xc6\x0b\x13\x84\x89\xc2\x24\x61\xb2\x30\x45\x88\x17\xa6\x0a\xd3\x84"
"\xe9\xc2\x0c\x61\xa6\x30\x4b\x98\x2d\x60\x02\x2e\x10\x02\x29\x50\x02"
"\x2d\x30\x02\x2b\x70\x02\x2f\x08\x82\x28\x48\x82\x2c\x28\x82\x2a\x68"
"\x82\x2e\x18\x82\x29\x58\x82\x2d\x38\x82\x2b\x78\x82\x2f\x04\x42\x28"
"\x44\x02\x10\xa0\x80\x84\x98\x30\x47\x98\x2b\xfc\x27\xcc\x13\xe6\x0b"
"\x0b\x84\x85\xc2\x22\x61\xb1\xb0\x44\x58\x2a\x2c\x13\x96\x0b\x2b\x84"
"\x04\x61\xa5\xb0\x4a\x58\x2d\xac\x11\xd6\x0a\xeb\x84\xf5\xc2\x06\x61"
"\xa3\xb0\x49\xd8\x2c\x6c\x11\xb6\x0a\xdb\x84\xed\xc2\x0e\x61\xa7\xb0"
"\x4b\xd8\x2d\xec\x11\xf6\x0a\xfb\x84\xfd\xc2\x01\xe1\xa0\x70\x48\x38"
"\x2c\x1c\x11\x8e\x0a\xc7\x84\xe3\xc2\x09\xe1\xa4\x70\x4a\x38\x2d\x9c"
"\x11\xce\x0a\xe7\x84\xf3\xc2\x05\xe1\xa2\x70\x49\xb8\x2c\x5c\x11\xae"
"\x0a\xd7\x84\xeb\xc2\x0d\xe1\xa6\x70\x4b\xb8\x2d\xdc\x11\xee\x0a\xf7"
"\x84\xfb\xc2\x03\xe1\xa1\xf0\x48\x78\x2c\x3c\x11\x9e\x0a\xcf\x84\xe7"
"\xc2\x0b\xe1\xa5\xf0\x4a\x78\x2d\xbc\x11\xde\x0a\xef\x84\xf7\xc2\x07"
"\xe1\xa3\xf0\x49\xf8\x2c\x7c\x11\xbe\x0a\xdf\x84\xef\xc2\x0f\xe1\xa7"
"\xf0\x4b\xf8\x2d\xfc\x11\xfe\x0a\xff\x84\x44\x21\x49\x48\x26\x26\x17"
"\x53\x88\x71\x62\x4a\x31\x95\x98\x5a\x4c\x23\xa6\x15\xd3\x89\xe9\xc5"
"\x0c\x62\x46\x31\x93\x98\x59\xcc\x22\x66\x15\xb3\x89\xd9\xc5\x1c\x62"
"\x4e\x31\x97\x98\x5b\xcc\x23\xe6\x15\xf3\x89\xf9\xc5\x02\x62\x41\xb1"
"\x90\x58\x58\x2c\x22\x16\x15\x8b\x89\xc5\xc5\x12\x62\x49\xb1\x94\x58"
"\x5a\x2c\x23\x96\x15\xcb\x89\xe5\xc5\x0a\x62\x45\xb1\x92\x58\x59\xac"
"\x22\x56\x15\xab\x89\xd5\xc5\x1a\x62\x4d\xb1\x96\x58\x5b\xac\x23\xd6"
"\x15\xeb\x89\xf5\xc5\x06\x62\x43\xb1\x91\xd8\x58\x6c\x22\x36\x15\x9b"
"\x89\xcd\xc5\x16\x62\x4b\xb1\x95\xd8\x5a\x6c\x23\xb6\x15\xdb\x89\xed"
"\xc5\x0e\x62\x47\xb1\x93\xd8\x59\xec\x22\x76\x15\xbb\x89\xdd\xc5\x1e"
"\x62\x4f\xb1\x97\xd8\x5b\xec\x23\xf6\x15\xfb\x89\xfd\xc5\x01\xe2\x40"
"\x71\x90\x38\x58\x1c\x22\x0e\x15\x87\x89\xc3\xc5\x11\xe2\x48\x71\x94"
"\x38\x5a\x1c\x23\x8e\x15\xc7\x89\xe3\xc5\x09\xe2\x44\x71\x92\x38\x59"
"\x9c\x22\xc6\x8b\x53\xc5\x69\xe2\x74\x71\x86\x38\x53\x9c\x25\xce\x16"
"\x31\x11\x17\x09\x91\x14\x29\x91\x16\x19\x91\x15\x39\x91\x17\x05\x51"
"\x14\x25\x51\x16\x15\x51\x15\x35\x51\x17\x0d\xd1\x14\x2d\xd1\x16\x1d"
"\xd1\x15\x3d\xd1\x17\x03\x31\x14\x23\x11\x88\x50\x44\x62\x4c\x9c\x23"
"\xce\x15\xff\x13\xe7\x89\xf3\xc5\x05\xe2\x42\x71\x91\xb8\x58\x5c\x22"
"\x2e\x15\x97\x89\xcb\xc5\x15\x62\x82\xb8\x52\x5c\x25\xae\x16\xd7\x88"
"\x6b\xc5\x75\xe2\x7a\x71\x83\xb8\x51\xdc\x24\x6e\x16\xb7\x88\x5b\xc5"
"\x6d\xe2\x76\x71\x87\xb8\x53\xdc\x25\xee\x16\xf7\x88\x7b\xc5\x7d\xe2"
"\x7e\xf1\x80\x78\x50\x3c\x24\x1e\x16\x8f\x88\x47\xc5\x63\xe2\x71\xf1"
"\x84\x78\x52\x3c\x25\x9e\x16\xcf\x88\x67\xc5\x73\xe2\x79\xf1\x82\x78"
"\x51\xbc\x24\x5e\x16\xaf\x88\x57\xc5\x6b\xe2\x75\xf1\x86\x78\x53\xbc"
"\x25\xde\x16\xef\x88\x77\xc5\x7b\xe2\x7d\xf1\x81\xf8\x50\x7c\x24\x3e"
"\x16\x9f\x88\x4f\xc5\x67\xe2\x73\xf1\x85\xf8\x52\x7c\x25\xbe\x16\xdf"
"\x88\x6f\xc5\x77\xe2\x7b\xf1\x83\xf8\x51\xfc\x24\x7e\x16\xbf\x88\x5f"
"\xc5\x6f\xe2\x77\xf1\x87\xf8\x53\xfc\x25\xfe\x16\xff\x88\x7f\xc5\x7f"
"\x62\xa2\x98\x24\x26\x93\x92\x4b\x29\xa4\x38\x29\xa5\x94\x4a\x4a\x2d"
"\xa5\x91\xd2\x4a\xe9\xa4\xf4\x52\x06\x29\xa3\x94\x49\xca\x2c\x65\x91"
"\xb2\x4a\xd9\xa4\xec\x52\x0e\x29\xa7\x94\x4b\xca\x2d\xe5\x91\xf2\x4a"
"\xf9\xa4\xfc\x52\x01\xa9\xa0\x54\x48\x2a\x2c\x15\x91\x8a\x4a\xc5\xa4"
"\xe2\x52\x09\xa9\xa4\x54\x4a\x2a\x2d\x95\x91\xca\x4a\xe5\xa4\xf2\x52"
"\x05\xa9\xa2\x54\x49\xaa\x2c\x55\x91\xaa\x4a\xd5\xa4\xea\x52\x0d\xa9"
"\xa6\x54\x4b\xaa\x2d\xd5\x91\xea\x4a\xf5\xa4\xfa\x52\x03\xa9\xa1\xd4"
"\x48\x6a\x2c\x35\x91\x9a\x4a\xcd\xa4\xe6\x52\x0b\xa9\xa5\xd4\x4a\x6a"
"\x2d\xb5\x91\xda\x4a\xed\xa4\xf6\x52\x07\xa9\xa3\xd4\x49\xea\x2c\x75"
"\x91\xba\x4a\xdd\xa4\xee\x52\x0f\xa9\xa7\xd4\x4b\xea\x2d\xf5\x91\xfa"
"\x4a\xfd\xa4\xfe\xd2\x00\x69\xa0\x34\x48\x1a\x2c\x0d\x91\x86\x4a\xc3"
"\xa4\xe1\xd2\x08\x69\xa4\x34\x4a\x1a\x2d\x8d\x91\xc6\x4a\xe3\xa4\xf1"
"\xd2\x04\x69\xa2\x34\x49\x9a\x2c\x4d\x91\xe2\xa5\xa9\xd2\x34\x69\xba"
"\x34\x43\x9a\x29\xcd\x92\x66\x4b\x98\x84\x4b\x84\x44\x4a\x94\x44\x4b"
"\x8c\xc4\x4a\x9c\xc4\x4b\x82\x24\x4a\x92\x24\x4b\x8a\xa4\x4a\x9a\xa4"
"\x4b\x86\x64\x4a\x96\x64\x4b\x8e\xe4\x4a\x9e\xe4\x4b\x81\x14\x4a\x91"
"\x04\x24\x28\x21\x29\x26\xcd\x91\xe6\x4a\xff\x49\xf3\xa4\xf9\xd2\x02"
"\x69\xa1\xb4\x48\x5a\x2c\x2d\x91\x96\x4a\xcb\xa4\xe5\xd2\x0a\x29\x41"
"\x5a\x29\xad\x92\x56\x4b\x6b\xa4\xb5\xd2\x3a\x69\xbd\xb4\x41\xda\x28"
"\x6d\x92\x36\x4b\x5b\xa4\xad\xd2\x36\x69\xbb\xb4\x43\xda\x29\xed\x92"
"\x76\x4b\x7b\xa4\xbd\xd2\x3e\x69\xbf\x74\x40\x3a\x28\x1d\x92\x0e\x4b"
"\x47\xa4\xa3\xd2\x31\xe9\xb8\x74\x42\x3a\x29\x9d\x92\x4e\x4b\x67\xa4"
"\xb3\xd2\x39\xe9\xbc\x74\x41\xba\x28\x5d\x92\x2e\x4b\x57\xa4\xab\xd2"
"\x35\xe9\xba\x74\x43\xba\x29\xdd\x92\x6e\x4b\x77\xa4\xbb\xd2\x3d\xe9"
"\xbe\xf4\x40\x7a\x28\x3d\x92\x1e\x4b\x4f\xa4\xa7\xd2\x33\xe9\xb9\xf4"
"\x42\x7a\x29\xbd\x92\x5e\x4b\x6f\xa4\xb7\xd2\x3b\xe9\xbd\xf4\x41\xfa"
"\x28\x7d\x92\x3e\x4b\x5f\xa4\xaf\xd2\x37\xe9\xbb\xf4\x43\xfa\x29\xfd"
"\x92\x7e\x4b\x7f\xa4\xbf\xd2\x3f\x29\x51\x4a\x92\x92\xc9\xc9\xe5\x14"
"\x72\x9c\x9c\x52\x4e\x25\xa7\x96\xd3\xc8\x69\xe5\x74\x72\x7a\x39\x83"
"\x9c\x51\xce\x24\x67\x96\xb3\xc8\x59\xe5\x6c\x72\x76\x39\x87\x9c\x53"
"\xce\x25\xe7\x96\xf3\xc8\x79\xe5\x7c\x72\x7e\xb9\x80\x5c\x50\x2e\x24"
"\x17\x96\x8b\xc8\x45\xe5\x62\x72\x71\xb9\x84\x5c\x52\x2e\x25\x97\x96"
"\xcb\xc8\x65\xe5\x72\x72\x79\xb9\x82\x5c\x51\xae\x24\x57\x96\xab\xc8"
"\x55\xe5\x6a\x72\x75\xb9\x86\x5c\x53\xae\x25\xd7\x96\xeb\xc8\x75\xe5"
"\x7a\x72\x7d\xb9\x81\xdc\x50\x6e\x24\x37\x96\x9b\xc8\x4d\xe5\x66\x72"
"\x73\xb9\x85\xdc\x52\x6e\x25\xb7\x96\xdb\xc8\x6d\xe5\x76\x72\x7b\xb9"
"\x83\xdc\x51\xee\x24\x77\x96\xbb\xc8\x5d\xe5\x6e\x72\x77\xb9\x87\xdc"
"\x53\xee\x25\xf7\x96\xfb\xc8\x7d\xe5\x7e\x72\x7f\x79\x80\x3c\x50\x1e"
"\x24\x0f\x96\x87\xc8\x43\xe5\x61\xf2\x70\x79\x84\x3c\x52\x1e\x25\x8f"
"\x96\xc7\xc8\x63\xe5\x71\xf2\x78\x79\x82\x3c\x51\x9e\x24\x4f\x96\xa7"
"\xc8\xf1\xf2\x54\x79\x9a\x3c\x5d\x9e\x21\xcf\x94\x67\xc9\xb3\x65\x4c"
"\xc6\x65\x42\x26\x65\x4a\xa6\x65\x46\x66\x65\x4e\xe6\x65\x41\x16\x65"
"\x49\x96\x65\x45\x56\x65\x4d\xd6\x65\x43\x36\x65\x4b\xb6\x65\x47\x76"
"\x65\x4f\xf6\xe5\x40\x0e\xe5\x48\x06\x32\x94\x91\x1c\x93\xe7\xc8\x73"
"\xe5\x38\x79\x9e\x3c\x5f\x5e\x20\x2f\x94\x17\xc9\x8b\xe5\x25\xf2\x52"
"\x79\x99\xbc\x5c\x5e\x21\x27\xc8\x2b\xe5\x55\xf2\x6a\x79\x8d\xbc\x56"
"\x5e\x27\xaf\x97\x37\xc8\x1b\xe5\x4d\xf2\x66\x79\x8b\xbc\x55\xde\x26"
"\x6f\x97\x77\xc8\x3b\xe5\x5d\xf2\x6e\x79\x8f\xbc\x57\xde\x27\xef\x97"
"\x0f\xc8\x07\xe5\x43\xf2\x61\xf9\x88\x7c\x54\x3e\x26\x1f\x97\x4f\xc8"
"\x27\xe5\x53\xf2\x69\xf9\x8c\x7c\x56\x3e\x27\x9f\x97\x2f\xc8\x17\xe5"
"\x4b\xf2\x65\xf9\x8a\x7c\x55\xbe\x26\x5f\x97\x6f\xc8\x37\xe5\x5b\xf2"
"\x6d\xf9\x8e\x7c\x57\xbe\x27\xdf\x97\x1f\xc8\x0f\xe5\x47\xf2\x63\xf9"
"\x89\xfc\x54\x7e\x26\x3f\x97\x5f\xc8\x2f\xe5\x57\xf2\x6b\xf9\x8d\xfc"
"\x56\x7e\x27\xbf\x97\x3f\xc8\x1f\xe5\x4f\xf2\x67\xf9\x8b\xfc\x55\xfe"
"\x26\x7f\x97\x7f\xc8\x3f\xe5\x5f\xf2\x6f\xf9\x8f\xfc\x57\xfe\x27\x27"
"\xca\x49\x72\x32\x25\xb9\x92\x42\x89\x53\x52\x2a\xa9\x94\xd4\x4a\x1a"
"\x25\xad\x92\x4e\x49\xaf\x64\x50\x32\x2a\x99\x94\xcc\x4a\x16\x25\xab"
"\x92\x4d\xc9\xae\xe4\x50\x72\x2a\xb9\x94\xdc\x4a\x1e\x25\xaf\x92\x4f"
"\xc9\xaf\x14\x50\x0a\x2a\x85\x94\xc2\x4a\x11\xa5\xa8\x52\x4c\x29\xae"
"\x94\x50\x4a\x2a\xa5\x94\xd2\x4a\x19\xa5\xac\x52\x4e\x29\xaf\x54\x50"
"\x2a\x2a\x95\x94\xca\x4a\x15\xa5\xaa\x52\x4d\xa9\xae\xd4\x50\x6a\x2a"
"\xb5\x94\xda\x4a\x1d\xa5\xae\x52\x4f\xa9\xaf\x34\x50\x1a\x2a\x8d\x94"
"\xc6\x4a\x13\xa5\xa9\xd2\x4c\x69\xae\xb4\x50\x5a\x2a\xad\x94\xd6\x4a"
"\x1b\xa5\xad\xd2\x4e\x69\xaf\x74\x50\x3a\x2a\x9d\x94\xce\x4a\x17\xa5"
"\xab\xd2\x4d\xe9\xae\xf4\x50\x7a\x2a\xbd\x94\xde\x4a\x1f\xa5\xaf\xd2"
"\x4f\xe9\xaf\x0c\x50\x06\x2a\x83\x94\xc1\xca\x10\x65\xa8\x32\x4c\x19"
"\xae\x8c\x50\x46\x2a\xa3\x94\xd1\xca\x18\x65\xac\x32\x4e\x19\xaf\x4c"
"\x50\x26\x2a\x93\x94\xc9\xca\x14\x25\x5e\x99\xaa\x4c\x53\xa6\x2b\x33"
"\x94\x99\xca\x2c\x65\xb6\x82\x29\xb8\x42\x28\xa4\x42\x29\xb4\xc2\x28"
"\xac\xc2\x29\xbc\x22\x28\xa2\x22\x29\xb2\xa2\x28\xaa\xa2\x29\xba\x62"
"\x28\xa6\x62\x29\xb6\xe2\x28\xae\xe2\x29\xbe\x12\x28\xa1\x12\x29\x40"
"\x81\x0a\x52\x62\xca\x1c\x65\xae\xf2\x9f\x32\x4f\x99\xaf\x2c\x50\x16"
"\x2a\x8b\x94\xc5\xca\x12\x65\xa9\xb2\x4c\x59\xae\xac\x50\x12\x94\x95"
"\xca\x2a\x65\xb5\xb2\x46\x59\xab\xac\x53\xd6\x2b\x1b\x94\x8d\xca\x26"
"\x65\xb3\xb2\x45\xd9\xaa\x6c\x53\xb6\x2b\x3b\x94\x9d\xca\x2e\x65\xb7"
"\xb2\x47\xd9\xab\xec\x53\xf6\x2b\x07\x94\x83\xca\x21\xe5\xb0\x72\x44"
"\x39\xaa\x1c\x53\x8e\x2b\x27\x94\x93\xca\x29\xe5\xb4\x72\x46\x39\xab"
"\x9c\x53\xce\x2b\x17\x94\x8b\xca\x25\xe5\xb2\x72\x45\xb9\xaa\x5c\x53"
"\xae\x2b\x37\x94\x9b\xca\x2d\xe5\xb6\x72\x47\xb9\xab\xdc\x53\xee\x2b"
"\x0f\x94\x87\xca\x23\xe5\xb1\xf2\x44\x79\xaa\x3c\x53\x9e\x2b\x2f\x94"
"\x97\xca\x2b\xe5\xb5\xf2\x46\x79\xab\xbc\x53\xde\x2b\x1f\x94\x8f\xca"
"\x27\xe5\xb3\xf2\x45\xf9\xaa\x7c\x53\xbe\x2b\x3f\x94\x9f\xca\x2f\xe5"
"\xb7\xf2\x47\xf9\xab\xfc\x53\x12\x95\x24\x25\x99\x9a\x5c\x4d\xa1\xc6"
"\xa9\x29\xd5\x54\x6a\x6a\x35\x8d\x9a\x56\x4d\xa7\xa6\x57\x33\xa8\x19"
"\xd5\x4c\x6a\x66\x35\x8b\x9a\x55\xcd\xa6\x66\x57\x73\xa8\x39\xd5\x5c"
"\x6a\x6e\x35\x8f\x9a\x57\xcd\xa7\xe6\x57\x0b\xa8\x05\xd5\x42\x6a\x61"
"\xb5\x88\x5a\x54\x2d\xa6\x16\x57\x4b\xa8\x25\xd5\x52\x6a\x69\xb5\x8c"
"\x5a\x56\x2d\xa7\x96\x57\x2b\xa8\x15\xd5\x4a\x6a\x65\xb5\x8a\x5a\x55"
"\xad\xa6\x56\x57\x6b\xa8\x35\xd5\x5a\x6a\x6d\xb5\x8e\x5a\x57\xad\xa7"
"\xd6\x57\x1b\xa8\x0d\xd5\x46\x6a\x63\xb5\x89\xda\x54\x6d\xa6\x36\x57"
"\x5b\xa8\x2d\xd5\x56\x6a\x6b\xb5\x8d\xda\x56\x6d\xa7\xb6\x57\x3b\xa8"
"\x1d\xd5\x4e\x6a\x67\xb5\x8b\xda\x55\xed\xa6\x76\x57\x7b\xa8\x3d\xd5"
"\x5e\x6a\x6f\xb5\x8f\xda\x57\xed\xa7\xf6\x57\x07\xa8\x03\xd5\x41\xea"
"\x60\x75\x88\x3a\x54\x1d\xa6\x0e\x57\x47\xa8\x23\xd5\x51\xea\x68\x75"
"\x8c\x3a\x56\x1d\xa7\x8e\x57\x27\xa8\x13\xd5\x49\xea\x64\x75\x8a\x1a"
"\xaf\x4e\x55\xa7\xa9\xd3\xd5\x19\xea\x4c\x75\x96\x3a\x5b\xc5\x54\x5c"
"\x25\x54\x52\xa5\x54\x5a\x65\x54\x56\xe5\x54\x5e\x15\x54\x51\x95\x54"
"\x59\x55\x54\x55\xd5\x54\x5d\x35\x54\x53\xb5\x54\x5b\x75\x54\x57\xf5"
"\x54\x5f\x0d\xd4\x50\x8d\x54\xa0\x42\x15\xa9\x31\x75\x8e\x3a\x57\xfd"
"\x4f\x9d\xa7\xce\x57\x17\xa8\x0b\xd5\x45\xea\x62\x75\x89\xba\x54\x5d"
"\xa6\x2e\x57\x57\xa8\x09\xea\x4a\x75\x95\xba\x5a\x5d\xa3\xae\x55\xd7"
"\xa9\xeb\xd5\x0d\xea\x46\x75\x93\xba\x59\xdd\xa2\x6e\x55\xb7\xa9\xdb"
"\xd5\x1d\xea\x4e\x75\x97\xba\x5b\xdd\xa3\xee\x55\xf7\xa9\xfb\xd5\x03"
"\xea\x41\xf5\x90\x7a\x58\x3d\xa2\x1e\x55\x8f\xa9\xc7\xd5\x13\xea\x49"
"\xf5\x94\x7a\x5a\x3d\xa3\x9e\x55\xcf\xa9\xe7\xd5\x0b\xea\x45\xf5\x92"
"\x7a\x59\xbd\xa2\x5e\x55\xaf\xa9\xd7\xd5\x1b\xea\x4d\xf5\x96\x7a\x5b"
"\xbd\xa3\xde\x55\xef\xa9\xf7\xd5\x07\xea\x43\xf5\x91\xfa\x58\x7d\xa2"
"\x3e\x55\x9f\xa9\xcf\xd5\x17\xea\x4b\xf5\x95\xfa\x5a\x7d\xa3\xbe\x55"
"\xdf\xa9\xef\xd5\x0f\xea\x47\xf5\x93\xfa\x59\xfd\xa2\x7e\x55\xbf\xa9"
"\xdf\xd5\x1f\xea\x4f\xf5\x97\xfa\x5b\xfd\xa3\xfe\x55\xff\xa9\x89\x6a"
"\x92\x9a\x4c\x4b\xae\xa5\xd0\xe2\xb4\x94\x5a\x2a\x2d\xb5\x96\x46\x4b"
"\xab\xa5\xd3\xd2\x6b\x19\xb4\x8c\x5a\x26\x2d\xb3\x96\x45\xcb\xaa\x65"
"\xd3\xb2\x6b\x39\xb4\x9c\x5a\x2e\x2d\xb7\x96\x47\xcb\xab\xe5\xd3\xf2"
"\x6b\x05\xb4\x82\x5a\x21\xad\xb0\x56\x44\x2b\xaa\x15\xd3\x8a\x6b\x25"
"\xb4\x92\x5a\x29\xad\xb4\x56\x46\x2b\xab\x95\xd3\xca\x6b\x15\xb4\x8a"
"\x5a\x25\xad\xb2\x56\x45\xab\xaa\x55\xd3\xaa\x6b\x35\xb4\x9a\x5a\x2d"
"\xad\xb6\x56\x47\xab\xab\xd5\xd3\xea\x6b\x0d\xb4\x86\x5a\x23\xad\xb1"
"\xd6\x44\x6b\xaa\x35\xd3\x9a\x6b\x2d\xb4\x96\x5a\x2b\xad\xb5\xd6\x46"
"\x6b\xab\xb5\xd3\xda\x6b\x1d\xb4\x8e\x5a\x27\xad\xb3\xd6\x45\xeb\xaa"
"\x75\xd3\xba\x6b\x3d\xb4\x9e\x5a\x2f\xad\xb7\xd6\x47\xeb\xab\xf5\xd3"
"\xfa\x6b\x03\xb4\x81\xda\x20\x6d\xb0\x36\x44\x1b\xaa\x0d\xd3\x86\x6b"
"\x23\xb4\x91\xda\x28\x6d\xb4\x36\x46\x1b\xab\x8d\xd3\xc6\x6b\x13\xb4"
"\x89\xda\x24\x6d\xb2\x36\x45\x8b\xd7\xa6\x6a\xd3\xb4\xe9\xda\x0c\x6d"
"\xa6\x36\x4b\x9b\xad\x61\x1a\xae\x11\x1a\xa9\x51\x1a\xad\x31\x1a\xab"
"\x71\x1a\xaf\x09\x9a\xa8\x49\x9a\xac\x29\x9a\xaa\x69\x9a\xae\x19\x9a"
"\xa9\x59\x9a\xad\x39\x9a\xab\x79\x9a\xaf\x05\x5a\xa8\x45\x1a\xd0\xa0"
"\x86\xb4\x98\x36\x47\x9b\xab\xfd\xa7\xcd\xd3\xe6\x6b\x0b\xb4\x85\xda"
"\x22\x6d\xb1\xb6\x44\x5b\xaa\x2d\xd3\x96\x6b\x2b\xb4\x04\x6d\xa5\xb6"
"\x4a\x5b\xad\xad\xd1\xd6\x6a\xeb\xb4\xf5\xda\x06\x6d\xa3\xb6\x49\xdb"
"\xac\x6d\xd1\xb6\x6a\xdb\xb4\xed\xda\x0e\x6d\xa7\xb6\x4b\xdb\xad\xed"
"\xd1\xf6\x6a\xfb\xb4\xfd\xda\x01\xed\xa0\x76\x48\x3b\xac\x1d\xd1\x8e"
"\x6a\xc7\xb4\xe3\xda\x09\xed\xa4\x76\x4a\x3b\xad\x9d\xd1\xce\x6a\xe7"
"\xb4\xf3\xda\x05\xed\xa2\x76\x49\xbb\xac\x5d\xd1\xae\x6a\xd7\xb4\xeb"
"\xda\x0d\xed\xa6\x76\x4b\xbb\xad\xdd\xd1\xee\x6a\xf7\xb4\xfb\xda\x03"
"\xed\xa1\xf6\x48\x7b\xac\x3d\xd1\x9e\x6a\xcf\xb4\xe7\xda\x0b\xed\xa5"
"\xf6\x4a\x7b\xad\xbd\xd1\xde\x6a\xef\xb4\xf7\xda\x07\xed\xa3\xf6\x49"
"\xfb\xac\x7d\xd1\xbe\x6a\xdf\xb4\xef\xda\x0f\xed\xa7\xf6\x4b\xfb\xad"
"\xfd\xd1\xfe\x6a\xff\xb4\x44\x2d\x49\x4b\xa6\x27\xd7\x53\xe8\x71\x7a"
"\x4a\x3d\x95\x9e\x5a\x4f\xa3\xa7\xd5\xd3\xe9\xe9\xf5\x0c\x7a\x46\x3d"
"\x93\x9e\x59\xcf\xa2\x67\xd5\xb3\xe9\xd9\xf5\x1c\x7a\x4e\x3d\x97\x9e"
"\x5b\xcf\xa3\xe7\xd5\xf3\xe9\xf9\xf5\x02\x7a\x41\xbd\x90\x5e\x58\x2f"
"\xa2\x17\xd5\x8b\xe9\xc5\xf5\x12\x7a\x49\xbd\x94\x5e\x5a\x2f\xa3\x97"
"\xd5\xcb\xe9\xe5\xf5\x0a\x7a\x45\xbd\x92\x5e\x59\xaf\xa2\x57\xd5\xab"
"\xe9\xd5\xf5\x1a\x7a\x4d\xbd\x96\x5e\x5b\xaf\xa3\xd7\xd5\xeb\xe9\xf5"
"\xf5\x06\x7a\x43\xbd\x91\xde\x58\x6f\xa2\x37\xd5\x9b\xe9\xcd\xf5\x16"
"\x7a\x4b\xbd\x95\xde\x5a\x6f\xa3\xb7\xd5\xdb\xe9\xed\xf5\x0e\x7a\x47"
"\xbd\x93\xde\x59\xef\xa2\x77\xd5\xbb\xe9\xdd\xf5\x1e\x7a\x4f\xbd\x97"
"\xde\x5b\xef\xa3\xf7\xd5\xfb\xe9\xfd\xf5\x01\xfa\x40\x7d\x90\x3e\x58"
"\x1f\xa2\x0f\xd5\x87\xe9\xc3\xf5\x11\xfa\x48\x7d\x94\x3e\x5a\x1f\xa3"
"\x8f\xd5\xc7\xe9\xe3\xf5\x09\xfa\x44\x7d\x92\x3e\x59\x9f\xa2\xc7\xeb"
"\x53\xf5\x69\xfa\x74\x7d\x86\x3e\x53\x9f\xa5\xcf\xd6\x31\x1d\xd7\x09"
"\x9d\xd4\x29\x9d\xd6\x19\x9d\xd5\x39\x9d\xd7\x05\x5d\xd4\x25\x5d\xd6"
"\x15\x5d\xd5\x35\x5d\xd7\x0d\xdd\xd4\x2d\xdd\xd6\x1d\xdd\xd5\x3d\xdd"
"\xd7\x03\x3d\xd4\x23\x1d\xe8\x50\x47\x7a\x4c\x9f\xa3\xcf\xd5\xff\xd3"
"\xe7\xe9\xf3\xf5\x05\xfa\x42\x7d\x91\xbe\x58\x5f\xa2\x2f\xd5\x97\xe9"
"\xcb\xf5\x15\x7a\x82\xbe\x52\x5f\xa5\xaf\xd6\xd7\xe8\x6b\xf5\x75\xfa"
"\x7a\x7d\x83\xbe\x51\xdf\xa4\x6f\xd6\xb7\xe8\x5b\xf5\x6d\xfa\x76\x7d"
"\x87\xbe\x53\xdf\xa5\xef\xd6\xf7\xe8\x7b\xf5\x7d\xfa\x7e\xfd\x80\x7e"
"\x50\x3f\xa4\x1f\xd6\x8f\xe8\x47\xf5\x63\xfa\x71\xfd\x84\x7e\x52\x3f"
"\xa5\x9f\xd6\xcf\xe8\x67\xf5\x73\xfa\x79\xfd\x82\x7e\x51\xbf\xa4\x5f"
"\xd6\xaf\xe8\x57\xf5\x6b\xfa\x75\xfd\x86\x7e\x53\xbf\xa5\xdf\xd6\xef"
"\xe8\x77\xf5\x7b\xfa\x7d\xfd\x81\xfe\x50\x7f\xa4\x3f\xd6\x9f\xe8\x4f"
"\xf5\x67\xfa\x73\xfd\x85\xfe\x52\x7f\xa5\xbf\xd6\xdf\xe8\x6f\xf5\x77"
"\xfa\x7b\xfd\x83\xfe\x51\xff\xa4\x7f\xd6\xbf\xe8\x5f\xf5\x6f\xfa\x77"
"\xfd\x87\xfe\x53\xff\xa5\xff\xd6\xff\xe8\x7f\xf5\x7f\x7a\xa2\x9e\xa4"
"\x27\x33\x92\x1b\x29\x8c\x38\x23\xa5\x91\xca\x48\x6d\xa4\x31\xd2\x1a"
"\xe9\x8c\xf4\x46\x06\x23\xa3\x91\xc9\xc8\x6c\x64\x31\xb2\x1a\xd9\x8c"
"\xec\x46\x0e\x23\xa7\x91\xcb\xc8\x6d\xe4\x31\xf2\x1a\xf9\x8c\xfc\x46"
"\x01\xa3\xa0\x51\xc8\x28\x6c\x14\x31\x8a\x1a\xc5\x8c\xe2\x46\x09\xa3"
"\xa4\x51\xca\x28\x6d\x94\x31\xca\x1a\xe5\x8c\xf2\x46\x05\xa3\xa2\x51"
"\xc9\xa8\x6c\x54\x31\xaa\x1a\xd5\x8c\xea\x46\x0d\xa3\xa6\x51\xcb\xa8"
"\x6d\xd4\x31\xea\x1a\xf5\x8c\xfa\x46\x03\xa3\xa1\xd1\xc8\x68\x6c\x34"
"\x31\x9a\x1a\xcd\x8c\xe6\x46\x0b\xa3\xa5\xd1\xca\x68\x6d\xb4\x31\xda"
"\x1a\xed\x8c\xf6\x46\x07\xa3\xa3\xd1\xc9\xe8\x6c\x74\x31\xba\x1a\xdd"
"\x8c\xee\x46\x0f\xa3\xa7\xd1\xcb\xe8\x6d\xf4\x31\xfa\x1a\xfd\x8c\xfe"
"\xc6\x00\x63\xa0\x31\xc8\x18\x6c\x0c\x31\x86\x1a\xc3\x8c\xe1\xc6\x08"
"\x63\xa4\x31\xca\x18\x6d\x8c\x31\xc6\x1a\xe3\x8c\xf1\xc6\x04\x63\xa2"
"\x31\xc9\x98\x6c\x4c\x31\xe2\x8d\xa9\xc6\x34\x63\xba\x31\xc3\x98\x69"
"\xcc\x32\x66\x1b\x98\x81\x1b\x84\x41\x1a\x94\x41\x1b\x8c\xc1\x1a\x9c"
"\xc1\x1b\x82\x21\x1a\x92\x21\x1b\x8a\xa1\x1a\x9a\xa1\x1b\x86\x61\x1a"
"\x96\x61\x1b\x8e\xe1\x1a\x9e\xe1\x1b\x81\x11\x1a\x91\x01\x0c\x68\x20"
"\x23\x66\xcc\x31\xe6\x1a\xff\x19\xf3\x8c\xf9\xc6\x02\x63\xa1\xb1\xc8"
"\x58\x6c\x2c\x31\x96\x1a\xcb\x8c\xe5\xc6\x0a\x23\xc1\x58\x69\xac\x32"
"\x56\x1b\x6b\x8c\xb5\xc6\x3a\x63\xbd\xb1\xc1\xd8\x68\x6c\x32\x36\x1b"
"\x5b\x8c\xad\xc6\x36\x63\xbb\xb1\xc3\xd8\x69\xec\x32\x76\x1b\x7b\x8c"
"\xbd\xc6\x3e\x63\xbf\x71\xc0\x38\x68\x1c\x32\x0e\x1b\x47\x8c\xa3\xc6"
"\x31\xe3\xb8\x71\xc2\x38\x69\x9c\x32\x4e\x1b\x67\x8c\xb3\xc6\x39\xe3"
"\xbc\x71\xc1\xb8\x68\x5c\x32\x2e\x1b\x57\x8c\xab\xc6\x35\xe3\xba\x71"
"\xc3\xb8\x69\xdc\x32\x6e\x1b\x77\x8c\xbb\xc6\x3d\xe3\xbe\xf1\xc0\x78"
"\x68\x3c\x32\x1e\x1b\x4f\x8c\xa7\xc6\x33\xe3\xb9\xf1\xc2\x78\x69\xbc"
"\x32\x5e\x1b\x6f\x8c\xb7\xc6\x3b\xe3\xbd\xf1\xc1\xf8\x68\x7c\x32\x3e"
"\x1b\x5f\x8c\xaf\xc6\x37\xe3\xbb\xf1\xc3\xf8\x69\xfc\x32\x7e\x1b\x7f"
"\x8c\xbf\xc6\x3f\x23\xd1\x48\x32\x92\x99\xc9\xcd\x14\x66\x9c\x99\xd2"
"\x4c\x65\xa6\x36\xd3\x98\x69\xcd\x74\x66\x7a\x33\x83\x99\xd1\xcc\x64"
"\x66\x36\xb3\x98\x59\xcd\x6c\x66\x76\x33\x87\x99\xd3\xcc\x65\xe6\x36"
"\xf3\x98\x79\xcd\x7c\x66\x7e\xb3\x80\x59\xd0\x2c\x64\x16\x36\x8b\x98"
"\x45\xcd\x62\x66\x71\xb3\x84\x59\xd2\x2c\x65\x96\x36\xcb\x98\x65\xcd"
"\x72\x66\x79\xb3\x82\x59\xd1\xac\x64\x56\x36\xab\x98\x55\xcd\x6a\x66"
"\x75\xb3\x86\x59\xd3\xac\x65\xd6\x36\xeb\x98\x75\xcd\x7a\x66\x7d\xb3"
"\x81\xd9\xd0\x6c\x64\x36\x36\x9b\x98\x4d\xcd\x66\x66\x73\xb3\x85\xd9"
"\xd2\x6c\x65\xb6\x36\xdb\x98\x6d\xcd\x76\x66\x7b\xb3\x83\xd9\xd1\xec"
"\x64\x76\x36\xbb\x98\x5d\xcd\x6e\x66\x77\xb3\x87\xd9\xd3\xec\x65\xf6"
"\x36\xfb\x98\x7d\xcd\x7e\x66\x7f\x73\x80\x39\xd0\x1c\x64\x0e\x36\x87"
"\x98\x43\xcd\x61\xe6\x70\x73\x84\x39\xd2\x1c\x65\x8e\x36\xc7\x98\x63"
"\xcd\x71\xe6\x78\x73\x82\x39\xd1\x9c\x64\x4e\x36\xa7\x98\xf1\xe6\x54"
"\x73\x9a\x39\xdd\x9c\x61\xce\x34\x67\x99\xb3\x4d\xcc\xc4\x4d\xc2\x24"
"\x4d\xca\xa4\x4d\xc6\x64\x4d\xce\xe4\x4d\xc1\x14\x4d\xc9\x94\x4d\xc5"
"\x54\x4d\xcd\xd4\x4d\xc3\x34\x4d\xcb\xb4\x4d\xc7\x74\x4d\xcf\xf4\xcd"
"\xc0\x0c\xcd\xc8\x04\x26\x34\x91\x19\x33\xe7\x98\x73\xcd\xff\xcc\x79"
"\xe6\x7c\x73\x81\xb9\xd0\x5c\x64\x2e\x36\x97\x98\x4b\xcd\x65\xe6\x72"
"\x73\x85\x99\x60\xae\x34\x57\x99\xab\xcd\x35\xe6\x5a\x73\x9d\xb9\xde"
"\xdc\x60\x6e\x34\x37\x99\x9b\xcd\x2d\xe6\x56\x73\x9b\xb9\xdd\xdc\x61"
"\xee\x34\x77\x99\xbb\xcd\x3d\xe6\x5e\x73\x9f\xb9\xdf\x3c\x60\x1e\x34"
"\x0f\x99\x87\xcd\x23\xe6\x51\xf3\x98\x79\xdc\x3c\x61\x9e\x34\x4f\x99"
"\xa7\xcd\x33\xe6\x59\xf3\x9c\x79\xde\xbc\x60\x5e\x34\x2f\x99\x97\xcd"
"\x2b\xe6\x55\xf3\x9a\x79\xdd\xbc\x61\xde\x34\x6f\x99\xb7\xcd\x3b\xe6"
"\x5d\xf3\x9e\x79\xdf\x7c\x60\x3e\x34\x1f\x99\x8f\xcd\x27\xe6\x53\xf3"
"\x99\xf9\xdc\x7c\x61\xbe\x34\x5f\x99\xaf\xcd\x37\xe6\x5b\xf3\x9d\xf9"
"\xde\xfc\x60\x7e\x34\x3f\x99\x9f\xcd\x2f\xe6\x57\xf3\x9b\xf9\xdd\xfc"
"\x61\xfe\x34\x7f\x99\xbf\xcd\x3f\xe6\x5f\xf3\x9f\x99\x68\x26\x99\xc9"
"\xac\xe4\x56\x0a\x2b\xce\x4a\x69\xa5\xb2\x52\x5b\x69\xac\xb4\x56\x3a"
"\x2b\xbd\x95\xc1\xca\x68\x65\xb2\x32\x5b\x59\xac\xac\x56\x36\x2b\xbb"
"\x95\xc3\xca\x69\xe5\xb2\x72\x5b\x79\xac\xbc\x56\x3e\x2b\xbf\x55\xc0"
"\x2a\x68\x15\xb2\x0a\x5b\x45\xac\xa2\x56\x31\xab\xb8\x55\xc2\x2a\x69"
"\x95\xb2\x4a\x5b\x65\xac\xb2\x56\x39\xab\xbc\x55\xc1\xaa\x68\x55\xb2"
"\x2a\x5b\x55\xac\xaa\x56\x35\xab\xba\x55\xc3\xaa\x69\xd5\xb2\x6a\x5b"
"\x75\xac\xba\x56\x3d\xab\xbe\xd5\xc0\x6a\x68\x35\xb2\x1a\x5b\x4d\xac"
"\xa6\x56\x33\xab\xb9\xd5\xc2\x6a\x69\xb5\xb2\x5a\x5b\x6d\xac\xb6\x56"
"\x3b\xab\xbd\xd5\xc1\xea\x68\x75\xb2\x3a\x5b\x5d\xac\xae\x56\x37\xab"
"\xbb\xd5\xc3\xea\x69\xf5\xb2\x7a\x5b\x7d\xac\xbe\x56\x3f\xab\xbf\x35"
"\xc0\x1a\x68\x0d\xb2\x06\x5b\x43\xac\xa1\xd6\x30\x6b\xb8\x35\xc2\x1a"
"\x69\x8d\xb2\x46\x5b\x63\xac\xb1\xd6\x38\x6b\xbc\x35\xc1\x9a\x68\x4d"
"\xb2\x26\x5b\x53\xac\x78\x6b\xaa\x35\xcd\x9a\x6e\xcd\xb0\x66\x5a\xb3"
"\xac\xd9\x16\x66\xe1\x16\x61\x91\x16\x65\xd1\x16\x63\xb1\x16\x67\xf1"
"\x96\x60\x89\x96\x64\xc9\x96\x62\xa9\x96\x66\xe9\x96\x61\x99\x96\x65"
"\xd9\x96\x63\xb9\x96\x67\xf9\x56\x60\x85\x56\x64\x01\x0b\x5a\xc8\x8a"
"\x59\x73\xac\xb9\xd6\x7f\xd6\x3c\x6b\xbe\xb5\xc0\x5a\x68\x2d\xb2\x16"
"\x5b\x4b\xac\xa5\xd6\x32\x6b\xb9\xb5\xc2\x4a\xb0\x56\x5a\xab\xac\xd5"
"\xd6\x1a\x6b\xad\xb5\xce\x5a\x6f\x6d\xb0\x36\x5a\x9b\xac\xcd\xd6\x16"
"\x6b\xab\xb5\xcd\xda\x6e\xed\xb0\x76\x5a\xbb\xac\xdd\xd6\x1e\x6b\xaf"
"\xb5\xcf\xda\x6f\x1d\xb0\x0e\x5a\x87\xac\xc3\xd6\x11\xeb\xa8\x75\xcc"
"\x3a\x6e\x9d\xb0\x4e\x5a\xa7\xac\xd3\xd6\x19\xeb\xac\x75\xce\x3a\x6f"
"\x5d\xb0\x2e\x5a\x97\xac\xcb\xd6\x15\xeb\xaa\x75\xcd\xba\x6e\xdd\xb0"
"\x6e\x5a\xb7\xac\xdb\xd6\x1d\xeb\xae\x75\xcf\xba\x6f\x3d\xb0\x1e\x5a"
"\x8f\xac\xc7\xd6\x13\xeb\xa9\xf5\xcc\x7a\x6e\xbd\xb0\x5e\x5a\xaf\xac"
"\xd7\xd6\x1b\xeb\xad\xf5\xce\x7a\x6f\x7d\xb0\x3e\x5a\x9f\xac\xcf\xd6"
"\x17\xeb\xab\xf5\xcd\xfa\x6e\xfd\xb0\x7e\x5a\xbf\xac\xdf\xd6\x1f\xeb"
"\xaf\xf5\xcf\x4a\xb4\x92\xac\x64\x76\x72\x3b\x85\x1d\x67\xa7\xb4\x53"
"\xd9\xa9\xed\x34\x76\x5a\x3b\x9d\x9d\xde\xce\x60\x67\xb4\x33\xd9\x99"
"\xed\x2c\x76\x56\x3b\x9b\x9d\xdd\xce\x61\xe7\xb4\x73\xd9\xb9\xed\x3c"
"\x76\x5e\x3b\x9f\x9d\xdf\x2e\x60\x17\xb4\x0b\xd9\x85\xed\x22\x76\x51"
"\xbb\x98\x5d\xdc\x2e\x61\x97\xb4\x4b\xd9\xa5\xed\x32\x76\x59\xbb\x9c"
"\x5d\xde\xae\x60\x57\xb4\x2b\xd9\x95\xed\x2a\x76\x55\xbb\x9a\x5d\xdd"
"\xae\x61\xd7\xb4\x6b\xd9\xb5\xed\x3a\x76\x5d\xbb\x9e\x5d\xdf\x6e\x60"
"\x37\xb4\x1b\xd9\x8d\xed\x26\x76\x53\xbb\x99\xdd\xdc\x6e\x61\xb7\xb4"
"\x5b\xd9\xad\xed\x36\x76\x5b\xbb\x9d\xdd\xde\xee\x60\x77\xb4\x3b\xd9"
"\x9d\xed\x2e\x76\x57\xbb\x9b\xdd\xdd\xee\x61\xf7\xb4\x7b\xd9\xbd\xed"
"\x3e\x76\x5f\xbb\x9f\xdd\xdf\x1e\x60\x0f\xb4\x07\xd9\x83\xed\x21\xf6"
"\x50\x7b\x98\x3d\xdc\x1e\x61\x8f\xb4\x47\xd9\xa3\xed\x31\xf6\x58\x7b"
"\x9c\x3d\xde\x9e\x60\x4f\xb4\x27\xd9\x93\xed\x29\x76\xbc\x3d\xd5\x9e"
"\x66\x4f\xb7\x67\xd8\x33\xed\x59\xf6\x6c\x1b\xb3\x71\x9b\xb0\x49\x9b"
"\xb2\x69\x9b\xb1\x59\x9b\xb3\x79\x5b\xb0\x45\x5b\xb2\x65\x5b\xb1\x55"
"\x5b\xb3\x75\xdb\xb0\x4d\xdb\xb2\x6d\xdb\xb1\x5d\xdb\xb3\x7d\x3b\xb0"
"\x43\x3b\xb2\x81\x0d\x6d\x64\xc7\xec\x39\xf6\x5c\xfb\x3f\x7b\x9e\x3d"
"\xdf\x5e\x60\x2f\xb4\x17\xd9\x8b\xed\x25\xf6\x52\x7b\x99\xbd\xdc\x5e"
"\x61\x27\xd8\x2b\xed\x55\xf6\x6a\x7b\x8d\xbd\xd6\x5e\x67\xaf\xb7\x37"
"\xd8\x1b\xed\x4d\xf6\x66\x7b\x8b\xbd\xd5\xde\x66\x6f\xb7\x77\xd8\x3b"
"\xed\x5d\xf6\x6e\x7b\x8f\xbd\xd7\xde\x67\xef\xb7\x0f\xd8\x07\xed\x43"
"\xf6\x61\xfb\x88\x7d\xd4\x3e\x66\x1f\xb7\x4f\xd8\x27\xed\x53\xf6\x69"
"\xfb\x8c\x7d\xd6\x3e\x67\x9f\xb7\x2f\xd8\x17\xed\x4b\xf6\x65\xfb\x8a"
"\x7d\xd5\xbe\x66\x5f\xb7\x6f\xd8\x37\xed\x5b\xf6\x6d\xfb\x8e\x7d\xd7"
"\xbe\x67\xdf\xb7\x1f\xd8\x0f\xed\x47\xf6\x63\xfb\x89\xfd\xd4\x7e\x66"
"\x3f\xb7\x5f\xd8\x2f\xed\x57\xf6\x6b\xfb\x8d\xfd\xd6\x7e\x67\xbf\xb7"
"\x3f\xd8\x1f\xed\x4f\xf6\x67\xfb\x8b\xfd\xd5\xfe\x66\x7f\xb7\x7f\xd8"
"\x3f\xed\x5f\xf6\x6f\xfb\x8f\xfd\xd7\xfe\x67\x27\xda\x49\x76\x32\x27"
"\xb9\x93\xc2\x89\x73\x52\x3a\xa9\x9c\xd4\x4e\x1a\x27\xad\x93\xce\x49"
"\xef\x64\x70\x32\x3a\x99\x9c\xcc\x4e\x16\x27\xab\x93\xcd\xc9\xee\xe4"
"\x70\x72\x3a\xb9\x9c\xdc\x4e\x1e\x27\xaf\x93\xcf\xc9\xef\x14\x70\x0a"
"\x3a\x85\x9c\xc2\x4e\x11\xa7\xa8\x53\xcc\x29\xee\x94\x70\x4a\x3a\xa5"
"\x9c\xd2\x4e\x19\xa7\xac\x53\xce\x29\xef\x54\x70\x2a\x3a\x95\x9c\xca"
"\x4e\x15\xa7\xaa\x53\xcd\xa9\xee\xd4\x70\x6a\x3a\xb5\x9c\xda\x4e\x1d"
"\xa7\xae\x53\xcf\xa9\xef\x34\x70\x1a\x3a\x8d\x9c\xc6\x4e\x13\xa7\xa9"
"\xd3\xcc\x69\xee\xb4\x70\x5a\x3a\xad\x9c\xd6\x4e\x1b\xa7\xad\xd3\xce"
"\x69\xef\x74\x70\x3a\x3a\x9d\x9c\xce\x4e\x17\xa7\xab\xd3\xcd\xe9\xee"
"\xf4\x70\x7a\x3a\xbd\x9c\xde\x4e\x1f\xa7\xaf\xd3\xcf\xe9\xef\x0c\x70"
"\x06\x3a\x83\x9c\xc1\xce\x10\x67\xa8\x33\xcc\x19\xee\x8c\x70\x46\x3a"
"\xa3\x9c\xd1\xce\x18\x67\xac\x33\xce\x19\xef\x4c\x70\x26\x3a\x93\x9c"
"\xc9\xce\x14\x27\xde\x99\xea\x4c\x73\xa6\x3b\x33\x9c\x99\xce\x2c\x67"
"\xb6\x83\x39\xb8\x43\x38\xa4\x43\x39\xb4\xc3\x38\xac\xc3\x39\xbc\x23"
"\x38\xa2\x23\x39\xb2\xa3\x38\xaa\xa3\x39\xba\x63\x38\xa6\x63\x39\xb6"
"\xe3\x38\xae\xe3\x39\xbe\x13\x38\xa1\x13\x39\xc0\x81\x0e\x72\x62\xce"
[inlined binary attachment data elided; the full randconfig is available at the download.01.org archive URL given above]
"\xe3\xeb\xf3\x0d\xf8\x86\x7c\x23\xfe\x3f\xbe\x31\xdf\x84\x6f\xca\x37"
"\xe3\xe3\xf8\xe6\x7c\x0b\xbe\x25\xdf\x8a\x6f\xcd\xb7\xe1\xdb\xf2\xed"
"\xf8\xf6\x7c\x07\xbe\x23\xdf\x89\xef\xcc\x77\xe1\xbb\xf2\xdd\xf8\xee"
"\x7c\x0f\xbe\x27\x1f\xcf\xf7\xe2\x7b\xf3\x7d\xf8\xbe\x7c\x3f\xbe\x3f"
"\x3f\x80\x1f\xc8\x0f\xe2\x07\xf3\x43\xf8\xa1\xfc\x30\x7e\x38\x3f\x82"
"\x1f\xc9\x8f\xe2\x47\xf3\x63\xf8\xb1\xfc\x38\x7e\x3c\x3f\x81\x9f\xc8"
"\x4f\xe2\x27\xf3\x53\xf8\xa9\xfc\x34\x7e\x3a\x3f\x83\x9f\xc9\xcf\xe2"
"\x67\xf3\x73\xf8\xb9\xfc\x3c\x7e\x3e\xbf\x80\x5f\xc8\x2f\xe2\x17\xf3"
"\x4b\xf8\x04\x7e\x29\x9f\xc8\x2f\xe3\x93\xf8\xe5\xfc\x0a\x7e\x25\xbf"
"\x8a\x5f\xcd\xaf\xe1\xd7\xf2\xeb\xf8\xf5\xfc\x06\x7e\x23\xbf\x89\xdf"
"\xcc\x6f\xe1\xb7\xf2\xdb\xf8\xed\x3c\xc6\xe3\x3c\xc1\x93\x3c\xc5\xd3"
"\x3c\xc3\xb3\x3c\xc7\xf3\xbc\xc0\x8b\xbc\xc4\xcb\xbc\xc2\xab\xbc\xc6"
"\xeb\xbc\xc1\x9b\xbc\xc5\xdb\xbc\xc3\xbb\xbc\xc7\xfb\x7c\xc0\x87\x7c"
"\xc4\x03\x1e\xf2\x88\x8f\xf1\x3b\xf8\x9d\xfc\x2e\x7e\x37\xbf\x87\xdf"
"\xcb\xef\xe3\xf7\xf3\x07\xf8\x83\xfc\x21\xfe\x30\x7f\x84\x3f\xca\x1f"
"\xe3\x8f\xf3\x27\xf8\x93\xfc\x29\xfe\x34\x7f\x86\x3f\xcb\x9f\xe3\xcf"
"\xf3\x17\xf8\x8b\xfc\x25\xfe\x32\x7f\x85\xbf\xca\x5f\xe3\xaf\xf3\x37"
"\xf8\x9b\xfc\x2d\xfe\x36\x7f\x87\xbf\xcb\xdf\xe3\xef\xf3\x0f\xf8\x87"
"\xfc\x23\xfe\x31\xff\x84\x7f\xca\x3f\xe3\x9f\xf3\x2f\xf8\x97\xfc\x2b"
"\xfe\x35\xff\x86\x7f\xcb\xbf\xe3\xdf\xf3\x1f\xf8\x8f\xfc\x27\xfe\x33"
"\xff\x85\xff\xca\x7f\xe3\xbf\xa7\x48\xf6\x7f\xfc\x1f\xfe\x2f\xff\x8f"
"\x4f\x26\x24\x17\x52\x08\x29\x85\x54\x42\x6a\x21\x8d\x90\x56\x48\x27"
"\xa4\x17\x32\x08\x19\x85\x4c\x42\x66\x21\x8b\x90\x55\xc8\x26\x64\x17"
"\x72\x08\x39\x85\x5c\x42\x6e\x21\x8f\x90\x57\xc8\x27\xe4\x17\x0a\x08"
"\x05\x85\x42\x42\x61\xa1\x88\x50\x54\x28\x26\x14\x17\x4a\x08\x25\x85"
"\x52\x42\x69\xa1\x8c\x50\x56\x28\x27\x94\x17\x2a\x08\x15\x85\x4a\x42"
"\x65\xa1\x8a\x50\x55\xa8\x26\x54\x17\x6a\x08\x35\x85\x5a\x42\x6d\xa1"
"\x8e\x50\x57\xa8\x27\xd4\x17\x1a\x08\x0d\x85\x46\xc2\x7f\x42\x63\xa1"
"\x89\xd0\x54\x68\x26\xc4\x09\xcd\x85\x16\x42\x4b\xa1\x95\xd0\x5a\x68"
"\x23\xb4\x15\xda\x09\xed\x85\x0e\x42\x47\xa1\x93\xd0\x59\xe8\x22\x74"
"\x15\xba\x09\xdd\x85\x1e\x42\x4f\x21\x5e\xe8\x25\xf4\x16\xfa\x08\x7d"
"\x85\x7e\x42\x7f\x61\x80\x30\x50\x18\x24\x0c\x16\x86\x08\x43\x85\x61"
"\xc2\x70\x61\x84\x30\x52\x18\x25\x8c\x16\xc6\x08\x63\x85\x71\xc2\x78"
"\x61\x82\x30\x51\x98\x24\x4c\x16\xa6\x08\x53\x85\x69\xc2\x74\x61\x86"
"\x30\x53\x98\x25\xcc\x16\xe6\x08\x73\x85\x79\xc2\x7c\x61\x81\xb0\x50"
"\x58\x24\x2c\x16\x96\x08\x09\xc2\x52\x21\x51\x58\x26\x24\x09\xcb\x85"
"\x15\xc2\x4a\x61\x95\xb0\x5a\x58\x23\xac\x15\xd6\x09\xeb\x85\x0d\xc2"
"\x46\x61\x93\xb0\x59\xd8\x22\x6c\x15\xb6\x09\xdb\x05\x4c\xc0\x05\x42"
"\x20\x05\x4a\xa0\x05\x46\x60\x05\x4e\xe0\x05\x41\x10\x05\x49\x90\x05"
"\x45\x50\x05\x4d\xd0\x05\x43\x30\x05\x4b\xb0\x05\x47\x70\x05\x4f\xf0"
"\x85\x40\x08\x85\x48\x00\x02\x14\x90\x10\x13\x76\x08\x3b\x85\x5d\xc2"
"\x6e\x61\x8f\xb0\x57\xd8\x27\xec\x17\x0e\x08\x07\x85\x43\xc2\x61\xe1"
"\x88\x70\x54\x38\x26\x1c\x17\x4e\x08\x27\x85\x53\xc2\x69\xe1\x8c\x70"
"\x56\x38\x27\x9c\x17\x2e\x08\x17\x85\x4b\xc2\x65\xe1\x8a\x70\x55\xb8"
"\x26\x5c\x17\x6e\x08\x37\x85\x5b\xc2\x6d\xe1\x8e\x70\x57\xb8\x27\xdc"
"\x17\x1e\x08\x0f\x85\x47\xc2\x63\xe1\x89\xf0\x54\x78\x26\x3c\x17\x5e"
"\x08\x2f\x85\x57\xc2\x6b\xe1\x8d\xf0\x56\x78\x27\xbc\x17\x3e\x08\x1f"
"\x85\x4f\xc2\x67\xe1\x8b\xf0\x55\xf8\x26\x7c\x17\x7e\x08\x3f\x85\x5f"
"\xc2\x6f\xe1\x8f\xf0\x57\xf8\x27\x24\x13\x93\x8b\x29\xc4\x94\x62\x2a"
"\x31\xb5\x98\x46\x4c\x2b\xa6\x13\xd3\x8b\x19\xc4\x8c\x62\x26\x31\xb3"
"\x98\x45\xcc\x2a\x66\x13\xb3\x8b\x39\xc4\x9c\x62\x2e\x31\xb7\x98\x47"
"\xcc\x2b\xe6\x13\xf3\x8b\x05\xc4\x82\x62\x21\xb1\xb0\x58\x44\x2c\x2a"
"\x16\x13\x8b\x8b\x25\xc4\x92\x62\x29\xb1\xb4\x58\x46\x2c\x2b\x96\x13"
"\xcb\x8b\x15\xc4\x8a\x62\x25\xb1\xb2\x58\x45\xac\x2a\x56\x13\xab\x8b"
"\x35\xc4\x9a\x62\x2d\xb1\xb6\x58\x47\xac\x2b\xd6\x13\xeb\x8b\x0d\xc4"
"\x86\x62\x23\xf1\x3f\xb1\xb1\xd8\x44\x6c\x2a\x36\x13\xe3\xc4\xe6\x62"
"\x0b\xb1\xa5\xd8\x4a\x6c\x2d\xb6\x11\xdb\x8a\xed\xc4\xf6\x62\x07\xb1"
"\xa3\xd8\x49\xec\x2c\x76\x11\xbb\x8a\xdd\xc4\xee\x62\x0f\xb1\xa7\x18"
"\x2f\xf6\x12\x7b\x8b\x7d\xc4\xbe\x62\x3f\xb1\xbf\x38\x40\x1c\x28\x0e"
"\x12\x07\x8b\x43\xc4\xa1\xe2\x30\x71\xb8\x38\x42\x1c\x29\x8e\x12\x47"
"\x8b\x63\xc4\xb1\xe2\x38\x71\xbc\x38\x41\x9c\x28\x4e\x12\x27\x8b\x53"
"\xc4\xa9\xe2\x34\x71\xba\x38\x43\x9c\x29\xce\x12\x67\x8b\x73\xc4\xb9"
"\xe2\x3c\x71\xbe\xb8\x40\x5c\x28\x2e\x12\x17\x8b\x4b\xc4\x04\x71\xa9"
"\x98\x28\x2e\x13\x93\xc4\xe5\xe2\x0a\x71\xa5\xb8\x4a\x5c\x2d\xae\x11"
"\xd7\x8a\xeb\xc4\xf5\xe2\x06\x71\xa3\xb8\x49\xdc\x2c\x6e\x11\xb7\x8a"
"\xdb\xc4\xed\x22\x26\xe2\x22\x21\x92\x22\x25\xd2\x22\x23\xb2\x22\x27"
"\xf2\xa2\x20\x8a\xa2\x24\xca\xa2\x22\xaa\xa2\x26\xea\xa2\x21\x9a\xa2"
"\x25\xda\xa2\x23\xba\xa2\x27\xfa\x62\x20\x86\x62\x24\x02\x11\x8a\x48"
"\x8c\x89\x3b\xc4\x9d\xe2\x2e\x71\xb7\xb8\x47\xdc\x2b\xee\x13\xf7\x8b"
"\x07\xc4\x83\xe2\x21\xf1\xb0\x78\x44\x3c\x2a\x1e\x13\x8f\x8b\x27\xc4"
"\x93\xe2\x29\xf1\xb4\x78\x46\x3c\x2b\x9e\x13\xcf\x8b\x17\xc4\x8b\xe2"
"\x25\xf1\xb2\x78\x45\xbc\x2a\x5e\x13\xaf\x8b\x37\xc4\x9b\xe2\x2d\xf1"
"\xb6\x78\x47\xbc\x2b\xde\x13\xef\x8b\x0f\xc4\x87\xe2\x23\xf1\xb1\xf8"
"\x44\x7c\x2a\x3e\x13\x9f\x8b\x2f\xc4\x97\xe2\x2b\xf1\xb5\xf8\x46\x7c"
"\x2b\xbe\x13\xdf\x8b\x1f\xc4\x8f\xe2\x27\xf1\xb3\xf8\x45\xfc\x2a\x7e"
"\x13\xbf\x8b\x3f\xc4\x9f\xe2\x2f\xf1\xb7\xf8\x47\xfc\x2b\xfe\x13\x93"
"\x49\xc9\xa5\x14\x52\x4a\x29\x95\x94\x5a\x4a\x23\xa5\x95\xd2\x49\xe9"
"\xa5\x0c\x52\x46\x29\x93\x94\x59\xca\x22\x65\x95\xb2\x49\xd9\xa5\x1c"
"\x52\x4e\x29\x97\x94\x5b\xca\x23\xe5\x95\xf2\x49\xf9\xa5\x02\x52\x41"
"\xa9\x90\x54\x58\x2a\x22\x15\x95\x8a\x49\xc5\xa5\x12\x52\x49\xa9\x94"
"\x54\x5a\x2a\x23\x95\x95\xca\x49\xe5\xa5\x0a\x52\x45\xa9\x92\x54\x59"
"\xaa\x22\x55\x95\xaa\x49\xd5\xa5\x1a\x52\x4d\xa9\x96\x54\x5b\xaa\x23"
"\xd5\x95\xea\x49\xf5\xa5\x06\x52\x43\xa9\x91\xf4\x9f\xd4\x58\x6a\x22"
"\x35\x95\x9a\x49\x71\x52\x73\xa9\x85\xd4\x52\x6a\x25\xb5\x96\xda\x48"
"\x6d\xa5\x76\x52\x7b\xa9\x83\xd4\x51\xea\x24\x75\x96\xba\x48\x5d\xa5"
"\x6e\x52\x77\xa9\x87\xd4\x53\x8a\x97\x7a\x49\xbd\xa5\x3e\x52\x5f\xa9"
"\x9f\xd4\x5f\x1a\x20\x0d\x94\x06\x49\x83\xa5\x21\xd2\x50\x69\x98\x34"
"\x5c\x1a\x21\x8d\x94\x46\x49\xa3\xa5\x31\xd2\x58\x69\x9c\x34\x5e\x9a"
"\x20\x4d\x94\x26\x49\x93\xa5\x29\xd2\x54\x69\x9a\x34\x5d\x9a\x21\xcd"
"\x94\x66\x49\xb3\xa5\x39\xd2\x5c\x69\x9e\x34\x5f\x5a\x20\x2d\x94\x16"
"\x49\x8b\xa5\x25\x52\x82\xb4\x54\x4a\x94\x96\x49\x49\xd2\x72\x69\x85"
"\xb4\x52\x5a\x25\xad\x96\xd6\x48\x6b\xa5\x75\xd2\x7a\x69\x83\xb4\x51"
"\xda\x24\x6d\x96\xb6\x48\x5b\xa5\x6d\xd2\x76\x09\x93\x70\x89\x90\x48"
"\x89\x92\x68\x89\x91\x58\x89\x93\x78\x49\x90\x44\x49\x92\x64\x49\x91"
"\x54\x49\x93\x74\xc9\x90\x4c\xc9\x92\x6c\xc9\x91\x5c\xc9\x93\x7c\x29"
"\x90\x42\x29\x92\x80\x04\x25\x24\xc5\xa4\x1d\xd2\x4e\x69\x97\xb4\x5b"
"\xda\x23\xed\x95\xf6\x49\xfb\xa5\x03\xd2\x41\xe9\x90\x74\x58\x3a\x22"
"\x1d\x95\x8e\x49\xc7\xa5\x13\xd2\x49\xe9\x94\x74\x5a\x3a\x23\x9d\x95"
"\xce\x49\xe7\xa5\x0b\xd2\x45\xe9\x92\x74\x59\xba\x22\x5d\x95\xae\x49"
"\xd7\xa5\x1b\xd2\x4d\xe9\x96\x74\x5b\xba\x23\xdd\x95\xee\x49\xf7\xa5"
"\x07\xd2\x43\xe9\x91\xf4\x58\x7a\x22\x3d\x95\x9e\x49\xcf\xa5\x17\xd2"
"\x4b\xe9\x95\xf4\x5a\x7a\x23\xbd\x95\xde\x49\xef\xa5\x0f\xd2\x47\xe9"
"\x93\xf4\x59\xfa\x22\x7d\x95\xbe\x49\xdf\xa5\x1f\xd2\x4f\xe9\x97\xf4"
"\x5b\xfa\x23\xfd\x95\xfe\x49\xc9\xe4\xe4\x72\x0a\x39\xa5\x9c\x4a\x4e"
"\x2d\xa7\x91\xd3\xca\xe9\xe4\xf4\x72\x06\x39\xa3\x9c\x49\xce\x2c\x67"
"\x91\xb3\xca\xd9\xe4\xec\x72\x0e\x39\xa7\x9c\x4b\xce\x2d\xe7\x91\xf3"
"\xca\xf9\xe4\xfc\x72\x01\xb9\xa0\x5c\x48\x2e\x2c\x17\x91\x8b\xca\xc5"
"\xe4\xe2\x72\x09\xb9\xa4\x5c\x4a\x2e\x2d\x97\x91\xcb\xca\xe5\xe4\xf2"
"\x72\x05\xb9\xa2\x5c\x49\xae\x2c\x57\x91\xab\xca\xd5\xe4\xea\x72\x0d"
"\xb9\xa6\x5c\x4b\xae\x2d\xd7\x91\xeb\xca\xf5\xe4\xfa\x72\x03\xb9\xa1"
"\xdc\x48\xfe\x4f\x6e\x2c\x37\x91\x9b\xca\xcd\xe4\x38\xb9\xb9\xdc\x42"
"\x6e\x29\xb7\x92\x5b\xcb\x6d\xe4\xb6\x72\x3b\xb9\xbd\xdc\x41\xee\x28"
"\x77\x92\x3b\xcb\x5d\xe4\xae\x72\x37\xb9\xbb\xdc\x43\xee\x29\xc7\xcb"
"\xbd\xe4\xde\x72\x1f\xb9\xaf\xdc\x4f\xee\x2f\x0f\x90\x07\xca\x83\xe4"
"\xc1\xf2\x10\x79\xa8\x3c\x4c\x1e\x2e\x8f\x90\x47\xca\xa3\xe4\xd1\xf2"
"\x18\x79\xac\x3c\x4e\x1e\x2f\x4f\x90\x27\xca\x93\xe4\xc9\xf2\x14\x79"
"\xaa\x3c\x4d\x9e\x2e\xcf\x90\x67\xca\xb3\xe4\xd9\xf2\x1c\x79\xae\x3c"
"\x4f\x9e\x2f\x2f\x90\x17\xca\x8b\xe4\xc5\xf2\x12\x39\x41\x5e\x2a\x27"
"\xca\xcb\xe4\x24\x79\xb9\xbc\x42\x5e\x29\xaf\x92\x57\xcb\x6b\xe4\xb5"
"\xf2\x3a\x79\xbd\xbc\x41\xde\x28\x6f\x92\x37\xcb\x5b\xe4\xad\xf2\x36"
"\x79\xbb\x8c\xc9\xb8\x4c\xc8\xa4\x4c\xc9\xb4\xcc\xc8\xac\xcc\xc9\xbc"
"\x2c\xc8\xa2\x2c\xc9\xb2\xac\xc8\xaa\xac\xc9\xba\x6c\xc8\xa6\x6c\xc9"
"\xb6\xec\xc8\xae\xec\xc9\xbe\x1c\xc8\xa1\x1c\xc9\x40\x86\x32\x92\x63"
"\xf2\x0e\x79\xa7\xbc\x4b\xde\x2d\xef\x91\xf7\xca\xfb\xe4\xfd\xf2\x01"
"\xf9\xa0\x7c\x48\x3e\x2c\x1f\x91\x8f\xca\xc7\xe4\xe3\xf2\x09\xf9\xa4"
"\x7c\x4a\x3e\x2d\x9f\x91\xcf\xca\xe7\xe4\xf3\xf2\x05\xf9\xa2\x7c\x49"
"\xbe\x2c\x5f\x91\xaf\xca\xd7\xe4\xeb\xf2\x0d\xf9\xa6\x7c\x4b\xbe\x2d"
"\xdf\x91\xef\xca\xf7\xe4\xfb\xf2\x03\xf9\xa1\xfc\x48\x7e\x2c\x3f\x91"
"\x9f\xca\xcf\xe4\xe7\xf2\x0b\xf9\xa5\xfc\x4a\x7e\x2d\xbf\x91\xdf\xca"
"\xef\xe4\xf7\xf2\x07\xf9\xa3\xfc\x49\xfe\x2c\x7f\x91\xbf\xca\xdf\xe4"
"\xef\xf2\x0f\xf9\xa7\xfc\x4b\xfe\x2d\xff\x91\xff\xca\xff\xe4\x64\x4a"
"\x72\x25\x85\x92\x52\x49\xa5\xa4\x56\xd2\x28\x69\x95\x74\x4a\x7a\x25"
"\x83\x92\x51\xc9\xa4\x64\x56\xb2\x28\x59\x95\x6c\x4a\x76\x25\x87\x92"
"\x53\xc9\xa5\xe4\x56\xf2\x28\x79\x95\x7c\x4a\x7e\xa5\x80\x52\x50\x29"
"\xa4\x14\x56\x8a\x28\x45\x95\x62\x4a\x71\xa5\x84\x52\x52\x29\xa5\x94"
"\x56\xca\x28\x65\x95\x72\x4a\x79\xa5\x82\x52\x51\xa9\xa4\x54\x56\xaa"
"\x28\x55\x95\x6a\x4a\x75\xa5\x86\x52\x53\xa9\xa5\xd4\x56\xea\x28\x75"
"\x95\x7a\x4a\x7d\xa5\x81\xd2\x50\x69\xa4\xfc\xa7\x34\x56\x9a\x28\x4d"
"\x95\xa6\x4a\x9c\x12\xa7\xb4\x50\x5a\x28\xad\x94\x56\x4a\x1b\xa5\x8d"
"\xd2\x4e\x69\xa7\x74\x50\x3a\x28\x9d\x94\x4e\x4a\x17\xa5\x8b\xd2\x4d"
"\xe9\xa6\xf4\x50\x7a\x28\xf1\x4a\xbc\xd2\x5b\xe9\xad\xf4\x55\xfa\x2a"
"\xfd\x95\xfe\xca\x40\x65\xa0\x32\x58\x19\xa2\x0c\x55\x86\x2a\xc3\x95"
"\xe1\xca\x48\x65\xa4\x32\x5a\x19\xad\x8c\x55\xc6\x2a\xe3\x95\xf1\xca"
"\x44\x65\x92\x32\x59\x99\xa2\x4c\x55\xa6\x29\xd3\x95\x19\xca\x4c\x65"
"\x96\x32\x5b\x99\xa3\xcc\x55\xe6\x29\xf3\x95\x05\xca\x42\x65\x91\xb2"
"\x58\x59\xac\x24\x28\x09\x4a\xa2\x92\xa8\x24\x29\x49\xca\x0a\x65\x85"
"\xb2\x4a\x59\xa5\xac\x51\xd6\x28\xeb\x94\x75\xca\x06\x65\x83\xb2\x49"
"\xd9\xa4\x6c\x51\xb6\x28\xdb\x94\x6d\x0a\xa6\xe0\x0a\xa1\x90\x0a\xa5"
"\xd0\x0a\xa3\xb0\x0a\xa7\xf0\x8a\xa0\x88\x8a\xa4\xc8\x8a\xa2\xa8\x8a"
"\xa6\xe8\x8a\xa1\x98\x8a\xa5\xd8\x8a\xa3\xb8\x8a\xa7\xf8\x4a\xa0\x84"
"\x4a\xa4\x00\x05\x2a\x48\x89\x29\x3b\x94\x9d\xca\x2e\x65\xb7\xb2\x47"
"\xd9\xab\xec\x53\xf6\x2b\x07\x94\x83\xca\x21\xe5\xb0\x72\x44\x39\xaa"
"\x1c\x53\x8e\x2b\x27\x94\x93\xca\x29\xe5\xb4\x72\x46\x39\xab\x9c\x53"
"\xce\x2b\x17\x94\x8b\xca\x25\xe5\xb2\x72\x45\xb9\xaa\x5c\x53\xae\x2b"
"\x37\x94\x9b\xca\x2d\xe5\xb6\x72\x47\xb9\xab\xdc\x53\xee\x2b\x0f\x94"
"\x87\xca\x23\xe5\xb1\xf2\x44\x79\xaa\x3c\x53\x9e\x2b\x2f\x94\x97\xca"
"\x2b\xe5\xb5\xf2\x46\x79\xab\xbc\x53\xde\x2b\x1f\x94\x8f\xca\x27\xe5"
"\xb3\xf2\x45\xf9\xaa\x7c\x53\xbe\x2b\x3f\x94\x9f\xca\x2f\xe5\xb7\xf2"
"\x47\xf9\xab\xfc\x53\x92\xa9\xc9\xd5\x14\x6a\x4a\x35\x95\x9a\x5a\x4d"
"\xa3\xa6\x55\xd3\xa9\xe9\xd5\x0c\x6a\x46\x35\x93\x9a\x59\xcd\xa2\x66"
"\x55\xb3\xa9\xd9\xd5\x1c\x6a\x4e\x35\x97\x9a\x5b\xcd\xa3\xe6\x55\xf3"
"\xa9\xf9\xd5\x02\x6a\x41\xb5\x90\x5a\x58\x2d\xa2\x16\x55\x8b\xa9\xc5"
"\xd5\x12\x6a\x09\xb5\x94\x5a\x4a\x2d\xa3\x96\x51\xcb\xa9\xe5\xd4\x0a"
"\x6a\x05\xb5\x92\x5a\x49\xad\xa2\x56\x51\xab\xa9\xd5\xd5\x1a\x6a\x0d"
"\xb5\x96\x5a\x4b\xad\xa3\xd6\x51\xeb\xa9\xf5\xd4\x06\x6a\x03\xb5\x91"
"\xda\x48\x6d\xac\x36\x56\x9b\xaa\x4d\xd5\x38\x35\x4e\x6d\xa1\xb6\x50"
"\x5b\xa9\xad\xd4\x36\x6a\x1b\xb5\x9d\xda\x4e\xed\xa0\x76\x50\x3b\xa9"
"\x9d\xd4\x2e\x6a\x17\xb5\x9b\xda\x4d\xed\xa1\xf6\x50\xe3\xd5\x78\xb5"
"\xb7\xda\x5b\xed\xab\xf6\x55\xfb\xab\xfd\xd5\x81\xea\x40\x75\xb0\x3a"
"\x58\x1d\xaa\x0e\x55\x87\xab\xc3\xd5\x91\xea\x48\x75\xb4\x3a\x5a\x1d"
"\xab\x8e\x55\xc7\xab\xe3\xd5\x89\xea\x44\x75\xb2\x3a\x45\x9d\xaa\x4e"
"\x53\xa7\xab\x33\xd4\x99\xea\x2c\x75\xb6\x3a\x47\x9d\xa3\xce\x53\xe7"
"\xa9\x0b\xd4\x85\xea\x42\x75\xb1\xba\x58\x4d\x50\x13\xd4\x44\x35\x51"
"\x4d\x52\x93\xd4\x15\xea\x4a\x75\x95\xba\x5a\x5d\xad\xae\x55\xd7\xa9"
"\xeb\xd5\x0d\xea\x46\x75\x93\xba\x59\xdd\xa2\x6e\x55\xb7\xa9\xdb\x55"
"\x4c\xc5\x55\x42\x25\x55\x4a\xa5\x55\x46\x65\x55\x4e\xe5\x55\x41\x15"
"\x55\x49\x95\x55\x45\x55\x55\x4d\xd5\x55\x43\x35\x55\x4b\xb5\x55\x47"
"\x75\x55\x4f\xf5\xd5\x40\x0d\xd5\x48\x05\x2a\x54\x91\x1a\x53\x77\xa8"
"\x3b\xd5\x5d\xea\x6e\x75\x8f\xba\x57\xdd\xa7\xee\x57\x0f\xa8\x07\xd5"
"\x43\xea\x61\xf5\x88\x7a\x54\x3d\xa6\x1e\x57\x4f\xa8\x27\xd5\x53\xea"
"\x69\xf5\x8c\x7a\x56\x3d\xa7\x9e\x57\x2f\xa8\x17\xd5\x4b\xea\x65\xf5"
"\x8a\x7a\x55\xbd\xa6\x5e\x57\x6f\xa8\x37\xd5\x5b\xea\x6d\xf5\x8e\x7a"
"\x57\xbd\xa7\xde\x57\x1f\xa8\x0f\xd5\x47\xea\x63\xf5\x89\xfa\x54\x7d"
"\xa6\x3e\x57\x5f\xa8\x2f\xd5\x57\xea\x6b\xf5\x8d\xfa\x56\x7d\xa7\xbe"
"\x57\x3f\xa8\x1f\xd5\x4f\xea\x67\xf5\x8b\xfa\x55\xfd\xa6\x7e\x57\x7f"
"\xa8\x3f\xd5\x5f\xea\x6f\xf5\x8f\xfa\x57\xfd\xa7\x26\xd3\x92\x6b\x29"
"\xb4\x94\x5a\x2a\x2d\xb5\x96\x46\x4b\xab\xa5\xd3\xd2\x6b\x19\xb4\x8c"
"\x5a\x26\x2d\xb3\x96\x45\xcb\xaa\x65\xd3\xb2\x6b\x39\xb4\x9c\x5a\x2e"
"\x2d\xb7\x96\x47\xcb\xab\xe5\xd3\xf2\x6b\x05\xb4\x82\x5a\x21\xad\xb0"
"\x56\x44\x2b\xaa\x15\xd3\x8a\x6b\x25\xb4\x92\x5a\x29\xad\xb4\x56\x46"
"\x2b\xab\x95\xd3\xca\x6b\x15\xb4\x8a\x5a\x25\xad\xb2\x56\x45\xab\xaa"
"\x55\xd3\xaa\x6b\x35\xb4\x9a\x5a\x2d\xad\xb6\x56\x47\xab\xab\xd5\xd3"
"\xea\x6b\x0d\xb4\x86\x5a\x23\xed\x3f\xad\xb1\xd6\x44\x6b\xaa\x35\xd3"
"\xe2\xb4\xe6\x5a\x0b\xad\xa5\xd6\x4a\x6b\xad\xb5\xd1\xda\x6a\xed\xb4"
"\xf6\x5a\x07\xad\xa3\xd6\x49\xeb\xac\x75\xd1\xba\x6a\xdd\xb4\xee\x5a"
"\x0f\xad\xa7\x16\xaf\xf5\xd2\x7a\x6b\x7d\xb4\xbe\x5a\x3f\xad\xbf\x36"
"\x40\x1b\xa8\x0d\xd2\x06\x6b\x43\xb4\xa1\xda\x30\x6d\xb8\x36\x42\x1b"
"\xa9\x8d\xd2\x46\x6b\x63\xb4\xb1\xda\x38\x6d\xbc\x36\x41\x9b\xa8\x4d"
"\xd2\x26\x6b\x53\xb4\xa9\xda\x34\x6d\xba\x36\x43\x9b\xa9\xcd\xd2\x66"
"\x6b\x73\xb4\xb9\xda\x3c\x6d\xbe\xb6\x40\x5b\xa8\x2d\xd2\x16\x6b\x4b"
"\xb4\x04\x6d\xa9\x96\xa8\x2d\xd3\x92\xb4\xe5\xda\x0a\x6d\xa5\xb6\x4a"
"\x5b\xad\xad\xd1\xd6\x6a\xeb\xb4\xf5\xda\x06\x6d\xa3\xb6\x49\xdb\xac"
"\x6d\xd1\xb6\x6a\xdb\xb4\xed\x1a\xa6\xe1\x1a\xa1\x91\x1a\xa5\xd1\x1a"
"\xa3\xb1\x1a\xa7\xf1\x9a\xa0\x89\x9a\xa4\xc9\x9a\xa2\xa9\x9a\xa6\xe9"
"\x9a\xa1\x99\x9a\xa5\xd9\x9a\xa3\xb9\x9a\xa7\xf9\x5a\xa0\x85\x5a\xa4"
"\x01\x0d\x6a\x48\x8b\x69\x3b\xb4\x9d\xda\x2e\x6d\xb7\xb6\x47\xdb\xab"
"\xed\xd3\xf6\x6b\x07\xb4\x83\xda\x21\xed\xb0\x76\x44\x3b\xaa\x1d\xd3"
"\x8e\x6b\x27\xb4\x93\xda\x29\xed\xb4\x76\x46\x3b\xab\x9d\xd3\xce\x6b"
"\x17\xb4\x8b\xda\x25\xed\xb2\x76\x45\xbb\xaa\x5d\xd3\xae\x6b\x37\xb4"
"\x9b\xda\x2d\xed\xb6\x76\x47\xbb\xab\xdd\xd3\xee\x6b\x0f\xb4\x87\xda"
"\x23\xed\xb1\xf6\x44\x7b\xaa\x3d\xd3\x9e\x6b\x2f\xb4\x97\xda\x2b\xed"
"\xb5\xf6\x46\x7b\xab\xbd\xd3\xde\x6b\x1f\xb4\x8f\xda\x27\xed\xb3\xf6"
"\x45\xfb\xaa\x7d\xd3\xbe\x6b\x3f\xb4\x9f\xda\x2f\xed\xb7\xf6\x47\xfb"
"\xab\xfd\xd3\x92\xe9\xc9\xf5\x14\x7a\x4a\x3d\x95\x9e\x5a\x4f\xa3\xa7"
"\xd5\xd3\xe9\xe9\xf5\x0c\x7a\x46\x3d\x93\x9e\x59\xcf\xa2\x67\xd5\xb3"
"\xe9\xd9\xf5\x1c\x7a\x4e\x3d\x97\x9e\x5b\xcf\xa3\xe7\xd5\xf3\xe9\xf9"
"\xf5\x02\x7a\x41\xbd\x90\x5e\x58\x2f\xa2\x17\xd5\x8b\xe9\xc5\xf5\x12"
"\x7a\x49\xbd\x94\x5e\x5a\x2f\xa3\x97\xd5\xcb\xe9\xe5\xf5\x0a\x7a\x45"
"\xbd\x92\x5e\x59\xaf\xa2\x57\xd5\xab\xe9\xd5\xf5\x1a\x7a\x4d\xbd\x96"
"\x5e\x5b\xaf\xa3\xd7\xd5\xeb\xe9\xf5\xf5\x06\x7a\x43\xbd\x91\xfe\x9f"
"\xde\x58\x6f\xa2\x37\xd5\x9b\xe9\x71\x7a\x73\xbd\x85\xde\x52\x6f\xa5"
"\xb7\xd6\xdb\xe8\x6d\xf5\x76\x7a\x7b\xbd\x83\xde\x51\xef\xa4\x77\xd6"
"\xbb\xe8\x5d\xf5\x6e\x7a\x77\xbd\x87\xde\x53\x8f\xd7\x7b\xe9\xbd\xf5"
"\x3e\x7a\x5f\xbd\x9f\xde\x5f\x1f\xa0\x0f\xd4\x07\xe9\x83\xf5\x21\xfa"
"\x50\x7d\x98\x3e\x5c\x1f\xa1\x8f\xd4\x47\xe9\xa3\xf5\x31\xfa\x58\x7d"
"\x9c\x3e\x5e\x9f\xa0\x4f\xd4\x27\xe9\x93\xf5\x29\xfa\x54\x7d\x9a\x3e"
"\x5d\x9f\xa1\xcf\xd4\x67\xe9\xb3\xf5\x39\xfa\x5c\x7d\x9e\x3e\x5f\x5f"
"\xa0\x2f\xd4\x17\xe9\x8b\xf5\x25\x7a\x82\xbe\x54\x4f\xd4\x97\xe9\x49"
"\xfa\x72\x7d\x85\xbe\x52\x5f\xa5\xaf\xd6\xd7\xe8\x6b\xf5\x75\xfa\x7a"
"\x7d\x83\xbe\x51\xdf\xa4\x6f\xd6\xb7\xe8\x5b\xf5\x6d\xfa\x76\x1d\xd3"
"\x71\x9d\xd0\x49\x9d\xd2\x69\x9d\xd1\x59\x9d\xd3\x79\x5d\xd0\x45\x5d"
"\xd2\x65\x5d\xd1\x55\x5d\xd3\x75\xdd\xd0\x4d\xdd\xd2\x6d\xdd\xd1\x5d"
"\xdd\xd3\x7d\x3d\xd0\x43\x3d\xd2\x81\x0e\x75\xa4\xc7\xf4\x1d\xfa\x4e"
"\x7d\x97\xbe\x5b\xdf\xa3\xef\xd5\xf7\xe9\xfb\xf5\x03\xfa\x41\xfd\x90"
"\x7e\x58\x3f\xa2\x1f\xd5\x8f\xe9\xc7\xf5\x13\xfa\x49\xfd\x94\x7e\x5a"
"\x3f\xa3\x9f\xd5\xcf\xe9\xe7\xf5\x0b\xfa\x45\xfd\x92\x7e\x59\xbf\xa2"
"\x5f\xd5\xaf\xe9\xd7\xf5\x1b\xfa\x4d\xfd\x96\x7e\x5b\xbf\xa3\xdf\xd5"
"\xef\xe9\xf7\xf5\x07\xfa\x43\xfd\x91\xfe\x58\x7f\xa2\x3f\xd5\x9f\xe9"
"\xcf\xf5\x17\xfa\x4b\xfd\x95\xfe\x5a\x7f\xa3\xbf\xd5\xdf\xe9\xef\xf5"
"\x0f\xfa\x47\xfd\x93\xfe\x59\xff\xa2\x7f\xd5\xbf\xe9\xdf\xf5\x1f\xfa"
"\x4f\xfd\x97\xfe\x5b\xff\xa3\xff\xd5\xff\xe9\xc9\x8c\xe4\x46\x0a\x23"
"\xa5\x91\xca\x48\x6d\xa4\x31\xd2\x1a\xe9\x8c\xf4\x46\x06\x23\xa3\x91"
"\xc9\xc8\x6c\x64\x31\xb2\x1a\xd9\x8c\xec\x46\x0e\x23\xa7\x91\xcb\xc8"
"\x6d\xe4\x31\xf2\x1a\xf9\x8c\xfc\x46\x01\xa3\xa0\x51\xc8\x28\x6c\x14"
"\x31\x8a\x1a\xc5\x8c\xe2\x46\x09\xa3\xa4\x51\xca\x28\x6d\x94\x31\xca"
"\x1a\xe5\x8c\xf2\x46\x05\xa3\xa2\x51\xc9\xa8\x6c\x54\x31\xaa\x1a\xd5"
"\x8c\xea\x46\x0d\xa3\xa6\x51\xcb\xa8\x6d\xd4\x31\xea\x1a\xf5\x8c\xfa"
"\x46\x03\xa3\xa1\xd1\xc8\xf8\xcf\x68\x6c\x34\x31\x9a\x1a\xcd\x8c\x38"
"\xa3\xb9\xd1\xc2\x68\x69\xb4\x32\x5a\x1b\x6d\x8c\xb6\x46\x3b\xa3\xbd"
"\xd1\xc1\xe8\x68\x74\x32\x3a\x1b\x5d\x8c\xae\x46\x37\xa3\xbb\xd1\xc3"
"\xe8\x69\xc4\x1b\xbd\x8c\xde\x46\x1f\xa3\xaf\xd1\xcf\xe8\x6f\x0c\x30"
"\x06\x1a\x83\x8c\xc1\xc6\x10\x63\xa8\x31\xcc\x18\x6e\x8c\x30\x46\x1a"
"\xa3\x8c\xd1\xc6\x18\x63\xac\x31\xce\x18\x6f\x4c\x30\x26\x1a\x93\x8c"
"\xc9\xc6\x14\x63\xaa\x31\xcd\x98\x6e\xcc\x30\x66\x1a\xb3\x8c\xd9\xc6"
"\x1c\x63\xae\x31\xcf\x98\x6f\x2c\x30\x16\x1a\x8b\x8c\xc5\xc6\x12\x23"
"\xc1\x58\x6a\x24\x1a\xcb\x8c\x24\x63\xb9\xb1\xc2\x58\x69\xac\x32\x56"
"\x1b\x6b\x8c\xb5\xc6\x3a\x63\xbd\xb1\xc1\xd8\x68\x6c\x32\x36\x1b\x5b"
"\x8c\xad\xc6\x36\x63\xbb\x81\x19\xb8\x41\x18\xa4\x41\x19\xb4\xc1\x18"
"\xac\xc1\x19\xbc\x21\x18\xa2\x21\x19\xb2\xa1\x18\xaa\xa1\x19\xba\x61"
"\x18\xa6\x61\x19\xb6\xe1\x18\xae\xe1\x19\xbe\x11\x18\xa1\x11\x19\xc0"
"\x80\x06\x32\x62\xc6\x0e\x63\xa7\xb1\xcb\xd8\x6d\xec\x31\xf6\x1a\xfb"
"\x8c\xfd\xc6\x01\xe3\xa0\x71\xc8\x38\x6c\x1c\x31\x8e\x1a\xc7\x8c\xe3"
"\xc6\x09\xe3\xa4\x71\xca\x38\x6d\x9c\x31\xce\x1a\xe7\x8c\xf3\xc6\x05"
"\xe3\xa2\x71\xc9\xb8\x6c\x5c\x31\xae\x1a\xd7\x8c\xeb\xc6\x0d\xe3\xa6"
"\x71\xcb\xb8\x6d\xdc\x31\xee\x1a\xf7\x8c\xfb\xc6\x03\xe3\xa1\xf1\xc8"
"\x78\x6c\x3c\x31\x9e\x1a\xcf\x8c\xe7\xc6\x0b\xe3\xa5\xf1\xca\x78\x6d"
"\xbc\x31\xde\x1a\xef\x8c\xf7\xc6\x07\xe3\xa3\xf1\xc9\xf8\x6c\x7c\x31"
"\xbe\x1a\xdf\x8c\xef\xc6\x0f\xe3\xa7\xf1\xcb\xf8\x6d\xfc\x31\xfe\x1a"
"\xff\x8c\x64\x66\x72\x33\x85\x99\xd2\x4c\x65\xa6\x36\xd3\x98\x69\xcd"
"\x74\x66\x7a\x33\x83\x99\xd1\xcc\x64\x66\x36\xb3\x98\x59\xcd\x6c\x66"
"\x76\x33\x87\x99\xd3\xcc\x65\xe6\x36\xf3\x98\x79\xcd\x7c\x66\x7e\xb3"
"\x80\x59\xd0\x2c\x64\x16\x36\x8b\x98\x45\xcd\x62\x66\x71\xb3\x84\x59"
"\xd2\x2c\x65\x96\x36\xcb\x98\x65\xcd\x72\x66\x79\xb3\x82\x59\xd1\xac"
"\x64\x56\x36\xab\x98\x55\xcd\x6a\x66\x75\xb3\x86\x59\xd3\xac\x65\xd6"
"\x36\xeb\x98\x75\xcd\x7a\x66\x7d\xb3\x81\xd9\xd0\x6c\x64\xfe\x67\x36"
"\x36\x9b\x98\x4d\xcd\x66\x66\x9c\xd9\xdc\x6c\x61\xb6\x34\x5b\x99\xad"
"\xcd\x36\x66\x5b\xb3\x9d\xd9\xde\xec\x60\x76\x34\x3b\x99\x9d\xcd\x2e"
"\x66\x57\xb3\x9b\xd9\xdd\xec\x61\xf6\x34\xe3\xcd\x5e\x66\x6f\xb3\x8f"
"\xd9\xd7\xec\x67\xf6\x37\x07\x98\x03\xcd\x41\xe6\x60\x73\x88\x39\xd4"
"\x1c\x66\x0e\x37\x47\x98\x23\xcd\x51\xe6\x68\x73\x8c\x39\xd6\x1c\x67"
"\x8e\x37\x27\x98\x13\xcd\x49\xe6\x64\x73\x8a\x39\xd5\x9c\x66\x4e\x37"
"\x67\x98\x33\xcd\x59\xe6\x6c\x73\x8e\x39\xd7\x9c\x67\xce\x37\x17\x98"
"\x0b\xcd\x45\xe6\x62\x73\x89\x99\x60\x2e\x35\x13\xcd\x65\x66\x92\xb9"
"\xdc\x5c\x61\xae\x34\x57\x99\xab\xcd\x35\xe6\x5a\x73\x9d\xb9\xde\xdc"
"\x60\x6e\x34\x37\x99\x9b\xcd\x2d\xe6\x56\x73\x9b\xb9\xdd\xc4\x4c\xdc"
"\x24\x4c\xd2\xa4\x4c\xda\x64\x4c\xd6\xe4\x4c\xde\x14\x4c\xd1\x94\x4c"
"\xd9\x54\x4c\xd5\xd4\x4c\xdd\x34\x4c\xd3\xb4\x4c\xdb\x74\x4c\xd7\xf4"
"\x4c\xdf\x0c\xcc\xd0\x8c\x4c\x60\x42\x13\x99\x31\x73\x87\xb9\xd3\xdc"
"\x65\xee\x36\xf7\x98\x7b\xcd\x7d\xe6\x7e\xf3\x80\x79\xd0\x3c\x64\x1e"
"\x36\x8f\x98\x47\xcd\x63\xe6\x71\xf3\x84\x79\xd2\x3c\x65\x9e\x36\xcf"
"\x98\x67\xcd\x73\xe6\x79\xf3\x82\x79\xd1\xbc\x64\x5e\x36\xaf\x98\x57"
"\xcd\x6b\xe6\x75\xf3\x86\x79\xd3\xbc\x65\xde\x36\xef\x98\x77\xcd\x7b"
"\xe6\x7d\xf3\x81\xf9\xd0\x7c\x64\x3e\x36\x9f\x98\x4f\xcd\x67\xe6\x73"
"\xf3\x85\xf9\xd2\x7c\x65\xbe\x36\xdf\x98\x6f\xcd\x77\xe6\x7b\xf3\x83"
"\xf9\xd1\xfc\x64\x7e\x36\xbf\x98\x5f\xcd\x6f\xe6\x77\xf3\x87\xf9\xd3"
"\xfc\x65\xfe\x36\xff\x98\x7f\xcd\x7f\x66\x32\x2b\xb9\x95\xc2\x4a\x69"
"\xa5\xb2\x52\x5b\x69\xac\xb4\x56\x3a\x2b\xbd\x95\xc1\xca\x68\x65\xb2"
"\x32\x5b\x59\xac\xac\x56\x36\x2b\xbb\x95\xc3\xca\x69\xe5\xb2\x72\x5b"
"\x79\xac\xbc\x56\x3e\x2b\xbf\x55\xc0\x2a\x68\x15\xb2\x0a\x5b\x45\xac"
"\xa2\x56\x31\xab\xb8\x55\xc2\x2a\x69\x95\xb2\x4a\x5b\x65\xac\xb2\x56"
"\x39\xab\xbc\x55\xc1\xaa\x68\x55\xb2\x2a\x5b\x55\xac\xaa\x56\x35\xab"
"\xba\x55\xc3\xaa\x69\xd5\xb2\x6a\x5b\x75\xac\xba\x56\x3d\xab\xbe\xd5"
"\xc0\x6a\x68\x35\xb2\xfe\xb3\x1a\x5b\x4d\xac\xa6\x56\x33\x2b\xce\x6a"
"\x6e\xb5\xb0\x5a\x5a\xad\xac\xd6\x56\x1b\xab\xad\xd5\xce\x6a\x6f\x75"
"\xb0\x3a\x5a\x9d\xac\xce\x56\x17\xab\xab\xd5\xcd\xea\x6e\xf5\xb0\x7a"
"\x5a\xf1\x56\x2f\xab\xb7\xd5\xc7\xea\x6b\xf5\xb3\xfa\x5b\x03\xac\x81"
"\xd6\x20\x6b\xb0\x35\xc4\x1a\x6a\x0d\xb3\x86\x5b\x23\xac\x91\xd6\x28"
"\x6b\xb4\x35\xc6\x1a\x6b\x8d\xb3\xc6\x5b\x13\xac\x89\xd6\x24\x6b\xb2"
"\x35\xc5\x9a\x6a\x4d\xb3\xa6\x5b\x33\xac\x99\xd6\x2c\x6b\xb6\x35\xc7"
"\x9a\x6b\xcd\xb3\xe6\x5b\x0b\xac\x85\xd6\x22\x6b\xb1\xb5\xc4\x4a\xb0"
"\x96\x5a\x89\xd6\x32\x2b\xc9\x5a\x6e\xad\xb0\x56\x5a\xab\xac\xd5\xd6"
"\x1a\x6b\xad\xb5\xce\x5a\x6f\x6d\xb0\x36\x5a\x9b\xac\xcd\xd6\x16\x6b"
"\xab\xb5\xcd\xda\x6e\x61\x16\x6e\x11\x16\x69\x51\x16\x6d\x31\x16\x6b"
"\x71\x16\x6f\x09\x96\x68\x49\x96\x6c\x29\x96\x6a\x69\x96\x6e\x19\x96"
"\x69\x59\x96\x6d\x39\x96\x6b\x79\x96\x6f\x05\x56\x68\x45\x16\xb0\xa0"
"\x85\xac\x98\xb5\xc3\xda\x69\xed\xb2\x76\x5b\x7b\xac\xbd\xd6\x3e\x6b"
"\xbf\x75\xc0\x3a\x68\x1d\xb2\x0e\x5b\x47\xac\xa3\xd6\x31\xeb\xb8\x75"
"\xc2\x3a\x69\x9d\xb2\x4e\x5b\x67\xac\xb3\xd6\x39\xeb\xbc\x75\xc1\xba"
"\x68\x5d\xb2\x2e\x5b\x57\xac\xab\xd6\x35\xeb\xba\x75\xc3\xba\x69\xdd"
"\xb2\x6e\x5b\x77\xac\xbb\xd6\x3d\xeb\xbe\xf5\xc0\x7a\x68\x3d\xb2\x1e"
"\x5b\x4f\xac\xa7\xd6\x33\xeb\xb9\xf5\xc2\x7a\x69\xbd\xb2\x5e\x5b\x6f"
"\xac\xb7\xd6\x3b\xeb\xbd\xf5\xc1\xfa\x68\x7d\xb2\x3e\x5b\x5f\xac\xaf"
"\xd6\x37\xeb\xbb\xf5\xc3\xfa\x69\xfd\xb2\x7e\x5b\x7f\xac\xbf\xd6\x3f"
"\x2b\x99\x9d\xdc\x4e\x61\xa7\xb4\x53\xd9\xa9\xed\x34\x76\x5a\x3b\x9d"
"\x9d\xde\xce\x60\x67\xb4\x33\xd9\x99\xed\x2c\x76\x56\x3b\x9b\x9d\xdd"
"\xce\x61\xe7\xb4\x73\xd9\xb9\xed\x3c\x76\x5e\x3b\x9f\x9d\xdf\x2e\x60"
"\x17\xb4\x0b\xd9\x85\xed\x22\x76\x51\xbb\x98\x5d\xdc\x2e\x61\x97\xb4"
"\x4b\xd9\xa5\xed\x32\x76\x59\xbb\x9c\x5d\xde\xae\x60\x57\xb4\x2b\xd9"
"\x95\xed\x2a\x76\x55\xbb\x9a\x5d\xdd\xae\x61\xd7\xb4\x6b\xd9\xb5\xed"
"\x3a\x76\x5d\xbb\x9e\x5d\xdf\x6e\x60\x37\xb4\x1b\xd9\xff\xd9\x8d\xed"
"\x26\x76\x53\xbb\x99\x1d\x67\x37\xb7\x5b\xd8\x2d\xed\x56\x76\x6b\xbb"
"\x8d\xdd\xd6\x6e\x67\xb7\xb7\x3b\xd8\x1d\xed\x4e\x76\x67\xbb\x8b\xdd"
"\xd5\xee\x66\x77\xb7\x7b\xd8\x3d\xed\x78\xbb\x97\xdd\xdb\xee\x63\xf7"
"\xb5\xfb\xd9\xfd\xed\x01\xf6\x40\x7b\x90\x3d\xd8\x1e\x62\x0f\xb5\x87"
"\xd9\xc3\xed\x11\xf6\x48\x7b\x94\x3d\xda\x1e\x63\x8f\xb5\xc7\xd9\xe3"
"\xed\x09\xf6\x44\x7b\x92\x3d\xd9\x9e\x62\x4f\xb5\xa7\xd9\xd3\xed\x19"
"\xf6\x4c\x7b\x96\x3d\xdb\x9e\x63\xcf\xb5\xe7\xd9\xf3\xed\x05\xf6\x42"
"\x7b\x91\xbd\xd8\x5e\x62\x27\xd8\x4b\xed\x44\x7b\x99\x9d\x64\x2f\xb7"
"\x57\xd8\x2b\xed\x55\xf6\x6a\x7b\x8d\xbd\xd6\x5e\x67\xaf\xb7\x37\xd8"
"\x1b\xed\x4d\xf6\x66\x7b\x8b\xbd\xd5\xde\x66\x6f\xb7\x31\x1b\xb7\x09"
"\x9b\xb4\x29\x9b\xb6\x19\x9b\xb5\x39\x9b\xb7\x05\x5b\xb4\x25\x5b\xb6"
"\x15\x5b\xb5\x35\x5b\xb7\x0d\xdb\xb4\x2d\xdb\xb6\x1d\xdb\xb5\x3d\xdb"
"\xb7\x03\x3b\xb4\x23\x1b\xd8\xd0\x46\x76\xcc\xde\x61\xef\xb4\x77\xd9"
"\xbb\xed\x3d\xf6\x5e\x7b\x9f\xbd\xdf\x3e\x60\x1f\xb4\x0f\xd9\x87\xed"
"\x23\xf6\x51\xfb\x98\x7d\xdc\x3e\x61\x9f\xb4\x4f\xd9\xa7\xed\x33\xf6"
"\x59\xfb\x9c\x7d\xde\xbe\x60\x5f\xb4\x2f\xd9\x97\xed\x2b\xf6\x55\xfb"
"\x9a\x7d\xdd\xbe\x61\xdf\xb4\x6f\xd9\xb7\xed\x3b\xf6\x5d\xfb\x9e\x7d"
"\xdf\x7e\x60\x3f\xb4\x1f\xd9\x8f\xed\x27\xf6\x53\xfb\x99\xfd\xdc\x7e"
"\x61\xbf\xb4\x5f\xd9\xaf\xed\x37\xf6\x5b\xfb\x9d\xfd\xde\xfe\x60\x7f"
"\xb4\x3f\xd9\x9f\xed\x2f\xf6\x57\xfb\x9b\xfd\xdd\xfe\x61\xff\xb4\x7f"
"\xd9\xbf\xed\x3f\xf6\x5f\xfb\x9f\x9d\xcc\x49\xee\xa4\x70\x52\x3a\xa9"
"\x9c\xd4\x4e\x1a\x27\xad\x93\xce\x49\xef\x64\x70\x32\x3a\x99\x9c\xcc"
"\x4e\x16\x27\xab\x93\xcd\xc9\xee\xe4\x70\x72\x3a\xb9\x9c\xdc\x4e\x1e"
"\x27\xaf\x93\xcf\xc9\xef\x14\x70\x0a\x3a\x85\x9c\xc2\x4e\x11\xa7\xa8"
"\x53\xcc\x29\xee\x94\x70\x4a\x3a\xa5\x9c\xd2\x4e\x19\xa7\xac\x53\xce"
"\x29\xef\x54\x70\x2a\x3a\x95\x9c\xca\x4e\x15\xa7\xaa\x53\xcd\xa9\xee"
"\xd4\x70\x6a\x3a\xb5\x9c\xda\x4e\x1d\xa7\xae\x53\xcf\xa9\xef\x34\x70"
"\x1a\x3a\x8d\x9c\xff\x9c\xc6\x4e\x13\xa7\xa9\xd3\xcc\x89\x73\x9a\x3b"
"\x2d\x9c\x96\x4e\x2b\xa7\xb5\xd3\xc6\x69\xeb\xb4\x73\xda\x3b\x1d\x9c"
"\x8e\x4e\x27\xa7\xb3\xd3\xc5\xe9\xea\x74\x73\xba\x3b\x3d\x9c\x9e\x4e"
"\xbc\xd3\xcb\xe9\xed\xf4\x71\xfa\x3a\xfd\x9c\xfe\xce\x00\x67\xa0\x33"
"\xc8\x19\xec\x0c\x71\x86\x3a\xc3\x9c\xe1\xce\x08\x67\xa4\x33\xca\x19"
"\xed\x8c\x71\xc6\x3a\xe3\x9c\xf1\xce\x04\x67\xa2\x33\xc9\x99\xec\x4c"
"\x71\xa6\x3a\xd3\x9c\xe9\xce\x0c\x67\xa6\x33\xcb\x99\xed\xcc\x71\xe6"
"\x3a\xf3\x9c\xf9\xce\x02\x67\xa1\xb3\xc8\x59\xec\x2c\x71\x12\x9c\xa5"
"\x4e\xa2\xb3\xcc\x49\x72\x96\x3b\x2b\x9c\x95\xce\x2a\x67\xb5\xb3\xc6"
"\x59\xeb\xac\x73\xd6\x3b\x1b\x9c\x8d\xce\x26\x67\xb3\xb3\xc5\xd9\xea"
"\x6c\x73\xb6\x3b\x98\x83\x3b\x84\x43\x3a\x94\x43\x3b\x8c\xc3\x3a\x9c"
"\xc3\x3b\x82\x23\x3a\x92\x23\x3b\x8a\xa3\x3a\x9a\xa3\x3b\x86\x63\x3a"
"\x96\x63\x3b\x8e\xe3\x3a\x9e\xe3\x3b\x81\x13\x3a\x91\x03\x1c\xe8\x20"
"\x27\xe6\xec\x70\x76\x3a\xbb\x9c\xdd\xce\x1e\x67\xaf\xb3\xcf\xd9\xef"
"\x1c\x70\x0e\x3a\x87\x9c\xc3\xce\x11\xe7\xa8\x73\xcc\x39\xee\x9c\x70"
"\x4e\x3a\xa7\x9c\xd3\xce\x19\xe7\xac\x73\xce\x39\xef\x5c\x70\x2e\x3a"
"\x97\x9c\xcb\xce\x15\xe7\xaa\x73\xcd\xb9\xee\xdc\x70\x6e\x3a\xb7\x9c"
"\xdb\xce\x1d\xe7\xae\x73\xcf\xb9\xef\x3c\x70\x1e\x3a\x8f\x9c\xc7\xce"
"\x13\xe7\xa9\xf3\xcc\x79\xee\xbc\x70\x5e\x3a\xaf\x9c\xd7\xce\x1b\xe7"
"\xad\xf3\xce\x79\xef\x7c\x70\x3e\x3a\x9f\x9c\xcf\xce\x17\xe7\xab\xf3"
"\xcd\xf9\xee\xfc\x70\x7e\x3a\xbf\x9c\xdf\xce\x1f\xe7\xaf\xf3\xcf\x49"
"\xe6\x26\x77\x53\xb8\x29\xdd\x54\x6e\x6a\x37\x8d\x9b\xd6\x4d\xe7\xa6"
"\x77\x33\xb8\x19\xdd\x4c\x6e\x66\x37\x8b\x9b\xd5\xcd\xe6\x66\x77\x73"
"\xb8\x39\xdd\x5c\x6e\x6e\x37\x8f\x9b\xd7\xcd\xe7\xe6\x77\x0b\xb8\x05"
"\xdd\x42\x6e\x61\xb7\x88\x5b\xd4\x2d\xe6\x16\x77\x4b\xb8\x25\xdd\x52"
"\x6e\x69\xb7\x8c\x5b\xd6\x2d\xe7\x96\x77\x2b\xb8\x15\xdd\x4a\x6e\x65"
"\xb7\x8a\x5b\xd5\xad\xe6\x56\x77\x6b\xb8\x35\xdd\x5a\x6e\x6d\xb7\x8e"
"\x5b\xd7\xad\xe7\xd6\x77\x1b\xb8\x0d\xdd\x46\xee\x7f\x6e\x63\xb7\x89"
"\xdb\xd4\x6d\xe6\xc6\xb9\xcd\xdd\x16\x6e\x4b\xb7\x95\xdb\xda\x6d\xe3"
"\xb6\x75\xdb\xb9\xed\xdd\x0e\x6e\x47\xb7\x93\xdb\xd9\xed\xe2\x76\x75"
"\xbb\xb9\xdd\xdd\x1e\x6e\x4f\x37\xde\xed\xe5\xf6\x76\xfb\xb8\x7d\xdd"
"\x7e\x6e\x7f\x77\x80\x3b\xd0\x1d\xe4\x0e\x76\x87\xb8\x43\xdd\x61\xee"
"\x70\x77\x84\x3b\xd2\x1d\xe5\x8e\x76\xc7\xb8\x63\xdd\x71\xee\x78\x77"
"\x82\x3b\xd1\x9d\xe4\x4e\x76\xa7\xb8\x53\xdd\x69\xee\x74\x77\x86\x3b"
"\xd3\x9d\xe5\xce\x76\xe7\xb8\x73\xdd\x79\xee\x7c\x77\x81\xbb\xd0\x5d"
"\xe4\x2e\x76\x97\xb8\x09\xee\x52\x37\xd1\x5d\xe6\x26\xb9\xcb\xdd\x15"
"\xa1\x9a\x50\x5d\xa8\x21\xd4\x14\x6a\x09\xb5\x85\x3a\x42\x5d\xa1\x9e"
"\x50\x5f\x68\x20\x34\x14\x1a\x09\x8d\x85\x26\x42\x53\xa1\x99\xd0\x5c"
"\x68\x21\xb4\x14\x5a\x09\xad\x85\x36\x42\x5b\xa1\x9d\xd0\x5e\xe8\x20"
"\x74\x14\x3a\x09\x9d\x85\x2e\x42\x57\xa1\x9b\xd0\x5d\xe8\x21\xf4\x14"
"\x7a\x09\xbd\x85\x3e\x42\x5f\xa1\x9f\xd0\x5f\x18\x20\x0c\x14\x06\x09"
"\x83\x85\x21\xc2\x50\x61\x98\x30\x5c\x18\x21\x8c\x14\x46\x09\xa3\x85"
"\x31\xc2\x58\x61\x9c\x30\x5e\x98\x20\x4c\x14\x26\x09\x93\x85\x29\xc2"
"\x54\x61\x9a\x30\x5d\x98\x21\xcc\x14\x66\x09\xb3\x85\x39\xc2\x5c\x61"
"\x9e\x30\x5f\x58\x20\x2c\x14\x16\x09\x8b\x85\x25\xc2\x52\x61\x99\xb0"
"\x5c\x58\x21\xac\x14\x56\x09\xab\x85\x35\xc2\x5a\x61\x9d\xb0\x5e\xd8"
"\x20\x6c\x14\x36\x09\x9b\x85\x2d\xc2\x56\x61\x9b\xb0\x5d\xd8\x21\xec"
"\x14\x76\x09\xbb\x85\x3d\xc2\x5e\x61\x9f\xb0\x5f\x38\x20\x1c\x14\x0e"
"\x09\x87\x85\x23\xc2\x51\xe1\x98\x70\x5c\x38\x21\x9c\x14\x4e\x09\xa7"
"\x85\x33\xc2\x59\xe1\x9c\x70\x5e\xb8\x20\x5c\x14\x2e\x09\x97\x85\x2b"
"\xc2\x55\xe1\x9a\x70\x5d\xb8\x21\xdc\x14\x6e\x09\x98\x80\x0b\x84\x40"
"\x0a\x94\x40\x0b\x8c\xc0\x0a\x9c\xc0\x0b\x82\x20\x0a\x92\x20\x0b\x8a"
"\xa0\x0a\x9a\xa0\x0b\x86\x60\x0a\x96\x60\x0b\x8e\xe0\x0a\x9e\xe0\x0b"
"\x81\x10\x0a\x91\x00\x04\x28\x20\x21\x26\xdc\x16\xee\x08\x77\x85\x7b"
"\xc2\x7d\xe1\x81\xf0\x50\x78\x24\x3c\x16\x9e\x08\x4f\x85\x67\xc2\x73"
"\xe1\x85\xf0\x52\x78\x25\xbc\x16\xde\x08\x6f\x85\x77\xc2\x7b\xe1\x83"
"\xf0\x51\xf8\x24\x7c\x16\xbe\x08\x5f\x85\x6f\xc2\x77\xe1\x87\xf0\x53"
"\xf8\x25\xfc\x16\xe2\x85\x3f\xc2\x5f\xe1\x9f\x90\x20\x24\x0a\x49\xc4"
"\x38\x31\xa9\x98\x4c\x4c\x2e\xa6\x10\x53\x8a\xa9\xc4\xd4\x62\x1a\x31"
"\xad\x98\x4e\x4c\x2f\x66\x10\x33\x8a\x99\xc4\xcc\x62\x16\x31\xab\x98"
"\x4d\xcc\x2e\xe6\x10\x73\x8a\xb9\xc4\xdc\x62\x1e\x31\xaf\x98\x4f\xcc"
"\x2f\x16\x10\x0b\x8a\x85\xc4\xc2\x62\x11\xb1\xa8\x58\x4c\x2c\x2e\x96"
"\x10\x4b\x8a\xa5\xc4\xd2\x62\x19\xb1\xac\x58\x4e\x2c\x2f\x56\x10\x2b"
"\x8a\x95\xc4\xca\x62\x15\xb1\xaa\x58\x4d\xac\x2e\xd6\x10\x6b\x8a\xb5"
"\xc4\xda\x62\x1d\xb1\xae\x58\x4f\xac\x2f\x36\x10\x1b\x8a\x8d\xc4\xc6"
"\x62\x13\xb1\xa9\xd8\x4c\x6c\x2e\xb6\x10\x5b\x8a\xad\xc4\xd6\x62\x1b"
"\xb1\xad\xd8\x4e\x6c\x2f\x76\x10\x3b\x8a\x9d\xc4\xce\x62\x17\xb1\xab"
"\xd8\x4d\xec\x2e\xf6\x10\x7b\x8a\xbd\xc4\xde\x62\x1f\xb1\xaf\xd8\x4f"
"\xec\x2f\x0e\x10\x07\x8a\x83\xc4\xc1\xe2\x10\x71\xa8\x38\x4c\x1c\x2e"
"\x8e\x10\x47\x8a\xa3\xc4\xd1\xe2\x18\x71\xac\x38\x4e\x1c\x2f\x4e\x10"
"\x27\x8a\x93\xc4\xc9\xe2\x14\x71\xaa\x38\x4d\x9c\x2e\xce\x10\x67\x8a"
"\xb3\xc4\xd9\xe2\x1c\x71\xae\x38\x4f\x9c\x2f\x2e\x10\x17\x8a\x8b\xc4"
"\xc5\xe2\x12\x71\xa9\xb8\x4c\x5c\x2e\xae\x10\x57\x8a\xab\xc4\xd5\xe2"
"\x1a\x71\xad\xb8\x4e\x5c\x2f\x6e\x10\x37\x8a\x9b\xc4\xcd\xe2\x16\x71"
"\xab\xb8\x4d\xdc\x2e\xee\x10\x77\x8a\xbb\xc4\xdd\xe2\x1e\x71\xaf\xb8"
"\x4f\xdc\x2f\x1e\x10\x0f\x8a\x87\xc4\xc3\xe2\x11\xf1\xa8\x78\x4c\x3c"
"\x2e\x9e\x10\x4f\x8a\xa7\xc4\xd3\xe2\x19\xf1\xac\x78\x4e\x3c\x2f\x5e"
"\x10\x2f\x8a\x97\xc4\xcb\xe2\x15\xf1\xaa\x78\x4d\xbc\x2e\xde\x10\x6f"
"\x8a\xb7\x44\x4c\xc4\x45\x42\x24\x45\x4a\xa4\x45\x46\x64\x45\x4e\xe4"
"\x45\x41\x14\x45\x49\x94\x45\x45\x54\x45\x4d\xd4\x45\x43\x34\x45\x4b"
"\xb4\x45\x47\x74\x45\x4f\xf4\xc5\x40\x0c\xc5\x48\x04\x22\x14\x91\x18"
"\x13\x6f\x8b\x77\xc4\xbb\xe2\x3d\xf1\xbe\xf8\x40\x7c\x28\x3e\x12\x1f"
"\x8b\x4f\xc4\xa7\xe2\x33\xf1\xb9\xf8\x42\x7c\x29\xbe\x12\x5f\x8b\x6f"
"\xc4\xb7\xe2\x3b\xf1\xbd\xf8\x41\xfc\x28\x7e\x12\x3f\x8b\x5f\xc4\xaf"
"\xe2\x37\xf1\xbb\xf8\x43\xfc\x29\xfe\x12\x7f\x8b\xf1\xe2\x1f\xf1\xaf"
"\xf8\x4f\x4c\x10\x13\xc5\x24\x52\x9c\x94\x54\x4a\x26\x25\x97\x52\x48"
"\x29\xa5\x54\x52\x6a\x29\x8d\x94\x56\x4a\x27\xa5\x97\x32\x48\x19\xa5"
"\x4c\x52\x66\x29\x8b\x94\x55\xca\x26\x65\x97\x72\x48\x39\xa5\x5c\x52"
"\x6e\x29\x8f\x94\x57\xca\x27\xe5\x97\x0a\x48\x05\xa5\x42\x52\x61\xa9"
"\x88\x54\x54\x2a\x26\x15\x97\x4a\x48\x25\xa5\x52\x52\x69\xa9\x8c\x54"
"\x56\x2a\x27\x95\x97\x2a\x48\x15\xa5\x4a\x52\x65\xa9\x8a\x54\x55\xaa"
"\x26\x55\x97\x6a\x48\x35\xa5\x5a\x52\x6d\xa9\x8e\x54\x57\xaa\x27\xd5"
"\x97\x1a\x48\x0d\xa5\x46\x52\x63\xa9\x89\xd4\x54\x6a\x26\x35\x97\x5a"
"\x48\x2d\xa5\x56\x52\x6b\xa9\x8d\xd4\x56\x6a\x27\xb5\x97\x3a\x48\x1d"
"\xa5\x4e\x52\x67\xa9\x8b\xd4\x55\xea\x26\x75\x97\x7a\x48\x3d\xa5\x5e"
"\x52\x6f\xa9\x8f\xd4\x57\xea\x27\xf5\x97\x06\x48\x03\xa5\x41\xd2\x60"
"\x69\x88\x34\x54\x1a\x26\x0d\x97\x46\x48\x23\xa5\x51\xd2\x68\x69\x8c"
"\x34\x56\x1a\x27\x8d\x97\x26\x48\x13\xa5\x49\xd2\x64\x69\x8a\x34\x55"
"\x9a\x26\x4d\x97\x66\x48\x33\xa5\x59\xd2\x6c\x69\x8e\x34\x57\x9a\x27"
"\xcd\x97\x16\x48\x0b\xa5\x45\xd2\x62\x69\x89\xb4\x54\x5a\x26\x2d\x97"
"\x56\x48\x2b\xa5\x55\xd2\x6a\x69\x8d\xb4\x56\x5a\x27\xad\x97\x36\x48"
"\x1b\xa5\x4d\xd2\x66\x69\x8b\xb4\x55\xda\x26\x6d\x97\x76\x48\x3b\xa5"
"\x5d\xd2\x6e\x69\x8f\xb4\x57\xda\x27\xed\x97\x0e\x48\x07\xa5\x43\xd2"
"\x61\xe9\x88\x74\x54\x3a\x26\x1d\x97\x4e\x48\x27\xa5\x53\xd2\x69\xe9"
"\x8c\x74\x56\x3a\x27\x9d\x97\x2e\x48\x17\xa5\x4b\xd2\x65\xe9\x8a\x74"
"\x55\xba\x26\x5d\x97\x6e\x48\x37\xa5\x5b\x12\x26\xe1\x12\x21\x91\x12"
"\x25\xd1\x12\x23\xb1\x12\x27\xf1\x92\x20\x89\x92\x24\xc9\x92\x22\xa9"
"\x92\x26\xe9\x92\x21\x99\x92\x25\xd9\x92\x23\xb9\x92\x27\xf9\x52\x20"
"\x85\x52\x24\x01\x09\x4a\x48\x8a\x49\xb7\xa5\x3b\xd2\x5d\xe9\x9e\x74"
"\x5f\x7a\x20\x3d\x94\x1e\x49\x8f\xa5\x27\xd2\x53\xe9\x99\xf4\x5c\x7a"
"\x21\xbd\x94\x5e\x49\xaf\xa5\x37\xd2\x5b\xe9\x9d\xf4\x5e\xfa\x20\x7d"
"\x94\x3e\x49\x9f\xa5\x2f\xd2\x57\xe9\x9b\xf4\x5d\xfa\x21\xfd\x94\x7e"
"\x49\xbf\xa5\x78\xe9\x8f\xf4\x57\xfa\x27\x25\x48\x89\x52\x12\x39\x4e"
"\x4e\x2a\x27\x93\x93\xcb\x29\xe4\x94\x72\x2a\x39\xb5\x9c\x46\x4e\x2b"
"\xa7\x93\xd3\xcb\x19\xe4\x8c\x72\x26\x39\xb3\x9c\x45\xce\x2a\x67\x93"
"\xb3\xcb\x39\xe4\x9c\x72\x2e\x39\xb7\x9c\x47\xce\x2b\xe7\x93\xf3\xcb"
"\x05\xe4\x82\x72\x21\xb9\xb0\x5c\x44\x2e\x2a\x17\x93\x8b\xcb\x25\xe4"
"\x92\x72\x29\xb9\xb4\x5c\x46\x2e\x2b\x97\x93\xcb\xcb\x15\xe4\x8a\x72"
"\x25\xb9\xb2\x5c\x45\xae\x2a\x57\x93\xab\xcb\x35\xe4\x9a\x72\x2d\xb9"
"\xb6\x5c\x47\xae\x2b\xd7\x93\xeb\xcb\x0d\xe4\x86\x72\x23\xb9\xb1\xdc"
"\x44\x6e\x2a\x37\x93\x9b\xcb\x2d\xe4\x96\x72\x2b\xb9\xb5\xdc\x46\x6e"
"\x2b\xb7\x93\xdb\xcb\x1d\xe4\x8e\x72\x27\xb9\xb3\xdc\x45\xee\x2a\x77"
"\x93\xbb\xcb\x3d\xe4\x9e\x72\x2f\xb9\xb7\xdc\x47\xee\x2b\xf7\x93\xfb"
"\xcb\x03\xe4\x81\xf2\x20\x79\xb0\x3c\x44\x1e\x2a\x0f\x93\x87\xcb\x23"
"\xe4\x91\xf2\x28\x79\xb4\x3c\x46\x1e\x2b\x8f\x93\xc7\xcb\x13\xe4\x89"
"\xf2\x24\x79\xb2\x3c\x45\x9e\x2a\x4f\x93\xa7\xcb\x33\xe4\x99\xf2\x2c"
"\x79\xb6\x3c\x47\x9e\x2b\xcf\x93\xe7\xcb\x0b\xe4\x85\xf2\x22\x79\xb1"
"\xbc\x44\x5e\x2a\x2f\x93\x97\xcb\x2b\xe4\x95\xf2\x2a\x79\xb5\xbc\x46"
"\x5e\x2b\xaf\x93\xd7\xcb\x1b\xe4\x8d\xf2\x26\x79\xb3\xbc\x45\xde\x2a"
"\x6f\x93\xb7\xcb\x3b\xe4\x9d\xf2\x2e\x79\xb7\xbc\x47\xde\x2b\xef\x93"
"\xf7\xcb\x07\xe4\x83\xf2\x21\xf9\xb0\x7c\x44\x3e\x2a\x1f\x93\x8f\xcb"
"\x27\xe4\x93\xf2\x29\xf9\xb4\x7c\x46\x3e\x2b\x9f\x93\xcf\xcb\x17\xe4"
"\x8b\xf2\x25\xf9\xb2\x7c\x45\xbe\x2a\x5f\x93\xaf\xcb\x37\xe4\x9b\xf2"
"\x2d\x19\x93\x71\x99\x90\x49\x99\x92\x69\x99\x91\x59\x99\x93\x79\x59"
"\x90\x45\x59\x92\x65\x59\x91\x55\x59\x93\x75\xd9\x90\x4d\xd9\x92\x6d"
"\xd9\x91\x5d\xd9\x93\x7d\x39\x90\x43\x39\x92\x81\x0c\x65\x24\xc7\xe4"
"\xdb\xf2\x1d\xf9\xae\x7c\x4f\xbe\x2f\x3f\x90\x1f\xca\x8f\xe4\xc7\xf2"
"\x13\xf9\xa9\xfc\x4c\x7e\x2e\xbf\x90\x5f\xca\xaf\xe4\xd7\xf2\x1b\xf9"
"\xad\xfc\x4e\x7e\x2f\x7f\x90\x3f\xca\x9f\xe4\xcf\xf2\x17\xf9\xab\xfc"
"\x4d\xfe\x2e\xff\x90\x7f\xca\xbf\xe4\xdf\x72\xbc\xfc\x47\xfe\x2b\xff"
"\x93\x13\xe4\x44\x39\x89\x12\xa7\x24\x55\x92\x29\xc9\x95\x14\x4a\x4a"
"\x25\x95\x92\x5a\x49\xa3\xa4\x55\xd2\x29\xe9\x95\x0c\x4a\x46\x25\x93"
"\x92\x59\xc9\xa2\x64\x55\xb2\x29\xd9\x95\x1c\x4a\x4e\x25\x97\x92\x5b"
"\xc9\xa3\xe4\x55\xf2\x29\xf9\x95\x02\x4a\x41\xa5\x90\x52\x58\x29\xa2"
"\x14\x55\x8a\x29\xc5\x95\x12\x4a\x49\xa5\x94\x52\x5a\x29\xa3\x94\x55"
"\xca\x29\xe5\x95\x0a\x4a\x45\xa5\x92\x52\x59\xa9\xa2\x54\x55\xaa\x29"
"\xd5\x95\x1a\x4a\x4d\xa5\x96\x52\x5b\xa9\xa3\xd4\x55\xea\x29\xf5\x95"
"\x06\x4a\x43\xa5\x91\xd2\x58\x69\xa2\x34\x55\x9a\x29\xcd\x95\x16\x4a"
"\x4b\xa5\x95\xd2\x5a\x69\xa3\xb4\x55\xda\x29\xed\x95\x0e\x4a\x47\xa5"
"\x93\xd2\x59\xe9\xa2\x74\x55\xba\x29\xdd\x95\x1e\x4a\x4f\xa5\x97\xd2"
"\x5b\xe9\xa3\xf4\x55\xfa\x29\xfd\x95\x01\xca\x40\x65\x90\x32\x58\x19"
"\xa2\x0c\x55\x86\x29\xc3\x95\x11\xca\x48\x65\x94\x32\x5a\x19\xa3\x8c"
"\x55\xc6\x29\xe3\x95\x09\xca\x44\x65\x92\x32\x59\x99\xa2\x4c\x55\xa6"
"\x29\xd3\x95\x19\xca\x4c\x65\x96\x32\x5b\x99\xa3\xcc\x55\xe6\x29\xf3"
"\x95\x05\xca\x42\x65\x91\xb2\x58\x59\xa2\x2c\x55\x96\x29\xcb\x95\x15"
"\xca\x4a\x65\x95\xb2\x5a\x59\xa3\xac\x55\xd6\x29\xeb\x95\x0d\xca\x46"
"\x65\x93\xb2\x59\xd9\xa2\x6c\x55\xb6\x29\xdb\x95\x1d\xca\x4e\x65\x97"
"\xb2\x5b\xd9\xa3\xec\x55\xf6\x29\xfb\x95\x03\xca\x41\xe5\x90\x72\x58"
"\x39\xa2\x1c\x55\x8e\x29\xc7\x95\x13\xca\x49\xe5\x94\x72\x5a\x39\xa3"
"\x9c\x55\xce\x29\xe7\x95\x0b\xca\x45\xe5\x92\x72\x59\xb9\xa2\x5c\x55"
"\xae\x29\xd7\x95\x1b\xca\x4d\xe5\x96\x82\x29\xb8\x42\x28\xa4\x42\x29"
"\xb4\xc2\x28\xac\xc2\x29\xbc\x22\x28\xa2\x22\x29\xb2\xa2\x28\xaa\xa2"
"\x29\xba\x62\x28\xa6\x62\x29\xb6\xe2\x28\xae\xe2\x29\xbe\x12\x28\xa1"
"\x12\x29\x40\x81\x0a\x52\x62\xca\x6d\xe5\x8e\x72\x57\xb9\xa7\xdc\x57"
"\x1e\x28\x0f\x95\x47\xca\x63\xe5\x89\xf2\x54\x79\xa6\x3c\x57\x5e\x28"
"\x2f\x95\x57\xca\x6b\xe5\x8d\xf2\x56\x79\xa7\xbc\x57\x3e\x28\x1f\x95"
"\x4f\xca\x67\xe5\x8b\xf2\x55\xf9\xa6\x7c\x57\x7e\x28\x3f\x95\x5f\xca"
"\x6f\x25\x5e\xf9\xa3\xfc\x55\xfe\x29\x09\x4a\xa2\x92\x44\x8d\x53\x93"
"\xaa\xc9\xd4\xe4\x6a\x0a\x35\xa5\x9a\x4a\x4d\xad\xa6\x51\xd3\xaa\xe9"
"\xd4\xf4\x6a\x06\x35\xa3\x9a\x49\xcd\xac\x66\x51\xb3\xaa\xd9\xd4\xec"
"\x6a\x0e\x35\xa7\x9a\x4b\xcd\xad\xe6\x51\xf3\xaa\xf9\xd4\xfc\x6a\x01"
"\xb5\xa0\x5a\x48\x2d\xac\x16\x51\x8b\xaa\xc5\xd4\xe2\x6a\x09\xb5\xa4"
"\x5a\x4a\x2d\xad\x96\x51\xcb\xaa\xe5\xd4\xf2\x6a\x05\xb5\xa2\x5a\x49"
"\xad\xac\x56\x51\xab\xaa\xd5\xd4\xea\x6a\x0d\xb5\xa6\x5a\x4b\xad\xad"
"\xd6\x51\xeb\xaa\xf5\xd4\xfa\x6a\x03\xb5\xa1\xda\x48\x6d\xac\x36\x51"
"\x9b\xaa\xcd\xd4\xe6\x6a\x0b\xb5\xa5\xda\x4a\x6d\xad\xb6\x51\xdb\xaa"
"\xed\xd4\xf6\x6a\x07\xb5\xa3\xda\x49\xed\xac\x76\x51\xbb\xaa\xdd\xd4"
"\xee\x6a\x0f\xb5\xa7\xda\x4b\xed\xad\xf6\x51\xfb\xaa\xfd\xd4\xfe\xea"
"\x00\x75\xa0\x3a\x48\x1d\xac\x0e\x51\x87\xaa\xc3\xd4\xe1\xea\x08\x75"
"\xa4\x3a\x4a\x1d\xad\x8e\x51\xc7\xaa\xe3\xd4\xf1\xea\x04\x75\xa2\x3a"
"\x49\x9d\xac\x4e\x51\xa7\xaa\xd3\xd4\xe9\xea\x0c\x75\xa6\x3a\x4b\x9d"
"\xad\xce\x51\xe7\xaa\xf3\xd4\xf9\xea\x02\x75\xa1\xba\x48\x5d\xac\x2e"
"\x51\x97\xaa\xcb\xd4\xe5\xea\x0a\x75\xa5\xba\x4a\x5d\xad\xae\x51\xd7"
"\xaa\xeb\xd4\xf5\xea\x06\x75\xa3\xba\x49\xdd\xac\x6e\x51\xb7\xaa\xdb"
"\xd4\xed\xea\x0e\x75\xa7\xba\x4b\xdd\xad\xee\x51\xf7\xaa\xfb\xd4\xfd"
"\xea\x01\xf5\xa0\x7a\x48\x3d\xac\x1e\x51\x8f\xaa\xc7\xd4\xe3\xea\x09"
"\xf5\xa4\x7a\x4a\x3d\xad\x9e\x51\xcf\xaa\xe7\xd4\xf3\xea\x05\xf5\xa2"
"\x7a\x49\xbd\xac\x5e\x51\xaf\xaa\xd7\xd4\xeb\xea\x0d\xf5\xa6\x7a\x4b"
"\xc5\x54\x5c\x25\x54\x52\xa5\x54\x5a\x65\x54\x56\xe5\x54\x5e\x15\x54"
"\x51\x95\x54\x59\x55\x54\x55\xd5\x54\x5d\x35\x54\x53\xb5\x54\x5b\x75"
"\x54\x57\xf5\x54\x5f\x0d\xd4\x50\x8d\x54\xa0\x42\x15\xa9\x31\xf5\xb6"
"\x7a\x47\xbd\xab\xde\x53\xef\xab\x0f\xd4\x87\xea\x23\xf5\xb1\xfa\x44"
"\x7d\xaa\x3e\x53\x9f\xab\x2f\xd4\x97\xea\x2b\xf5\xb5\xfa\x46\x7d\xab"
"\xbe\x53\xdf\xab\x1f\xd4\x8f\xea\x27\xf5\xb3\xfa\x45\xfd\xaa\x7e\x53"
"\xbf\xab\x3f\xd4\x9f\xea\x2f\xf5\xb7\x1a\xaf\xfe\x51\xff\xaa\xff\xd4"
"\x04\x35\x51\x4d\xa2\xc5\x69\x49\xb5\x64\x5a\x72\x2d\x85\x96\x52\x4b"
"\xa5\xa5\xd6\xd2\x68\x69\xb5\x74\x5a\x7a\x2d\x83\x96\x51\xcb\xa4\x65"
"\xd6\xb2\x68\x59\xb5\x6c\x5a\x76\x2d\x87\x96\x53\xcb\xa5\xe5\xd6\xf2"
"\x68\x79\xb5\x7c\x5a\x7e\xad\x80\x56\x50\x2b\xa4\x15\xd6\x8a\x68\x45"
"\xb5\x62\x5a\x71\xad\x84\x56\x52\x2b\xa5\x95\xd6\xca\x68\x65\xb5\x72"
"\x5a\x79\xad\x82\x56\x51\xab\xa4\x55\xd6\xaa\x68\x55\xb5\x6a\x5a\x75"
"\xad\x86\x56\x53\xab\xa5\xd5\xd6\xea\x68\x75\xb5\x7a\x5a\x7d\xad\x81"
"\xd6\x50\x6b\xa4\x35\xd6\x9a\x68\x4d\xb5\x66\x5a\x73\xad\x85\xd6\x52"
"\x6b\xa5\xb5\xd6\xda\x68\x6d\xb5\x76\x5a\x7b\xad\x83\xd6\x51\xeb\xa4"
"\x75\xd6\xba\x68\x5d\xb5\x6e\x5a\x77\xad\x87\xd6\x53\xeb\xa5\xf5\xd6"
"\xfa\x68\x7d\xb5\x7e\x5a\x7f\x6d\x80\x36\x50\x1b\xa4\x0d\xd6\x86\x68"
"\x43\xb5\x61\xda\x70\x6d\x84\x36\x52\x1b\xa5\x8d\xd6\xc6\x68\x63\xb5"
"\x71\xda\x78\x6d\x82\x36\x51\x9b\xa4\x4d\xd6\xa6\x68\x53\xb5\x69\xda"
"\x74\x6d\x86\x36\x53\x9b\xa5\xcd\xd6\xe6\x68\x73\xb5\x79\xda\x7c\x6d"
"\x81\xb6\x50\x5b\xa4\x2d\xd6\x96\x68\x4b\xb5\x65\xda\x72\x6d\x85\xb6"
"\x52\x5b\xa5\xad\xd6\xd6\x68\x6b\xb5\x75\xda\x7a\x6d\x83\xb6\x51\xdb"
"\xa4\x6d\xd6\xb6\x68\x5b\xb5\x6d\xda\x76\x6d\x87\xb6\x53\xdb\xa5\xed"
"\xd6\xf6\x68\x7b\xb5\x7d\xda\x7e\xed\x80\x76\x50\x3b\xa4\x1d\xd6\x8e"
"\x68\x47\xb5\x63\xda\x71\xed\x84\x76\x52\x3b\xa5\x9d\xd6\xce\x68\x67"
"\xb5\x73\xda\x79\xed\x82\x76\x51\xbb\xa4\x5d\xd6\xae\x68\x57\xb5\x6b"
"\xda\x75\xed\x86\x76\x53\xbb\xa5\x61\x1a\xae\x11\x1a\xa9\x51\x1a\xad"
"\x31\x1a\xab\x71\x1a\xaf\x09\x9a\xa8\x49\x9a\xac\x29\x9a\xaa\x69\x9a"
"\xae\x19\x9a\xa9\x59\x9a\xad\x39\x9a\xab\x79\x9a\xaf\x05\x5a\xa8\x45"
"\x1a\xd0\xa0\x86\xb4\x98\x76\x5b\xbb\xa3\xdd\xd5\xee\x69\xf7\xb5\x07"
"\xda\x43\xed\x91\xf6\x58\x7b\xa2\x3d\xd5\x9e\x69\xcf\xb5\x17\xda\x4b"
"\xed\x95\xf6\x5a\x7b\xa3\xbd\xd5\xde\x69\xef\xb5\x0f\xda\x47\xed\x93"
"\xf6\x59\xfb\xa2\x7d\xd5\xbe\x69\xdf\xb5\x1f\xda\x4f\xed\x97\xf6\x5b"
"\x8b\xd7\xfe\x68\x7f\xb5\x7f\x5a\x82\x96\xa8\x25\xd1\xe3\xf4\xa4\x7a"
"\x32\x3d\xb9\x9e\x42\x4f\xa9\xa7\xd2\x53\xeb\x69\xf4\xb4\x7a\x3a\x3d"
"\xbd\x9e\x41\xcf\xa8\x67\xd2\x33\xeb\x59\xf4\xac\x7a\x36\x3d\xbb\x9e"
"\x43\xcf\xa9\xe7\xd2\x73\xeb\x79\xf4\xbc\x7a\x3e\x3d\xbf\x5e\x40\x2f"
"\xa8\x17\xd2\x0b\xeb\x45\xf4\xa2\x7a\x31\xbd\xb8\x5e\x42\x2f\xa9\x97"
"\xd2\x4b\xeb\x65\xf4\xb2\x7a\x39\xbd\xbc\x5e\x41\xaf\xa8\x57\xd2\x2b"
"\xeb\x55\xf4\xaa\x7a\x35\xbd\xba\x5e\x43\xaf\xa9\xd7\xd2\x6b\xeb\x75"
"\xf4\xba\x7a\x3d\xbd\xbe\xde\x40\x6f\xa8\x37\xd2\x1b\xeb\x4d\xf4\xa6"
"\x7a\x33\xbd\xb9\xde\x42\x6f\xa9\xb7\xd2\x5b\xeb\x6d\xf4\xb6\x7a\x3b"
"\xbd\xbd\xde\x41\xef\xa8\x77\xd2\x3b\xeb\x5d\xf4\xae\x7a\x37\xbd\xbb"
"\xde\x43\xef\xa9\xf7\xd2\x7b\xeb\x7d\xf4\xbe\x7a\x3f\xbd\xbf\x3e\x40"
"\x1f\xa8\x0f\xd2\x07\xeb\x43\xf4\xa1\xfa\x30\x7d\xb8\x3e\x42\x1f\xa9"
"\x8f\xd2\x47\xeb\x63\xf4\xb1\xfa\x38\x7d\xbc\x3e\x41\x9f\xa8\x4f\xd2"
"\x27\xeb\x53\xf4\xa9\xfa\x34\x7d\xba\x3e\x43\x9f\xa9\xcf\xd2\x67\xeb"
"\x73\xf4\xb9\xfa\x3c\x7d\xbe\xbe\x40\x5f\xa8\x2f\xd2\x17\xeb\x4b\xf4"
"\xa5\xfa\x32\x7d\xb9\xbe\x42\x5f\xa9\xaf\xd2\x57\xeb\x6b\xf4\xb5\xfa"
"\x3a\x7d\xbd\xbe\x41\xdf\xa8\x6f\xd2\x37\xeb\x5b\xf4\xad\xfa\x36\x7d"
"\xbb\xbe\x43\xdf\xa9\xef\xd2\x77\xeb\x7b\xf4\xbd\xfa\x3e\x7d\xbf\x7e"
"\x40\x3f\xa8\x1f\xd2\x0f\xeb\x47\xf4\xa3\xfa\x31\xfd\xb8\x7e\x42\x3f"
"\xa9\x9f\xd2\x4f\xeb\x67\xf4\xb3\xfa\x39\xfd\xbc\x7e\x41\xbf\xa8\x5f"
"\xd2\x2f\xeb\x57\xf4\xab\xfa\x35\xfd\xba\x7e\x43\xbf\xa9\xdf\xd2\x31"
"\x1d\xd7\x09\x9d\xd4\x29\x9d\xd6\x19\x9d\xd5\x39\x9d\xd7\x05\x5d\xd4"
"\x25\x5d\xd6\x15\x5d\xd5\x35\x5d\xd7\x0d\xdd\xd4\x2d\xdd\xd6\x1d\xdd"
"\xd5\x3d\xdd\xd7\x03\x3d\xd4\x23\x1d\xe8\x50\x47\x7a\x4c\xbf\xad\xdf"
"\xd1\xef\xea\xf7\xf4\xfb\xfa\x03\xfd\xa1\xfe\x48\x7f\xac\x3f\xd1\x9f"
"\xea\xcf\xf4\xe7\xfa\x0b\xfd\xa5\xfe\x4a\x7f\xad\xbf\xd1\xdf\xea\xef"
"\xf4\xf7\xfa\x07\xfd\xa3\xfe\x49\xff\xac\x7f\xd1\xbf\xea\xdf\xf4\xef"
"\xfa\x0f\xfd\xa7\xfe\x4b\xff\xad\xc7\xeb\x7f\xf4\xbf\xfa\x3f\x3d\x41"
"\x4f\xd4\x93\x18\x71\x46\x52\x23\x99\x91\xdc\x48\x61\xa4\x34\x52\x19"
"\xa9\x8d\x34\x46\x5a\x23\x9d\x91\xde\xc8\x60\x64\x34\x32\x19\x99\x8d"
"\x2c\x46\x56\x23\x9b\x91\xdd\xc8\x61\xe4\x34\x72\x19\xb9\x8d\x3c\x46"
"\x5e\x23\x9f\x91\xdf\x28\x60\x14\x34\x0a\x19\x85\x8d\x22\x46\x51\xa3"
"\x98\x51\xdc\x28\x61\x94\x34\x4a\x19\xa5\x8d\x32\x46\x59\xa3\x9c\x51"
"\xde\xa8\x60\x54\x34\x2a\x19\x95\x8d\x2a\x46\x55\xa3\x9a\x51\xdd\xa8"
"\x61\xd4\x34\x6a\x19\xb5\x8d\x3a\x46\x5d\xa3\x9e\x51\xdf\x68\x60\x34"
"\x34\x1a\x19\x8d\x8d\x26\x46\x53\xa3\x99\xd1\xdc\x68\x61\xb4\x34\x5a"
"\x19\xad\x8d\x36\x46\x5b\xa3\x9d\xd1\xde\xe8\x60\x74\x34\x3a\x19\x9d"
"\x8d\x2e\x46\x57\xa3\x9b\xd1\xdd\xe8\x61\xf4\x34\x7a\x19\xbd\x8d\x3e"
"\x46\x5f\xa3\x9f\xd1\xdf\x18\x60\x0c\x34\x06\x19\x83\x8d\x21\xc6\x50"
"\x63\x98\x31\xdc\x18\x61\x8c\x34\x46\x19\xa3\x8d\x31\xc6\x58\x63\x9c"
"\x31\xde\x98\x60\x4c\x34\x26\x19\x93\x8d\x29\xc6\x54\x63\x9a\x31\xdd"
"\x98\x61\xcc\x34\x66\x19\xb3\x8d\x39\xc6\x5c\x63\x9e\x31\xdf\x58\x60"
"\x2c\x34\x16\x19\x8b\x8d\x25\xc6\x52\x63\x99\xb1\xdc\x58\x61\xac\x34"
"\x56\x19\xab\x8d\x35\xc6\x5a\x63\x9d\xb1\xde\xd8\x60\x6c\x34\x36\x19"
"\x9b\x8d\x2d\xc6\x56\x63\x9b\xb1\xdd\xd8\x61\xec\x34\x76\x19\xbb\x8d"
"\x3d\xc6\x5e\x63\x9f\xb1\xdf\x38\x60\x1c\x34\x0e\x19\x87\x8d\x23\xc6"
"\x51\xe3\x98\x71\xdc\x38\x61\x9c\x34\x4e\x19\xa7\x8d\x33\xc6\x59\xe3"
"\x9c\x71\xde\xb8\x60\x5c\x34\x2e\x19\x97\x8d\x2b\xc6\x55\xe3\x9a\x71"
"\xdd\xb8\x61\xdc\x34\x6e\x19\x98\x81\x1b\x84\x41\x1a\x94\x41\x1b\x8c"
"\xc1\x1a\x9c\xc1\x1b\x82\x21\x1a\x92\x21\x1b\x8a\xa1\x1a\x9a\xa1\x1b"
"\x86\x61\x1a\x96\x61\x1b\x8e\xe1\x1a\x9e\xe1\x1b\x81\x11\x1a\x91\x01"
"\x0c\x68\x20\x23\x66\xdc\x36\xee\x18\x77\x8d\x7b\xc6\x7d\xe3\x81\xf1"
"\xd0\x78\x64\x3c\x36\x9e\x18\x4f\x8d\x67\xc6\x73\xe3\x85\xf1\xd2\x78"
"\x65\xbc\x36\xde\x18\x6f\x8d\x77\xc6\x7b\xe3\x83\xf1\xd1\xf8\x64\x7c"
"\x36\xbe\x18\x5f\x8d\x6f\xc6\x77\xe3\x87\xf1\xd3\xf8\x65\xfc\x36\xe2"
"\x8d\x3f\xc6\x5f\xe3\x9f\x91\x60\x24\x1a\x49\xcc\x38\x33\xa9\x99\xcc"
"\x4c\x6e\xa6\x30\x53\x9a\xa9\xcc\xd4\x66\x1a\x33\xad\x99\xce\x4c\x6f"
"\x66\x30\x33\x9a\x99\xcc\xcc\x66\x16\x33\xab\x99\xcd\xcc\x6e\xe6\x30"
"\x73\x9a\xb9\xcc\xdc\x66\x1e\x33\xaf\x99\xcf\xcc\x6f\x16\x30\x0b\x9a"
"\x85\xcc\xc2\x66\x11\xb3\xa8\x59\xcc\x2c\x6e\x96\x30\x4b\x9a\xa5\xcc"
"\xd2\x66\x19\xb3\xac\x59\xce\x2c\x6f\x56\x30\x2b\x9a\x95\xcc\xca\x66"
"\x15\xb3\xaa\x59\xcd\xac\x6e\xd6\x30\x6b\x9a\xb5\xcc\xda\x66\x1d\xb3"
"\xae\x59\xcf\xac\x6f\x36\x30\x1b\x9a\x8d\xcc\xc6\x66\x13\xb3\xa9\xd9"
"\xcc\x6c\x6e\xb6\x30\x5b\x9a\xad\xcc\xd6\x66\x1b\xb3\xad\xd9\xce\x6c"
"\x6f\x76\x30\x3b\x9a\x9d\xcc\xce\x66\x17\xb3\xab\xd9\xcd\xec\x6e\xf6"
"\x30\x7b\x9a\xbd\xcc\xde\x66\x1f\xb3\xaf\xd9\xcf\xec\x6f\x0e\x30\x07"
"\x9a\x83\xcc\xc1\xe6\x10\x73\xa8\x39\xcc\x1c\x6e\x8e\x30\x47\x9a\xa3"
"\xcc\xd1\xe6\x18\x73\xac\x39\xce\x1c\x6f\x4e\x30\x27\x9a\x93\xcc\xc9"
"\xe6\x14\x73\xaa\x39\xcd\x9c\x6e\xce\x30\x67\x9a\xb3\xcc\xd9\xe6\x1c"
"\x73\xae\x39\xcf\x9c\x6f\x2e\x30\x17\x9a\x8b\xcc\xc5\xe6\x12\x73\xa9"
"\xb9\xcc\x5c\x6e\xae\x30\x57\x9a\xab\xcc\xd5\xe6\x1a\x73\xad\xb9\xce"
"\x5c\x6f\x6e\x30\x37\x9a\x9b\xcc\xcd\xe6\x16\x73\xab\xb9\xcd\xdc\x6e"
"\xee\x30\x77\x9a\xbb\xcc\xdd\xe6\x1e\x73\xaf\xb9\xcf\xdc\x6f\x1e\x30"
"\x0f\x9a\x87\xcc\xc3\xe6\x11\xf3\xa8\x79\xcc\x3c\x6e\x9e\x30\x4f\x9a"
"\xa7\xcc\xd3\xe6\x19\xf3\xac\x79\xce\x3c\x6f\x5e\x30\x2f\x9a\x97\xcc"
"\xcb\xe6\x15\xf3\xaa\x79\xcd\xbc\x6e\xde\x30\x6f\x9a\xb7\x4c\xcc\xc4"
"\x4d\xc2\x24\x4d\xca\xa4\x4d\xc6\x64\x4d\xce\xe4\x4d\xc1\x14\x4d\xc9"
"\x94\x4d\xc5\x54\x4d\xcd\xd4\x4d\xc3\x34\x4d\xcb\xb4\x4d\xc7\x74\x4d"
"\xcf\xf4\xcd\xc0\x0c\xcd\xc8\x04\x26\x34\x91\x19\x33\x6f\x9b\x77\xcc"
"\xbb\xe6\x3d\xf3\xbe\xf9\xc0\x7c\x68\x3e\x32\x1f\x9b\x4f\xcc\xa7\xe6"
"\x33\xf3\xb9\xf9\xc2\x7c\x69\xbe\x32\x5f\x9b\x6f\xcc\xb7\xe6\x3b\xf3"
"\xbd\xf9\xc1\xfc\x68\x7e\x32\x3f\x9b\x5f\xcc\xaf\xe6\x37\xf3\xbb\xf9"
"\xc3\xfc\x69\xfe\x32\x7f\x9b\xf1\xe6\x1f\xf3\xaf\xf9\xcf\x4c\x30\x13"
"\xcd\x24\x56\x9c\x95\xd4\x4a\x66\x25\xb7\x52\x58\x29\xad\x54\x56\x6a"
"\x2b\x8d\x95\xd6\x4a\x67\xa5\xb7\x32\x58\x19\xad\x4c\x56\x66\x2b\x8b"
"\x95\xd5\xca\x66\x65\xb7\x72\x58\x39\xad\x5c\x56\x6e\x2b\x8f\x95\xd7"
"\xca\x67\xe5\xb7\x0a\x58\x05\xad\x42\x56\x61\xab\x88\x55\xd4\x2a\x66"
"\x15\xb7\x4a\x58\x25\xad\x52\x56\x69\xab\x8c\x55\xd6\x2a\x67\x95\xb7"
"\x2a\x58\x15\xad\x4a\x56\x65\xab\x8a\x55\xd5\xaa\x66\x55\xb7\x6a\x58"
"\x35\xad\x5a\x56\x6d\xab\x8e\x55\xd7\xaa\x67\xd5\xb7\x1a\x58\x0d\xad"
"\x46\x56\x63\xab\x89\xd5\xd4\x6a\x66\x35\xb7\x5a\x58\x2d\xad\x56\x56"
"\x6b\xab\x8d\xd5\xd6\x6a\x67\xb5\xb7\x3a\x58\x1d\xad\x4e\x56\x67\xab"
"\x8b\xd5\xd5\xea\x66\x75\xb7\x7a\x58\x3d\xad\x5e\x56\x6f\xab\x8f\xd5"
"\xd7\xea\x67\xf5\xb7\x06\x58\x03\xad\x41\xd6\x60\x6b\x88\x35\xd4\x1a"
"\x66\x0d\xb7\x46\x58\x23\xad\x51\xd6\x68\x6b\x8c\x35\xd6\x1a\x67\x8d"
"\xb7\x26\x58\x13\xad\x49\xd6\x64\x6b\x8a\x35\xd5\x9a\x66\x4d\xb7\x66"
"\x58\x33\xad\x59\xd6\x6c\x6b\x8e\x35\xd7\x9a\x67\xcd\xb7\x16\x58\x0b"
"\xad\x45\xd6\x62\x6b\x89\xb5\xd4\x5a\x66\x2d\xb7\x56\x58\x2b\xad\x55"
"\xd6\x6a\x6b\x8d\xb5\xd6\x5a\x67\xad\xb7\x36\x58\x1b\xad\x4d\xd6\x66"
"\x6b\x8b\xb5\xd5\xda\x66\x6d\xb7\x76\x58\x3b\xad\x5d\xd6\x6e\x6b\x8f"
"\xb5\xd7\xda\x67\xed\xb7\x0e\x58\x07\xad\x43\xd6\x61\xeb\x88\x75\xd4"
"\x3a\x66\x1d\xb7\x4e\x58\x27\xad\x53\xd6\x69\xeb\x8c\x75\xd6\x3a\x67"
"\x9d\xb7\x2e\x58\x17\xad\x4b\xd6\x65\xeb\x8a\x75\xd5\xba\x66\x5d\xb7"
"\x6e\x58\x37\xad\x5b\x16\x66\xe1\x16\x61\x91\x16\x65\xd1\x16\x63\xb1"
"\x16\x67\xf1\x96\x60\x89\x96\x64\xc9\x96\x62\xa9\x96\x66\xe9\x96\x61"
"\x99\x96\x65\xd9\x96\x63\xb9\x96\x67\xf9\x56\x60\x85\x56\x64\x01\x0b"
"\x5a\xc8\x8a\x59\xb7\xad\x3b\xd6\x5d\xeb\x9e\x75\xdf\x7a\x60\x3d\xb4"
"\x1e\x59\x8f\xad\x27\xd6\x53\xeb\x99\xf5\xdc\x7a\x61\xbd\xb4\x5e\x59"
"\xaf\xad\x37\xd6\x5b\xeb\x9d\xf5\xde\xfa\x60\x7d\xb4\x3e\x59\x9f\xad"
"\x2f\xd6\x57\xeb\x9b\xf5\xdd\xfa\x61\xfd\xb4\x7e\x59\xbf\xad\x78\xeb"
"\x8f\xf5\xd7\xfa\x67\x25\x58\x89\x56\x12\x3b\xce\x4e\x6a\x27\xb3\x93"
"\xdb\x29\xec\x94\x76\x2a\x3b\xb5\x9d\xc6\x4e\x6b\xa7\xb3\xd3\xdb\x19"
"\xec\x8c\x76\x26\x3b\xb3\x9d\xc5\xce\x6a\x67\xb3\xb3\xdb\x39\xec\x9c"
"\x76\x2e\x3b\xb7\x9d\xc7\xce\x6b\xe7\xb3\xf3\xdb\x05\xec\x82\x76\x21"
"\xbb\xb0\x5d\xc4\x2e\x6a\x17\xb3\x8b\xdb\x25\xec\x92\x76\x29\xbb\xb4"
"\x5d\xc6\x2e\x6b\x97\xb3\xcb\xdb\x15\xec\x8a\x76\x25\xbb\xb2\x5d\xc5"
"\xae\x6a\x57\xb3\xab\xdb\x35\xec\x9a\x76\x2d\xbb\xb6\x5d\xc7\xae\x6b"
"\xd7\xb3\xeb\xdb\x0d\xec\x86\x76\x23\xbb\xb1\xdd\xc4\x6e\x6a\x37\xb3"
"\x9b\xdb\x2d\xec\x96\x76\x2b\xbb\xb5\xdd\xc6\x6e\x6b\xb7\xb3\xdb\xdb"
"\x1d\xec\x8e\x76\x27\xbb\xb3\xdd\xc5\xee\x6a\x77\xb3\xbb\xdb\x3d\xec"
"\x9e\x76\x2f\xbb\xb7\xdd\xc7\xee\x6b\xf7\xb3\xfb\xc7\x0f\xb0\x07\xda"
"\x83\xec\xc1\xf6\x10\x7b\xa8\x3d\xcc\x1e\x6e\x8f\xb0\x47\xda\xa3\xec"
"\xd1\xf6\x18\x7b\xac\x3d\xce\x1e\x6f\x4f\xb0\x27\xda\x93\xec\xc9\xf6"
"\x14\x7b\xaa\x3d\xcd\x9e\x6e\xcf\xb0\x67\xda\xb3\xec\xd9\xf6\x1c\x7b"
"\xae\x3d\xcf\x9e\x6f\x2f\xb0\x17\xda\x8b\xec\xc5\xf6\x12\x7b\xa9\xbd"
"\xcc\x5e\x6e\xaf\xb0\x57\xda\xab\xec\xd5\xf6\x1a\x7b\xad\xbd\xce\x5e"
"\x6f\x6f\xb0\x37\xda\x9b\xec\xcd\xf6\x16\x7b\xab\xbd\xcd\xde\x6e\xef"
"\xb0\x77\xda\xbb\xec\xdd\xf6\x1e\x7b\xaf\xbd\xcf\xde\x6f\x1f\xb0\x0f"
"\xda\x87\xec\xc3\xf6\x11\xfb\xa8\x7d\xcc\x3e\x6e\x9f\xb0\x4f\xda\xa7"
"\xec\xd3\xf6\x19\xfb\xac\x7d\xce\x3e\x6f\x5f\xb0\x2f\xda\x97\xec\xcb"
"\xf6\x15\xfb\xaa\x7d\xcd\xbe\x6e\xdf\xb0\x6f\xda\xb7\x6c\xcc\xc6\x6d"
"\xc2\x26\x6d\xca\xa6\x6d\xc6\x66\x6d\xce\xe6\x6d\xc1\x16\x6d\xc9\x96"
"\x6d\xc5\x56\x6d\xcd\xd6\x6d\xc3\x36\x6d\xcb\xb6\x6d\xc7\x76\x6d\xcf"
"\xf6\xed\xc0\x0e\xed\xc8\x06\x36\xb4\x91\x1d\xb3\x6f\xdb\x77\xec\xbb"
"\xf6\x3d\xfb\xbe\xfd\xc0\x7e\x68\x3f\xb2\x1f\xdb\x4f\xec\xa7\xf6\x33"
"\xfb\xb9\xfd\xc2\x7e\x69\xbf\xb2\x5f\xdb\x6f\xec\xb7\xf6\x3b\xfb\xbd"
"\xfd\xc1\xfe\x68\x7f\xb2\x3f\xdb\x5f\xec\xaf\xf6\x37\xfb\xbb\xfd\xc3"
"\xfe\x69\xff\xb2\x7f\xdb\xf1\xf6\x1f\xfb\xaf\xfd\xcf\x4e\xb0\x13\xed"
"\x24\x4e\x9c\x93\xd4\x49\xe6\x24\x77\x52\x38\x29\x9d\x54\x4e\x6a\x27"
"\x8d\x93\xd6\x49\xe7\xa4\x77\x32\x38\x19\x9d\x4c\x4e\x66\x27\x8b\x93"
"\xd5\xc9\xe6\x64\x77\x72\x38\x39\x9d\x5c\x4e\x6e\x27\x8f\x93\xd7\xc9"
"\xe7\xe4\x77\x0a\x38\x05\x9d\x42\x4e\x61\xa7\x88\x53\xd4\x29\xe6\x14"
"\x77\x4a\x38\x25\x9d\x52\x4e\x69\xa7\x8c\x53\xd6\x29\xe7\x94\x77\x2a"
"\x38\x15\x9d\x4a\x4e\x65\xa7\x8a\x53\xd5\xa9\xe6\x54\x77\x6a\x38\x35"
"\x9d\x5a\x4e\x6d\xa7\x8e\x53\xd7\xa9\xe7\xd4\x77\x1a\x38\x0d\x9d\x46"
"\x4e\x63\xa7\x89\xd3\xd4\x69\xe6\x34\x77\x5a\x38\x2d\x9d\x56\x4e\x6b"
"\xa7\x8d\xd3\xd6\x69\xe7\xb4\x77\x3a\x38\x1d\x9d\x4e\x4e\x67\xa7\x8b"
"\xd3\xd5\xe9\xe6\x74\x77\x7a\x38\x3d\x9d\x5e\x4e\x6f\xa7\x8f\xd3\xd7"
"\xe9\xe7\xf4\x77\x06\x38\x03\x9d\x41\xce\x60\x67\x88\x33\xd4\x19\xe6"
"\x0c\x77\x46\x38\x23\x9d\x51\xce\x68\x67\x8c\x33\xd6\x19\xe7\x8c\x77"
"\x26\x38\x13\x9d\x49\xce\x64\x67\x8a\x33\xd5\x99\xe6\x4c\x77\x66\x38"
"\x33\x9d\x59\xce\x6c\x67\x8e\x33\xd7\x99\xe7\xcc\x77\x16\x38\x0b\x9d"
"\x45\xce\x62\x67\x89\xb3\xd4\x59\xe6\x2c\x77\x56\x38\x2b\x9d\x55\xce"
"\x6a\x67\x8d\xb3\xd6\x59\xe7\xac\x77\x36\x38\x1b\x9d\x4d\xce\x66\x67"
"\x8b\xb3\xd5\xd9\xe6\x6c\x77\x76\x38\x3b\x9d\x5d\xce\x6e\x67\x8f\xb3"
"\xd7\xd9\xe7\xec\x77\x0e\x38\x07\x9d\x43\xce\x61\xe7\x88\x73\xd4\x39"
"\xe6\x1c\x77\x4e\x38\x27\x9d\x53\xce\x69\xe7\x8c\x73\xd6\x39\xe7\x9c"
"\x77\x2e\x38\x17\x9d\x4b\xce\x65\xe7\x8a\x73\xd5\xb9\xe6\x5c\x77\x6e"
"\x38\x37\x9d\x5b\x0e\xe6\xe0\x0e\xe1\x90\x0e\xe5\xd0\x0e\xe3\xb0\x0e"
"\xe7\xf0\x8e\xe0\x88\x8e\xe4\xc8\x8e\xe2\xa8\x8e\xe6\xe8\x8e\xe1\x98"
"\x8e\xe5\xd8\x8e\xe3\xb8\x8e\xe7\xf8\x4e\xe0\x84\x4e\xe4\x00\x07\x3a"
"\xc8\x89\x39\xb7\x9d\x3b\xce\x5d\xe7\x9e\x73\xdf\x79\xe0\x3c\x74\x1e"
"\x39\x8f\x9d\x27\xce\x53\xe7\x99\xf3\xdc\x79\xe1\xbc\x74\x5e\x39\xaf"
"\x9d\x37\xce\x5b\xe7\x9d\xf3\xde\xf9\xe0\x7c\x74\x3e\x39\x9f\x9d\x2f"
"\xce\x57\xe7\x9b\xf3\xdd\xf9\xe1\xfc\x74\x7e\x39\xbf\x9d\x78\xe7\x8f"
"\xf3\xd7\xf9\xe7\x24\x38\x89\x4e\x12\x37\xce\x4d\xea\x26\x73\x93\xbb"
"\x29\xdc\x94\x6e\x2a\x37\xb5\x9b\xc6\x4d\xeb\xa6\x73\xd3\xbb\x19\xdc"
"\x8c\x6e\x26\x37\xb3\x9b\xc5\xcd\xea\x66\x73\xb3\xbb\x39\xdc\x9c\x6e"
"\x2e\x37\xb7\x9b\xc7\xcd\xeb\xe6\x73\xf3\xbb\x05\xdc\x82\x6e\x21\xb7"
"\xb0\x5b\xc4\x2d\xea\x16\x73\x8b\xbb\x25\xdc\x92\x6e\x29\xb7\xb4\x5b"
"\xc6\x2d\xeb\x96\x73\xcb\xbb\x15\xdc\x8a\x6e\x25\xb7\xb2\x5b\xc5\xad"
"\xea\x56\x73\xab\xbb\x35\xdc\x9a\x6e\x2d\xb7\xb6\x5b\xc7\xad\xeb\xd6"
"\x73\xeb\xbb\x0d\xdc\x86\x6e\x23\xb7\xb1\xdb\xc4\x6d\xea\x36\x73\x9b"
"\xbb\x2d\xdc\x96\x6e\x2b\xb7\xb5\xdb\xc6\x6d\xeb\xb6\x73\xdb\xbb\x1d"
"\xdc\x8e\x6e\x27\xb7\xb3\xdb\xc5\xed\xea\x76\x73\xbb\xbb\x3d\xdc\x9e"
"\x6e\x2f\xb7\xb7\xdb\xc7\xed\xeb\xf6\x73\xfb\xbb\x03\xdc\x81\xee\x20"
"\x77\xb0\x3b\xc4\x1d\xea\x0e\x73\x87\xbb\x23\xdc\x91\xee\x28\x77\xb4"
"\x3b\xc6\x1d\xeb\x8e\x73\xc7\xbb\x13\xdc\x89\xee\x24\x77\xb2\x3b\xc5"
"\x9d\xea\x4e\x73\xa7\xbb\x33\xdc\x99\xee\x2c\x77\xb6\x3b\xc7\x9d\xeb"
"\xce\x73\xe7\xbb\x0b\xdc\x85\xee\x22\x77\xb1\xbb\xc4\x5d\xea\x2e\x73"
"\x97\xbb\x2b\xdc\x95\xee\x2a\x77\xb5\xbb\xc6\x5d\xeb\xae\x73\xd7\xbb"
"\x1b\xdc\x8d\xee\x26\x77\xb3\xbb\xc5\xdd\xea\x6e\x73\xb7\xbb\x3b\xdc"
"\x9d\xee\x2e\x77\xb7\xbb\xc7\xdd\xeb\xee\x73\xf7\xbb\x07\xdc\x83\xee"
"\x21\xf7\xb0\x7b\xc4\x3d\xea\x1e\x73\x8f\xbb\x27\xdc\x93\xee\x29\xf7"
"\xb4\x7b\xc6\x3d\xeb\x9e\x73\xcf\xbb\x17\xdc\x8b\xee\x25\xf7\xb2\x7b"
"\xc5\xbd\xea\x5e\x73\xaf\xbb\x37\xdc\x9b\xee\x2d\x17\x73\x71\x97\x70"
"\x49\x97\x72\x69\x97\x71\x59\x97\x73\x79\x57\x70\x45\x57\x72\x65\x57"
"\x71\x55\x57\x73\x75\xd7\x70\x4d\xd7\x72\x6d\xd7\x71\x5d\xd7\x73\x7d"
"\x37\x70\x43\x37\x72\x81\x0b\x5d\xe4\xc6\xdc\xdb\xee\x1d\xf7\xae\x7b"
"\xcf\xbd\xef\x3e\x70\x1f\xba\x8f\xdc\xc7\xee\x13\xf7\xa9\xfb\xcc\x7d"
"\xee\xbe\x70\x5f\xba\xaf\xdc\xd7\xee\x1b\xf7\xad\xfb\xce\x7d\xef\x7e"
"\x70\x3f\xba\x9f\xdc\xcf\xee\x17\xf7\xab\xfb\xcd\xfd\xee\xfe\x70\x7f"
"\xba\xbf\xdc\xdf\x6e\xbc\xfb\xc7\xfd\xeb\xfe\x73\x13\xdc\x44\x37\x89"
"\x17\xe7\x25\xf5\x92\x79\xc9\xbd\x14\x5e\x4a\x2f\x95\x97\xda\x4b\xe3"
"\xa5\xf5\xd2\x79\xe9\xbd\x0c\x5e\x46\x2f\x93\x97\xd9\xcb\xe2\x65\xf5"
"\xb2\x79\xd9\xbd\x1c\x5e\x4e\x2f\x97\x97\xdb\xcb\xe3\xe5\xf5\xf2\x79"
"\xf9\xbd\x02\x5e\x41\xaf\x90\x57\xd8\x2b\xe2\x15\xf5\x8a\x79\xc5\xbd"
"\x12\x5e\x49\xaf\x94\x57\xda\x2b\xe3\x95\xf5\xca\x79\xe5\xbd\x0a\x5e"
"\x45\xaf\x92\x57\xd9\xab\xe2\x55\xf5\xaa\x79\xd5\xbd\x1a\x5e\x4d\xaf"
"\x96\x57\xdb\xab\xe3\xd5\xf5\xea\x79\xf5\xbd\x06\x5e\x43\xaf\x91\xd7"
"\xd8\x6b\xe2\x35\xf5\x9a\x79\xcd\xbd\x16\x5e\x4b\xaf\x95\xd7\xda\x6b"
"\xe3\xb5\xf5\xda\x79\xed\xbd\x0e\x5e\x47\xaf\x93\xd7\xd9\xeb\xe2\x75"
"\xf5\xba\x79\xdd\xbd\x1e\x5e\x4f\xaf\x97\xd7\xdb\xeb\xe3\xf5\xf5\xfa"
"\x79\xfd\xbd\x01\xde\x40\x6f\x90\x37\xd8\x1b\xe2\x0d\xf5\x86\x79\xc3"
"\xbd\x11\xde\x48\x6f\x94\x37\xda\x1b\xe3\x8d\xf5\xc6\x79\xe3\xbd\x09"
"\xde\x44\x6f\x92\x37\xd9\x9b\xe2\x4d\xf5\xa6\x79\xd3\xbd\x19\xde\x4c"
"\x6f\x96\x37\xdb\x9b\xe3\xcd\xf5\xe6\x79\xf3\xbd\x05\xde\x42\x6f\x91"
"\x5a\x2c\x23\x96\x15\xcb\x89\xe5\xc5\x0a\x62\x45\xb1\x92\x58\x59\xac"
"\x22\x56\x15\xab\x89\xd5\xc5\x1a\x62\x4d\xb1\x96\x58\x5b\xac\x23\xd6"
"\x15\xeb\x89\xf5\xc5\x06\x62\x43\xb1\x91\xd8\x58\x6c\x22\x36\x15\x13"
"\xc4\x44\x31\x49\x6c\x26\x36\x17\x5b\x88\x2d\xc5\x56\x62\x6b\xb1\x8d"
"\xd8\x56\x6c\x27\xb6\x17\x3b\x88\x1d\xc5\x4e\x62\x67\xb1\x8b\xd8\x55"
"\xec\x26\x76\x17\x7b\x88\x3d\xc5\x5e\x62\x6f\xb1\x8f\xd8\x57\xec\x27"
"\xf6\x17\x07\x88\x03\xc5\x41\xe2\x60\x71\x88\x38\x54\x1c\x26\x0e\x17"
"\x47\x88\x23\xc5\x51\xe2\x68\x71\x8c\x38\x56\x1c\x27\x8e\x17\x27\x88"
"\x13\xc5\x49\xe2\x64\x71\x8a\x38\x55\x9c\x26\x4e\x17\x67\x88\x33\xc5"
"\x59\xe2\x6c\x71\x8e\x38\x57\x9c\x27\xce\x17\x17\x88\x0b\xc5\x45\xe2"
"\x62\x71\x89\xb8\x54\x5c\x26\x2e\x17\x57\x88\x2b\xc5\x55\xe2\x6a\x71"
"\x8d\xb8\x56\x5c\x27\xae\x17\x37\x88\x1b\xc5\x4d\xe2\x66\x71\x8b\xb8"
"\x55\xdc\x26\x6e\x17\x77\x88\x3b\xc5\x5d\xe2\x6e\x71\x8f\xb8\x57\xdc"
"\x27\xee\x17\x0f\x88\x07\xc5\x43\xe2\x61\xf1\x88\x78\x54\x3c\x26\x1e"
"\x17\x4f\x88\x27\xc5\x53\xe2\x69\xf1\x8c\x78\x56\x3c\x27\x9e\x17\x2f"
"\x88\x17\xc5\x4b\xe2\x65\xf1\x8a\x78\x55\xbc\x26\x5e\x17\x6f\x88\x37"
"\xc5\x5b\xe2\x6d\xf1\x8e\x78\x57\xbc\x27\xde\x17\x1f\x88\x0f\xc5\x47"
"\xe2\x63\xf1\x89\xf8\x54\x7c\x26\x3e\x17\x5f\x88\x2f\xc5\x57\xe2\x6b"
"\xf1\x8d\xf8\x56\x7c\x27\xbe\x17\x3f\x88\x1f\xc5\x4f\xe2\x67\xf1\x8b"
"\xf8\x55\xfc\x26\x7e\x17\x31\x11\x17\x09\x91\x14\x29\x91\x16\x19\x91"
"\x15\x39\x91\x17\x05\x51\x14\x25\x51\x16\x15\x51\x15\x35\x51\x17\x0d"
"\xd1\x14\x2d\xd1\x16\x1d\xd1\x15\x3d\xd1\x17\x03\x31\x14\x23\x11\x88"
"\x50\x44\x62\x4c\xfc\x21\xfe\x14\x7f\x89\xbf\xc5\x3f\xe2\x5f\xf1\x9f"
"\x18\x27\xc5\x4b\xc9\xa4\xe4\x52\x0a\x29\xa5\x94\x4a\x4a\x2d\xa5\x91"
"\xd2\x4a\xe9\xa4\xf4\x52\x06\x29\xa3\x94\x49\xca\x2c\xfd\x27\x65\x91"
"\xb2\x4a\xd9\xa4\xec\x52\x0e\x29\xa7\x94\x4b\xca\x2d\xe5\x91\xf2\x4a"
"\xf9\xa4\xfc\x52\x01\xa9\xa0\x54\x48\x2a\x2c\x15\x91\x8a\x4a\xc5\xa4"
"\xe2\x52\x09\xa9\xa4\x54\x4a\x2a\x2d\x95\x91\xca\x4a\xe5\xa4\xf2\x52"
"\x05\xa9\xa2\x54\x49\xaa\x2c\x55\x91\xaa\x4a\xd5\xa4\xea\x52\x0d\xa9"
"\xa6\x54\x4b\xaa\x2d\xd5\x91\xea\x4a\xf5\xa4\xfa\x52\x03\xa9\xa1\xd4"
"\x48\x6a\x2c\x35\x91\x9a\x4a\x09\x52\xa2\x94\x24\x35\x93\x9a\x4b\x2d"
"\xa4\x96\x52\x2b\xa9\xb5\xd4\x46\x6a\x2b\xb5\x93\xda\x4b\x1d\xa4\x8e"
"\x52\x27\xa9\xb3\xd4\x45\xea\x2a\x75\x93\xba\x4b\x3d\xa4\x9e\x52\x2f"
"\xa9\xb7\xd4\x47\xea\x2b\xf5\x93\xfa\x4b\x03\xa4\x81\xd2\x20\x69\xb0"
"\x34\x44\x1a\x2a\x0d\x93\x86\x4b\x23\xa4\x91\xd2\x28\x69\xb4\x34\x46"
"\x1a\x2b\x8d\x93\xc6\x4b\x13\xa4\x89\xd2\x24\x69\xb2\x34\x45\x9a\x2a"
"\x4d\x93\xa6\x4b\x33\xa4\x99\xd2\x2c\x69\xb6\x34\x47\x9a\x2b\xcd\x93"
"\xe6\x4b\x0b\xa4\x85\xd2\x22\x69\xb1\xb4\x44\x5a\x2a\x2d\x93\x96\x4b"
"\x2b\xa4\x95\xd2\x2a\x69\xb5\xb4\x46\x5a\x2b\xad\x93\xd6\x4b\x1b\xa4"
"\x8d\xd2\x26\x69\xb3\xb4\x45\xda\x2a\x6d\x93\xb6\x4b\x3b\xa4\x9d\xd2"
"\x2e\x69\xb7\xb4\x47\xda\x2b\xed\x93\xf6\x4b\x07\xa4\x83\xd2\x21\xe9"
"\xb0\x74\x44\x3a\x2a\x1d\x93\x8e\x4b\x27\xa4\x93\xd2\x29\xe9\xb4\x74"
"\x46\x3a\x2b\x9d\x93\xce\x4b\x17\xa4\x8b\xd2\x25\xe9\xb2\x74\x45\xba"
"\x2a\x5d\x93\xae\x4b\x37\xa4\x9b\xd2\x2d\xe9\xb6\x74\x47\xba\x2b\xdd"
"\x93\xee\x4b\x0f\xa4\x87\xd2\x23\xe9\xb1\xf4\x44\x7a\x2a\x3d\x93\x9e"
"\x4b\x2f\xa4\x97\xd2\x2b\xe9\xb5\xf4\x46\x7a\x2b\xbd\x93\xde\x4b\x1f"
"\xa4\x8f\xd2\x27\xe9\xb3\xf4\x45\xfa\x2a\x7d\x93\xbe\x4b\x98\x84\x4b"
"\x84\x44\x4a\x94\x44\x4b\x8c\xc4\x4a\x9c\xc4\x4b\x82\x24\x4a\x92\x24"
"\x4b\x8a\xa4\x4a\x9a\xa4\x4b\x86\x64\x4a\x96\x64\x4b\x8e\xe4\x4a\x9e"
"\xe4\x4b\x81\x14\x4a\x91\x04\x24\x28\x21\x29\x26\xfd\x90\x7e\x4a\xbf"
"\xa4\xdf\xd2\x1f\xe9\xaf\xf4\x4f\x8a\x93\xe3\xe5\x64\x72\x72\x39\x85"
"\x9c\x52\x4e\x25\xa7\x96\xd3\xc8\x69\xe5\x74\x72\x7a\x39\x83\x9c\x51"
"\xce\x24\x67\x96\xff\x93\xb3\xc8\x59\xe5\x6c\x72\x76\x39\x87\x9c\x53"
"\xce\x25\xe7\x96\xf3\xc8\x79\xe5\x7c\x72\x7e\xb9\x80\x5c\x50\x2e\x24"
"\x17\x96\x8b\xc8\x45\xe5\x62\x72\x71\xb9\x84\x5c\x52\x2e\x25\x97\x96"
"\xcb\xc8\x65\xe5\x72\x72\x79\xb9\x82\x5c\x51\xae\x24\x57\x96\xab\xc8"
"\x55\xe5\x6a\x72\x75\xb9\x86\x5c\x53\xae\x25\xd7\x96\xeb\xc8\x75\xe5"
"\x7a\x72\x7d\xb9\x81\xdc\x50\x6e\x24\x37\x96\x9b\xc8\x4d\xe5\x04\x39"
"\x51\x4e\x92\x9b\xc9\xcd\xe5\x16\x72\x4b\xb9\x95\xdc\x5a\x6e\x23\xb7"
"\x95\xdb\xc9\xed\xe5\x0e\x72\x47\xb9\x93\xdc\x59\xee\x22\x77\x95\xbb"
"\xc9\xdd\xe5\x1e\x72\x4f\xb9\x97\xdc\x5b\xee\x23\xf7\x95\xfb\xc9\xfd"
"\xe5\x01\xf2\x40\x79\x90\x3c\x58\x1e\x22\x0f\x95\x87\xc9\xc3\xe5\x11"
"\xf2\x48\x79\x94\x3c\x5a\x1e\x23\x8f\x95\xc7\xc9\xe3\xe5\x09\xf2\x44"
"\x79\x92\x3c\x59\x9e\x22\x4f\x95\xa7\xc9\xd3\xe5\x19\xf2\x4c\x79\x96"
"\x3c\x5b\x9e\x23\xcf\x95\xe7\xc9\xf3\xe5\x05\xf2\x42\x79\x91\xbc\x58"
"\x5e\x22\x2f\x95\x97\xc9\xcb\xe5\x15\xf2\x4a\x79\x95\xbc\x5a\x5e\x23"
"\xaf\x95\xd7\xc9\xeb\xe5\x0d\xf2\x46\x79\x93\xbc\x59\xde\x22\x6f\x95"
"\xb7\xc9\xdb\xe5\x1d\xf2\x4e\x79\x97\xbc\x5b\xde\x23\xef\x95\xf7\xc9"
"\xfb\xe5\x03\xf2\x41\xf9\x90\x7c\x58\x3e\x22\x1f\x95\x8f\xc9\xc7\xe5"
"\x13\xf2\x49\xf9\x94\x7c\x5a\x3e\x23\x9f\x95\xcf\xc9\xe7\xe5\x0b\xf2"
"\x45\xf9\x92\x7c\x59\xbe\x22\x5f\x95\xaf\xc9\xd7\xe5\x1b\xf2\x4d\xf9"
"\x96\x7c\x5b\xbe\x23\xdf\x95\xef\xc9\xf7\xe5\x07\xf2\x43\xf9\x91\xfc"
"\x58\x7e\x22\x3f\x95\x9f\xc9\xcf\xe5\x17\xf2\x4b\xf9\x95\xfc\x5a\x7e"
"\x23\xbf\x95\xdf\xc9\xef\xe5\x0f\xf2\x47\xf9\x93\xfc\x59\xfe\x22\x7f"
"\x95\xbf\xc9\xdf\x65\x4c\xc6\x65\x42\x26\x65\x4a\xa6\x65\x46\x66\x65"
"\x4e\xe6\x65\x41\x16\x65\x49\x96\x65\x45\x56\x65\x4d\xd6\x65\x43\x36"
"\x65\x4b\xb6\x65\x47\x76\x65\x4f\xf6\xe5\x40\x0e\xe5\x48\x06\x32\x94"
"\x91\x1c\x93\x7f\xc8\x3f\xe5\x5f\xf2\x6f\xf9\x8f\xfc\x57\xfe\x27\xc7"
"\x29\xf1\x4a\x32\x25\xb9\x92\x42\x49\xa9\xa4\x52\x52\x2b\x69\x94\xb4"
"\x4a\x3a\x25\xbd\x92\x41\xc9\xa8\x64\x52\x32\x2b\xff\x29\x59\x94\xac"
"\x4a\x36\x25\xbb\x92\x43\xc9\xa9\xe4\x52\x72\x2b\x79\x94\xbc\x4a\x3e"
"\x25\xbf\x52\x40\x29\xa8\x14\x52\x0a\x2b\x45\x94\xa2\x4a\x31\xa5\xb8"
"\x52\x42\x29\xa9\x94\x52\x4a\x2b\x65\x94\xb2\x4a\x39\xa5\xbc\x52\x41"
"\xa9\xa8\x54\x52\x2a\x2b\x55\x94\xaa\x4a\x35\xa5\xba\x52\x43\xa9\xa9"
"\xd4\x52\x6a\x2b\x75\x94\xba\x4a\x3d\xa5\xbe\xd2\x40\x69\xa8\x34\x52"
"\x1a\x2b\x4d\x94\xa6\x4a\x82\x92\xa8\x24\x29\xcd\x94\xe6\x4a\x0b\xa5"
"\xa5\xd2\x4a\x69\xad\xb4\x51\xda\x2a\xed\x94\xf6\x4a\x07\xa5\xa3\xd2"
"\x49\xe9\xac\x74\x51\xba\x2a\xdd\x94\xee\x4a\x0f\xa5\xa7\xd2\x4b\xe9"
"\xad\xf4\x51\xfa\x2a\xfd\x94\xfe\xca\x00\x65\xa0\x32\x48\x19\xac\x0c"
"\x51\x86\x2a\xc3\x94\xe1\xca\x08\x65\xa4\x32\x4a\x19\xad\x8c\x51\xc6"
"\x2a\xe3\x94\xf1\xca\x04\x65\xa2\x32\x49\x99\xac\x4c\x51\xa6\x2a\xd3"
"\x94\xe9\xca\x0c\x65\xa6\x32\x4b\x99\xad\xcc\x51\xe6\x2a\xf3\x94\xf9"
"\xca\x02\x65\xa1\xb2\x48\x59\xac\x2c\x51\x96\x2a\xcb\x94\xe5\xca\x0a"
"\x65\xa5\xb2\x4a\x59\xad\xac\x51\xd6\x2a\xeb\x94\xf5\xca\x06\x65\xa3"
"\xb2\x49\xd9\xac\x6c\x51\xb6\x2a\xdb\x94\xed\xca\x0e\x65\xa7\xb2\x4b"
"\xd9\xad\xec\x51\xf6\x2a\xfb\x94\xfd\xca\x01\xe5\xa0\x72\x48\x39\xac"
"\x1c\x51\x8e\x2a\xc7\x94\xe3\xca\x09\xe5\xa4\x72\x4a\x39\xad\x9c\x51"
"\xce\x2a\xe7\x94\xf3\xca\x05\xe5\xa2\x72\x49\xb9\xac\x5c\x51\xae\x2a"
"\xd7\x94\xeb\xca\x0d\xe5\xa6\x72\x4b\xb9\xad\xdc\x51\xee\x2a\xf7\x94"
"\xfb\xca\x03\xe5\xa1\xf2\x48\x79\xac\x3c\x51\x9e\x2a\xcf\x94\xe7\xca"
"\x0b\xe5\xa5\xf2\x4a\x79\xad\xbc\x51\xde\x2a\xef\x94\xf7\xca\x07\xe5"
"\xa3\xf2\x49\xf9\xac\x7c\x51\xbe\x2a\xdf\x94\xef\x0a\xa6\xe0\x0a\xa1"
"\x90\x0a\xa5\xd0\x0a\xa3\xb0\x0a\xa7\xf0\x8a\xa0\x88\x8a\xa4\xc8\x8a"
"\xa2\xa8\x8a\xa6\xe8\x8a\xa1\x98\x8a\xa5\xd8\x8a\xa3\xb8\x8a\xa7\xf8"
"\x4a\xa0\x84\x4a\xa4\x00\x05\x2a\x48\x89\x29\x3f\x94\x9f\xca\x2f\xe5"
"\xb7\xf2\x47\xf9\xab\xfc\x53\xe2\xd4\x78\x35\x99\x9a\x5c\x4d\xa1\xa6"
"\x54\x53\xa9\xa9\xd5\x34\x6a\x5a\x35\x9d\x9a\x5e\xcd\xa0\x66\x54\x33"
"\xa9\x99\xd5\xff\xd4\x2c\x6a\x56\x35\x9b\x9a\x5d\xcd\xa1\xe6\x54\x73"
"\xa9\xb9\xd5\x3c\x6a\x5e\x35\x9f\x9a\x5f\x2d\xa0\x16\x54\x0b\xa9\x85"
"\xd5\x22\x6a\x51\xb5\x98\x5a\x5c\x2d\xa1\x96\x54\x4b\xa9\xa5\xd5\x32"
"\x6a\x59\xb5\x9c\x5a\x5e\xad\xa0\x56\x54\x2b\xa9\x95\xd5\x2a\x6a\x55"
"\xb5\x9a\x5a\x5d\xad\xa1\xd6\x54\x6b\xa9\xb5\xd5\x3a\x6a\x5d\xb5\x9e"
"\x5a\x5f\x6d\xa0\x36\x54\x1b\xa9\x8d\xd5\x26\x6a\x53\x35\x41\x4d\x54"
"\x93\xd4\x66\x6a\x73\xb5\x85\xda\x52\x6d\xa5\xb6\x56\xdb\xa8\x6d\xd5"
"\x76\x6a\x7b\xb5\x83\xda\x51\xed\xa4\x76\x56\xbb\xa8\x5d\xd5\x6e\x6a"
"\x77\xb5\x87\xda\x53\xed\xa5\xf6\x56\xfb\xa8\x7d\xd5\x7e\x6a\x7f\x75"
"\x80\x3a\x50\x1d\xa4\x0e\x56\x87\xa8\x43\xd5\x61\xea\x70\x75\x84\x3a"
"\x52\x1d\xa5\x8e\x56\xc7\xa8\x63\xd5\x71\xea\x78\x75\x82\x3a\x51\x9d"
"\xa4\x4e\x56\xa7\xa8\x53\xd5\x69\xea\x74\x75\x86\x3a\x53\x9d\xa5\xce"
"\x56\xe7\xa8\x73\xd5\x79\xea\x7c\x75\x81\xba\x50\x5d\xa4\x2e\x56\x97"
"\xa8\x4b\xd5\x65\xea\x72\x75\x85\xba\x52\x5d\xa5\xae\x56\xd7\xa8\x6b"
"\xd5\x75\xea\x7a\x75\x83\xba\x51\xdd\xa4\x6e\x56\xb7\xa8\x5b\xd5\x6d"
"\xea\x76\x75\x87\xba\x53\xdd\xa5\xee\x56\xf7\xa8\x7b\xd5\x7d\xea\x7e"
"\xf5\x80\x7a\x50\x3d\xa4\x1e\x56\x8f\xa8\x47\xd5\x63\xea\x71\xf5\x84"
"\x7a\x52\x3d\xa5\x9e\x56\xcf\xa8\x67\xd5\x73\xea\x79\xf5\x82\x7a\x51"
"\xbd\xa4\x5e\x56\xaf\xa8\x57\xd5\x6b\xea\x75\xf5\x86\x7a\x53\xbd\xa5"
"\xde\x56\xef\xa8\x77\xd5\x7b\xea\x7d\xf5\x81\xfa\x50\x7d\xa4\x3e\x56"
"\x9f\xa8\x4f\xd5\x67\xea\x73\xf5\x85\xfa\x52\x7d\xa5\xbe\x56\xdf\xa8"
"\x6f\xd5\x77\xea\x7b\xf5\x83\xfa\x51\xfd\xa4\x7e\x56\xbf\xa8\x5f\xd5"
"\x6f\xea\x77\x15\x53\x71\x95\x50\x49\x95\x52\x69\x95\x51\x59\x95\x53"
"\x79\x55\x50\x45\x55\x52\x65\x55\x51\x55\x55\x53\x75\xd5\x50\x4d\xd5"
"\x52\x6d\xd5\x51\x5d\xd5\x53\x7d\x35\x50\x43\x35\x52\x81\x0a\x55\xa4"
"\xc6\xd4\x1f\xea\x4f\xf5\x97\xfa\x5b\xfd\xa3\xfe\x55\xff\xa9\x71\x5a"
"\xbc\x96\x4c\x4b\xae\xa5\xd0\x52\x6a\xa9\xb4\xd4\x5a\x1a\x2d\xad\x96"
"\x4e\x4b\xaf\x65\xd0\x32\x6a\x99\xb4\xcc\xda\x7f\x5a\x16\x2d\xab\x96"
"\x4d\xcb\xae\xe5\xd0\x72\x6a\xb9\xb4\xdc\x5a\x1e\x2d\xaf\x96\x4f\xcb"
"\xaf\x15\xd0\x0a\x6a\x85\xb4\xc2\x5a\x11\xad\xa8\x56\x4c\x2b\xae\x95"
"\xd0\x4a\x6a\xa5\xb4\xd2\x5a\x19\xad\xac\x56\x4e\x2b\xaf\x55\xd0\x2a"
"\x6a\x95\xb4\xca\x5a\x15\xad\xaa\x56\x4d\xab\xae\xd5\xd0\x6a\x6a\xb5"
"\xb4\xda\x5a\x1d\xad\xae\x56\x4f\xab\xaf\x35\xd0\x1a\x6a\x8d\xb4\xc6"
"\x5a\x13\xad\xa9\x96\xa0\x25\x6a\x49\x5a\x33\xad\xb9\xd6\x42\x6b\xa9"
"\xb5\xd2\x5a\x6b\x6d\xb4\xb6\x5a\x3b\xad\xbd\xd6\x41\xeb\xa8\x75\xd2"
"\x3a\x6b\x5d\xb4\xae\x5a\x37\xad\xbb\xd6\x43\xeb\xa9\xf5\xd2\x7a\x6b"
"\x7d\xb4\xbe\x5a\x3f\xad\xbf\x36\x40\x1b\xa8\x0d\xd2\x06\x6b\x43\xb4"
"\xa1\xda\x30\x6d\xb8\x36\x42\x1b\xa9\x8d\xd2\x46\x6b\x63\xb4\xb1\xda"
"\x38\x6d\xbc\x36\x41\x9b\xa8\x4d\xd2\x26\x6b\x53\xb4\xa9\xda\x34\x6d"
"\xba\x36\x43\x9b\xa9\xcd\xd2\x66\x6b\x73\xb4\xb9\xda\x3c\x6d\xbe\xb6"
"\x40\x5b\xa8\x2d\xd2\x16\x6b\x4b\xb4\xa5\xda\x32\x6d\xb9\xb6\x42\x5b"
"\xa9\xad\xd2\x56\x6b\x6b\xb4\xb5\xda\x3a\x6d\xbd\xb6\x41\xdb\xa8\x6d"
"\xd2\x36\x6b\x5b\xb4\xad\xda\x36\x6d\xbb\xb6\x43\xdb\xa9\xed\xd2\x76"
"\x6b\x7b\xb4\xbd\xda\x3e\x6d\xbf\x76\x40\x3b\xa8\x1d\xd2\x0e\x6b\x47"
"\xb4\xa3\xda\x31\xed\xb8\x76\x42\x3b\xa9\x9d\xd2\x4e\x6b\x67\xb4\xb3"
"\xda\x39\xed\xbc\x76\x41\xbb\xa8\x5d\xd2\x2e\x6b\x57\xb4\xab\xda\x35"
"\xed\xba\x76\x43\xbb\xa9\xdd\xd2\x6e\x6b\x77\xb4\xbb\xda\x3d\xed\xbe"
"\xf6\x40\x7b\xa8\x3d\xd2\x1e\x6b\x4f\xb4\xa7\xda\x33\xed\xb9\xf6\x42"
"\x7b\xa9\xbd\xd2\x5e\x6b\x6f\xb4\xb7\xda\x3b\xed\xbd\xf6\x41\xfb\xa8"
"\x7d\xd2\x3e\x6b\x5f\xb4\xaf\xda\x37\xed\xbb\x86\x69\xb8\x46\x68\xa4"
"\x46\x69\xb4\xc6\x68\xac\xc6\x69\xbc\x26\x68\xa2\x26\x69\xb2\xa6\x68"
"\xaa\xa6\x69\xba\x66\x68\xa6\x66\x69\xb6\xe6\x68\xae\xe6\x69\xbe\x16"
"\x68\xa1\x16\x69\x40\x83\x1a\xd2\x62\xda\x0f\xed\xa7\xf6\x4b\xfb\xad"
"\xfd\xd1\xfe\x6a\xff\xb4\x38\x3d\x5e\x4f\xa6\x27\xd7\x53\xe8\x29\xf5"
"\x54\x7a\x6a\x3d\x8d\x9e\x56\x4f\xa7\xa7\xd7\x33\xe8\x19\xf5\x4c\x7a"
"\x66\xfd\x3f\x3d\x8b\x9e\x55\xcf\xa6\x67\xd7\x73\xe8\x39\xf5\x5c\x7a"
"\x6e\x3d\x8f\x9e\x57\xcf\xa7\xe7\xd7\x0b\xe8\x05\xf5\x42\x7a\x61\xbd"
"\x88\x5e\x54\x2f\xa6\x17\xd7\x4b\xe8\x25\xf5\x52\x7a\x69\xbd\x8c\x5e"
"\x56\x2f\xa7\x97\xd7\x2b\xe8\x15\xf5\x4a\x7a\x65\xbd\x8a\x5e\x55\xaf"
"\xa6\x57\xd7\x6b\xe8\x35\xf5\x5a\x7a\x6d\xbd\x8e\x5e\x57\xaf\xa7\xd7"
"\xd7\x1b\xe8\x0d\xf5\x46\x7a\x63\xbd\x89\xde\x54\x4f\xd0\x13\xf5\x24"
"\xbd\x99\xde\x5c\x6f\xa1\xb7\xd4\x5b\xe9\xad\xf5\x36\x7a\x5b\xbd\x9d"
"\xde\x5e\xef\xa0\x77\xd4\x3b\xe9\x9d\xf5\x2e\x7a\x57\xbd\x9b\xde\x5d"
"\xef\xa1\xf7\xd4\x7b\xe9\xbd\xf5\x3e\x7a\x5f\xbd\x9f\xde\x5f\x1f\xa0"
"\x0f\xd4\x07\xe9\x83\xf5\x21\xfa\x50\x7d\x98\x3e\x5c\x1f\xa1\x8f\xd4"
"\x47\xe9\xa3\xf5\x31\xfa\x58\x7d\x9c\x3e\x5e\x9f\xa0\x4f\xd4\x27\xe9"
"\x93\xf5\x29\xfa\x54\x7d\x9a\x3e\x5d\x9f\xa1\xcf\xd4\x67\xe9\xb3\xf5"
"\x39\xfa\x5c\x7d\x9e\x3e\x5f\x5f\xa0\x2f\xd4\x17\xe9\x8b\xf5\x25\xfa"
"\x52\x7d\x99\xbe\x5c\x5f\xa1\xaf\xd4\x57\xe9\xab\xf5\x35\xfa\x5a\x7d"
"\x9d\xbe\x5e\xdf\xa0\x6f\xd4\x37\xe9\x9b\xf5\x2d\xfa\x56\x7d\x9b\xbe"
"\x5d\xdf\xa1\xef\xd4\x77\xe9\xbb\xf5\x3d\xfa\x5e\x7d\x9f\xbe\x5f\x3f"
"\xa0\x1f\xd4\x0f\xe9\x87\xf5\x23\xfa\x51\xfd\x98\x7e\x5c\x3f\xa1\x9f"
"\xd4\x4f\xe9\xa7\xf5\x33\xfa\x59\xfd\x9c\x7e\x5e\xbf\xa0\x5f\xd4\x2f"
"\xe9\x97\xf5\x2b\xfa\x55\xfd\x9a\x7e\x5d\xbf\xa1\xdf\xd4\x6f\xe9\xb7"
"\xf5\x3b\xfa\x5d\xfd\x9e\x7e\x5f\x7f\xa0\x3f\xd4\x1f\xe9\x8f\xf5\x27"
"\xfa\x53\xfd\x99\xfe\x5c\x7f\xa1\xbf\xd4\x5f\xe9\xaf\xf5\x37\xfa\x5b"
"\xfd\x9d\xfe\x5e\xff\xa0\x7f\xd4\x3f\xe9\x9f\xf5\x2f\xfa\x57\xfd\x9b"
"\xfe\x5d\xc7\x74\x5c\x27\x74\x52\xa7\x74\x5a\x67\x74\x56\xe7\x74\x5e"
"\x17\x74\x51\x97\x74\x59\x57\x74\x55\xd7\x74\x5d\x37\x74\x53\xb7\x74"
"\x5b\x77\x74\x57\xf7\x74\x5f\x0f\xf4\x50\x8f\x74\xa0\x43\x1d\xe9\x31"
"\xfd\x87\xfe\x53\xff\xa5\xff\xd6\xff\xe8\x7f\xf5\x7f\x7a\x9c\x11\x6f"
"\x24\x33\x92\x1b\x29\x8c\x94\x46\x2a\x23\xb5\x91\xc6\x48\x6b\xa4\x33"
"\xd2\x1b\x19\x8c\x8c\x46\x26\x23\xb3\xf1\x9f\x91\xc5\xc8\x6a\x64\x33"
"\xb2\x1b\x39\x8c\x9c\x46\x2e\x23\xb7\x91\xc7\xc8\x6b\xe4\x33\xf2\x1b"
"\x05\x8c\x82\x46\x21\xa3\xb0\x51\xc4\x28\x6a\x14\x33\x8a\x1b\x25\x8c"
"\x92\x46\x29\xa3\xb4\x51\xc6\x28\x6b\x94\x33\xca\x1b\x15\x8c\x8a\x46"
"\x25\xa3\xb2\x51\xc5\xa8\x6a\x54\x33\xaa\x1b\x35\x8c\x9a\x46\x2d\xa3"
"\xb6\x51\xc7\xa8\x6b\xd4\x33\xea\x1b\x0d\x8c\x86\x46\x23\xa3\xb1\xd1"
"\xc4\x68\x6a\x24\x18\x89\x46\x92\xd1\xcc\x68\x6e\xb4\x30\x5a\x1a\xad"
"\x8c\xd6\x46\x1b\xa3\xad\xd1\xce\x68\x6f\x74\x30\x3a\x1a\x9d\x8c\xce"
"\x46\x17\xa3\xab\xd1\xcd\xe8\x6e\xf4\x30\x7a\x1a\xbd\x8c\xde\x46\x1f"
"\xa3\xaf\xd1\xcf\xe8\x6f\x0c\x30\x06\x1a\x83\x8c\xc1\xc6\x10\x63\xa8"
"\x31\xcc\x18\x6e\x8c\x30\x46\x1a\xa3\x8c\xd1\xc6\x18\x63\xac\x31\xce"
"\x18\x6f\x4c\x30\x26\x1a\x93\x8c\xc9\xc6\x14\x63\xaa\x31\xcd\x98\x6e"
"\xcc\x30\x66\x1a\xb3\x8c\xd9\xc6\x1c\x63\xae\x31\xcf\x98\x6f\x2c\x30"
"\x16\x1a\x8b\x8c\xc5\xc6\x12\x63\xa9\xb1\xcc\x58\x6e\xac\x30\x56\x1a"
"\xab\x8c\xd5\xc6\x1a\x63\xad\xb1\xce\x58\x6f\x6c\x30\x36\x1a\x9b\x8c"
"\xcd\xc6\x16\x63\xab\xb1\xcd\xd8\x6e\xec\x30\x76\x1a\xbb\x8c\xdd\xc6"
"\x1e\x63\xaf\xb1\xcf\xd8\x6f\x1c\x30\x0e\x1a\x87\x8c\xc3\xc6\x11\xe3"
"\xa8\x71\xcc\x38\x6e\x9c\x30\x4e\x1a\xa7\x8c\xd3\xc6\x19\xe3\xac\x71"
"\xce\x38\x6f\x5c\x30\x2e\x1a\x97\x8c\xcb\xc6\x15\xe3\xaa\x71\xcd\xb8"
"\x6e\xdc\x30\x6e\x1a\xb7\x8c\xdb\xc6\x1d\xe3\xae\x71\xcf\xb8\x6f\x3c"
"\x30\x1e\x1a\x8f\x8c\xc7\xc6\x13\xe3\xa9\xf1\xcc\x78\x6e\xbc\x30\x5e"
"\x1a\xaf\x8c\xd7\xc6\x1b\xe3\xad\xf1\xce\x78\x6f\x7c\x30\x3e\x1a\x9f"
"\x8c\xcf\xc6\x17\xe3\xab\xf1\xcd\xf8\x6e\x60\x06\x6e\x10\x06\x69\x50"
"\x06\x6d\x30\x06\x6b\x70\x06\x6f\x08\x86\x68\x48\x86\x6c\x28\x86\x6a"
"\x68\x86\x6e\x18\x86\x69\x58\x86\x6d\x38\x86\x6b\x78\x86\x6f\x04\x46"
"\x68\x44\x06\x30\xa0\x81\x8c\x98\xf1\xc3\xf8\x69\xfc\x32\x7e\x1b\x7f"
"\x8c\xbf\xc6\x3f\x23\xce\x8c\x37\x93\x99\xc9\xcd\x14\x66\x4a\x33\x95"
"\x99\xda\x4c\x63\xa6\x35\xd3\x99\xe9\xcd\x0c\x66\x46\x33\x93\x99\xd9"
"\xfc\xcf\xcc\x62\x66\x35\xb3\x99\xd9\xcd\x1c\x66\x4e\x33\x97\x99\xdb"
"\xcc\x63\xe6\x35\xf3\x99\xf9\xcd\x02\x66\x41\xb3\x90\x59\xd8\x2c\x62"
"\x16\x35\x8b\x99\xc5\xcd\x12\x66\x49\xb3\x94\x59\xda\x2c\x63\x96\x35"
"\xcb\x99\xe5\xcd\x0a\x66\x45\xb3\x92\x59\xd9\xac\x62\x56\x35\xab\x99"
"\xd5\xcd\x1a\x66\x4d\xb3\x96\x59\xdb\xac\x63\xd6\x35\xeb\x99\xf5\xcd"
"\x06\x66\x43\xb3\x91\xd9\xd8\x6c\x62\x36\x35\x13\xcc\x44\x33\xc9\x6c"
"\x66\x36\x37\x5b\x98\x2d\xcd\x56\x66\x6b\xb3\x8d\xd9\xd6\x6c\x67\xb6"
"\x37\x3b\x98\x1d\xcd\x4e\x66\x67\xb3\x8b\xd9\xd5\xec\x66\x76\x37\x7b"
"\x98\x3d\xcd\x5e\x66\x6f\xb3\x8f\xd9\xd7\xec\x67\xf6\x37\x07\x98\x03"
"\xcd\x41\xe6\x60\x73\x88\x39\xd4\x1c\x66\x0e\x37\x47\x98\x23\xcd\x51"
"\xe6\x68\x73\x8c\x39\xd6\x1c\x67\x8e\x37\x27\x98\x13\xcd\x49\xe6\x64"
"\x73\x8a\x39\xd5\x9c\x66\x4e\x37\x67\x98\x33\xcd\x59\xe6\x6c\x73\x8e"
"\x39\xd7\x9c\x67\xce\x37\x17\x98\x0b\xcd\x45\xe6\x62\x73\x89\xb9\xd4"
"\x5c\x66\x2e\x37\x57\x98\x2b\xcd\x55\xe6\x6a\x73\x8d\xb9\xd6\x5c\x67"
"\xae\x37\x37\x98\x1b\xcd\x4d\xe6\x66\x73\x8b\xb9\xd5\xdc\x66\x6e\x37"
"\x77\x98\x3b\xcd\x5d\xe6\x6e\x73\x8f\xb9\xd7\xdc\x67\xee\x37\x0f\x98"
"\x07\xcd\x43\xe6\x61\xf3\x88\x79\xd4\x3c\x66\x1e\x37\x4f\x98\x27\xcd"
"\x53\xe6\x69\xf3\x8c\x79\xd6\x3c\x67\x9e\x37\x2f\x98\x17\xcd\x4b\xe6"
"\x65\xf3\x8a\x79\xd5\xbc\x66\x5e\x37\x6f\x98\x37\xcd\x5b\xe6\x6d\xf3"
"\x8e\x79\xd7\xbc\x67\xde\x37\x1f\x98\x0f\xcd\x47\xe6\x63\xf3\x89\xf9"
"\xd4\x7c\x66\x3e\x37\x5f\x98\x2f\xcd\x57\xe6\x6b\xf3\x8d\xf9\xd6\x7c"
"\x67\xbe\x37\x3f\x98\x1f\xcd\x4f\xe6\x67\xf3\x8b\xf9\xd5\xfc\x66\x7e"
"\x37\x31\x13\x37\x09\x93\x34\x29\x93\x36\x19\x93\x35\x39\x93\x37\x05"
"\x53\x34\x25\x53\x36\x15\x53\x35\x35\x53\x37\x0d\xd3\x34\x2d\xd3\x36"
"\x1d\xd3\x35\x3d\xd3\x37\x03\x33\x34\x23\x13\x98\xd0\x44\x66\xcc\xfc"
"\x61\xfe\x34\x7f\x99\xbf\xcd\x3f\xe6\x5f\xf3\x9f\x19\x67\xc5\x5b\xc9"
"\xac\xe4\x56\x0a\x2b\xa5\x95\xca\x4a\x6d\xa5\xb1\xd2\x5a\xe9\xac\xf4"
"\x56\x06\x2b\xa3\x95\xc9\xca\x6c\xfd\x67\x65\xb1\xb2\x5a\xd9\xac\xec"
"\x56\x0e\x2b\xa7\x95\xcb\xca\x6d\xe5\xb1\xf2\x5a\xf9\xac\xfc\x56\x01"
"\xab\xa0\x55\xc8\x2a\x6c\x15\xb1\x8a\xc6\x17\xb3\x8a\x5b\x25\xac\x92"
"\x56\x29\xab\xb4\x55\xc6\x2a\x6b\x95\xb3\xca\x5b\x15\xac\x8a\x56\x25"
"\xab\xb2\x55\xc5\xaa\x6a\x55\xb3\xaa\x5b\x35\xac\x9a\x56\x2d\xab\xb6"
"\x55\xc7\xaa\x6b\xd5\xb3\xea\x5b\x0d\xac\x86\x56\x23\xab\xb1\xd5\xc4"
"\x6a\x6a\x25\x58\x89\x56\x92\xd5\xcc\x6a\x6e\xb5\xb0\x5a\x5a\xad\xac"
"\xd6\x56\x1b\xab\xad\xd5\xce\x6a\x6f\x75\xb0\x3a\x5a\x9d\xac\xce\x56"
"\x17\xab\xab\xd5\xcd\xea\x6e\xf5\xb0\x7a\x5a\xbd\xac\xde\x56\x1f\xab"
"\xaf\xd5\xcf\xea\x6f\x0d\xb0\x06\x5a\x83\xac\xc1\xd6\x10\x6b\xa8\x35"
"\xcc\x1a\x6e\x8d\xb0\x46\x5a\xa3\xac\xd1\xd6\x18\x6b\xac\x35\xce\x1a"
"\x6f\x4d\xb0\x26\x5a\x93\xac\xc9\xd6\x14\x6b\xaa\x35\xcd\x9a\x6e\xcd"
"\xb0\x66\x5a\xb3\xac\xd9\xd6\x1c\x6b\xae\x35\xcf\x9a\x6f\x2d\xb0\x16"
"\x5a\x8b\xac\xc5\xd6\x12\x6b\xa9\xb5\xcc\x5a\x6e\xad\xb0\x56\x5a\xab"
"\xac\xd5\xd6\x1a\x6b\xad\xb5\xce\x5a\x6f\x6d\xb0\x36\x5a\x9b\xac\xcd"
"\xd6\x16\x6b\xab\xb5\xcd\xda\x6e\xed\xb0\x76\x5a\xbb\xac\xdd\xd6\x1e"
"\x6b\xaf\xb5\xcf\xda\x6f\x1d\xb0\x0e\x5a\x87\xac\xc3\xd6\x11\xeb\xa8"
"\x75\xcc\x3a\x6e\x9d\xb0\x4e\x5a\xa7\xac\xd3\xd6\x19\xeb\xac\x75\xce"
"\x3a\x6f\x5d\xb0\x2e\x5a\x97\xac\xcb\xd6\x15\xeb\xaa\x75\xcd\xba\x6e"
"\xdd\xb0\x6e\x5a\xb7\xac\xdb\xd6\x1d\xeb\xae\x75\xcf\xba\x6f\x3d\xb0"
"\x1e\x5a\x8f\xac\xc7\xd6\x13\xeb\xa9\xf5\xcc\x7a\x6e\xbd\xb0\x5e\x5a"
"\xaf\xac\xd7\xd6\x1b\xeb\xad\xf5\xce\x7a\x6f\x7d\xb0\x3e\x5a\x9f\xac"
"\xcf\xd6\x17\xeb\xab\xf5\xcd\xfa\x6e\x61\x16\x6e\x11\x16\x69\x51\x16"
"\x6d\x31\x16\x6b\x71\x16\x6f\x09\x96\x68\x49\x96\x6c\x29\x96\x6a\x69"
"\x96\x6e\x19\x96\x69\x59\x96\x6d\x39\x96\x6b\x79\x96\x6f\x05\x56\x68"
"\x45\x16\xb0\xa0\x85\xac\x98\xf5\xc3\xfa\x69\xfd\xb2\x7e\x5b\x7f\xac"
"\xbf\xd6\x3f\x2b\xce\x8e\xb7\x93\xd9\xc9\xed\x14\x76\x4a\x3b\x95\x9d"
"\xda\x4e\x63\xa7\xb5\xd3\xd9\xe9\xed\x0c\x76\x46\x3b\x93\x9d\xd9\xfe"
"\xcf\xce\x62\x67\xb5\xb3\xd9\xd9\xed\x1c\x76\x4e\x3b\x97\x9d\xdb\xce"
"\x63\xe7\xb5\xf3\xd9\xf9\xed\x02\x76\x41\xbb\x90\x5d\xd8\x2e\x62\x17"
"\xb5\x8b\xd9\xc5\xed\x12\x76\x49\xbb\x94\x5d\xda\x2e\x63\x97\xb5\xcb"
"\xd9\xe5\xed\x0a\x76\x45\xbb\x92\x5d\xd9\xae\x62\x57\xb5\xab\xd9\xd5"
"\xed\x1a\x76\x4d\xbb\x96\x5d\xdb\xae\x63\xd7\xb5\xeb\xd9\xf5\xed\x06"
"\x76\xc3\xf8\xb8\xb8\x38\xbb\x89\xdd\xd4\x4e\xb0\x13\xed\x24\xbb\x99"
"\xdd\xdc\x6e\x61\xb7\xb4\x5b\xd9\xad\xed\x36\x76\x5b\xbb\x9d\xdd\xde"
"\xee\x60\x77\xb4\x3b\xd9\x9d\xed\x2e\x76\x57\xbb\x9b\xdd\xdd\xee\x61"
"\xf7\xb4\x7b\xd9\xbd\xed\x3e\x76\x5f\xbb\x9f\xdd\xdf\x1e\x60\x0f\xb4"
"\x07\xd9\x83\xed\x21\xf6\x50\x7b\x98\x3d\xdc\x1e\x61\x8f\xb4\x47\xd9"
"\xa3\xed\x31\xf6\x58\x7b\x9c\x3d\xde\x9e\x60\x4f\xb4\x27\xd9\x93\xed"
"\x29\xf6\x54\x7b\x9a\x3d\xdd\x9e\x61\xcf\xb4\x67\xd9\xb3\xed\x39\xf6"
"\x5c\x7b\x9e\x3d\xdf\x5e\x60\x2f\xb4\x17\xd9\x8b\xed\x25\xf6\x52\x7b"
"\x99\xbd\xdc\x5e\x61\xaf\xb4\x57\xd9\xab\xed\x35\xf6\x5a\x7b\x9d\xbd"
"\xde\xde\x60\x6f\xb4\x37\xd9\x9b\xed\x2d\xf6\x56\x7b\x9b\xbd\xdd\xde"
"\x61\xef\xb4\x77\xd9\xbb\xed\x3d\xf6\x5e\x7b\x9f\xbd\xdf\x3e\x60\x1f"
"\xb4\x0f\xd9\x87\xed\x23\xf6\x51\xfb\x98\x7d\xdc\x3e\x61\x9f\xb4\x4f"
"\xd9\xa7\xed\x33\xf6\x59\xfb\x9c\x7d\xde\xbe\x60\x5f\xb4\x2f\xd9\x97"
"\xed\x2b\xf6\x55\xfb\x9a\x7d\xdd\xbe\x61\xdf\xb4\x6f\xd9\xb7\xed\x3b"
"\xf6\x5d\xfb\x9e\x7d\xdf\x7e\x60\x3f\xb4\x1f\xd9\x8f\xed\x27\xf6\x53"
"\xfb\x99\xfd\xdc\x7e\x61\xbf\xb4\x5f\xd9\xaf\xed\x37\xf6\x5b\xfb\x9d"
"\xfd\xde\xfe\x60\x7f\xb4\x3f\xd9\x9f\xed\x2f\xf6\x57\xfb\x9b\xfd\xdd"
"\xc6\x6c\xdc\x26\x6c\xd2\xa6\x6c\xda\x66\x6c\xd6\xe6\x6c\xde\x16\x6c"
"\xd1\x96\x6c\xd9\x56\x6c\xd5\xd6\x6c\xdd\x36\x6c\xd3\xb6\x6c\xdb\x76"
"\x6c\xd7\xf6\x6c\xdf\x0e\xec\xd0\x8e\x6c\x60\x43\x1b\xd9\x31\xfb\x87"
"\xfd\xd3\xfe\x65\xff\xb6\xff\xd8\x7f\xed\x7f\x76\x9c\x13\xef\x24\x73"
"\x92\x3b\x29\x9c\x94\x4e\x2a\x27\xb5\x93\xc6\x49\xeb\xa4\x73\xd2\x3b"
"\x19\x9c\x8c\x4e\x26\x27\xb3\xf3\x9f\x93\xc5\xc9\xea\x64\x73\xb2\x3b"
"\x39\x9c\x9c\x4e\x2e\x27\xb7\x93\xc7\xc9\xeb\xe4\x73\xf2\x3b\x05\x9c"
"\x82\x4e\x21\xa7\xb0\x53\xc4\x29\xea\x14\x73\x8a\x3b\x25\x9c\x92\x4e"
"\x29\xa7\xb4\x53\xc6\x29\xeb\x94\x73\xca\x3b\x15\x9c\x8a\x4e\x25\xa7"
"\xb2\x53\xc5\xa9\xea\x54\x73\xaa\x3b\x35\x9c\x9a\x4e\x2d\xa7\xb6\x53"
"\xc7\xa9\xeb\xd4\x73\xea\x3b\x0d\x9c\x86\x4e\x23\xa7\xb1\xd3\xc4\x69"
"\xea\x24\x38\x89\x4e\x92\xd3\xcc\x69\xee\xb4\x70\x5a\x3a\xad\x9c\xd6"
"\x4e\x1b\xa7\xad\xd3\xce\x69\xef\x74\x70\x3a\x3a\x9d\x9c\xce\x4e\x17"
"\xa7\xab\xd3\xcd\xe9\xee\xf4\x70\x7a\x3a\xbd\x9c\xde\x4e\x1f\xa7\xaf"
"\xd3\xcf\xe9\xef\x0c\x70\x06\x3a\x83\x9c\xc1\xce\x10\x67\xa8\x33\xcc"
"\x19\xee\x8c\x70\x46\x3a\xa3\x9c\xd1\xce\x18\x67\xac\x33\xce\x19\xef"
"\x4c\x70\x26\x3a\x93\x9c\xc9\xce\x14\x67\xaa\x33\xcd\x99\xee\xcc\x70"
"\x66\x3a\xb3\x9c\xd9\xce\x1c\x67\xae\x33\xcf\x99\xef\x2c\x70\x16\x3a"
"\x8b\x9c\xc5\xce\x12\x67\xa9\xb3\xcc\x59\xee\xac\x70\x56\x3a\xab\x9c"
"\xd5\xce\x1a\x67\xad\xb3\xce\x59\xef\x6c\x70\x36\x3a\x9b\x9c\xcd\xce"
"\x16\x67\xab\xb3\xcd\xd9\xee\xec\x70\x76\x3a\xbb\x9c\xdd\xce\x1e\x67"
"\xaf\xb3\xcf\xd9\xef\x1c\x70\x0e\x3a\x87\x9c\xc3\xce\x11\xe7\xa8\x73"
"\xcc\x39\xee\x9c\x70\x4e\x3a\xa7\x9c\xd3\xce\x19\xe7\xac\x73\xce\x39"
"\xef\x5c\x70\x2e\x3a\x97\x9c\xcb\xce\x15\xe7\xaa\x73\xcd\xb9\xee\xdc"
"\x70\x6e\x3a\xb7\x9c\xdb\xce\x1d\xe7\xae\x73\xcf\xb9\xef\x3c\x70\x1e"
"\x3a\x8f\x9c\xc7\xce\x13\xe7\xa9\xf3\xcc\x79\xee\xbc\x70\x5e\x3a\xaf"
"\x9c\xd7\xce\x1b\xe7\xad\xf3\xce\x79\xef\x7c\x70\x3e\x3a\x9f\x9c\xcf"
"\xce\x17\xe7\xab\xf3\xcd\xf9\xee\x60\x0e\xee\x10\x0e\xe9\x50\x0e\xed"
"\x30\x0e\xeb\x70\x0e\xef\x08\x8e\xe8\x48\x8e\xec\x28\x8e\xea\x68\x8e"
"\xee\x18\x8e\xe9\x58\x8e\xed\x38\x8e\xeb\x78\x8e\xef\x04\x4e\xe8\x44"
"\x0e\x70\xa0\x83\x9c\x98\xf3\xc3\xf9\xe9\xfc\x72\x7e\x3b\x7f\x9c\xbf"
"\xce\x3f\x27\xce\x8d\x77\x93\xb9\xc9\xdd\x14\x6e\x4a\x37\x95\x9b\xda"
"\x4d\xe3\xa6\x75\xd3\xb9\xe9\xdd\x0c\x6e\x46\x37\x93\x9b\xd9\xfd\xcf"
"\xcd\xe2\x66\x75\xb3\xb9\xd9\xdd\x1c\x6e\x4e\x37\x97\x9b\xdb\xcd\xe3"
"\xe6\x75\xf3\xb9\xf9\xdd\x02\x6e\x41\xb7\x90\x5b\xd8\x2d\xe2\x16\x75"
"\x8b\xb9\xc5\xdd\x12\x6e\x49\xb7\x94\x5b\xda\x2d\xe3\x96\x75\xcb\xb9"
"\xe5\xdd\x0a\x6e\x45\xb7\x92\x5b\xd9\xad\xe2\x56\x75\xab\xb9\xd5\xdd"
"\x1a\x6e\x4d\xb7\x96\x5b\xdb\xad\xe3\xd6\x75\xeb\xb9\xf5\xdd\x06\x6e"
"\x43\xb7\x91\xdb\xd8\x6d\xe2\x36\x75\x13\xdc\x44\x37\xc9\x6d\xe6\x36"
"\x77\x5b\xb8\x2d\xdd\x56\x6e\x6b\xb7\x8d\xdb\xd6\x6d\xe7\xb6\x77\x3b"
"\xb8\x1d\xdd\x4e\x6e\x67\xb7\x8b\xdb\xd5\xed\xe6\x76\x77\x7b\xb8\x3d"
"\xdd\x5e\x6e\x6f\xb7\x8f\xdb\xd7\xed\xe7\xf6\x77\x07\xb8\x03\xdd\x41"
"\xee\x60\x77\x88\x3b\xd4\x1d\xe6\x0e\x77\x47\xb8\x23\xdd\x51\xee\x68"
"\x77\x8c\x3b\xd6\x1d\xe7\x8e\x77\x27\xb8\x13\xdd\x49\xee\x64\x77\x8a"
"\x3b\xd5\x9d\xe6\x4e\x77\x67\xb8\x33\xdd\x59\xee\x6c\x77\x8e\x3b\xd7"
"\x9d\xe7\xce\x77\x17\xb8\x0b\xdd\x45\xee\x62\x77\x89\xbb\xd4\x5d\xe6"
"\x2e\x77\x57\xb8\x2b\xdd\x55\xee\x6a\x77\x8d\xbb\xd6\x5d\xe7\xae\x77"
"\x37\xb8\x1b\xdd\x4d\xee\x66\x77\x8b\xbb\xd5\xdd\xe6\x6e\x77\x77\xb8"
"\x3b\xdd\x5d\xee\x6e\x77\x8f\xbb\xd7\xdd\xe7\xee\x77\x0f\xb8\x07\xdd"
"\x43\xee\x61\xf7\x88\x7b\xd4\x3d\xe6\x1e\x77\x4f\xb8\x27\xdd\x53\xee"
"\x69\xf7\x8c\x7b\xd6\x3d\xe7\x9e\x77\x2f\xb8\x17\xdd\x4b\xee\x65\xf7"
"\x8a\x7b\xd5\xbd\xe6\x5e\x77\x6f\xb8\x37\xdd\x5b\xee\x6d\xf7\x8e\x7b"
"\xd7\xbd\xe7\xde\x77\x1f\xb8\x0f\xdd\x47\xee\x63\xf7\x89\xfb\xd4\x7d"
"\xe6\x3e\x77\x5f\xb8\x2f\xdd\x57\xee\x6b\xf7\x8d\xfb\xd6\x7d\xe7\xbe"
"\x77\x3f\xb8\x1f\xdd\x4f\xee\x67\xf7\x8b\xfb\xd5\xfd\xe6\x7e\x77\x31"
"\x17\x77\x09\x97\x74\x29\x97\x76\x19\x97\x75\x39\x97\x77\x05\x57\x74"
"\x25\x57\x76\x15\x57\x75\x35\x57\x77\x0d\xd7\x74\x2d\xd7\x76\x1d\xd7"
"\x75\x3d\xd7\x77\x03\x37\x74\x23\x17\xb8\xd0\x45\x6e\xcc\xfd\xe1\xfe"
"\x74\x7f\xb9\xbf\xdd\x3f\xee\x5f\xf7\x9f\x1b\xe7\xc5\x7b\xc9\xbc\xe4"
"\x5e\x0a\x2f\xa5\x97\xca\x4b\xed\xa5\xf1\xd2\x7a\xe9\xbc\xf4\x5e\x06"
"\x2f\xa3\x97\xc9\xcb\xec\xfd\xe7\x65\xf1\xb2\x7a\xd9\xbc\xec\x5e\x0e"
"\x2f\xa7\x97\xcb\xcb\xed\xe5\xf1\xf2\x7a\xf9\xbc\xfc\x5e\x01\xaf\xa0"
"\x57\xc8\x2b\xec\x15\xf1\x8a\x7a\xc5\xbc\xe2\x5e\x09\xaf\xa4\x57\xca"
"\x2b\xed\x95\xf1\xca\x7a\xe5\xbc\xf2\x5e\x05\xaf\xa2\x57\xc9\xab\xec"
"\x55\xf1\xaa\x7a\xd5\xbc\xea\x5e\x0d\xaf\xa6\x57\xcb\xab\xed\xd5\xf1"
"\xea\x7a\xf5\xbc\xfa\x5e\x03\xaf\xa1\xd7\xc8\x6b\xec\x35\xf1\x9a\x7a"
"\x09\x5e\xa2\x97\xe4\x35\xf3\x9a\x7b\x2d\xbc\x96\x5e\x2b\xaf\xb5\xd7"
"\xc6\x6b\xeb\xb5\xf3\xda\x7b\x1d\xbc\x8e\x5e\x27\xaf\xb3\xd7\xc5\xeb"
"\xea\x75\xf3\xba\x7b\x3d\xbc\x9e\x5e\x2f\xaf\xb7\xd7\xc7\xeb\xeb\xf5"
"\xf3\xfa\x7b\x03\xbc\x81\xde\x20\x6f\xb0\x37\xc4\x1b\xea\x0d\xf3\x86"
"\x7b\x23\xbc\x91\xde\x28\x6f\xb4\x37\xc6\x1b\xeb\x8d\xf3\xc6\x7b\x13"
"\xbc\x89\xde\x24\x6f\xb2\x37\xc5\x9b\xea\x4d\xf3\xa6\x7b\x33\xbc\x99"
"\xde\x2c\x6f\xb6\x37\xc7\x9b\xeb\xcd\xf3\xe6\x7b\x0b\xbc\x85\xde\x22"
"\x6f\xb1\xb7\xc4\x5b\xea\x2d\xf3\x96\x7b\x2b\xbc\x95\xde\x2a\x6f\xb5"
"\xb7\xc6\x5b\xeb\xad\xf3\xd6\x7b\x1b\xbc\x8d\xde\x26\x6f\xb3\xb7\xc5"
"\xdb\xea\x6d\xf3\xb6\x7b\x3b\xbc\x9d\xde\x2e\x6f\xb7\xb7\xc7\xdb\xeb"
"\xed\xf3\xf6\x7b\x07\xbc\x83\xde\x21\xef\xb0\x77\xc4\x3b\xea\x1d\xf3"
"\x8e\x7b\x27\xbc\x93\xde\x29\xef\xb4\x77\xc6\x3b\xeb\x9d\xf3\xce\x7b"
"\x17\xbc\x8b\xde\x25\xef\xb2\x77\xc5\xbb\xea\x5d\xf3\xae\x7b\x37\xbc"
"\x9b\xde\x2d\xef\xb6\x77\xc7\xbb\xeb\xdd\xf3\xee\x7b\x0f\xbc\x87\xde"
"\x23\xef\xb1\xf7\xc4\x7b\xea\x3d\xf3\x9e\x7b\x2f\xbc\x97\xde\x2b\xef"
"\xb5\xf7\xc6\x7b\xeb\xbd\xf3\xde\x7b\x1f\xbc\x8f\xde\x27\xef\xb3\xf7"
"\xc5\xfb\xea\x7d\xf3\xbe\x7b\x98\x87\x7b\x84\x47\x7a\x94\x47\x7b\x8c"
"\xc7\x7a\x9c\xc7\x7b\x82\x27\x7a\x92\x27\x7b\x8a\xa7\x7a\x9a\xa7\x7b"
"\x86\x67\x7a\x96\x67\x7b\x8e\xe7\x7a\x9e\xe7\x7b\x81\x17\x7a\x91\x07"
"\x3c\xe8\x21\x2f\xe6\xfd\xf0\x7e\x7a\xbf\xbc\xdf\xde\x1f\xef\xaf\xf7"
"\xcf\x8b\xf3\xe3\xfd\x64\x7e\x72\x3f\x85\x9f\xd2\x4f\xe5\xa7\xf6\xd3"
"\xf8\x69\xfd\x74\x7e\x7a\x3f\x83\x9f\xd1\xcf\xe4\x67\xf6\xff\xf3\xb3"
"\xf8\x59\xfd\x6c\x7e\x76\x3f\x87\x9f\xd3\xcf\xe5\xe7\xf6\xf3\xf8\x79"
"\xfd\x7c\x7e\x7e\xbf\x80\x5f\xd0\x2f\xe4\x17\xf6\x8b\xf8\x45\xfd\x62"
"\x7e\x71\xbf\x84\x5f\xd2\x2f\xe5\x97\xf6\xcb\xf8\x65\xfd\x72\x7e\x79"
"\xbf\x82\x5f\xd1\xaf\xe4\x57\xf6\xab\xf8\x55\xfd\x6a\x7e\x75\xbf\x86"
"\x5f\xd3\xaf\xe5\xd7\xf6\xeb\xf8\x75\xfd\x7a\x7e\x7d\xbf\x81\xdf\xd0"
"\x6f\xe4\x37\xf6\x9b\xf8\x4d\xfd\x04\x3f\xd1\x4f\xf2\x9b\xf9\xcd\xfd"
"\x16\x7e\x4b\xbf\x95\xdf\xda\x6f\xe3\xb7\xf5\xdb\xf9\xed\xfd\x0e\x7e"
"\x47\xbf\x93\xdf\xd9\xef\xe2\x77\xf5\xbb\xf9\xdd\xfd\x1e\x7e\x4f\xbf"
"\x97\xdf\xdb\xef\xe3\xf7\xf5\xfb\xf9\xfd\xfd\x01\xfe\x40\x7f\x90\x3f"
"\xd8\x1f\xe2\x0f\xf5\x87\xf9\xc3\xfd\x11\xfe\x48\x7f\x94\x3f\xda\x1f"
"\xe3\x8f\xf5\xc7\xf9\xe3\xfd\x09\xfe\x44\x7f\x92\x3f\xd9\x9f\xe2\x4f"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\xe0\xff\x8f\xb7\x97\x3e\x60\xf1\x65\xe7\x5f\xa6\xaf\xaf\xaf"
"\xaf\x37\x59\xbf\xd1\x83\xdd\x57\xcf\x19\xd5\x1b\xd5\x6b\xa3\x8f\x0d"
"\x1c\x3d\xd6\x46\xb7\xfd\x46\xff\xdb\x6a\x8d\xb3\xe6\x78\x7c\xd2\xfb"
"\xd6\xff\x8f\xb1\x49\xa7\xbd\x6e\xd3\x6d\xcf\x99\xfe\x86\xcd\x26\x5a"
"\xfe\x92\x49\xaf\x1e\xaf\x77\xef\x64\xab\xbf\xfd\xc1\xac\xaf\xdc\x3b"
"\xd5\xbd\xd3\xbc\x5d\x97\x5d\x7f\xc8\x88\xbe\x21\x23\xfa\x36\x1e\xb6"
"\x59\xdf\xa0\xbe\xb5\x86\x0d\xdb\x6c\xd0\x5a\x43\x07\xf7\xad\x33\x64"
"\xc4\x86\x33\xf5\x2d\x39\x74\xf0\xa0\x11\x83\xfb\x86\x6c\x3c\x62\xf0"
"\xf0\xaf\x1c\x5e\x77\xe8\xb0\x4d\x36\x19\xd9\x37\x68\xe3\x75\x26\x99"
"\x70\x93\xe1\x83\x47\x8c\xe8\x1b\xb4\xf1\xc8\xbe\x0d\x07\x8f\xec\xdb"
"\x6c\x58\xdf\x66\xc3\x47\xf6\x0d\x5a\x6f\xd0\x90\x8d\xfb\x66\x9a\x69"
"\xa6\xbe\x49\x26\xfc\x57\xed\xdd\xff\x7d\xcb\x9d\xf5\xef\xbe\x02\x00"
"\x00\x00\xfe\x35\xfe\x9f\x00\x00\x00\xff\xff\x5e\x32\x76\x29",
127345);
syz_mount_image(/*fs=*/0x2001f180, /*dir=*/0x2001f1c0,
/*flags=MS_NODEV|MS_NOATIME*/ 0x404, /*opts=*/0x20000140,
/*chdir=*/1, /*size=*/0x1f171, /*img=*/0x2003e380);
break;
case 4:
memcpy((void*)0x20000080, "./file1\000", 8);
res = syscall(__NR_openat, /*fd=*/0xffffff9c, /*file=*/0x20000080ul,
/*flags=O_SYNC|O_CREAT|O_RDWR*/ 0x101042ul, /*mode=*/0ul);
if (res != -1)
r[0] = res;
break;
case 5:
memcpy((void*)0x200014c0, "westwood\000", 9);
syscall(__NR_write, /*fd=*/r[0], /*val=*/0x200014c0ul,
/*len=*/0xffffff9cul);
break;
case 6:
syscall(__NR_fallocate, /*fd=*/r[0],
/*mode=FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE*/ 3ul, /*off=*/0ul,
/*len=*/0xfffful);
break;
case 7:
memcpy((void*)0x200000c0, "/dev/loop", 9);
*(uint8_t*)0x200000c9 = 0x30 + procid * 1;
*(uint8_t*)0x200000ca = 0;
memcpy((void*)0x20000200, "./file1\000", 8);
syscall(__NR_mount, /*src=*/0x200000c0ul, /*dst=*/0x20000200ul,
/*type=*/0ul, /*flags=MS_I_VERSION|MS_BIND*/ 0x801000ul,
/*data=*/0ul);
break;
case 8:
memcpy((void*)0x20000100, "./file1\000", 8);
res = syscall(__NR_openat, /*fd=*/0xffffff9c, /*file=*/0x20000100ul,
/*flags=O_CREAT|O_RDWR*/ 0x42ul, /*mode=*/0ul);
if (res != -1)
r[1] = res;
break;
case 9:
*(uint32_t*)0x20000140 = 0;
*(uint16_t*)0x20000144 = 0x18;
*(uint16_t*)0x20000146 = 0xfa00;
*(uint64_t*)0x20000148 = 0;
*(uint64_t*)0x20000150 = 0x200000c0;
*(uint16_t*)0x20000158 = 0;
*(uint8_t*)0x2000015a = 0;
memset((void*)0x2000015b, 0, 5);
syscall(__NR_write, /*fd=*/r[1], /*data=*/0x20000140ul,
/*len=*/0xfffffd1aul);
break;
case 10:
syscall(__NR_write, /*fd=*/-1, /*val=*/0ul, /*len=*/0ul);
break;
case 11:
memcpy((void*)0x20002bc0, "/dev/snd/controlC#\000", 19);
syz_open_dev(/*dev=*/0x20002bc0, /*id=*/0x8000,
/*flags=O_SYNC|O_PATH|O_NOCTTY|O_CREAT|O_RDWR|0x1*/ 0x301143);
break;
}
}
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x20000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x21000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
for (procid = 0; procid < 4; procid++) {
if (fork() == 0) {
loop();
}
}
sleep(1000000);
return 0;
}
[-- Attachment #3: .config --]
[-- Type: application/octet-stream, Size: 247338 bytes --]
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86 6.9.0 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-11 (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=110400
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=23800
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=23800
CONFIG_LLD_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND=y
CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_PAHOLE_VERSION=125
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y
#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_WATCH_QUEUE=y
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST_IDLE=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_CONTEXT_TRACKING=y
CONFIG_CONTEXT_TRACKING_IDLE=y
#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_CONTEXT_TRACKING_USER=y
# CONFIG_CONTEXT_TRACKING_USER_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US=125
# end of Timers subsystem
CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
# CONFIG_BPF_JIT is not set
# CONFIG_BPF_UNPRIV_DEFAULT_OFF is not set
CONFIG_USERMODE_DRIVER=y
CONFIG_BPF_PRELOAD=y
CONFIG_BPF_PRELOAD_UMD=y
# end of BPF subsystem
CONFIG_PREEMPT_BUILD=y
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y
CONFIG_PREEMPT_DYNAMIC=y
CONFIG_SCHED_CORE=y
#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
# CONFIG_TICK_CPU_ACCOUNTING is not set
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_PSI=y
# CONFIG_PSI_DEFAULT_DISABLED is not set
# end of CPU/Task time and stats accounting
CONFIG_CPU_ISOLATION=y
#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
CONFIG_PREEMPT_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
# end of RCU Subsystem
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=18
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
# CONFIG_PRINTK_INDEX is not set
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC10_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_GCC_NO_STRINGOP_OVERFLOW=y
CONFIG_CC_NO_STRINGOP_OVERFLOW=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
# CONFIG_CGROUP_FAVOR_DYNMODS is not set
CONFIG_MEMCG=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_SCHED_MM_CID=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
CONFIG_CGROUP_MISC=y
CONFIG_CGROUP_DEBUG=y
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CHECKPOINT_RESTORE=y
# CONFIG_SCHED_AUTOGROUP is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_INITRAMFS_PRESERVE_MTIME=y
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_LD_ORPHAN_WARN_LEVEL="warn"
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_EXPERT=y
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_PC104 is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_GUEST_PERF_EVENTS=y
#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters
CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
#
# Kexec and crash features
#
CONFIG_CRASH_RESERVE=y
CONFIG_VMCORE_INFO=y
CONFIG_KEXEC_CORE=y
CONFIG_KEXEC=y
# CONFIG_KEXEC_FILE is not set
# CONFIG_KEXEC_JUMP is not set
CONFIG_CRASH_DUMP=y
CONFIG_CRASH_HOTPLUG=y
CONFIG_CRASH_MAX_MEMORY_RANGES=8192
# end of Kexec and crash features
# end of General setup
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_CSUM=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_AUDIT_ARCH=y
CONFIG_KASAN_SHADOW_OFFSET=0xdffffc0000000000
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_PGTABLE_LEVELS=4
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
#
# Processor type and features
#
CONFIG_SMP=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_CPU_RESCTRL is not set
# CONFIG_X86_FRED is not set
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
# CONFIG_X86_INTEL_LPSS is not set
# CONFIG_X86_AMD_PLATFORM_DEVICE is not set
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_DEBUG=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
# CONFIG_XEN is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
CONFIG_PVH=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
CONFIG_MCORE2=y
# CONFIG_MATOM is not set
# CONFIG_GENERIC_CPU is not set
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_INTEL_USERCOPY=y
CONFIG_X86_USE_PPRO_CHECKSUM=y
CONFIG_X86_P6_NOP=y
CONFIG_X86_TSC=y
CONFIG_X86_HAVE_PAE=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
# CONFIG_CPU_SUP_HYGON is not set
# CONFIG_CPU_SUP_CENTAUR is not set
# CONFIG_CPU_SUP_ZHAOXIN is not set
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_BOOT_VESA_SUPPORT=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS_RANGE_BEGIN=2
CONFIG_NR_CPUS_RANGE_END=512
CONFIG_NR_CPUS_DEFAULT=64
CONFIG_NR_CPUS=8
CONFIG_SCHED_CLUSTER=y
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCELOG_LEGACY is not set
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
# CONFIG_X86_MCE_INJECT is not set
#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=y
CONFIG_PERF_EVENTS_INTEL_CSTATE=y
# CONFIG_PERF_EVENTS_AMD_POWER is not set
CONFIG_PERF_EVENTS_AMD_UNCORE=y
# CONFIG_PERF_EVENTS_AMD_BRS is not set
# end of Performance monitoring
CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_MICROCODE=y
# CONFIG_MICROCODE_LATE_LOADING is not set
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
# CONFIG_X86_5LEVEL is not set
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
# CONFIG_X86_PMEM_LEGACY is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_X86_UMIP=y
CONFIG_CC_HAS_IBT=y
CONFIG_X86_CET=y
CONFIG_X86_KERNEL_IBT=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
# CONFIG_X86_INTEL_TSX_MODE_OFF is not set
CONFIG_X86_INTEL_TSX_MODE_ON=y
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_X86_SGX=y
CONFIG_X86_USER_SHADOW_STACK=y
# CONFIG_EFI is not set
CONFIG_HZ_100=y
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=100
CONFIG_SCHED_HRTICK=y
CONFIG_ARCH_SUPPORTS_KEXEC=y
CONFIG_ARCH_SUPPORTS_KEXEC_FILE=y
CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY=y
CONFIG_ARCH_SUPPORTS_KEXEC_SIG=y
CONFIG_ARCH_SUPPORTS_KEXEC_SIG_FORCE=y
CONFIG_ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG=y
CONFIG_ARCH_SUPPORTS_KEXEC_JUMP=y
CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y
CONFIG_ARCH_SUPPORTS_CRASH_HOTPLUG=y
CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION=y
CONFIG_PHYSICAL_START=0x1000000
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_ADDRESS_MASKING=y
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_XONLY=y
# CONFIG_LEGACY_VSYSCALL_NONE is not set
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="earlyprintk=serial net.ifnames=0 sysctl.kernel.hung_task_all_cpu_backtrace=1 ima_policy=tcb nf-conntrack-ftp.ports=20000 nf-conntrack-tftp.ports=20000 nf-conntrack-sip.ports=20000 nf-conntrack-irc.ports=20000 nf-conntrack-sane.ports=20000 binder.debug_mask=0 rcupdate.rcu_expedited=1 rcupdate.rcu_cpu_stall_cputime=1 no_hash_pointers page_owner=on sysctl.vm.nr_hugepages=4 sysctl.vm.nr_overcommit_hugepages=4 secretmem.enable=1 sysctl.max_rcu_stall_to_panic=1 msr.allow_writes=off coredump_filter=0xffff root=/dev/sda console=ttyS0 vsyscall=native numa=fake=2 kvm-intel.nested=1 spec_store_bypass_disable=prctl nopcid vivid.n_devs=16 vivid.multiplanar=1,2,1,2,1,2,1,2,1,2,1,2,1,2,1,2 netrom.nr_ndevs=16 rose.rose_ndevs=16 smp.csd_lock_timeout=100000 watchdog_thresh=55 workqueue.watchdog_thresh=140 sysctl.net.core.netdev_unregister_timeout_secs=140 dummy_hcd.num=8 panic_on_warn=1"
# CONFIG_CMDLINE_OVERRIDE is not set
CONFIG_MODIFY_LDT_SYSCALL=y
# CONFIG_STRICT_SIGALTSTACK_SIZE is not set
CONFIG_HAVE_LIVEPATCH=y
# end of Processor type and features
CONFIG_CC_HAS_SLS=y
CONFIG_CC_HAS_RETURN_THUNK=y
CONFIG_CC_HAS_ENTRY_PADDING=y
CONFIG_FUNCTION_PADDING_CFI=11
CONFIG_FUNCTION_PADDING_BYTES=16
CONFIG_CALL_PADDING=y
CONFIG_HAVE_CALL_THUNKS=y
CONFIG_CALL_THUNKS=y
CONFIG_PREFIX_SYMBOLS=y
CONFIG_CPU_MITIGATIONS=y
CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=y
CONFIG_MITIGATION_RETPOLINE=y
CONFIG_MITIGATION_RETHUNK=y
CONFIG_MITIGATION_UNRET_ENTRY=y
CONFIG_MITIGATION_CALL_DEPTH_TRACKING=y
# CONFIG_CALL_THUNKS_DEBUG is not set
CONFIG_MITIGATION_IBPB_ENTRY=y
CONFIG_MITIGATION_IBRS_ENTRY=y
CONFIG_MITIGATION_SRSO=y
# CONFIG_MITIGATION_SLS is not set
# CONFIG_MITIGATION_GDS_FORCE is not set
CONFIG_MITIGATION_RFDS=y
CONFIG_MITIGATION_SPECTRE_BHI=y
CONFIG_ARCH_HAS_ADD_PAGES=y
#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_SUSPEND_SKIP_SYNC is not set
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_HIBERNATION_COMP_LZO=y
# CONFIG_HIBERNATION_COMP_LZ4 is not set
CONFIG_HIBERNATION_DEF_COMP="lzo"
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_USERSPACE_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
CONFIG_ACPI_THERMAL_LIB=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
# CONFIG_ACPI_FPDT is not set
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=y
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_TAD is not set
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
# CONFIG_ACPI_PROCESSOR_AGGREGATOR is not set
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_PLATFORM_PROFILE=y
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_ACPI_CONTAINER=y
# CONFIG_ACPI_HOTPLUG_MEMORY is not set
CONFIG_ACPI_HOTPLUG_IOAPIC=y
# CONFIG_ACPI_SBS is not set
# CONFIG_ACPI_HED is not set
# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
CONFIG_ACPI_NFIT=y
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
# CONFIG_ACPI_APEI is not set
# CONFIG_ACPI_DPTF is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_ACPI_CONFIGFS is not set
# CONFIG_ACPI_PFRUT is not set
CONFIG_ACPI_PCC=y
# CONFIG_ACPI_FFH is not set
# CONFIG_PMIC_OPREGION is not set
CONFIG_X86_PM_TIMER=y
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
#
# CPU frequency scaling drivers
#
# CONFIG_CPUFREQ_DT is not set
# CONFIG_CPUFREQ_DT_PLATDEV is not set
CONFIG_X86_INTEL_PSTATE=y
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_AMD_PSTATE=y
CONFIG_X86_AMD_PSTATE_DEFAULT_MODE=3
# CONFIG_X86_AMD_PSTATE_UT is not set
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_ACPI_CPUFREQ_CPB=y
# CONFIG_X86_POWERNOW_K8 is not set
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
# CONFIG_X86_P4_CLOCKMOD is not set
#
# shared options
#
# end of CPU Frequency scaling
#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
CONFIG_CPU_IDLE_GOV_HALTPOLL=y
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle
CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options
#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_MMCONF_FAM10H=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
# CONFIG_ISA_BUS is not set
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# end of Bus options (PCI etc.)
#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_EMULATION_DEFAULT_DISABLED is not set
CONFIG_X86_X32_ABI=y
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
# end of Binary Emulations
CONFIG_KVM_COMMON=y
CONFIG_HAVE_KVM_PFNCACHE=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_DIRTY_RING=y
CONFIG_HAVE_KVM_DIRTY_RING_TSO=y
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_READONLY_MEM=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y
CONFIG_KVM_GENERIC_MMU_NOTIFIER=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
# CONFIG_KVM_SW_PROTECTED_VM is not set
CONFIG_KVM_INTEL=y
CONFIG_X86_SGX_KVM=y
CONFIG_KVM_AMD=y
# CONFIG_KVM_SMM is not set
CONFIG_KVM_HYPERV=y
CONFIG_KVM_XEN=y
CONFIG_KVM_PROVE_MMU=y
CONFIG_KVM_MAX_NR_VCPUS=1024
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y
CONFIG_AS_GFNI=y
CONFIG_AS_WRUSS=y
CONFIG_ARCH_CONFIGURES_CPU_MITIGATIONS=y
#
# General architecture-dependent options
#
CONFIG_HOTPLUG_SMT=y
CONFIG_HOTPLUG_CORE_SYNC=y
CONFIG_HOTPLUG_CORE_SYNC_DEAD=y
CONFIG_HOTPLUG_CORE_SYNC_FULL=y
CONFIG_HOTPLUG_SPLIT_STARTUP=y
CONFIG_HOTPLUG_PARALLEL=y
CONFIG_GENERIC_ENTRY=y
# CONFIG_KPROBES is not set
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_ARCH_HAS_CPU_FINALIZE_INIT=y
CONFIG_ARCH_HAS_CPU_PASID=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_ARCH_WANTS_NO_INSTR=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_RUST=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_MMU_GATHER_MERGE_VMAS=y
CONFIG_MMU_LAZY_TLB_REFCOUNT=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y
CONFIG_LTO_NONE=y
CONFIG_ARCH_SUPPORTS_CFI_CLANG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING_USER=y
CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PUD=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_HAVE_ARCH_HUGE_VMALLOC=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_PMD_MKWRITE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_HAVE_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_PAGE_SIZE_LESS_THAN_256KB=y
CONFIG_PAGE_SHIFT=12
CONFIG_HAVE_OBJTOOL=y
CONFIG_HAVE_JUMP_LABEL_HACK=y
CONFIG_HAVE_NOINSTR_HACK=y
CONFIG_HAVE_NOINSTR_VALIDATION=y
CONFIG_HAVE_UACCESS_VALIDATION=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET=y
# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y
CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y
CONFIG_HAVE_ARCH_NODE_DEV_GROUP=y
CONFIG_ARCH_HAS_HW_PTE_YOUNG=y
CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling
CONFIG_HAVE_GCC_PLUGINS=y
CONFIG_FUNCTION_ALIGNMENT_4B=y
CONFIG_FUNCTION_ALIGNMENT_16B=y
CONFIG_FUNCTION_ALIGNMENT=16
# end of General architecture-dependent options
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
# CONFIG_MODULE_DEBUG is not set
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set
CONFIG_MODVERSIONS=y
CONFIG_ASM_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
# CONFIG_MODULE_SIG_ALL is not set
# CONFIG_MODULE_SIG_SHA1 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
# CONFIG_MODULE_SIG_SHA3_256 is not set
# CONFIG_MODULE_SIG_SHA3_384 is not set
# CONFIG_MODULE_SIG_SHA3_512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLOCK_LEGACY_AUTOLOAD=y
CONFIG_BLK_RQ_ALLOC_TIME=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_CGROUP_PUNT_BIO=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_ICQ=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=y
CONFIG_BLK_DEV_WRITE_MOUNTED=y
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
CONFIG_BLK_CGROUP_IOLATENCY=y
# CONFIG_BLK_CGROUP_FC_APPID is not set
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_DEBUG_FS=y
CONFIG_BLK_DEBUG_FS_ZONED=y
# CONFIG_BLK_SED_OPAL is not set
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
CONFIG_ACORN_PARTITION=y
CONFIG_ACORN_PARTITION_CUMANA=y
CONFIG_ACORN_PARTITION_EESOX=y
CONFIG_ACORN_PARTITION_ICS=y
CONFIG_ACORN_PARTITION_ADFS=y
CONFIG_ACORN_PARTITION_POWERTEC=y
CONFIG_ACORN_PARTITION_RISCIX=y
CONFIG_AIX_PARTITION=y
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
CONFIG_ATARI_PARTITION=y
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set
CONFIG_SGI_PARTITION=y
CONFIG_ULTRIX_PARTITION=y
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_CMDLINE_PARTITION=y
# end of Partition Types
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y
CONFIG_BLK_MQ_STACKING=y
#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
CONFIG_BFQ_CGROUP_DEBUG=y
# end of IO Schedulers
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y
#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
# end of Executable file formats
#
# Memory Management options
#
CONFIG_ZPOOL=y
CONFIG_SWAP=y
CONFIG_ZSWAP=y
CONFIG_ZSWAP_DEFAULT_ON=y
# CONFIG_ZSWAP_SHRINKER_DEFAULT_ON is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC=y
CONFIG_ZSWAP_ZPOOL_DEFAULT="zsmalloc"
# CONFIG_ZBUD is not set
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_ZSMALLOC_CHAIN_SIZE=8
#
# Slab allocator options
#
CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set
CONFIG_SLAB_MERGE_DEFAULT=y
# CONFIG_SLAB_FREELIST_RANDOM is not set
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SLUB_STATS is not set
CONFIG_SLUB_CPU_PARTIAL=y
# CONFIG_RANDOM_KMALLOC_CACHES is not set
# end of Slab allocator options
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_MHP_MEMMAP_ON_MEMORY=y
CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_MEMORY_BALLOON=y
# CONFIG_BALLOON_COMPACTION is not set
CONFIG_COMPACTION=y
CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_DEVICE_MIGRATION=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PCP_BATCH_SCALE_MAX=5
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_TRANSPARENT_HUGEPAGE=y
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
# CONFIG_TRANSPARENT_HUGEPAGE_NEVER is not set
CONFIG_THP_SWAP=y
CONFIG_READ_ONLY_THP_FOR_FS=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUGFS is not set
# CONFIG_CMA_SYSFS is not set
CONFIG_CMA_AREAS=19
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_GENERIC_EARLY_IOREMAP=y
# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
CONFIG_PAGE_IDLE_FLAG=y
# CONFIG_IDLE_PAGE_TRACKING is not set
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ARCH_HAS_ZONE_DMA_SET=y
CONFIG_ZONE_DMA=y
CONFIG_ZONE_DMA32=y
CONFIG_ZONE_DEVICE=y
CONFIG_HMM_MIRROR=y
CONFIG_GET_FREE_REGION=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PERCPU_STATS=y
# CONFIG_GUP_TEST is not set
# CONFIG_DMAPOOL_TEST is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_MAPPING_DIRTY_HELPERS=y
CONFIG_KMAP_LOCAL=y
CONFIG_MEMFD_CREATE=y
CONFIG_SECRETMEM=y
CONFIG_ANON_VMA_NAME=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y
CONFIG_USERFAULTFD=y
# CONFIG_PTE_MARKER_UFFD_WP is not set
CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y
# CONFIG_LRU_GEN_STATS is not set
CONFIG_LRU_GEN_WALKS_MMU=y
CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y
CONFIG_PER_VMA_LOCK=y
CONFIG_LOCK_MM_AND_FIND_VMA=y
CONFIG_IOMMU_MM_DATA=y
#
# Data Access Monitoring
#
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
# CONFIG_DAMON_SYSFS is not set
# CONFIG_DAMON_DBGFS_DEPRECATED is not set
CONFIG_DAMON_RECLAIM=y
# CONFIG_DAMON_LRU_SORT is not set
# end of Data Access Monitoring
# end of Memory Management options
CONFIG_NET=y
CONFIG_WANT_COMPAT_NETLINK_MESSAGES=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_XGRESS=y
CONFIG_NET_REDIRECT=y
CONFIG_SKB_EXTENSIONS=y
#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=y
CONFIG_UNIX=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=y
CONFIG_TLS=y
CONFIG_TLS_DEVICE=y
CONFIG_TLS_TOE=y
CONFIG_XFRM=y
CONFIG_XFRM_OFFLOAD=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
CONFIG_XFRM_USER_COMPAT=y
CONFIG_XFRM_INTERFACE=y
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=y
CONFIG_XFRM_ESP=y
CONFIG_XFRM_IPCOMP=y
CONFIG_NET_KEY=y
CONFIG_NET_KEY_MIGRATE=y
CONFIG_XFRM_ESPINTCP=y
CONFIG_SMC=y
CONFIG_SMC_DIAG=y
CONFIG_XDP_SOCKETS=y
CONFIG_XDP_SOCKETS_DIAG=y
CONFIG_NET_HANDSHAKE=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE_DEMUX=y
CONFIG_NET_IP_TUNNEL=y
CONFIG_NET_IPGRE=y
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=y
CONFIG_NET_UDP_TUNNEL=y
CONFIG_NET_FOU=y
CONFIG_NET_FOU_IP_TUNNELS=y
CONFIG_INET_AH=y
CONFIG_INET_ESP=y
CONFIG_INET_ESP_OFFLOAD=y
CONFIG_INET_ESPINTCP=y
CONFIG_INET_IPCOMP=y
CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=y
CONFIG_INET_TUNNEL=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
CONFIG_INET_UDP_DIAG=y
CONFIG_INET_RAW_DIAG=y
CONFIG_INET_DIAG_DESTROY=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=y
CONFIG_TCP_CONG_HTCP=y
CONFIG_TCP_CONG_HSTCP=y
CONFIG_TCP_CONG_HYBLA=y
CONFIG_TCP_CONG_VEGAS=y
CONFIG_TCP_CONG_NV=y
CONFIG_TCP_CONG_SCALABLE=y
CONFIG_TCP_CONG_LP=y
CONFIG_TCP_CONG_VENO=y
CONFIG_TCP_CONG_YEAH=y
CONFIG_TCP_CONG_ILLINOIS=y
CONFIG_TCP_CONG_DCTCP=y
CONFIG_TCP_CONG_CDG=y
CONFIG_TCP_CONG_BBR=y
# CONFIG_DEFAULT_BIC is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_HTCP is not set
# CONFIG_DEFAULT_HYBLA is not set
# CONFIG_DEFAULT_VEGAS is not set
# CONFIG_DEFAULT_VENO is not set
# CONFIG_DEFAULT_WESTWOOD is not set
# CONFIG_DEFAULT_DCTCP is not set
# CONFIG_DEFAULT_CDG is not set
# CONFIG_DEFAULT_BBR is not set
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_SIGPOOL=y
# CONFIG_TCP_AO is not set
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_ESP_OFFLOAD=y
CONFIG_INET6_ESPINTCP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_ILA=y
CONFIG_INET6_XFRM_TUNNEL=y
CONFIG_INET6_TUNNEL=y
CONFIG_IPV6_VTI=y
CONFIG_IPV6_SIT=y
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_GRE=y
CONFIG_IPV6_FOU=y
CONFIG_IPV6_FOU_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_IPV6_SEG6_LWTUNNEL=y
CONFIG_IPV6_SEG6_HMAC=y
CONFIG_IPV6_SEG6_BPF=y
CONFIG_IPV6_RPL_LWTUNNEL=y
# CONFIG_IPV6_IOAM6_LWTUNNEL is not set
CONFIG_NETLABEL=y
CONFIG_MPTCP=y
CONFIG_INET_MPTCP_DIAG=y
CONFIG_MPTCP_IPV6=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y
#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_EGRESS=y
CONFIG_NETFILTER_SKIP_EGRESS=y
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
CONFIG_NETFILTER_BPF_LINK=y
# CONFIG_NETFILTER_NETLINK_HOOK is not set
CONFIG_NETFILTER_NETLINK_ACCT=y
CONFIG_NETFILTER_NETLINK_QUEUE=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NETFILTER_NETLINK_OSF=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_LOG_SYSLOG=y
CONFIG_NETFILTER_CONNCOUNT=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
# CONFIG_NF_CONNTRACK_PROCFS is not set
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CONNTRACK_OVS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_BROADCAST=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_SNMP=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_CT_NETLINK_TIMEOUT=y
CONFIG_NF_CT_NETLINK_HELPER=y
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=y
CONFIG_NF_NAT_AMANDA=y
CONFIG_NF_NAT_FTP=y
CONFIG_NF_NAT_IRC=y
CONFIG_NF_NAT_SIP=y
CONFIG_NF_NAT_TFTP=y
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NF_NAT_OVS=y
CONFIG_NETFILTER_SYNPROXY=y
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=y
CONFIG_NFT_CT=y
CONFIG_NFT_FLOW_OFFLOAD=y
CONFIG_NFT_CONNLIMIT=y
CONFIG_NFT_LOG=y
CONFIG_NFT_LIMIT=y
CONFIG_NFT_MASQ=y
CONFIG_NFT_REDIR=y
CONFIG_NFT_NAT=y
CONFIG_NFT_TUNNEL=y
CONFIG_NFT_QUEUE=y
CONFIG_NFT_QUOTA=y
CONFIG_NFT_REJECT=y
CONFIG_NFT_REJECT_INET=y
CONFIG_NFT_COMPAT=y
CONFIG_NFT_HASH=y
CONFIG_NFT_FIB=y
CONFIG_NFT_FIB_INET=y
CONFIG_NFT_XFRM=y
CONFIG_NFT_SOCKET=y
CONFIG_NFT_OSF=y
CONFIG_NFT_TPROXY=y
CONFIG_NFT_SYNPROXY=y
CONFIG_NF_DUP_NETDEV=y
CONFIG_NFT_DUP_NETDEV=y
CONFIG_NFT_FWD_NETDEV=y
CONFIG_NFT_FIB_NETDEV=y
CONFIG_NFT_REJECT_NETDEV=y
CONFIG_NF_FLOW_TABLE_INET=y
CONFIG_NF_FLOW_TABLE=y
# CONFIG_NF_FLOW_TABLE_PROCFS is not set
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XTABLES_COMPAT=y
#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=y
CONFIG_NETFILTER_XT_CONNMARK=y
CONFIG_NETFILTER_XT_SET=y
#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=y
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_CT=y
CONFIG_NETFILTER_XT_TARGET_DSCP=y
CONFIG_NETFILTER_XT_TARGET_HL=y
CONFIG_NETFILTER_XT_TARGET_HMARK=y
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
CONFIG_NETFILTER_XT_TARGET_LED=y
CONFIG_NETFILTER_XT_TARGET_LOG=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_NAT=y
CONFIG_NETFILTER_XT_TARGET_NETMAP=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
CONFIG_NETFILTER_XT_TARGET_RATEEST=y
CONFIG_NETFILTER_XT_TARGET_REDIRECT=y
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y
CONFIG_NETFILTER_XT_TARGET_TEE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=y
#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y
CONFIG_NETFILTER_XT_MATCH_BPF=y
CONFIG_NETFILTER_XT_MATCH_CGROUP=y
CONFIG_NETFILTER_XT_MATCH_CLUSTER=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_CPU=y
CONFIG_NETFILTER_XT_MATCH_DCCP=y
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=y
CONFIG_NETFILTER_XT_MATCH_DSCP=y
CONFIG_NETFILTER_XT_MATCH_ECN=y
CONFIG_NETFILTER_XT_MATCH_ESP=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_HL=y
CONFIG_NETFILTER_XT_MATCH_IPCOMP=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_IPVS=y
CONFIG_NETFILTER_XT_MATCH_L2TP=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
CONFIG_NETFILTER_XT_MATCH_NFACCT=y
CONFIG_NETFILTER_XT_MATCH_OSF=y
CONFIG_NETFILTER_XT_MATCH_OWNER=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_RATEEST=y
CONFIG_NETFILTER_XT_MATCH_REALM=y
CONFIG_NETFILTER_XT_MATCH_RECENT=y
CONFIG_NETFILTER_XT_MATCH_SCTP=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TCPMSS=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
# end of Core Netfilter Configuration
CONFIG_IP_SET=y
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=y
CONFIG_IP_SET_BITMAP_IPMAC=y
CONFIG_IP_SET_BITMAP_PORT=y
CONFIG_IP_SET_HASH_IP=y
CONFIG_IP_SET_HASH_IPMARK=y
CONFIG_IP_SET_HASH_IPPORT=y
CONFIG_IP_SET_HASH_IPPORTIP=y
CONFIG_IP_SET_HASH_IPPORTNET=y
CONFIG_IP_SET_HASH_IPMAC=y
CONFIG_IP_SET_HASH_MAC=y
CONFIG_IP_SET_HASH_NETPORTNET=y
CONFIG_IP_SET_HASH_NET=y
CONFIG_IP_SET_HASH_NETNET=y
CONFIG_IP_SET_HASH_NETPORT=y
CONFIG_IP_SET_HASH_NETIFACE=y
CONFIG_IP_SET_LIST_SET=y
CONFIG_IP_VS=y
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12
#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y
#
# IPVS scheduler
#
CONFIG_IP_VS_RR=y
CONFIG_IP_VS_WRR=y
CONFIG_IP_VS_LC=y
CONFIG_IP_VS_WLC=y
CONFIG_IP_VS_FO=y
CONFIG_IP_VS_OVF=y
CONFIG_IP_VS_LBLC=y
CONFIG_IP_VS_LBLCR=y
CONFIG_IP_VS_DH=y
CONFIG_IP_VS_SH=y
CONFIG_IP_VS_MH=y
CONFIG_IP_VS_SED=y
CONFIG_IP_VS_NQ=y
CONFIG_IP_VS_TWOS=y
#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8
#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12
#
# IPVS application helper
#
CONFIG_IP_VS_FTP=y
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=y
#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_IP_NF_IPTABLES_LEGACY=y
CONFIG_NF_SOCKET_IPV4=y
CONFIG_NF_TPROXY_IPV4=y
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=y
CONFIG_NFT_DUP_IPV4=y
CONFIG_NFT_FIB_IPV4=y
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=y
CONFIG_NF_LOG_ARP=y
CONFIG_NF_LOG_IPV4=y
CONFIG_NF_REJECT_IPV4=y
CONFIG_NF_NAT_SNMP_BASIC=y
CONFIG_NF_NAT_PPTP=y
CONFIG_NF_NAT_H323=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_RPFILTER=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_SYNPROXY=y
CONFIG_IP_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_TARGET_ECN=y
CONFIG_IP_NF_TARGET_TTL=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_SECURITY=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_NFT_COMPAT_ARP=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
# end of IP: Netfilter Configuration
#
# IPv6: Netfilter Configuration
#
CONFIG_IP6_NF_IPTABLES_LEGACY=y
CONFIG_NF_SOCKET_IPV6=y
CONFIG_NF_TPROXY_IPV6=y
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=y
CONFIG_NFT_DUP_IPV6=y
CONFIG_NFT_FIB_IPV6=y
CONFIG_NF_DUP_IPV6=y
CONFIG_NF_REJECT_IPV6=y
CONFIG_NF_LOG_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_AH=y
CONFIG_IP6_NF_MATCH_EUI64=y
CONFIG_IP6_NF_MATCH_FRAG=y
CONFIG_IP6_NF_MATCH_OPTS=y
CONFIG_IP6_NF_MATCH_HL=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_MATCH_MH=y
CONFIG_IP6_NF_MATCH_RPFILTER=y
CONFIG_IP6_NF_MATCH_RT=y
CONFIG_IP6_NF_MATCH_SRH=y
CONFIG_IP6_NF_TARGET_HL=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_SYNPROXY=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_IP6_NF_SECURITY=y
CONFIG_IP6_NF_NAT=y
CONFIG_IP6_NF_TARGET_MASQUERADE=y
CONFIG_IP6_NF_TARGET_NPT=y
# end of IPv6: Netfilter Configuration
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_TABLES_BRIDGE=y
CONFIG_NFT_BRIDGE_META=y
CONFIG_NFT_BRIDGE_REJECT=y
CONFIG_NF_CONNTRACK_BRIDGE=y
CONFIG_BRIDGE_NF_EBTABLES_LEGACY=y
CONFIG_BRIDGE_NF_EBTABLES=y
CONFIG_BRIDGE_EBT_BROUTE=y
CONFIG_BRIDGE_EBT_T_FILTER=y
CONFIG_BRIDGE_EBT_T_NAT=y
CONFIG_BRIDGE_EBT_802_3=y
CONFIG_BRIDGE_EBT_AMONG=y
CONFIG_BRIDGE_EBT_ARP=y
CONFIG_BRIDGE_EBT_IP=y
CONFIG_BRIDGE_EBT_IP6=y
CONFIG_BRIDGE_EBT_LIMIT=y
CONFIG_BRIDGE_EBT_MARK=y
CONFIG_BRIDGE_EBT_PKTTYPE=y
CONFIG_BRIDGE_EBT_STP=y
CONFIG_BRIDGE_EBT_VLAN=y
CONFIG_BRIDGE_EBT_ARPREPLY=y
CONFIG_BRIDGE_EBT_DNAT=y
CONFIG_BRIDGE_EBT_MARK_T=y
CONFIG_BRIDGE_EBT_REDIRECT=y
CONFIG_BRIDGE_EBT_SNAT=y
CONFIG_BRIDGE_EBT_LOG=y
CONFIG_BRIDGE_EBT_NFLOG=y
CONFIG_IP_DCCP=y
CONFIG_INET_DCCP_DIAG=y
#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_TFRC_LIB=y
# end of DCCP CCIDs Configuration
#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
# end of DCCP Kernel Hacking
CONFIG_IP_SCTP=y
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=y
CONFIG_RDS=y
CONFIG_RDS_RDMA=y
CONFIG_RDS_TCP=y
# CONFIG_RDS_DEBUG is not set
CONFIG_TIPC=y
CONFIG_TIPC_MEDIA_IB=y
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=y
CONFIG_ATM=y
CONFIG_ATM_CLIP=y
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=y
CONFIG_ATM_MPOA=y
CONFIG_ATM_BR2684=y
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=y
# CONFIG_L2TP_DEBUGFS is not set
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=y
CONFIG_L2TP_ETH=y
CONFIG_STP=y
CONFIG_GARP=y
CONFIG_MRP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_BRIDGE_MRP=y
CONFIG_BRIDGE_CFM=y
CONFIG_NET_DSA=y
# CONFIG_NET_DSA_TAG_NONE is not set
# CONFIG_NET_DSA_TAG_AR9331 is not set
CONFIG_NET_DSA_TAG_BRCM_COMMON=y
CONFIG_NET_DSA_TAG_BRCM=y
# CONFIG_NET_DSA_TAG_BRCM_LEGACY is not set
CONFIG_NET_DSA_TAG_BRCM_PREPEND=y
# CONFIG_NET_DSA_TAG_HELLCREEK is not set
# CONFIG_NET_DSA_TAG_GSWIP is not set
# CONFIG_NET_DSA_TAG_DSA is not set
# CONFIG_NET_DSA_TAG_EDSA is not set
CONFIG_NET_DSA_TAG_MTK=y
# CONFIG_NET_DSA_TAG_KSZ is not set
# CONFIG_NET_DSA_TAG_OCELOT is not set
# CONFIG_NET_DSA_TAG_OCELOT_8021Q is not set
CONFIG_NET_DSA_TAG_QCA=y
CONFIG_NET_DSA_TAG_RTL4_A=y
# CONFIG_NET_DSA_TAG_RTL8_4 is not set
# CONFIG_NET_DSA_TAG_RZN1_A5PSW is not set
# CONFIG_NET_DSA_TAG_LAN9303 is not set
# CONFIG_NET_DSA_TAG_SJA1105 is not set
# CONFIG_NET_DSA_TAG_TRAILER is not set
# CONFIG_NET_DSA_TAG_XRS700X is not set
CONFIG_VLAN_8021Q=y
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
CONFIG_LLC=y
CONFIG_LLC2=y
# CONFIG_ATALK is not set
CONFIG_X25=y
CONFIG_LAPB=y
CONFIG_PHONET=y
CONFIG_6LOWPAN=y
# CONFIG_6LOWPAN_DEBUGFS is not set
CONFIG_6LOWPAN_NHC=y
CONFIG_6LOWPAN_NHC_DEST=y
CONFIG_6LOWPAN_NHC_FRAGMENT=y
CONFIG_6LOWPAN_NHC_HOP=y
CONFIG_6LOWPAN_NHC_IPV6=y
CONFIG_6LOWPAN_NHC_MOBILITY=y
CONFIG_6LOWPAN_NHC_ROUTING=y
CONFIG_6LOWPAN_NHC_UDP=y
CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=y
CONFIG_6LOWPAN_GHC_UDP=y
CONFIG_6LOWPAN_GHC_ICMPV6=y
CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=y
CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=y
CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=y
CONFIG_IEEE802154=y
CONFIG_IEEE802154_NL802154_EXPERIMENTAL=y
CONFIG_IEEE802154_SOCKET=y
CONFIG_IEEE802154_6LOWPAN=y
CONFIG_MAC802154=y
CONFIG_NET_SCHED=y
#
# Queueing/Scheduling
#
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_HFSC=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_MULTIQ=y
CONFIG_NET_SCH_RED=y
CONFIG_NET_SCH_SFB=y
CONFIG_NET_SCH_SFQ=y
CONFIG_NET_SCH_TEQL=y
CONFIG_NET_SCH_TBF=y
CONFIG_NET_SCH_CBS=y
CONFIG_NET_SCH_ETF=y
CONFIG_NET_SCH_MQPRIO_LIB=y
CONFIG_NET_SCH_TAPRIO=y
CONFIG_NET_SCH_GRED=y
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_DRR=y
CONFIG_NET_SCH_MQPRIO=y
CONFIG_NET_SCH_SKBPRIO=y
CONFIG_NET_SCH_CHOKE=y
CONFIG_NET_SCH_QFQ=y
CONFIG_NET_SCH_CODEL=y
CONFIG_NET_SCH_FQ_CODEL=y
CONFIG_NET_SCH_CAKE=y
CONFIG_NET_SCH_FQ=y
CONFIG_NET_SCH_HHF=y
CONFIG_NET_SCH_PIE=y
CONFIG_NET_SCH_FQ_PIE=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_SCH_PLUG=y
CONFIG_NET_SCH_ETS=y
CONFIG_NET_SCH_DEFAULT=y
# CONFIG_DEFAULT_FQ is not set
# CONFIG_DEFAULT_CODEL is not set
# CONFIG_DEFAULT_FQ_CODEL is not set
# CONFIG_DEFAULT_FQ_PIE is not set
# CONFIG_DEFAULT_SFQ is not set
CONFIG_DEFAULT_PFIFO_FAST=y
CONFIG_DEFAULT_NET_SCH="pfifo_fast"
#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=y
CONFIG_NET_CLS_ROUTE4=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_FLOW=y
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=y
CONFIG_NET_CLS_FLOWER=y
CONFIG_NET_CLS_MATCHALL=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=y
CONFIG_NET_EMATCH_NBYTE=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_EMATCH_META=y
CONFIG_NET_EMATCH_TEXT=y
CONFIG_NET_EMATCH_CANID=y
CONFIG_NET_EMATCH_IPSET=y
CONFIG_NET_EMATCH_IPT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_SAMPLE=y
CONFIG_NET_ACT_NAT=y
CONFIG_NET_ACT_PEDIT=y
CONFIG_NET_ACT_SIMP=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_NET_ACT_CSUM=y
CONFIG_NET_ACT_MPLS=y
CONFIG_NET_ACT_VLAN=y
CONFIG_NET_ACT_BPF=y
CONFIG_NET_ACT_CONNMARK=y
CONFIG_NET_ACT_CTINFO=y
CONFIG_NET_ACT_SKBMOD=y
CONFIG_NET_ACT_IFE=y
CONFIG_NET_ACT_TUNNEL_KEY=y
CONFIG_NET_ACT_CT=y
CONFIG_NET_ACT_GATE=y
CONFIG_NET_IFE_SKBMARK=y
CONFIG_NET_IFE_SKBPRIO=y
CONFIG_NET_IFE_SKBTCINDEX=y
CONFIG_NET_TC_SKB_EXT=y
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=y
CONFIG_BATMAN_ADV_BATMAN_V=y
CONFIG_BATMAN_ADV_BLA=y
CONFIG_BATMAN_ADV_DAT=y
CONFIG_BATMAN_ADV_NC=y
CONFIG_BATMAN_ADV_MCAST=y
# CONFIG_BATMAN_ADV_DEBUG is not set
# CONFIG_BATMAN_ADV_TRACING is not set
CONFIG_OPENVSWITCH=y
CONFIG_OPENVSWITCH_GRE=y
CONFIG_OPENVSWITCH_VXLAN=y
CONFIG_OPENVSWITCH_GENEVE=y
CONFIG_VSOCKETS=y
CONFIG_VSOCKETS_DIAG=y
CONFIG_VSOCKETS_LOOPBACK=y
# CONFIG_VMWARE_VMCI_VSOCKETS is not set
CONFIG_VIRTIO_VSOCKETS=y
CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_NETLINK_DIAG=y
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=y
CONFIG_MPLS_ROUTING=y
CONFIG_MPLS_IPTUNNEL=y
CONFIG_NET_NSH=y
CONFIG_HSR=y
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_QRTR=y
CONFIG_QRTR_TUN=y
# CONFIG_QRTR_MHI is not set
CONFIG_NET_NCSI=y
# CONFIG_NCSI_OEM_CMD_GET_MAC is not set
# CONFIG_NCSI_OEM_CMD_KEEP_PHY is not set
# CONFIG_PCPU_DEV_REFCNT is not set
CONFIG_MAX_SKB_FRAGS=17
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options
CONFIG_HAMRADIO=y
#
# Packet Radio protocols
#
CONFIG_AX25=y
CONFIG_AX25_DAMA_SLAVE=y
CONFIG_NETROM=y
CONFIG_ROSE=y
#
# AX.25 network device drivers
#
CONFIG_MKISS=y
CONFIG_6PACK=y
CONFIG_BPQETHER=y
# CONFIG_BAYCOM_SER_FDX is not set
# CONFIG_BAYCOM_SER_HDX is not set
# CONFIG_BAYCOM_PAR is not set
# CONFIG_YAM is not set
# end of AX.25 network device drivers
CONFIG_CAN=y
CONFIG_CAN_RAW=y
CONFIG_CAN_BCM=y
CONFIG_CAN_GW=y
CONFIG_CAN_J1939=y
CONFIG_CAN_ISOTP=y
CONFIG_BT=y
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=y
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=y
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_CMTP=y
CONFIG_BT_HIDP=y
CONFIG_BT_LE=y
CONFIG_BT_LE_L2CAP_ECRED=y
CONFIG_BT_6LOWPAN=y
CONFIG_BT_LEDS=y
CONFIG_BT_MSFTEXT=y
# CONFIG_BT_AOSPEXT is not set
# CONFIG_BT_DEBUGFS is not set
# CONFIG_BT_SELFTEST is not set
#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=y
CONFIG_BT_BCM=y
CONFIG_BT_RTL=y
CONFIG_BT_QCA=y
CONFIG_BT_MTK=y
CONFIG_BT_HCIBTUSB=y
# CONFIG_BT_HCIBTUSB_AUTOSUSPEND is not set
CONFIG_BT_HCIBTUSB_POLL_SYNC=y
CONFIG_BT_HCIBTUSB_BCM=y
CONFIG_BT_HCIBTUSB_MTK=y
CONFIG_BT_HCIBTUSB_RTL=y
# CONFIG_BT_HCIBTSDIO is not set
CONFIG_BT_HCIUART=y
CONFIG_BT_HCIUART_SERDEV=y
CONFIG_BT_HCIUART_H4=y
# CONFIG_BT_HCIUART_NOKIA is not set
CONFIG_BT_HCIUART_BCSP=y
# CONFIG_BT_HCIUART_ATH3K is not set
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIUART_3WIRE=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_BCM is not set
# CONFIG_BT_HCIUART_RTL is not set
CONFIG_BT_HCIUART_QCA=y
CONFIG_BT_HCIUART_AG6XX=y
CONFIG_BT_HCIUART_MRVL=y
CONFIG_BT_HCIBCM203X=y
# CONFIG_BT_HCIBCM4377 is not set
CONFIG_BT_HCIBPA10X=y
CONFIG_BT_HCIBFUSB=y
# CONFIG_BT_HCIDTL1 is not set
# CONFIG_BT_HCIBT3C is not set
# CONFIG_BT_HCIBLUECARD is not set
CONFIG_BT_HCIVHCI=y
# CONFIG_BT_MRVL is not set
CONFIG_BT_ATH3K=y
# CONFIG_BT_MTKSDIO is not set
# CONFIG_BT_MTKUART is not set
# CONFIG_BT_VIRTIO is not set
# CONFIG_BT_NXPUART is not set
# end of Bluetooth device drivers
CONFIG_AF_RXRPC=y
CONFIG_AF_RXRPC_IPV6=y
# CONFIG_AF_RXRPC_INJECT_LOSS is not set
# CONFIG_AF_RXRPC_INJECT_RX_DELAY is not set
# CONFIG_AF_RXRPC_DEBUG is not set
CONFIG_RXKAD=y
# CONFIG_RXPERF is not set
CONFIG_AF_KCM=y
CONFIG_STREAM_PARSER=y
# CONFIG_MCTP is not set
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=y
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
CONFIG_CFG80211_DEBUGFS=y
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_MAC80211=y
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=y
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_FD=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_NET_9P_RDMA=y
# CONFIG_NET_9P_DEBUG is not set
CONFIG_CAIF=y
CONFIG_CAIF_DEBUG=y
CONFIG_CAIF_NETDEV=y
CONFIG_CAIF_USB=y
CONFIG_CEPH_LIB=y
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
CONFIG_NFC=y
CONFIG_NFC_DIGITAL=y
CONFIG_NFC_NCI=y
# CONFIG_NFC_NCI_SPI is not set
CONFIG_NFC_NCI_UART=y
CONFIG_NFC_HCI=y
CONFIG_NFC_SHDLC=y
#
# Near Field Communication (NFC) devices
#
# CONFIG_NFC_TRF7970A is not set
CONFIG_NFC_SIM=y
CONFIG_NFC_PORT100=y
CONFIG_NFC_VIRTUAL_NCI=y
CONFIG_NFC_FDP=y
# CONFIG_NFC_FDP_I2C is not set
# CONFIG_NFC_PN544_I2C is not set
CONFIG_NFC_PN533=y
CONFIG_NFC_PN533_USB=y
# CONFIG_NFC_PN533_I2C is not set
# CONFIG_NFC_PN532_UART is not set
# CONFIG_NFC_MICROREAD_I2C is not set
CONFIG_NFC_MRVL=y
CONFIG_NFC_MRVL_USB=y
# CONFIG_NFC_MRVL_UART is not set
# CONFIG_NFC_MRVL_I2C is not set
# CONFIG_NFC_ST21NFCA_I2C is not set
# CONFIG_NFC_ST_NCI_I2C is not set
# CONFIG_NFC_ST_NCI_SPI is not set
# CONFIG_NFC_NXP_NCI is not set
# CONFIG_NFC_S3FWRN5_I2C is not set
# CONFIG_NFC_S3FWRN82_UART is not set
# CONFIG_NFC_ST95HF is not set
# end of Near Field Communication (NFC) devices
CONFIG_PSAMPLE=y
CONFIG_NET_IFE=y
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SELFTESTS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
# CONFIG_PAGE_POOL_STATS is not set
CONFIG_FAILOVER=y
CONFIG_ETHTOOL_NETLINK=y
#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
# CONFIG_PCIEAER_INJECT is not set
# CONFIG_PCIE_ECRC is not set
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
# CONFIG_PCIE_DPC is not set
# CONFIG_PCIE_PTM is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_PF_STUB is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_ECAM=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
# CONFIG_PCI_DYNAMIC_OF_NODES is not set
# CONFIG_PCIE_BUS_TUNE_OFF is not set
CONFIG_PCIE_BUS_DEFAULT=y
# CONFIG_PCIE_BUS_SAFE is not set
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_HOTPLUG_PCI=y
# CONFIG_HOTPLUG_PCI_ACPI is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
#
# PCI controller drivers
#
# CONFIG_PCI_FTPCI100 is not set
CONFIG_PCI_HOST_COMMON=y
CONFIG_PCI_HOST_GENERIC=y
# CONFIG_VMD is not set
# CONFIG_PCIE_MICROCHIP_HOST is not set
# CONFIG_PCIE_XILINX is not set
#
# Cadence-based PCIe controllers
#
# CONFIG_PCIE_CADENCE_PLAT_HOST is not set
# CONFIG_PCIE_CADENCE_PLAT_EP is not set
# end of Cadence-based PCIe controllers
#
# DesignWare-based PCIe controllers
#
# CONFIG_PCI_MESON is not set
# CONFIG_PCIE_INTEL_GW is not set
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCIE_DW_PLAT_EP is not set
# end of DesignWare-based PCIe controllers
#
# Mobiveil-based PCIe controllers
#
# end of Mobiveil-based PCIe controllers
# end of PCI controller drivers
#
# PCI Endpoint
#
CONFIG_PCI_ENDPOINT=y
# CONFIG_PCI_ENDPOINT_CONFIGFS is not set
# CONFIG_PCI_EPF_TEST is not set
# CONFIG_PCI_EPF_NTB is not set
# end of PCI Endpoint
#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers
# CONFIG_CXL_BUS is not set
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y
#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
# CONFIG_RAPIDIO is not set
#
# Generic Driver Options
#
CONFIG_AUXILIARY_BUS=y
CONFIG_UEVENT_HELPER=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_DEVTMPFS_SAFE is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
#
# Firmware loader
#
CONFIG_FW_LOADER=y
# CONFIG_FW_LOADER_DEBUG is not set
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_FW_LOADER_SYSFS=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_FW_LOADER_COMPRESS=y
# CONFIG_FW_LOADER_COMPRESS_XZ is not set
# CONFIG_FW_LOADER_COMPRESS_ZSTD is not set
CONFIG_FW_CACHE=y
# CONFIG_FW_UPLOAD is not set
# end of Firmware loader
CONFIG_WANT_DEV_COREDUMP=y
CONFIG_ALLOW_DEV_COREDUMP=y
CONFIG_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_GENERIC_CPU_DEVICES=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=y
CONFIG_REGMAP_MMIO=y
CONFIG_REGMAP_IRQ=y
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set
# end of Generic Driver Options
#
# Bus devices
#
# CONFIG_MOXTET is not set
CONFIG_MHI_BUS=y
# CONFIG_MHI_BUS_DEBUG is not set
# CONFIG_MHI_BUS_PCI_GENERIC is not set
# CONFIG_MHI_BUS_EP is not set
# end of Bus devices
#
# Cache Drivers
#
# end of Cache Drivers
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
#
# Firmware Drivers
#
#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol
# CONFIG_EDD is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_FW_CFG_SYSFS is not set
CONFIG_SYSFB=y
# CONFIG_SYSFB_SIMPLEFB is not set
CONFIG_GOOGLE_FIRMWARE=y
# CONFIG_GOOGLE_SMI is not set
# CONFIG_GOOGLE_CBMEM is not set
CONFIG_GOOGLE_COREBOOT_TABLE=y
CONFIG_GOOGLE_MEMCONSOLE=y
# CONFIG_GOOGLE_MEMCONSOLE_X86_LEGACY is not set
# CONFIG_GOOGLE_FRAMEBUFFER_COREBOOT is not set
CONFIG_GOOGLE_MEMCONSOLE_COREBOOT=y
CONFIG_GOOGLE_VPD=y
#
# Qualcomm firmware drivers
#
# end of Qualcomm firmware drivers
#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers
# CONFIG_GNSS is not set
CONFIG_MTD=y
# CONFIG_MTD_TESTS is not set
#
# Partition parsers
#
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_OF_PARTS is not set
# CONFIG_MTD_REDBOOT_PARTS is not set
# end of Partition parsers
#
# User Modules And Translation Layers
#
CONFIG_MTD_BLKDEVS=y
CONFIG_MTD_BLOCK=y
#
# Note that in some cases UBI block is preferred. See MTD_UBI_BLOCK.
#
CONFIG_FTL=y
# CONFIG_NFTL is not set
# CONFIG_INFTL is not set
# CONFIG_RFD_FTL is not set
# CONFIG_SSFDC is not set
# CONFIG_SM_FTL is not set
# CONFIG_MTD_OOPS is not set
# CONFIG_MTD_SWAP is not set
# CONFIG_MTD_PARTITIONED_MASTER is not set
#
# RAM/ROM/Flash chip drivers
#
# CONFIG_MTD_CFI is not set
# CONFIG_MTD_JEDECPROBE is not set
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_RAM is not set
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
# end of RAM/ROM/Flash chip drivers
#
# Mapping drivers for chip access
#
# CONFIG_MTD_COMPLEX_MAPPINGS is not set
# CONFIG_MTD_PLATRAM is not set
# end of Mapping drivers for chip access
#
# Self-contained MTD device drivers
#
# CONFIG_MTD_PMC551 is not set
# CONFIG_MTD_DATAFLASH is not set
# CONFIG_MTD_MCHP23K256 is not set
# CONFIG_MTD_MCHP48L640 is not set
# CONFIG_MTD_SST25L is not set
CONFIG_MTD_SLRAM=y
CONFIG_MTD_PHRAM=y
CONFIG_MTD_MTDRAM=y
CONFIG_MTDRAM_TOTAL_SIZE=128
CONFIG_MTDRAM_ERASE_SIZE=4
CONFIG_MTD_BLOCK2MTD=y
#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOCG3 is not set
# end of Self-contained MTD device drivers
#
# NAND
#
# CONFIG_MTD_ONENAND is not set
# CONFIG_MTD_RAW_NAND is not set
# CONFIG_MTD_SPI_NAND is not set
#
# ECC engine support
#
# CONFIG_MTD_NAND_ECC_SW_HAMMING is not set
# CONFIG_MTD_NAND_ECC_SW_BCH is not set
# CONFIG_MTD_NAND_ECC_MXIC is not set
# end of ECC engine support
# end of NAND
#
# LPDDR & LPDDR2 PCM memory drivers
#
# CONFIG_MTD_LPDDR is not set
# end of LPDDR & LPDDR2 PCM memory drivers
# CONFIG_MTD_SPI_NOR is not set
CONFIG_MTD_UBI=y
CONFIG_MTD_UBI_WL_THRESHOLD=4096
CONFIG_MTD_UBI_BEB_LIMIT=20
# CONFIG_MTD_UBI_FASTMAP is not set
# CONFIG_MTD_UBI_GLUEBI is not set
# CONFIG_MTD_UBI_BLOCK is not set
# CONFIG_MTD_UBI_FAULT_INJECTION is not set
# CONFIG_MTD_UBI_NVMEM is not set
# CONFIG_MTD_HYPERBUS is not set
CONFIG_DTC=y
CONFIG_OF=y
# CONFIG_OF_UNITTEST is not set
CONFIG_OF_FLATTREE=y
CONFIG_OF_EARLY_FLATTREE=y
CONFIG_OF_KOBJ=y
CONFIG_OF_ADDRESS=y
CONFIG_OF_IRQ=y
CONFIG_OF_RESERVED_MEM=y
# CONFIG_OF_OVERLAY is not set
CONFIG_OF_NUMA=y
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=y
# CONFIG_PARPORT_PC is not set
# CONFIG_PARPORT_1284 is not set
CONFIG_PARPORT_NOT_PC=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y
#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=y
CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION=y
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=y
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=y
CONFIG_ZRAM_DEF_COMP_LZORLE=y
# CONFIG_ZRAM_DEF_COMP_ZSTD is not set
# CONFIG_ZRAM_DEF_COMP_LZ4 is not set
# CONFIG_ZRAM_DEF_COMP_LZO is not set
# CONFIG_ZRAM_DEF_COMP_LZ4HC is not set
# CONFIG_ZRAM_DEF_COMP_842 is not set
CONFIG_ZRAM_DEF_COMP="lzo-rle"
# CONFIG_ZRAM_WRITEBACK is not set
# CONFIG_ZRAM_TRACK_ENTRY_ACTIME is not set
# CONFIG_ZRAM_MEMORY_TRACKING is not set
# CONFIG_ZRAM_MULTI_COMP is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=16
# CONFIG_BLK_DEV_DRBD is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_ATA_OVER_ETH=y
CONFIG_VIRTIO_BLK=y
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_UBLK is not set
CONFIG_BLK_DEV_RNBD=y
CONFIG_BLK_DEV_RNBD_CLIENT=y
#
# NVME Support
#
CONFIG_NVME_CORE=y
CONFIG_BLK_DEV_NVME=y
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_VERBOSE_ERRORS is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=y
CONFIG_NVME_RDMA=y
CONFIG_NVME_FC=y
CONFIG_NVME_TCP=y
# CONFIG_NVME_TCP_TLS is not set
# CONFIG_NVME_HOST_AUTH is not set
CONFIG_NVME_TARGET=y
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=y
CONFIG_NVME_TARGET_RDMA=y
CONFIG_NVME_TARGET_FC=y
CONFIG_NVME_TARGET_FCLOOP=y
CONFIG_NVME_TARGET_TCP=y
# CONFIG_NVME_TARGET_TCP_TLS is not set
# CONFIG_NVME_TARGET_AUTH is not set
# end of NVME Support
#
# Misc devices
#
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_MISC_RTSX=y
# CONFIG_HISI_HIKEY_USB is not set
# CONFIG_OPEN_DICE is not set
# CONFIG_VCPU_STALL_DETECTOR is not set
# CONFIG_NSM is not set
# CONFIG_C2PORT is not set
#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
# CONFIG_EEPROM_MAX6875 is not set
CONFIG_EEPROM_93CX6=y
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support
# CONFIG_CB710_CORE is not set
#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline
# CONFIG_SENSORS_LIS3_I2C is not set
# CONFIG_ALTERA_STAPL is not set
# CONFIG_INTEL_MEI is not set
CONFIG_VMWARE_VMCI=y
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
CONFIG_MISC_RTSX_USB=y
# CONFIG_UACCE is not set
# CONFIG_PVPANIC is not set
# CONFIG_GP_PCI1XXXX is not set
# end of Misc devices
#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=y
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
CONFIG_BLK_DEV_BSG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y
#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=y
CONFIG_SCSI_SAS_ATTRS=y
CONFIG_SCSI_SAS_LIBSAS=y
CONFIG_SCSI_SAS_ATA=y
# CONFIG_SCSI_SAS_HOST_SMP is not set
CONFIG_SCSI_SRP_ATTRS=y
# end of SCSI Transports
CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
CONFIG_SCSI_HPSA=y
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_MPI3MR is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_LIBFC is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_ISCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_EFCT is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=y
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# end of SCSI device support
CONFIG_ATA=y
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y
#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=y
CONFIG_SATA_MOBILE_LPM_POLICY=0
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_AHCI_DWC is not set
# CONFIG_AHCI_CEVA is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y
#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y
#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=y
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set
#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
CONFIG_PATA_AMD=y
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
CONFIG_PATA_OLDPIIX=y
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
CONFIG_PATA_SCH=y
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set
#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PCMCIA is not set
# CONFIG_PATA_OF_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set
#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=y
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_BITMAP_FILE=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
# CONFIG_MD_CLUSTER is not set
CONFIG_BCACHE=y
# CONFIG_BCACHE_DEBUG is not set
# CONFIG_BCACHE_ASYNC_REGISTRATION is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=y
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=y
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=y
CONFIG_DM_PERSISTENT_DATA=y
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_THIN_PROVISIONING=y
CONFIG_DM_CACHE=y
CONFIG_DM_CACHE_SMQ=y
CONFIG_DM_WRITECACHE=y
# CONFIG_DM_EBS is not set
# CONFIG_DM_ERA is not set
CONFIG_DM_CLONE=y
CONFIG_DM_MIRROR=y
# CONFIG_DM_LOG_USERSPACE is not set
CONFIG_DM_RAID=y
CONFIG_DM_ZERO=y
CONFIG_DM_MULTIPATH=y
CONFIG_DM_MULTIPATH_QL=y
CONFIG_DM_MULTIPATH_ST=y
# CONFIG_DM_MULTIPATH_HST is not set
# CONFIG_DM_MULTIPATH_IOA is not set
# CONFIG_DM_DELAY is not set
# CONFIG_DM_DUST is not set
# CONFIG_DM_INIT is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=y
CONFIG_DM_VERITY=y
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
CONFIG_DM_VERITY_FEC=y
# CONFIG_DM_SWITCH is not set
# CONFIG_DM_LOG_WRITES is not set
CONFIG_DM_INTEGRITY=y
CONFIG_DM_ZONED=y
CONFIG_DM_AUDIT=y
# CONFIG_DM_VDO is not set
CONFIG_TARGET_CORE=y
# CONFIG_TCM_IBLOCK is not set
# CONFIG_TCM_FILEIO is not set
# CONFIG_TCM_PSCSI is not set
# CONFIG_LOOPBACK_TARGET is not set
# CONFIG_ISCSI_TARGET is not set
# CONFIG_SBP_TARGET is not set
# CONFIG_REMOTE_TARGET is not set
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=y
CONFIG_FIREWIRE_SBP2=y
CONFIG_FIREWIRE_NET=y
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
CONFIG_BONDING=y
CONFIG_DUMMY=y
CONFIG_WIREGUARD=y
# CONFIG_WIREGUARD_DEBUG is not set
CONFIG_EQUALIZER=y
CONFIG_NET_FC=y
CONFIG_IFB=y
CONFIG_NET_TEAM=y
CONFIG_NET_TEAM_MODE_BROADCAST=y
CONFIG_NET_TEAM_MODE_ROUNDROBIN=y
CONFIG_NET_TEAM_MODE_RANDOM=y
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=y
CONFIG_NET_TEAM_MODE_LOADBALANCE=y
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
CONFIG_IPVLAN_L3S=y
CONFIG_IPVLAN=y
CONFIG_IPVTAP=y
CONFIG_VXLAN=y
CONFIG_GENEVE=y
CONFIG_BAREUDP=y
CONFIG_GTP=y
# CONFIG_AMT is not set
CONFIG_MACSEC=y
CONFIG_NETCONSOLE=y
# CONFIG_NETCONSOLE_DYNAMIC is not set
# CONFIG_NETCONSOLE_EXTENDED_LOG is not set
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
CONFIG_TAP=y
CONFIG_TUN_VNET_CROSS_LE=y
CONFIG_VETH=y
CONFIG_VIRTIO_NET=y
CONFIG_NLMON=y
# CONFIG_NETKIT is not set
CONFIG_NET_VRF=y
CONFIG_VSOCKMON=y
# CONFIG_MHI_NET is not set
# CONFIG_ARCNET is not set
CONFIG_ATM_DRIVERS=y
# CONFIG_ATM_DUMMY is not set
CONFIG_ATM_TCP=y
# CONFIG_ATM_LANAI is not set
# CONFIG_ATM_ENI is not set
# CONFIG_ATM_NICSTAR is not set
# CONFIG_ATM_IDT77252 is not set
# CONFIG_ATM_IA is not set
# CONFIG_ATM_FORE200E is not set
# CONFIG_ATM_HE is not set
# CONFIG_ATM_SOLOS is not set
CONFIG_CAIF_DRIVERS=y
CONFIG_CAIF_TTY=y
CONFIG_CAIF_VIRTIO=y
#
# Distributed Switch Architecture drivers
#
# CONFIG_B53 is not set
# CONFIG_NET_DSA_BCM_SF2 is not set
# CONFIG_NET_DSA_LOOP is not set
# CONFIG_NET_DSA_HIRSCHMANN_HELLCREEK is not set
# CONFIG_NET_DSA_LANTIQ_GSWIP is not set
# CONFIG_NET_DSA_MT7530 is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON is not set
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_AR9331 is not set
# CONFIG_NET_DSA_QCA8K is not set
# CONFIG_NET_DSA_SJA1105 is not set
# CONFIG_NET_DSA_XRS700X_I2C is not set
# CONFIG_NET_DSA_XRS700X_MDIO is not set
# CONFIG_NET_DSA_REALTEK is not set
# CONFIG_NET_DSA_SMSC_LAN9303_I2C is not set
# CONFIG_NET_DSA_SMSC_LAN9303_MDIO is not set
# CONFIG_NET_DSA_VITESSE_VSC73XX_SPI is not set
# CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM is not set
# end of Distributed Switch Architecture drivers
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
# CONFIG_NET_VENDOR_ALACRITECH is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_AQUANTIA is not set
# CONFIG_NET_VENDOR_ARC is not set
CONFIG_NET_VENDOR_ASIX=y
# CONFIG_SPI_AX88796C is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
# CONFIG_CX_ECAT is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_CADENCE is not set
# CONFIG_NET_VENDOR_CAVIUM is not set
# CONFIG_NET_VENDOR_CHELSIO is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_NET_VENDOR_CORTINA is not set
CONFIG_NET_VENDOR_DAVICOM=y
# CONFIG_DM9051 is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
# CONFIG_NET_VENDOR_EMULEX is not set
CONFIG_NET_VENDOR_ENGLEDER=y
# CONFIG_TSNEP is not set
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_FUJITSU is not set
CONFIG_NET_VENDOR_FUNGIBLE=y
# CONFIG_FUN_ETH is not set
CONFIG_NET_VENDOR_GOOGLE=y
CONFIG_GVE=y
# CONFIG_NET_VENDOR_HUAWEI is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
# CONFIG_IGB is not set
# CONFIG_IGBVF is not set
# CONFIG_IXGBE is not set
# CONFIG_IXGBEVF is not set
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
# CONFIG_IGC is not set
# CONFIG_IDPF is not set
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_ADI is not set
CONFIG_NET_VENDOR_LITEX=y
# CONFIG_LITEX_LITEETH is not set
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
CONFIG_MLX4_CORE=y
# CONFIG_MLX4_DEBUG is not set
# CONFIG_MLX4_CORE_GEN2 is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
CONFIG_NET_VENDOR_MICROSOFT=y
# CONFIG_NET_VENDOR_MYRI is not set
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NI is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NETERION is not set
# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_VENDOR_PACKET_ENGINES is not set
# CONFIG_NET_VENDOR_PENSANDO is not set
# CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
# CONFIG_NET_VENDOR_SAMSUNG is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_NET_VENDOR_SOLARFLARE is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_SOCIONEXT is not set
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
# CONFIG_NET_VENDOR_SYNOPSYS is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
CONFIG_NET_VENDOR_VERTEXCOM=y
# CONFIG_MSE102X is not set
# CONFIG_NET_VENDOR_VIA is not set
CONFIG_NET_VENDOR_WANGXUN=y
# CONFIG_NGBE is not set
# CONFIG_TXGBE is not set
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_NET_VENDOR_XILINX is not set
# CONFIG_NET_VENDOR_XIRCOM is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLINK=y
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set
CONFIG_PHYLIB_LEDS=y
CONFIG_FIXED_PHY=y
# CONFIG_SFP is not set
#
# MII PHY device drivers
#
# CONFIG_AMD_PHY is not set
# CONFIG_ADIN_PHY is not set
# CONFIG_ADIN1100_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
CONFIG_AX88796B_PHY=y
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MARVELL_88Q2XXX_PHY is not set
# CONFIG_MARVELL_88X2222_PHY is not set
# CONFIG_MAXLINEAR_GPHY is not set
# CONFIG_MEDIATEK_GE_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_T1S_PHY is not set
CONFIG_MICROCHIP_PHY=y
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_MOTORCOMM_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_CBTX_PHY is not set
# CONFIG_NXP_C45_TJA11XX_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_NCN26000_PHY is not set
# CONFIG_AT803X_PHY is not set
# CONFIG_QCA83XX_PHY is not set
# CONFIG_QCA808X_PHY is not set
# CONFIG_QCA807X_PHY is not set
# CONFIG_QSEMI_PHY is not set
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
CONFIG_SMSC_PHY=y
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_DP83TD510_PHY is not set
# CONFIG_DP83TG720_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
# CONFIG_PSE_CONTROLLER is not set
CONFIG_CAN_DEV=y
CONFIG_CAN_VCAN=y
CONFIG_CAN_VXCAN=y
CONFIG_CAN_NETLINK=y
CONFIG_CAN_CALC_BITTIMING=y
CONFIG_CAN_RX_OFFLOAD=y
# CONFIG_CAN_CAN327 is not set
# CONFIG_CAN_FLEXCAN is not set
# CONFIG_CAN_GRCAN is not set
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_SLCAN=y
# CONFIG_CAN_C_CAN is not set
# CONFIG_CAN_CC770 is not set
# CONFIG_CAN_CTUCANFD_PCI is not set
# CONFIG_CAN_CTUCANFD_PLATFORM is not set
# CONFIG_CAN_ESD_402_PCI is not set
CONFIG_CAN_IFI_CANFD=y
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
# CONFIG_CAN_SJA1000 is not set
# CONFIG_CAN_SOFTING is not set
#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces
#
# CAN USB interfaces
#
CONFIG_CAN_8DEV_USB=y
CONFIG_CAN_EMS_USB=y
# CONFIG_CAN_ESD_USB is not set
# CONFIG_CAN_ETAS_ES58X is not set
# CONFIG_CAN_F81604 is not set
CONFIG_CAN_GS_USB=y
CONFIG_CAN_KVASER_USB=y
CONFIG_CAN_MCBA_USB=y
CONFIG_CAN_PEAK_USB=y
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces
# CONFIG_CAN_DEBUG_DEVICES is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_FWNODE_MDIO=y
CONFIG_OF_MDIO=y
CONFIG_ACPI_MDIO=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_HISI_FEMAC is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_OCTEON is not set
# CONFIG_MDIO_IPQ4019 is not set
# CONFIG_MDIO_IPQ8064 is not set
# CONFIG_MDIO_THUNDER is not set
#
# MDIO Multiplexers
#
# CONFIG_MDIO_BUS_MUX_GPIO is not set
# CONFIG_MDIO_BUS_MUX_MULTIPLEXER is not set
# CONFIG_MDIO_BUS_MUX_MMIOREG is not set
#
# PCS device drivers
#
# end of PCS device drivers
# CONFIG_PLIP is not set
CONFIG_PPP=y
CONFIG_PPP_BSDCOMP=y
CONFIG_PPP_DEFLATE=y
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=y
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=y
CONFIG_PPPOE=y
# CONFIG_PPPOE_HASH_BITS_1 is not set
# CONFIG_PPPOE_HASH_BITS_2 is not set
CONFIG_PPPOE_HASH_BITS_4=y
# CONFIG_PPPOE_HASH_BITS_8 is not set
CONFIG_PPPOE_HASH_BITS=4
CONFIG_PPTP=y
CONFIG_PPPOL2TP=y
CONFIG_PPP_ASYNC=y
CONFIG_PPP_SYNC_TTY=y
CONFIG_SLIP=y
CONFIG_SLHC=y
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_CATC=y
CONFIG_USB_KAWETH=y
CONFIG_USB_PEGASUS=y
CONFIG_USB_RTL8150=y
CONFIG_USB_RTL8152=y
CONFIG_USB_LAN78XX=y
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=y
CONFIG_USB_NET_CDCETHER=y
CONFIG_USB_NET_CDC_EEM=y
CONFIG_USB_NET_CDC_NCM=y
CONFIG_USB_NET_HUAWEI_CDC_NCM=y
CONFIG_USB_NET_CDC_MBIM=y
CONFIG_USB_NET_DM9601=y
CONFIG_USB_NET_SR9700=y
CONFIG_USB_NET_SR9800=y
CONFIG_USB_NET_SMSC75XX=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB_NET_GL620A=y
CONFIG_USB_NET_NET1080=y
CONFIG_USB_NET_PLUSB=y
CONFIG_USB_NET_MCS7830=y
CONFIG_USB_NET_RNDIS_HOST=y
CONFIG_USB_NET_CDC_SUBSET_ENABLE=y
CONFIG_USB_NET_CDC_SUBSET=y
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_KC2190=y
CONFIG_USB_NET_ZAURUS=y
CONFIG_USB_NET_CX82310_ETH=y
CONFIG_USB_NET_KALMIA=y
CONFIG_USB_NET_QMI_WWAN=y
CONFIG_USB_HSO=y
CONFIG_USB_NET_INT51X1=y
CONFIG_USB_CDC_PHONET=y
CONFIG_USB_IPHETH=y
CONFIG_USB_SIERRA_NET=y
CONFIG_USB_VL600=y
CONFIG_USB_NET_CH9200=y
# CONFIG_USB_NET_AQC111 is not set
CONFIG_USB_RTL8153_ECM=y
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_ATH_COMMON=y
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
CONFIG_ATH9K_HW=y
CONFIG_ATH9K_COMMON=y
CONFIG_ATH9K_COMMON_DEBUG=y
CONFIG_ATH9K_BTCOEX_SUPPORT=y
CONFIG_ATH9K=y
CONFIG_ATH9K_PCI=y
CONFIG_ATH9K_AHB=y
CONFIG_ATH9K_DEBUGFS=y
# CONFIG_ATH9K_STATION_STATISTICS is not set
CONFIG_ATH9K_DYNACK=y
# CONFIG_ATH9K_WOW is not set
CONFIG_ATH9K_RFKILL=y
CONFIG_ATH9K_CHANNEL_CONTEXT=y
CONFIG_ATH9K_PCOEM=y
# CONFIG_ATH9K_PCI_NO_EEPROM is not set
CONFIG_ATH9K_HTC=y
CONFIG_ATH9K_HTC_DEBUGFS=y
# CONFIG_ATH9K_HWRNG is not set
# CONFIG_ATH9K_COMMON_SPECTRAL is not set
CONFIG_CARL9170=y
CONFIG_CARL9170_LEDS=y
# CONFIG_CARL9170_DEBUGFS is not set
CONFIG_CARL9170_WPC=y
CONFIG_CARL9170_HWRNG=y
CONFIG_ATH6KL=y
# CONFIG_ATH6KL_SDIO is not set
CONFIG_ATH6KL_USB=y
# CONFIG_ATH6KL_DEBUG is not set
# CONFIG_ATH6KL_TRACING is not set
CONFIG_AR5523=y
# CONFIG_WIL6210 is not set
CONFIG_ATH10K=y
CONFIG_ATH10K_CE=y
CONFIG_ATH10K_PCI=y
# CONFIG_ATH10K_AHB is not set
# CONFIG_ATH10K_SDIO is not set
CONFIG_ATH10K_USB=y
# CONFIG_ATH10K_DEBUG is not set
# CONFIG_ATH10K_DEBUGFS is not set
# CONFIG_ATH10K_TRACING is not set
# CONFIG_WCN36XX is not set
CONFIG_ATH11K=y
# CONFIG_ATH11K_PCI is not set
# CONFIG_ATH11K_DEBUG is not set
# CONFIG_ATH11K_DEBUGFS is not set
# CONFIG_ATH11K_TRACING is not set
# CONFIG_ATH12K is not set
# CONFIG_WLAN_VENDOR_ATMEL is not set
# CONFIG_WLAN_VENDOR_BROADCOM is not set
# CONFIG_WLAN_VENDOR_INTEL is not set
# CONFIG_WLAN_VENDOR_INTERSIL is not set
# CONFIG_WLAN_VENDOR_MARVELL is not set
# CONFIG_WLAN_VENDOR_MEDIATEK is not set
# CONFIG_WLAN_VENDOR_MICROCHIP is not set
CONFIG_WLAN_VENDOR_PURELIFI=y
# CONFIG_PLFXLC is not set
# CONFIG_WLAN_VENDOR_RALINK is not set
# CONFIG_WLAN_VENDOR_REALTEK is not set
# CONFIG_WLAN_VENDOR_RSI is not set
CONFIG_WLAN_VENDOR_SILABS=y
# CONFIG_WFX is not set
# CONFIG_WLAN_VENDOR_ST is not set
# CONFIG_WLAN_VENDOR_TI is not set
# CONFIG_WLAN_VENDOR_ZYDAS is not set
# CONFIG_WLAN_VENDOR_QUANTENNA is not set
CONFIG_MAC80211_HWSIM=y
CONFIG_VIRT_WIFI=y
CONFIG_WAN=y
CONFIG_HDLC=y
CONFIG_HDLC_RAW=y
CONFIG_HDLC_RAW_ETH=y
CONFIG_HDLC_CISCO=y
CONFIG_HDLC_FR=y
CONFIG_HDLC_PPP=y
CONFIG_HDLC_X25=y
# CONFIG_FRAMER is not set
# CONFIG_PCI200SYN is not set
# CONFIG_WANXL is not set
# CONFIG_PC300TOO is not set
# CONFIG_FARSYNC is not set
CONFIG_LAPBETHER=y
CONFIG_IEEE802154_DRIVERS=y
# CONFIG_IEEE802154_FAKELB is not set
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
CONFIG_IEEE802154_ATUSB=y
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
CONFIG_IEEE802154_HWSIM=y
#
# Wireless WAN
#
CONFIG_WWAN=y
# CONFIG_WWAN_DEBUGFS is not set
# CONFIG_WWAN_HWSIM is not set
CONFIG_MHI_WWAN_CTRL=y
# CONFIG_MHI_WWAN_MBIM is not set
# CONFIG_IOSM is not set
# CONFIG_MTK_T7XX is not set
# end of Wireless WAN
CONFIG_VMXNET3=y
# CONFIG_FUJITSU_ES is not set
CONFIG_USB4_NET=y
CONFIG_NETDEVSIM=y
CONFIG_NET_FAILOVER=y
CONFIG_ISDN=y
CONFIG_ISDN_CAPI=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_MISDN=y
CONFIG_MISDN_DSP=y
CONFIG_MISDN_L1OIP=y
#
# mISDN hardware drivers
#
# CONFIG_MISDN_HFCPCI is not set
# CONFIG_MISDN_HFCMULTI is not set
CONFIG_MISDN_HFCUSB=y
# CONFIG_MISDN_AVMFRITZ is not set
# CONFIG_MISDN_SPEEDFAX is not set
# CONFIG_MISDN_INFINEON is not set
# CONFIG_MISDN_W6692 is not set
# CONFIG_MISDN_NETJET is not set
#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=y
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set
CONFIG_INPUT_VIVALDIFMAP=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=y
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADC is not set
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_PINEPHONE is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_OMAP4 is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_TWL4030 is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_CAP11XX is not set
# CONFIG_KEYBOARD_BCM is not set
# CONFIG_KEYBOARD_CYPRESS_SF is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
# CONFIG_MOUSE_PS2_VMMOUSE is not set
CONFIG_MOUSE_PS2_SMBUS=y
# CONFIG_MOUSE_SERIAL is not set
CONFIG_MOUSE_APPLETOUCH=y
CONFIG_MOUSE_BCM5974=y
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_ELAN_I2C is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_GPIO is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
CONFIG_MOUSE_SYNAPTICS_USB=y
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADC is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
CONFIG_JOYSTICK_IFORCE=y
CONFIG_JOYSTICK_IFORCE_USB=y
# CONFIG_JOYSTICK_IFORCE_232 is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_DB9 is not set
# CONFIG_JOYSTICK_GAMECON is not set
# CONFIG_JOYSTICK_TURBOGRAFX is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
CONFIG_JOYSTICK_XPAD=y
CONFIG_JOYSTICK_XPAD_FF=y
CONFIG_JOYSTICK_XPAD_LEDS=y
# CONFIG_JOYSTICK_WALKERA0701 is not set
# CONFIG_JOYSTICK_PSXPAD_SPI is not set
# CONFIG_JOYSTICK_PXRC is not set
# CONFIG_JOYSTICK_QWIIC is not set
# CONFIG_JOYSTICK_FSIA6B is not set
# CONFIG_JOYSTICK_SENSEHAT is not set
# CONFIG_JOYSTICK_SEESAW is not set
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=y
CONFIG_TABLET_USB_AIPTEK=y
CONFIG_TABLET_USB_HANWANG=y
CONFIG_TABLET_USB_KBTAB=y
CONFIG_TABLET_USB_PEGASUS=y
# CONFIG_TABLET_SERIAL_WACOM4 is not set
CONFIG_INPUT_TOUCHSCREEN=y
# CONFIG_TOUCHSCREEN_ADS7846 is not set
# CONFIG_TOUCHSCREEN_AD7877 is not set
# CONFIG_TOUCHSCREEN_AD7879 is not set
# CONFIG_TOUCHSCREEN_ADC is not set
# CONFIG_TOUCHSCREEN_AR1021_I2C is not set
# CONFIG_TOUCHSCREEN_ATMEL_MXT is not set
# CONFIG_TOUCHSCREEN_AUO_PIXCIR is not set
# CONFIG_TOUCHSCREEN_BU21013 is not set
# CONFIG_TOUCHSCREEN_BU21029 is not set
# CONFIG_TOUCHSCREEN_CHIPONE_ICN8318 is not set
# CONFIG_TOUCHSCREEN_CHIPONE_ICN8505 is not set
# CONFIG_TOUCHSCREEN_CY8CTMA140 is not set
# CONFIG_TOUCHSCREEN_CY8CTMG110 is not set
# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
# CONFIG_TOUCHSCREEN_CYTTSP4_CORE is not set
# CONFIG_TOUCHSCREEN_CYTTSP5 is not set
# CONFIG_TOUCHSCREEN_DYNAPRO is not set
# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
# CONFIG_TOUCHSCREEN_EETI is not set
# CONFIG_TOUCHSCREEN_EGALAX is not set
# CONFIG_TOUCHSCREEN_EGALAX_SERIAL is not set
# CONFIG_TOUCHSCREEN_EXC3000 is not set
# CONFIG_TOUCHSCREEN_FUJITSU is not set
# CONFIG_TOUCHSCREEN_GOODIX is not set
# CONFIG_TOUCHSCREEN_GOODIX_BERLIN_I2C is not set
# CONFIG_TOUCHSCREEN_GOODIX_BERLIN_SPI is not set
# CONFIG_TOUCHSCREEN_HIDEEP is not set
# CONFIG_TOUCHSCREEN_HYCON_HY46XX is not set
# CONFIG_TOUCHSCREEN_HYNITRON_CSTXXX is not set
# CONFIG_TOUCHSCREEN_ILI210X is not set
# CONFIG_TOUCHSCREEN_ILITEK is not set
# CONFIG_TOUCHSCREEN_S6SY761 is not set
# CONFIG_TOUCHSCREEN_GUNZE is not set
# CONFIG_TOUCHSCREEN_EKTF2127 is not set
# CONFIG_TOUCHSCREEN_ELAN is not set
# CONFIG_TOUCHSCREEN_ELO is not set
# CONFIG_TOUCHSCREEN_WACOM_W8001 is not set
# CONFIG_TOUCHSCREEN_WACOM_I2C is not set
# CONFIG_TOUCHSCREEN_MAX11801 is not set
# CONFIG_TOUCHSCREEN_MCS5000 is not set
# CONFIG_TOUCHSCREEN_MMS114 is not set
# CONFIG_TOUCHSCREEN_MELFAS_MIP4 is not set
# CONFIG_TOUCHSCREEN_MSG2638 is not set
# CONFIG_TOUCHSCREEN_MTOUCH is not set
# CONFIG_TOUCHSCREEN_NOVATEK_NVT_TS is not set
# CONFIG_TOUCHSCREEN_IMAGIS is not set
# CONFIG_TOUCHSCREEN_IMX6UL_TSC is not set
# CONFIG_TOUCHSCREEN_INEXIO is not set
# CONFIG_TOUCHSCREEN_PENMOUNT is not set
# CONFIG_TOUCHSCREEN_EDT_FT5X06 is not set
# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
# CONFIG_TOUCHSCREEN_PIXCIR is not set
# CONFIG_TOUCHSCREEN_WDT87XX_I2C is not set
CONFIG_TOUCHSCREEN_USB_COMPOSITE=y
CONFIG_TOUCHSCREEN_USB_EGALAX=y
CONFIG_TOUCHSCREEN_USB_PANJIT=y
CONFIG_TOUCHSCREEN_USB_3M=y
CONFIG_TOUCHSCREEN_USB_ITM=y
CONFIG_TOUCHSCREEN_USB_ETURBO=y
CONFIG_TOUCHSCREEN_USB_GUNZE=y
CONFIG_TOUCHSCREEN_USB_DMC_TSC10=y
CONFIG_TOUCHSCREEN_USB_IRTOUCH=y
CONFIG_TOUCHSCREEN_USB_IDEALTEK=y
CONFIG_TOUCHSCREEN_USB_GENERAL_TOUCH=y
CONFIG_TOUCHSCREEN_USB_GOTOP=y
CONFIG_TOUCHSCREEN_USB_JASTEC=y
CONFIG_TOUCHSCREEN_USB_ELO=y
CONFIG_TOUCHSCREEN_USB_E2I=y
CONFIG_TOUCHSCREEN_USB_ZYTRONIC=y
CONFIG_TOUCHSCREEN_USB_ETT_TC45USB=y
CONFIG_TOUCHSCREEN_USB_NEXIO=y
CONFIG_TOUCHSCREEN_USB_EASYTOUCH=y
# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
# CONFIG_TOUCHSCREEN_TSC2004 is not set
# CONFIG_TOUCHSCREEN_TSC2005 is not set
# CONFIG_TOUCHSCREEN_TSC2007 is not set
# CONFIG_TOUCHSCREEN_RM_TS is not set
# CONFIG_TOUCHSCREEN_SILEAD is not set
# CONFIG_TOUCHSCREEN_SIS_I2C is not set
# CONFIG_TOUCHSCREEN_ST1232 is not set
# CONFIG_TOUCHSCREEN_STMFTS is not set
CONFIG_TOUCHSCREEN_SUR40=y
# CONFIG_TOUCHSCREEN_SURFACE3_SPI is not set
# CONFIG_TOUCHSCREEN_SX8654 is not set
# CONFIG_TOUCHSCREEN_TPS6507X is not set
# CONFIG_TOUCHSCREEN_ZET6223 is not set
# CONFIG_TOUCHSCREEN_ZFORCE is not set
# CONFIG_TOUCHSCREEN_COLIBRI_VF50 is not set
# CONFIG_TOUCHSCREEN_ROHM_BU21023 is not set
# CONFIG_TOUCHSCREEN_IQS5XX is not set
# CONFIG_TOUCHSCREEN_IQS7211 is not set
# CONFIG_TOUCHSCREEN_ZINITIX is not set
# CONFIG_TOUCHSCREEN_HIMAX_HX83112B is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_ATMEL_CAPTOUCH is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_GPIO_BEEPER is not set
# CONFIG_INPUT_GPIO_DECODER is not set
# CONFIG_INPUT_GPIO_VIBRA is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
CONFIG_INPUT_ATI_REMOTE2=y
CONFIG_INPUT_KEYSPAN_REMOTE=y
# CONFIG_INPUT_KXTJ9 is not set
CONFIG_INPUT_POWERMATE=y
CONFIG_INPUT_YEALINK=y
CONFIG_INPUT_CM109=y
# CONFIG_INPUT_REGULATOR_HAPTIC is not set
# CONFIG_INPUT_RETU_PWRBUTTON is not set
# CONFIG_INPUT_TWL4030_PWRBUTTON is not set
# CONFIG_INPUT_TWL4030_VIBRA is not set
CONFIG_INPUT_UINPUT=y
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_GPIO_ROTARY_ENCODER is not set
# CONFIG_INPUT_DA7280_HAPTICS is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IBM_PANEL is not set
CONFIG_INPUT_IMS_PCU=y
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_IQS626A is not set
# CONFIG_INPUT_IQS7222 is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
# CONFIG_INPUT_DRV260X_HAPTICS is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
CONFIG_RMI4_CORE=y
# CONFIG_RMI4_I2C is not set
# CONFIG_RMI4_SPI is not set
# CONFIG_RMI4_SMB is not set
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=y
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
# CONFIG_RMI4_F34 is not set
# CONFIG_RMI4_F3A is not set
# CONFIG_RMI4_F54 is not set
# CONFIG_RMI4_F55 is not set
#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_SERIO_APBPS2 is not set
# CONFIG_SERIO_GPIO_PS2 is not set
CONFIG_USERIO=y
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support
#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
CONFIG_LEGACY_TIOCSTI=y
CONFIG_LDISC_AUTOLOAD=y
#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCILIB=y
CONFIG_SERIAL_8250_PCI=y
# CONFIG_SERIAL_8250_EXAR is not set
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
# CONFIG_SERIAL_8250_PCI1XXXX is not set
CONFIG_SERIAL_8250_SHARE_IRQ=y
CONFIG_SERIAL_8250_DETECT_IRQ=y
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
# CONFIG_SERIAL_8250_DW is not set
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y
CONFIG_SERIAL_8250_PERICOM=y
# CONFIG_SERIAL_OF_PLATFORM is not set
#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SIFIVE is not set
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_XILINX_PS_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers
CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_N_HDLC=y
# CONFIG_IPWIRELESS is not set
CONFIG_N_GSM=y
CONFIG_NOZOMI=y
CONFIG_NULL_TTY=y
CONFIG_HVC_DRIVER=y
CONFIG_SERIAL_DEV_BUS=y
CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
CONFIG_TTY_PRINTK=y
CONFIG_TTY_PRINTK_LEVEL=6
# CONFIG_PRINTER is not set
# CONFIG_PPDEV is not set
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
# CONFIG_SSIF_IPMI_BMC is not set
# CONFIG_IPMB_DEVICE_INTERFACE is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
# CONFIG_HW_RANDOM_BA431 is not set
# CONFIG_HW_RANDOM_VIA is not set
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_HW_RANDOM_CCTRNG is not set
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
# CONFIG_DEVMEM is not set
CONFIG_NVRAM=y
# CONFIG_DEVPORT is not set
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
CONFIG_HPET_MMAP_DEFAULT=y
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
# CONFIG_HW_RANDOM_TPM is not set
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
# CONFIG_TCG_TIS_I2C is not set
# CONFIG_TCG_TIS_I2C_CR50 is not set
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
# CONFIG_TCG_NSC is not set
# CONFIG_TCG_ATMEL is not set
# CONFIG_TCG_INFINEON is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
# CONFIG_TELCLOCK is not set
# CONFIG_XILLYBUS is not set
# CONFIG_XILLYUSB is not set
# end of Character devices
#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_MUX=y
#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_ARB_GPIO_CHALLENGE is not set
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_GPMUX is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
CONFIG_I2C_MUX_REG=y
# CONFIG_I2C_MUX_MLXCPLD is not set
# end of Multiplexer I2C Chip support
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=y
CONFIG_I2C_ALGOBIT=y
#
# I2C Hardware Bus support
#
#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_CHT_WC is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set
#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set
#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=y
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=y
# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_RK3X is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set
#
# External I2C/SMBus adapter drivers
#
CONFIG_I2C_DIOLAN_U2C=y
CONFIG_I2C_DLN2=y
# CONFIG_I2C_CP2615 is not set
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PCI1XXXX is not set
CONFIG_I2C_ROBOTFUZZ_OSIF=y
# CONFIG_I2C_TAOS_EVM is not set
CONFIG_I2C_TINY_USB=y
CONFIG_I2C_VIPERBOARD=y
#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_MLXCPLD is not set
# CONFIG_I2C_VIRTIO is not set
# end of I2C Hardware Bus support
# CONFIG_I2C_STUB is not set
CONFIG_I2C_SLAVE=y
CONFIG_I2C_SLAVE_EEPROM=y
# CONFIG_I2C_SLAVE_TESTUNIT is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support
# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set
#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_CADENCE_QUADSPI is not set
# CONFIG_SPI_DESIGNWARE is not set
CONFIG_SPI_DLN2=y
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_FSL_SPI is not set
# CONFIG_SPI_MICROCHIP_CORE is not set
# CONFIG_SPI_MICROCHIP_CORE_QSPI is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PCI1XXXX is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_AMD is not set
#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set
#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set
#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_PARPORT is not set
# CONFIG_PPS_CLIENT_GPIO is not set
#
# PPS generators support
#
#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
CONFIG_PTP_1588_CLOCK_KVM=y
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_FC3W is not set
# CONFIG_PTP_1588_CLOCK_MOCK is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# CONFIG_PTP_1588_CLOCK_OCP is not set
# end of PTP clock support
# CONFIG_PINCTRL is not set
CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_OF_GPIO=y
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
# CONFIG_GPIO_SYSFS is not set
# CONFIG_GPIO_CDEV is not set
#
# Memory mapped GPIO drivers
#
# CONFIG_GPIO_74XX_MMIO is not set
# CONFIG_GPIO_ALTERA is not set
# CONFIG_GPIO_AMDPT is not set
# CONFIG_GPIO_CADENCE is not set
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_FTGPIO010 is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
# CONFIG_GPIO_GRGPIO is not set
# CONFIG_GPIO_HLWD is not set
# CONFIG_GPIO_ICH is not set
# CONFIG_GPIO_LOGICVC is not set
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_SIFIVE is not set
# CONFIG_GPIO_SYSCON is not set
# CONFIG_GPIO_XILINX is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers
#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers
#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADNP is not set
# CONFIG_GPIO_FXL6408 is not set
# CONFIG_GPIO_DS4520 is not set
# CONFIG_GPIO_GW_PLD is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders
#
# MFD GPIO expanders
#
CONFIG_GPIO_DLN2=y
# CONFIG_GPIO_ELKHARTLAKE is not set
# CONFIG_GPIO_TWL4030 is not set
# end of MFD GPIO expanders
#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# CONFIG_GPIO_SODAVILLE is not set
# end of PCI GPIO expanders
#
# SPI GPIO expanders
#
# CONFIG_GPIO_74X164 is not set
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders
#
# USB GPIO expanders
#
CONFIG_GPIO_VIPERBOARD=y
# end of USB GPIO expanders
#
# Virtual GPIO drivers
#
# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_LATCH is not set
# CONFIG_GPIO_MOCKUP is not set
# CONFIG_GPIO_VIRTIO is not set
# CONFIG_GPIO_SIM is not set
# end of Virtual GPIO drivers
# CONFIG_W1 is not set
# CONFIG_POWER_RESET is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_GENERIC_ADC_BATTERY is not set
# CONFIG_IP5XXX_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SAMSUNG_SDI is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
CONFIG_CHARGER_ISP1704=y
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_TWL4030 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_MANAGER is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
# CONFIG_CHARGER_MAX77976 is not set
# CONFIG_CHARGER_BQ2415X is not set
CONFIG_CHARGER_BQ24190=y
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
# CONFIG_CHARGER_BQ256XX is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_BATTERY_GOLDFISH is not set
# CONFIG_BATTERY_RT5033 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_RT9467 is not set
# CONFIG_CHARGER_RT9471 is not set
# CONFIG_CHARGER_UCS1002 is not set
# CONFIG_CHARGER_BD99954 is not set
# CONFIG_BATTERY_UG3105 is not set
# CONFIG_FUEL_GAUGE_MM8013 is not set
CONFIG_HWMON=y
# CONFIG_HWMON_DEBUG_CHIP is not set
#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7314 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM1177 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7310 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AQUACOMPUTER_D5NEXT is not set
# CONFIG_SENSORS_AS370 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_ASUS_ROG_RYUJIN is not set
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_APPLESMC is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_CHIPCAP2 is not set
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
# CONFIG_SENSORS_DRIVETEMP is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_DELL_SMM is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_FTSTEUTATES is not set
# CONFIG_SENSORS_GIGABYTE_WATERFORCE is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GPIO_FAN is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HS3001 is not set
# CONFIG_SENSORS_IIO_HWMON is not set
# CONFIG_SENSORS_I5500 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_POWERZ is not set
# CONFIG_SENSORS_POWR1220 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2991 is not set
# CONFIG_SENSORS_LTC2992 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4222 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4260 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LTC4282 is not set
# CONFIG_SENSORS_MAX1111 is not set
# CONFIG_SENSORS_MAX127 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX31760 is not set
# CONFIG_MAX31827 is not set
# CONFIG_SENSORS_MAX6620 is not set
# CONFIG_SENSORS_MAX6621 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MAX31790 is not set
# CONFIG_SENSORS_MC34VR500 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM70 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_NCT6683 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NCT6775_I2C is not set
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_NZXT_KRAKEN2 is not set
# CONFIG_SENSORS_NZXT_KRAKEN3 is not set
# CONFIG_SENSORS_NZXT_SMART2 is not set
# CONFIG_SENSORS_OCC_P8_I2C is not set
# CONFIG_SENSORS_OXP is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_PT5161L is not set
# CONFIG_SENSORS_SBTSI is not set
# CONFIG_SENSORS_SBRMI is not set
# CONFIG_SENSORS_SHT15 is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHT4x is not set
# CONFIG_SENSORS_SHTC1 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC2305 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH5627 is not set
# CONFIG_SENSORS_SCH5636 is not set
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_ADC128D818 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_ADS7871 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_INA238 is not set
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_TMP464 is not set
# CONFIG_SENSORS_TMP513 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83773G is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_XGENE is not set
#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
# CONFIG_SENSORS_ASUS_WMI is not set
# CONFIG_SENSORS_ASUS_EC is not set
# CONFIG_SENSORS_HP_WMI is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_NETLINK=y
# CONFIG_THERMAL_STATISTICS is not set
# CONFIG_THERMAL_DEBUGFS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
# CONFIG_THERMAL_OF is not set
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
# CONFIG_THERMAL_GOV_BANG_BANG is not set
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_THERMAL_MMIO is not set
#
# Intel thermal drivers
#
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_X86_PKG_TEMP_THERMAL is not set
# CONFIG_INTEL_SOC_DTS_THERMAL is not set
#
# ACPI INT340X thermal drivers
#
# CONFIG_INT340X_THERMAL is not set
# end of ACPI INT340X thermal drivers
# CONFIG_INTEL_PCH_THERMAL is not set
# CONFIG_INTEL_TCC_COOLING is not set
# CONFIG_INTEL_HFI_THERMAL is not set
# end of Intel thermal drivers
# CONFIG_GENERIC_ADC_THERMAL is not set
CONFIG_WATCHDOG=y
# CONFIG_WATCHDOG_CORE is not set
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
# CONFIG_WATCHDOG_SYSFS is not set
# CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT is not set
#
# Watchdog Pretimeout Governors
#
#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_GPIO_WATCHDOG is not set
# CONFIG_WDAT_WDT is not set
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_TWL4030_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_RETU_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_ADVANTECH_EC_WDT is not set
# CONFIG_ALIM1535_WDT is not set
# CONFIG_ALIM7101_WDT is not set
# CONFIG_EBC_C384_WDT is not set
# CONFIG_EXAR_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SP5100_TCO is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_I6300ESB_WDT is not set
# CONFIG_IE6XX_WDT is not set
# CONFIG_ITCO_WDT is not set
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
# CONFIG_HP_WATCHDOG is not set
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_NV_TCO is not set
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
# CONFIG_VIA_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
#
# PCI-based Watchdog Cards
#
# CONFIG_PCIPCWATCHDOG is not set
# CONFIG_WDTPCI is not set
#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=y
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
# CONFIG_SSB_PCIHOST is not set
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
# CONFIG_SSB_PCMCIAHOST is not set
CONFIG_SSB_SDIOHOST_POSSIBLE=y
# CONFIG_SSB_SDIOHOST is not set
# CONFIG_SSB_DRIVER_GPIO is not set
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=y
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
# CONFIG_BCMA_HOST_PCI is not set
# CONFIG_BCMA_HOST_SOC is not set
# CONFIG_BCMA_DRIVER_PCI is not set
# CONFIG_BCMA_DRIVER_GMAC_CMN is not set
# CONFIG_BCMA_DRIVER_GPIO is not set
# CONFIG_BCMA_DEBUG is not set
#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_ACT8945A is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_MFD_SMPRO is not set
# CONFIG_MFD_AS3722 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_ATMEL_FLEXCOM is not set
# CONFIG_MFD_ATMEL_HLCDC is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_CS42L43_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_MFD_MAX5970 is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
CONFIG_MFD_DLN2=y
# CONFIG_MFD_GATEWORKS_GSC is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_MFD_HI6421_PMIC is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=y
# CONFIG_LPC_SCH is not set
# CONFIG_INTEL_SOC_PMIC is not set
CONFIG_INTEL_SOC_PMIC_CHTWC=y
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
# CONFIG_MFD_INTEL_LPSS_ACPI is not set
# CONFIG_MFD_INTEL_LPSS_PCI is not set
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77541 is not set
# CONFIG_MFD_MAX77620 is not set
# CONFIG_MFD_MAX77650 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77714 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6370 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_MFD_OCELOT is not set
# CONFIG_EZX_PCAP is not set
# CONFIG_MFD_CPCAP is not set
CONFIG_MFD_VIPERBOARD=y
# CONFIG_MFD_NTXEC is not set
CONFIG_MFD_RETU=y
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_SY7636A is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT4831 is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RT5120 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_RK8XX_I2C is not set
# CONFIG_MFD_RK8XX_SPI is not set
# CONFIG_MFD_RN5T618 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_STMPE is not set
CONFIG_MFD_SYSCON=y
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TI_LP87565 is not set
# CONFIG_MFD_TPS65218 is not set
# CONFIG_MFD_TPS65219 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_MFD_TPS6594_I2C is not set
# CONFIG_MFD_TPS6594_SPI is not set
CONFIG_TWL4030_CORE=y
# CONFIG_MFD_TWL4030_AUDIO is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TQMX86 is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_LOCHNAGAR is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_ROHM_BD718XX is not set
# CONFIG_MFD_ROHM_BD71828 is not set
# CONFIG_MFD_ROHM_BD957XMUF is not set
# CONFIG_MFD_STPMIC1 is not set
# CONFIG_MFD_STMFX is not set
# CONFIG_MFD_ATC260X_I2C is not set
# CONFIG_MFD_QCOM_PM8008 is not set
# CONFIG_RAVE_SP_CORE is not set
# CONFIG_MFD_INTEL_M10_BMC_SPI is not set
# CONFIG_MFD_RSMU_I2C is not set
# CONFIG_MFD_RSMU_SPI is not set
# end of Multifunction device drivers
CONFIG_REGULATOR=y
# CONFIG_REGULATOR_DEBUG is not set
# CONFIG_REGULATOR_FIXED_VOLTAGE is not set
# CONFIG_REGULATOR_VIRTUAL_CONSUMER is not set
# CONFIG_REGULATOR_USERSPACE_CONSUMER is not set
# CONFIG_REGULATOR_NETLINK_EVENTS is not set
# CONFIG_REGULATOR_88PG86X is not set
# CONFIG_REGULATOR_ACT8865 is not set
# CONFIG_REGULATOR_AD5398 is not set
# CONFIG_REGULATOR_AW37503 is not set
# CONFIG_REGULATOR_DA9121 is not set
# CONFIG_REGULATOR_DA9210 is not set
# CONFIG_REGULATOR_DA9211 is not set
# CONFIG_REGULATOR_FAN53555 is not set
# CONFIG_REGULATOR_FAN53880 is not set
# CONFIG_REGULATOR_GPIO is not set
# CONFIG_REGULATOR_ISL9305 is not set
# CONFIG_REGULATOR_ISL6271A is not set
# CONFIG_REGULATOR_LP3971 is not set
# CONFIG_REGULATOR_LP3972 is not set
# CONFIG_REGULATOR_LP872X is not set
# CONFIG_REGULATOR_LP8755 is not set
# CONFIG_REGULATOR_LTC3589 is not set
# CONFIG_REGULATOR_LTC3676 is not set
# CONFIG_REGULATOR_MAX1586 is not set
# CONFIG_REGULATOR_MAX77503 is not set
# CONFIG_REGULATOR_MAX77857 is not set
# CONFIG_REGULATOR_MAX8649 is not set
# CONFIG_REGULATOR_MAX8660 is not set
# CONFIG_REGULATOR_MAX8893 is not set
# CONFIG_REGULATOR_MAX8952 is not set
# CONFIG_REGULATOR_MAX20086 is not set
# CONFIG_REGULATOR_MAX20411 is not set
# CONFIG_REGULATOR_MAX77826 is not set
# CONFIG_REGULATOR_MCP16502 is not set
# CONFIG_REGULATOR_MP5416 is not set
# CONFIG_REGULATOR_MP8859 is not set
# CONFIG_REGULATOR_MP886X is not set
# CONFIG_REGULATOR_MPQ7920 is not set
# CONFIG_REGULATOR_MT6311 is not set
# CONFIG_REGULATOR_PCA9450 is not set
# CONFIG_REGULATOR_PF8X00 is not set
# CONFIG_REGULATOR_PFUZE100 is not set
# CONFIG_REGULATOR_PV88060 is not set
# CONFIG_REGULATOR_PV88080 is not set
# CONFIG_REGULATOR_PV88090 is not set
# CONFIG_REGULATOR_RAA215300 is not set
# CONFIG_REGULATOR_RASPBERRYPI_TOUCHSCREEN_ATTINY is not set
# CONFIG_REGULATOR_RT4801 is not set
# CONFIG_REGULATOR_RT4803 is not set
# CONFIG_REGULATOR_RT5190A is not set
# CONFIG_REGULATOR_RT5739 is not set
# CONFIG_REGULATOR_RT5759 is not set
# CONFIG_REGULATOR_RT6160 is not set
# CONFIG_REGULATOR_RT6190 is not set
# CONFIG_REGULATOR_RT6245 is not set
# CONFIG_REGULATOR_RTQ2134 is not set
# CONFIG_REGULATOR_RTMV20 is not set
# CONFIG_REGULATOR_RTQ6752 is not set
# CONFIG_REGULATOR_RTQ2208 is not set
# CONFIG_REGULATOR_SLG51000 is not set
# CONFIG_REGULATOR_SY8106A is not set
# CONFIG_REGULATOR_SY8824X is not set
# CONFIG_REGULATOR_SY8827N is not set
# CONFIG_REGULATOR_TPS51632 is not set
# CONFIG_REGULATOR_TPS62360 is not set
# CONFIG_REGULATOR_TPS6286X is not set
# CONFIG_REGULATOR_TPS6287X is not set
# CONFIG_REGULATOR_TPS65023 is not set
# CONFIG_REGULATOR_TPS6507X is not set
# CONFIG_REGULATOR_TPS65132 is not set
# CONFIG_REGULATOR_TPS6524X is not set
CONFIG_REGULATOR_TWL4030=y
# CONFIG_REGULATOR_VCTRL is not set
CONFIG_RC_CORE=y
# CONFIG_LIRC is not set
# CONFIG_RC_MAP is not set
# CONFIG_RC_DECODERS is not set
CONFIG_RC_DEVICES=y
# CONFIG_IR_ENE is not set
# CONFIG_IR_FINTEK is not set
# CONFIG_IR_GPIO_CIR is not set
# CONFIG_IR_HIX5HD2 is not set
CONFIG_IR_IGORPLUGUSB=y
CONFIG_IR_IGUANA=y
CONFIG_IR_IMON=y
# CONFIG_IR_IMON_RAW is not set
# CONFIG_IR_ITE_CIR is not set
CONFIG_IR_MCEUSB=y
# CONFIG_IR_NUVOTON is not set
CONFIG_IR_REDRAT3=y
# CONFIG_IR_SERIAL is not set
CONFIG_IR_STREAMZAP=y
# CONFIG_IR_TOY is not set
CONFIG_IR_TTUSBIR=y
# CONFIG_IR_WINBOND_CIR is not set
CONFIG_RC_ATI_REMOTE=y
# CONFIG_RC_LOOPBACK is not set
# CONFIG_RC_XBOX_DVD is not set
CONFIG_CEC_CORE=y
#
# CEC support
#
# CONFIG_MEDIA_CEC_RC is not set
CONFIG_MEDIA_CEC_SUPPORT=y
# CONFIG_CEC_CH7322 is not set
# CONFIG_CEC_GPIO is not set
# CONFIG_CEC_SECO is not set
CONFIG_USB_PULSE8_CEC=y
CONFIG_USB_RAINSHADOW_CEC=y
# end of CEC support
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_SUPPORT_FILTER=y
# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
# CONFIG_MEDIA_PLATFORM_SUPPORT is not set
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types
CONFIG_VIDEO_DEV=y
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=y
#
# Video4Linux options
#
CONFIG_VIDEO_V4L2_I2C=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=y
CONFIG_V4L2_MEM2MEM_DEV=y
# end of Video4Linux options
#
# Media controller options
#
CONFIG_MEDIA_CONTROLLER_DVB=y
# end of Media controller options
#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
# CONFIG_DVB_NET is not set
CONFIG_DVB_MAX_ADAPTERS=16
# CONFIG_DVB_DYNAMIC_MINORS is not set
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options
#
# Media drivers
#
#
# Drivers filtered as selected at 'Filter media drivers'
#
#
# Media drivers
#
CONFIG_MEDIA_USB_SUPPORT=y
#
# Webcam devices
#
CONFIG_USB_GSPCA=y
CONFIG_USB_GSPCA_BENQ=y
CONFIG_USB_GSPCA_CONEX=y
CONFIG_USB_GSPCA_CPIA1=y
CONFIG_USB_GSPCA_DTCS033=y
CONFIG_USB_GSPCA_ETOMS=y
CONFIG_USB_GSPCA_FINEPIX=y
CONFIG_USB_GSPCA_JEILINJ=y
CONFIG_USB_GSPCA_JL2005BCD=y
CONFIG_USB_GSPCA_KINECT=y
CONFIG_USB_GSPCA_KONICA=y
CONFIG_USB_GSPCA_MARS=y
CONFIG_USB_GSPCA_MR97310A=y
CONFIG_USB_GSPCA_NW80X=y
CONFIG_USB_GSPCA_OV519=y
CONFIG_USB_GSPCA_OV534=y
CONFIG_USB_GSPCA_OV534_9=y
CONFIG_USB_GSPCA_PAC207=y
CONFIG_USB_GSPCA_PAC7302=y
CONFIG_USB_GSPCA_PAC7311=y
CONFIG_USB_GSPCA_SE401=y
CONFIG_USB_GSPCA_SN9C2028=y
CONFIG_USB_GSPCA_SN9C20X=y
CONFIG_USB_GSPCA_SONIXB=y
CONFIG_USB_GSPCA_SONIXJ=y
CONFIG_USB_GSPCA_SPCA1528=y
CONFIG_USB_GSPCA_SPCA500=y
CONFIG_USB_GSPCA_SPCA501=y
CONFIG_USB_GSPCA_SPCA505=y
CONFIG_USB_GSPCA_SPCA506=y
CONFIG_USB_GSPCA_SPCA508=y
CONFIG_USB_GSPCA_SPCA561=y
CONFIG_USB_GSPCA_SQ905=y
CONFIG_USB_GSPCA_SQ905C=y
CONFIG_USB_GSPCA_SQ930X=y
CONFIG_USB_GSPCA_STK014=y
CONFIG_USB_GSPCA_STK1135=y
CONFIG_USB_GSPCA_STV0680=y
CONFIG_USB_GSPCA_SUNPLUS=y
CONFIG_USB_GSPCA_T613=y
CONFIG_USB_GSPCA_TOPRO=y
CONFIG_USB_GSPCA_TOUPTEK=y
CONFIG_USB_GSPCA_TV8532=y
CONFIG_USB_GSPCA_VC032X=y
CONFIG_USB_GSPCA_VICAM=y
CONFIG_USB_GSPCA_XIRLINK_CIT=y
CONFIG_USB_GSPCA_ZC3XX=y
CONFIG_USB_GL860=y
CONFIG_USB_M5602=y
CONFIG_USB_STV06XX=y
CONFIG_USB_PWC=y
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
CONFIG_USB_S2255=y
CONFIG_VIDEO_USBTV=y
CONFIG_USB_VIDEO_CLASS=y
CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y
#
# Analog TV USB devices
#
CONFIG_VIDEO_GO7007=y
CONFIG_VIDEO_GO7007_USB=y
CONFIG_VIDEO_GO7007_LOADER=y
CONFIG_VIDEO_GO7007_USB_S2250_BOARD=y
CONFIG_VIDEO_HDPVR=y
CONFIG_VIDEO_PVRUSB2=y
CONFIG_VIDEO_PVRUSB2_SYSFS=y
CONFIG_VIDEO_PVRUSB2_DVB=y
# CONFIG_VIDEO_PVRUSB2_DEBUGIFC is not set
CONFIG_VIDEO_STK1160=y
#
# Analog/digital TV USB devices
#
CONFIG_VIDEO_AU0828=y
CONFIG_VIDEO_AU0828_V4L2=y
CONFIG_VIDEO_AU0828_RC=y
CONFIG_VIDEO_CX231XX=y
CONFIG_VIDEO_CX231XX_RC=y
CONFIG_VIDEO_CX231XX_ALSA=y
CONFIG_VIDEO_CX231XX_DVB=y
#
# Digital TV USB devices
#
CONFIG_DVB_AS102=y
CONFIG_DVB_B2C2_FLEXCOP_USB=y
# CONFIG_DVB_B2C2_FLEXCOP_USB_DEBUG is not set
CONFIG_DVB_USB_V2=y
CONFIG_DVB_USB_AF9015=y
CONFIG_DVB_USB_AF9035=y
CONFIG_DVB_USB_ANYSEE=y
CONFIG_DVB_USB_AU6610=y
CONFIG_DVB_USB_AZ6007=y
CONFIG_DVB_USB_CE6230=y
CONFIG_DVB_USB_DVBSKY=y
CONFIG_DVB_USB_EC168=y
CONFIG_DVB_USB_GL861=y
CONFIG_DVB_USB_LME2510=y
CONFIG_DVB_USB_MXL111SF=y
CONFIG_DVB_USB_RTL28XXU=y
CONFIG_DVB_USB_ZD1301=y
CONFIG_DVB_USB=y
# CONFIG_DVB_USB_DEBUG is not set
CONFIG_DVB_USB_A800=y
CONFIG_DVB_USB_AF9005=y
CONFIG_DVB_USB_AF9005_REMOTE=y
CONFIG_DVB_USB_AZ6027=y
CONFIG_DVB_USB_CINERGY_T2=y
CONFIG_DVB_USB_CXUSB=y
# CONFIG_DVB_USB_CXUSB_ANALOG is not set
CONFIG_DVB_USB_DIB0700=y
CONFIG_DVB_USB_DIB3000MC=y
CONFIG_DVB_USB_DIBUSB_MB=y
# CONFIG_DVB_USB_DIBUSB_MB_FAULTY is not set
CONFIG_DVB_USB_DIBUSB_MC=y
CONFIG_DVB_USB_DIGITV=y
CONFIG_DVB_USB_DTT200U=y
CONFIG_DVB_USB_DTV5100=y
CONFIG_DVB_USB_DW2102=y
CONFIG_DVB_USB_GP8PSK=y
CONFIG_DVB_USB_M920X=y
CONFIG_DVB_USB_NOVA_T_USB2=y
CONFIG_DVB_USB_OPERA1=y
CONFIG_DVB_USB_PCTV452E=y
CONFIG_DVB_USB_TECHNISAT_USB2=y
CONFIG_DVB_USB_TTUSB2=y
CONFIG_DVB_USB_UMT_010=y
CONFIG_DVB_USB_VP702X=y
CONFIG_DVB_USB_VP7045=y
CONFIG_SMS_USB_DRV=y
CONFIG_DVB_TTUSB_BUDGET=y
CONFIG_DVB_TTUSB_DEC=y
#
# Webcam, TV (analog/digital) USB devices
#
CONFIG_VIDEO_EM28XX=y
CONFIG_VIDEO_EM28XX_V4L2=y
CONFIG_VIDEO_EM28XX_ALSA=y
CONFIG_VIDEO_EM28XX_DVB=y
CONFIG_VIDEO_EM28XX_RC=y
#
# Software defined radio USB devices
#
CONFIG_USB_AIRSPY=y
CONFIG_USB_HACKRF=y
CONFIG_USB_MSI2500=y
# CONFIG_MEDIA_PCI_SUPPORT is not set
CONFIG_RADIO_ADAPTERS=y
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SAA7706H is not set
CONFIG_RADIO_SHARK=y
CONFIG_RADIO_SHARK2=y
CONFIG_RADIO_SI4713=y
CONFIG_RADIO_TEA575X=y
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_USB_DSBR=y
CONFIG_USB_KEENE=y
CONFIG_USB_MA901=y
CONFIG_USB_MR800=y
CONFIG_USB_RAREMONO=y
CONFIG_RADIO_SI470X=y
CONFIG_USB_SI470X=y
# CONFIG_I2C_SI470X is not set
CONFIG_USB_SI4713=y
# CONFIG_PLATFORM_SI4713 is not set
CONFIG_I2C_SI4713=y
CONFIG_V4L_TEST_DRIVERS=y
CONFIG_VIDEO_VIM2M=y
CONFIG_VIDEO_VICODEC=y
CONFIG_VIDEO_VIMC=y
CONFIG_VIDEO_VIVID=y
CONFIG_VIDEO_VIVID_CEC=y
CONFIG_VIDEO_VIVID_MAX_DEVS=64
# CONFIG_VIDEO_VISL is not set
CONFIG_DVB_TEST_DRIVERS=y
CONFIG_DVB_VIDTV=y
#
# FireWire (IEEE 1394) Adapters
#
# CONFIG_DVB_FIREDTV is not set
CONFIG_MEDIA_COMMON_OPTIONS=y
#
# common driver options
#
CONFIG_CYPRESS_FIRMWARE=y
CONFIG_TTPCI_EEPROM=y
CONFIG_UVC_COMMON=y
CONFIG_VIDEO_CX2341X=y
CONFIG_VIDEO_TVEEPROM=y
CONFIG_DVB_B2C2_FLEXCOP=y
CONFIG_SMS_SIANO_MDTV=y
CONFIG_SMS_SIANO_RC=y
CONFIG_VIDEO_V4L2_TPG=y
CONFIG_VIDEOBUF2_CORE=y
CONFIG_VIDEOBUF2_V4L2=y
CONFIG_VIDEOBUF2_MEMOPS=y
CONFIG_VIDEOBUF2_DMA_CONTIG=y
CONFIG_VIDEOBUF2_VMALLOC=y
CONFIG_VIDEOBUF2_DMA_SG=y
# end of Media drivers
#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y
# CONFIG_VIDEO_IR_I2C is not set
# CONFIG_VIDEO_CAMERA_SENSOR is not set
#
# Camera ISPs
#
# CONFIG_VIDEO_THP7312 is not set
# end of Camera ISPs
#
# Lens drivers
#
# CONFIG_VIDEO_AD5820 is not set
# CONFIG_VIDEO_AK7375 is not set
# CONFIG_VIDEO_DW9714 is not set
# CONFIG_VIDEO_DW9719 is not set
# CONFIG_VIDEO_DW9768 is not set
# CONFIG_VIDEO_DW9807_VCM is not set
# end of Lens drivers
#
# Flash devices
#
# CONFIG_VIDEO_ADP1653 is not set
# CONFIG_VIDEO_LM3560 is not set
# CONFIG_VIDEO_LM3646 is not set
# end of Flash devices
#
# Audio decoders, processors and mixers
#
# CONFIG_VIDEO_CS3308 is not set
# CONFIG_VIDEO_CS5345 is not set
CONFIG_VIDEO_CS53L32A=y
CONFIG_VIDEO_MSP3400=y
# CONFIG_VIDEO_SONY_BTF_MPX is not set
# CONFIG_VIDEO_TDA7432 is not set
# CONFIG_VIDEO_TDA9840 is not set
# CONFIG_VIDEO_TEA6415C is not set
# CONFIG_VIDEO_TEA6420 is not set
# CONFIG_VIDEO_TLV320AIC23B is not set
# CONFIG_VIDEO_TVAUDIO is not set
# CONFIG_VIDEO_UDA1342 is not set
# CONFIG_VIDEO_VP27SMPX is not set
# CONFIG_VIDEO_WM8739 is not set
CONFIG_VIDEO_WM8775=y
# end of Audio decoders, processors and mixers
#
# RDS decoders
#
# CONFIG_VIDEO_SAA6588 is not set
# end of RDS decoders
#
# Video decoders
#
# CONFIG_VIDEO_ADV7180 is not set
# CONFIG_VIDEO_ADV7183 is not set
# CONFIG_VIDEO_ADV748X is not set
# CONFIG_VIDEO_ADV7604 is not set
# CONFIG_VIDEO_ADV7842 is not set
# CONFIG_VIDEO_BT819 is not set
# CONFIG_VIDEO_BT856 is not set
# CONFIG_VIDEO_BT866 is not set
# CONFIG_VIDEO_ISL7998X is not set
# CONFIG_VIDEO_KS0127 is not set
# CONFIG_VIDEO_MAX9286 is not set
# CONFIG_VIDEO_ML86V7667 is not set
# CONFIG_VIDEO_SAA7110 is not set
CONFIG_VIDEO_SAA711X=y
# CONFIG_VIDEO_TC358743 is not set
# CONFIG_VIDEO_TC358746 is not set
# CONFIG_VIDEO_TVP514X is not set
# CONFIG_VIDEO_TVP5150 is not set
# CONFIG_VIDEO_TVP7002 is not set
# CONFIG_VIDEO_TW2804 is not set
# CONFIG_VIDEO_TW9900 is not set
# CONFIG_VIDEO_TW9903 is not set
# CONFIG_VIDEO_TW9906 is not set
# CONFIG_VIDEO_TW9910 is not set
# CONFIG_VIDEO_VPX3220 is not set
#
# Video and audio decoders
#
# CONFIG_VIDEO_SAA717X is not set
CONFIG_VIDEO_CX25840=y
# end of Video decoders
#
# Video encoders
#
# CONFIG_VIDEO_ADV7170 is not set
# CONFIG_VIDEO_ADV7175 is not set
# CONFIG_VIDEO_ADV7343 is not set
# CONFIG_VIDEO_ADV7393 is not set
# CONFIG_VIDEO_ADV7511 is not set
# CONFIG_VIDEO_AK881X is not set
# CONFIG_VIDEO_SAA7127 is not set
# CONFIG_VIDEO_SAA7185 is not set
# CONFIG_VIDEO_THS8200 is not set
# end of Video encoders
#
# Video improvement chips
#
# CONFIG_VIDEO_UPD64031A is not set
# CONFIG_VIDEO_UPD64083 is not set
# end of Video improvement chips
#
# Audio/Video compression chips
#
# CONFIG_VIDEO_SAA6752HS is not set
# end of Audio/Video compression chips
#
# SDR tuner chips
#
# CONFIG_SDR_MAX2175 is not set
# end of SDR tuner chips
#
# Miscellaneous helper chips
#
# CONFIG_VIDEO_I2C is not set
# CONFIG_VIDEO_M52790 is not set
# CONFIG_VIDEO_ST_MIPID02 is not set
# CONFIG_VIDEO_THS7303 is not set
# end of Miscellaneous helper chips
#
# Video serializers and deserializers
#
# CONFIG_VIDEO_DS90UB913 is not set
# CONFIG_VIDEO_DS90UB953 is not set
# CONFIG_VIDEO_DS90UB960 is not set
# end of Video serializers and deserializers
#
# Media SPI Adapters
#
# CONFIG_CXD2880_SPI_DRV is not set
# CONFIG_VIDEO_GS1662 is not set
# end of Media SPI Adapters
CONFIG_MEDIA_TUNER=y
#
# Customize TV tuners
#
# CONFIG_MEDIA_TUNER_E4000 is not set
# CONFIG_MEDIA_TUNER_FC0011 is not set
# CONFIG_MEDIA_TUNER_FC0012 is not set
# CONFIG_MEDIA_TUNER_FC0013 is not set
# CONFIG_MEDIA_TUNER_FC2580 is not set
# CONFIG_MEDIA_TUNER_IT913X is not set
# CONFIG_MEDIA_TUNER_M88RS6000T is not set
# CONFIG_MEDIA_TUNER_MAX2165 is not set
# CONFIG_MEDIA_TUNER_MC44S803 is not set
CONFIG_MEDIA_TUNER_MSI001=y
# CONFIG_MEDIA_TUNER_MT2060 is not set
# CONFIG_MEDIA_TUNER_MT2063 is not set
# CONFIG_MEDIA_TUNER_MT20XX is not set
# CONFIG_MEDIA_TUNER_MT2131 is not set
# CONFIG_MEDIA_TUNER_MT2266 is not set
# CONFIG_MEDIA_TUNER_MXL301RF is not set
# CONFIG_MEDIA_TUNER_MXL5005S is not set
# CONFIG_MEDIA_TUNER_MXL5007T is not set
# CONFIG_MEDIA_TUNER_QM1D1B0004 is not set
# CONFIG_MEDIA_TUNER_QM1D1C0042 is not set
# CONFIG_MEDIA_TUNER_QT1010 is not set
# CONFIG_MEDIA_TUNER_R820T is not set
# CONFIG_MEDIA_TUNER_SI2157 is not set
# CONFIG_MEDIA_TUNER_SIMPLE is not set
# CONFIG_MEDIA_TUNER_TDA18212 is not set
# CONFIG_MEDIA_TUNER_TDA18218 is not set
# CONFIG_MEDIA_TUNER_TDA18250 is not set
# CONFIG_MEDIA_TUNER_TDA18271 is not set
# CONFIG_MEDIA_TUNER_TDA827X is not set
# CONFIG_MEDIA_TUNER_TDA8290 is not set
# CONFIG_MEDIA_TUNER_TDA9887 is not set
# CONFIG_MEDIA_TUNER_TEA5761 is not set
# CONFIG_MEDIA_TUNER_TEA5767 is not set
# CONFIG_MEDIA_TUNER_TUA9001 is not set
# CONFIG_MEDIA_TUNER_XC2028 is not set
# CONFIG_MEDIA_TUNER_XC4000 is not set
# CONFIG_MEDIA_TUNER_XC5000 is not set
# end of Customize TV tuners
#
# Customise DVB Frontends
#
#
# Multistandard (satellite) frontends
#
# CONFIG_DVB_M88DS3103 is not set
# CONFIG_DVB_MXL5XX is not set
# CONFIG_DVB_STB0899 is not set
# CONFIG_DVB_STB6100 is not set
# CONFIG_DVB_STV090x is not set
# CONFIG_DVB_STV0910 is not set
# CONFIG_DVB_STV6110x is not set
# CONFIG_DVB_STV6111 is not set
#
# Multistandard (cable + terrestrial) frontends
#
# CONFIG_DVB_DRXK is not set
# CONFIG_DVB_MN88472 is not set
# CONFIG_DVB_MN88473 is not set
# CONFIG_DVB_SI2165 is not set
# CONFIG_DVB_TDA18271C2DD is not set
#
# DVB-S (satellite) frontends
#
# CONFIG_DVB_CX24110 is not set
# CONFIG_DVB_CX24116 is not set
# CONFIG_DVB_CX24117 is not set
# CONFIG_DVB_CX24120 is not set
# CONFIG_DVB_CX24123 is not set
# CONFIG_DVB_DS3000 is not set
# CONFIG_DVB_MB86A16 is not set
# CONFIG_DVB_MT312 is not set
# CONFIG_DVB_S5H1420 is not set
# CONFIG_DVB_SI21XX is not set
# CONFIG_DVB_STB6000 is not set
# CONFIG_DVB_STV0288 is not set
# CONFIG_DVB_STV0299 is not set
# CONFIG_DVB_STV0900 is not set
# CONFIG_DVB_STV6110 is not set
# CONFIG_DVB_TDA10071 is not set
# CONFIG_DVB_TDA10086 is not set
# CONFIG_DVB_TDA8083 is not set
# CONFIG_DVB_TDA8261 is not set
# CONFIG_DVB_TDA826X is not set
# CONFIG_DVB_TS2020 is not set
# CONFIG_DVB_TUA6100 is not set
# CONFIG_DVB_TUNER_CX24113 is not set
# CONFIG_DVB_TUNER_ITD1000 is not set
# CONFIG_DVB_VES1X93 is not set
# CONFIG_DVB_ZL10036 is not set
# CONFIG_DVB_ZL10039 is not set
#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_AF9013=y
CONFIG_DVB_AS102_FE=y
# CONFIG_DVB_CX22700 is not set
# CONFIG_DVB_CX22702 is not set
# CONFIG_DVB_CXD2820R is not set
# CONFIG_DVB_CXD2841ER is not set
CONFIG_DVB_DIB3000MB=y
CONFIG_DVB_DIB3000MC=y
# CONFIG_DVB_DIB7000M is not set
# CONFIG_DVB_DIB7000P is not set
# CONFIG_DVB_DIB9000 is not set
# CONFIG_DVB_DRXD is not set
CONFIG_DVB_EC100=y
CONFIG_DVB_GP8PSK_FE=y
# CONFIG_DVB_L64781 is not set
# CONFIG_DVB_MT352 is not set
# CONFIG_DVB_NXT6000 is not set
CONFIG_DVB_RTL2830=y
CONFIG_DVB_RTL2832=y
CONFIG_DVB_RTL2832_SDR=y
# CONFIG_DVB_S5H1432 is not set
# CONFIG_DVB_SI2168 is not set
# CONFIG_DVB_SP887X is not set
# CONFIG_DVB_STV0367 is not set
# CONFIG_DVB_TDA10048 is not set
# CONFIG_DVB_TDA1004X is not set
# CONFIG_DVB_ZD1301_DEMOD is not set
CONFIG_DVB_ZL10353=y
# CONFIG_DVB_CXD2880 is not set
#
# DVB-C (cable) frontends
#
# CONFIG_DVB_STV0297 is not set
# CONFIG_DVB_TDA10021 is not set
# CONFIG_DVB_TDA10023 is not set
# CONFIG_DVB_VES1820 is not set
#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
# CONFIG_DVB_AU8522_DTV is not set
# CONFIG_DVB_AU8522_V4L is not set
# CONFIG_DVB_BCM3510 is not set
# CONFIG_DVB_LG2160 is not set
# CONFIG_DVB_LGDT3305 is not set
# CONFIG_DVB_LGDT3306A is not set
# CONFIG_DVB_LGDT330X is not set
# CONFIG_DVB_MXL692 is not set
# CONFIG_DVB_NXT200X is not set
# CONFIG_DVB_OR51132 is not set
# CONFIG_DVB_OR51211 is not set
# CONFIG_DVB_S5H1409 is not set
# CONFIG_DVB_S5H1411 is not set
#
# ISDB-T (terrestrial) frontends
#
# CONFIG_DVB_DIB8000 is not set
# CONFIG_DVB_MB86A20S is not set
# CONFIG_DVB_S921 is not set
#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
# CONFIG_DVB_MN88443X is not set
# CONFIG_DVB_TC90522 is not set
#
# Digital terrestrial only tuners/PLL
#
# CONFIG_DVB_PLL is not set
# CONFIG_DVB_TUNER_DIB0070 is not set
# CONFIG_DVB_TUNER_DIB0090 is not set
#
# SEC control devices for DVB-S
#
# CONFIG_DVB_A8293 is not set
CONFIG_DVB_AF9033=y
# CONFIG_DVB_ASCOT2E is not set
# CONFIG_DVB_ATBM8830 is not set
# CONFIG_DVB_HELENE is not set
# CONFIG_DVB_HORUS3A is not set
# CONFIG_DVB_ISL6405 is not set
# CONFIG_DVB_ISL6421 is not set
# CONFIG_DVB_ISL6423 is not set
# CONFIG_DVB_IX2505V is not set
# CONFIG_DVB_LGS8GL5 is not set
# CONFIG_DVB_LGS8GXX is not set
# CONFIG_DVB_LNBH25 is not set
# CONFIG_DVB_LNBH29 is not set
# CONFIG_DVB_LNBP21 is not set
# CONFIG_DVB_LNBP22 is not set
# CONFIG_DVB_M88RS2000 is not set
# CONFIG_DVB_TDA665x is not set
# CONFIG_DVB_DRX39XYJ is not set
#
# Common Interface (EN50221) controller drivers
#
# CONFIG_DVB_CXD2099 is not set
# CONFIG_DVB_SP2 is not set
# end of Customise DVB Frontends
#
# Tools to develop new frontends
#
# CONFIG_DVB_DUMMY_FE is not set
# end of Media ancillary drivers
#
# Graphics support
#
CONFIG_APERTURE_HELPERS=y
CONFIG_SCREEN_INFO=y
CONFIG_VIDEO=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_PANEL is not set
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
# CONFIG_AGP_SIS is not set
# CONFIG_AGP_VIA is not set
CONFIG_INTEL_GTT=y
# CONFIG_VGA_SWITCHEROO is not set
CONFIG_DRM=y
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_DEBUG_MM=y
CONFIG_DRM_KMS_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
# CONFIG_DRM_DEBUG_MODESET_LOCK is not set
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_DP_AUX_BUS=y
CONFIG_DRM_DISPLAY_HELPER=y
CONFIG_DRM_DISPLAY_DP_HELPER=y
CONFIG_DRM_DISPLAY_DP_TUNNEL=y
# CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE is not set
CONFIG_DRM_DISPLAY_HDCP_HELPER=y
CONFIG_DRM_DISPLAY_HDMI_HELPER=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=y
CONFIG_DRM_BUDDY=y
CONFIG_DRM_VRAM_HELPER=y
CONFIG_DRM_TTM_HELPER=y
CONFIG_DRM_GEM_SHMEM_HELPER=y
#
# I2C encoder or helper chips
#
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips
#
# ARM devices
#
# CONFIG_DRM_KOMEDA is not set
# end of ARM devices
# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=y
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
# CONFIG_DRM_I915_GVT_KVMGT is not set
CONFIG_DRM_I915_DP_TUNNEL=y
#
# drm/i915 Debugging
#
# CONFIG_DRM_I915_WERROR is not set
# CONFIG_DRM_I915_DEBUG is not set
# CONFIG_DRM_I915_DEBUG_MMIO is not set
# CONFIG_DRM_I915_SW_FENCE_DEBUG_OBJECTS is not set
# CONFIG_DRM_I915_SW_FENCE_CHECK_DAG is not set
# CONFIG_DRM_I915_DEBUG_GUC is not set
# CONFIG_DRM_I915_SELFTEST is not set
# CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS is not set
# CONFIG_DRM_I915_DEBUG_VBLANK_EVADE is not set
# CONFIG_DRM_I915_DEBUG_RUNTIME_PM is not set
# CONFIG_DRM_I915_DEBUG_WAKEREF is not set
# end of drm/i915 Debugging
#
# drm/i915 Profile Guided Optimisation
#
CONFIG_DRM_I915_REQUEST_TIMEOUT=20000
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_PREEMPT_TIMEOUT_COMPUTE=7500
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# end of drm/i915 Profile Guided Optimisation
# CONFIG_DRM_XE is not set
CONFIG_DRM_VGEM=y
CONFIG_DRM_VKMS=y
CONFIG_DRM_VMWGFX=y
# CONFIG_DRM_VMWGFX_MKSSTATS is not set
# CONFIG_DRM_GMA500 is not set
CONFIG_DRM_UDL=y
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_QXL is not set
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_DRM_VIRTIO_GPU_KMS=y
CONFIG_DRM_PANEL=y
#
# Display Panels
#
# CONFIG_DRM_PANEL_ABT_Y030XX067A is not set
# CONFIG_DRM_PANEL_ARM_VERSATILE is not set
# CONFIG_DRM_PANEL_ASUS_Z00T_TM5P5_NT35596 is not set
# CONFIG_DRM_PANEL_AUO_A030JTN01 is not set
# CONFIG_DRM_PANEL_BOE_BF060Y8M_AJ0 is not set
# CONFIG_DRM_PANEL_BOE_HIMAX8279D is not set
# CONFIG_DRM_PANEL_BOE_TH101MB31UIG002_28A is not set
# CONFIG_DRM_PANEL_BOE_TV101WUM_NL6 is not set
# CONFIG_DRM_PANEL_EBBG_FT8719 is not set
# CONFIG_DRM_PANEL_ELIDA_KD35T133 is not set
# CONFIG_DRM_PANEL_FEIXIN_K101_IM2BA02 is not set
# CONFIG_DRM_PANEL_FEIYANG_FY07024DI26A30D is not set
# CONFIG_DRM_PANEL_DSI_CM is not set
# CONFIG_DRM_PANEL_LVDS is not set
# CONFIG_DRM_PANEL_HIMAX_HX83112A is not set
# CONFIG_DRM_PANEL_HIMAX_HX8394 is not set
# CONFIG_DRM_PANEL_ILITEK_IL9322 is not set
# CONFIG_DRM_PANEL_ILITEK_ILI9341 is not set
# CONFIG_DRM_PANEL_ILITEK_ILI9805 is not set
# CONFIG_DRM_PANEL_ILITEK_ILI9881C is not set
# CONFIG_DRM_PANEL_ILITEK_ILI9882T is not set
# CONFIG_DRM_PANEL_INNOLUX_EJ030NA is not set
# CONFIG_DRM_PANEL_INNOLUX_P079ZCA is not set
# CONFIG_DRM_PANEL_JADARD_JD9365DA_H3 is not set
# CONFIG_DRM_PANEL_JDI_LPM102A188A is not set
# CONFIG_DRM_PANEL_JDI_LT070ME05000 is not set
# CONFIG_DRM_PANEL_JDI_R63452 is not set
# CONFIG_DRM_PANEL_KHADAS_TS050 is not set
# CONFIG_DRM_PANEL_KINGDISPLAY_KD097D04 is not set
# CONFIG_DRM_PANEL_LEADTEK_LTK050H3146W is not set
# CONFIG_DRM_PANEL_LEADTEK_LTK500HD1829 is not set
# CONFIG_DRM_PANEL_LG_LB035Q02 is not set
# CONFIG_DRM_PANEL_LG_LG4573 is not set
# CONFIG_DRM_PANEL_MAGNACHIP_D53E6EA8966 is not set
# CONFIG_DRM_PANEL_MANTIX_MLAF057WE51 is not set
# CONFIG_DRM_PANEL_NEC_NL8048HL11 is not set
# CONFIG_DRM_PANEL_NEWVISION_NV3051D is not set
# CONFIG_DRM_PANEL_NEWVISION_NV3052C is not set
# CONFIG_DRM_PANEL_NOVATEK_NT35510 is not set
# CONFIG_DRM_PANEL_NOVATEK_NT35560 is not set
# CONFIG_DRM_PANEL_NOVATEK_NT35950 is not set
# CONFIG_DRM_PANEL_NOVATEK_NT36523 is not set
# CONFIG_DRM_PANEL_NOVATEK_NT36672A is not set
# CONFIG_DRM_PANEL_NOVATEK_NT36672E is not set
# CONFIG_DRM_PANEL_NOVATEK_NT39016 is not set
# CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO is not set
# CONFIG_DRM_PANEL_ORISETECH_OTA5601A is not set
# CONFIG_DRM_PANEL_ORISETECH_OTM8009A is not set
# CONFIG_DRM_PANEL_OSD_OSD101T2587_53TS is not set
# CONFIG_DRM_PANEL_PANASONIC_VVX10F034N00 is not set
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# CONFIG_DRM_PANEL_RAYDIUM_RM67191 is not set
# CONFIG_DRM_PANEL_RAYDIUM_RM68200 is not set
# CONFIG_DRM_PANEL_RAYDIUM_RM692E5 is not set
# CONFIG_DRM_PANEL_RONBO_RB070D30 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set
# CONFIG_DRM_PANEL_SAMSUNG_ATNA33XC20 is not set
# CONFIG_DRM_PANEL_SAMSUNG_DB7430 is not set
# CONFIG_DRM_PANEL_SAMSUNG_LD9040 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6D16D0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6D27A1 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6D7AA0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E63M0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_SOFEF00 is not set
# CONFIG_DRM_PANEL_SEIKO_43WVF1G is not set
# CONFIG_DRM_PANEL_SHARP_LQ101R1SX01 is not set
# CONFIG_DRM_PANEL_SHARP_LS037V7DW01 is not set
# CONFIG_DRM_PANEL_SHARP_LS043T1LE01 is not set
# CONFIG_DRM_PANEL_SHARP_LS060T1SX01 is not set
# CONFIG_DRM_PANEL_SITRONIX_ST7701 is not set
# CONFIG_DRM_PANEL_SITRONIX_ST7703 is not set
# CONFIG_DRM_PANEL_SITRONIX_ST7789V is not set
# CONFIG_DRM_PANEL_SONY_ACX565AKM is not set
# CONFIG_DRM_PANEL_SONY_TD4353_JDI is not set
# CONFIG_DRM_PANEL_SONY_TULIP_TRULY_NT35521 is not set
# CONFIG_DRM_PANEL_STARTEK_KD070FHFID015 is not set
CONFIG_DRM_PANEL_EDP=y
# CONFIG_DRM_PANEL_SIMPLE is not set
# CONFIG_DRM_PANEL_SYNAPTICS_R63353 is not set
# CONFIG_DRM_PANEL_TDO_TL070WSH30 is not set
# CONFIG_DRM_PANEL_TPO_TD028TTEC1 is not set
# CONFIG_DRM_PANEL_TPO_TD043MTEA1 is not set
# CONFIG_DRM_PANEL_TPO_TPG110 is not set
# CONFIG_DRM_PANEL_TRULY_NT35597_WQXGA is not set
# CONFIG_DRM_PANEL_VISIONOX_R66451 is not set
# CONFIG_DRM_PANEL_VISIONOX_RM69299 is not set
# CONFIG_DRM_PANEL_VISIONOX_VTDR6130 is not set
# CONFIG_DRM_PANEL_WIDECHIPS_WS2401 is not set
# CONFIG_DRM_PANEL_XINPENG_XPP055C272 is not set
# end of Display Panels
CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y
#
# Display Interface Bridges
#
# CONFIG_DRM_CHIPONE_ICN6211 is not set
# CONFIG_DRM_CHRONTEL_CH7033 is not set
# CONFIG_DRM_DISPLAY_CONNECTOR is not set
# CONFIG_DRM_ITE_IT6505 is not set
# CONFIG_DRM_LONTIUM_LT8912B is not set
# CONFIG_DRM_LONTIUM_LT9211 is not set
# CONFIG_DRM_LONTIUM_LT9611 is not set
# CONFIG_DRM_LONTIUM_LT9611UXC is not set
# CONFIG_DRM_ITE_IT66121 is not set
# CONFIG_DRM_LVDS_CODEC is not set
# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set
# CONFIG_DRM_NWL_MIPI_DSI is not set
# CONFIG_DRM_NXP_PTN3460 is not set
# CONFIG_DRM_PARADE_PS8622 is not set
# CONFIG_DRM_PARADE_PS8640 is not set
# CONFIG_DRM_SAMSUNG_DSIM is not set
# CONFIG_DRM_SIL_SII8620 is not set
# CONFIG_DRM_SII902X is not set
# CONFIG_DRM_SII9234 is not set
# CONFIG_DRM_SIMPLE_BRIDGE is not set
# CONFIG_DRM_THINE_THC63LVD1024 is not set
# CONFIG_DRM_TOSHIBA_TC358762 is not set
# CONFIG_DRM_TOSHIBA_TC358764 is not set
# CONFIG_DRM_TOSHIBA_TC358767 is not set
# CONFIG_DRM_TOSHIBA_TC358768 is not set
# CONFIG_DRM_TOSHIBA_TC358775 is not set
# CONFIG_DRM_TI_DLPC3433 is not set
# CONFIG_DRM_TI_TFP410 is not set
# CONFIG_DRM_TI_SN65DSI83 is not set
# CONFIG_DRM_TI_SN65DSI86 is not set
# CONFIG_DRM_TI_TPD12S015 is not set
# CONFIG_DRM_ANALOGIX_ANX6345 is not set
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# CONFIG_DRM_ANALOGIX_ANX7625 is not set
# CONFIG_DRM_I2C_ADV7511 is not set
# CONFIG_DRM_CDNS_DSI is not set
# CONFIG_DRM_CDNS_MHDP8546 is not set
# end of Display Interface Bridges
# CONFIG_DRM_ETNAVIV is not set
# CONFIG_DRM_LOGICVC is not set
# CONFIG_DRM_ARCPGU is not set
CONFIG_DRM_BOCHS=y
CONFIG_DRM_CIRRUS_QEMU=y
# CONFIG_DRM_GM12U320 is not set
# CONFIG_DRM_PANEL_MIPI_DBI is not set
CONFIG_DRM_SIMPLEDRM=y
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9163 is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
#
# Frame buffer Devices
#
CONFIG_FB=y
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
CONFIG_FB_VGA16=y
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_IBM_GXT4500 is not set
CONFIG_FB_VIRTUAL=y
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_SSD1307 is not set
# CONFIG_FB_SM712 is not set
CONFIG_FB_CORE=y
CONFIG_FB_NOTIFY=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DEVICE is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYSMEM_FOPS=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_IOMEM_FOPS=y
CONFIG_FB_IOMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y
# end of Frame buffer Devices
#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=y
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
# CONFIG_LCD_PLATFORM is not set
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_KTD253 is not set
# CONFIG_BACKLIGHT_KTD2801 is not set
# CONFIG_BACKLIGHT_KTZ8866 is not set
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_PANDORA is not set
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# CONFIG_BACKLIGHT_LED is not set
# end of Backlight & LCD device support
CONFIG_VGASTATE=y
CONFIG_VIDEOMODE_HELPERS=y
CONFIG_HDMI=y
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION is not set
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
# CONFIG_LOGO_LINUX_CLUT224 is not set
# end of Graphics support
# CONFIG_DRM_ACCEL is not set
CONFIG_SOUND=y
CONFIG_SOUND_OSS_CORE=y
CONFIG_SOUND_OSS_CORE_PRECLAIM=y
CONFIG_SND=y
CONFIG_SND_TIMER=y
CONFIG_SND_PCM=y
CONFIG_SND_HWDEP=y
CONFIG_SND_SEQ_DEVICE=y
CONFIG_SND_RAWMIDI=y
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
CONFIG_SND_PCM_OSS_PLUGINS=y
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=y
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=32
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
CONFIG_SND_CTL_FAST_LOOKUP=y
CONFIG_SND_DEBUG=y
# CONFIG_SND_DEBUG_VERBOSE is not set
CONFIG_SND_PCM_XRUN_DEBUG=y
# CONFIG_SND_CTL_INPUT_VALIDATION is not set
# CONFIG_SND_CTL_DEBUG is not set
# CONFIG_SND_JACK_INJECTION_DEBUG is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_CTL_LED=y
CONFIG_SND_SEQUENCER=y
CONFIG_SND_SEQ_DUMMY=y
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=y
CONFIG_SND_SEQ_MIDI=y
CONFIG_SND_SEQ_VIRMIDI=y
# CONFIG_SND_SEQ_UMP is not set
CONFIG_SND_DRIVERS=y
# CONFIG_SND_PCSP is not set
CONFIG_SND_DUMMY=y
CONFIG_SND_ALOOP=y
# CONFIG_SND_PCMTEST is not set
CONFIG_SND_VIRMIDI=y
# CONFIG_SND_MTPAV is not set
# CONFIG_SND_MTS64 is not set
# CONFIG_SND_SERIAL_U16550 is not set
# CONFIG_SND_SERIAL_GENERIC is not set
# CONFIG_SND_MPU401 is not set
# CONFIG_SND_PORTMAN2X4 is not set
CONFIG_SND_PCI=y
# CONFIG_SND_AD1889 is not set
# CONFIG_SND_ALS300 is not set
# CONFIG_SND_ALS4000 is not set
# CONFIG_SND_ALI5451 is not set
# CONFIG_SND_ASIHPI is not set
# CONFIG_SND_ATIIXP is not set
# CONFIG_SND_ATIIXP_MODEM is not set
# CONFIG_SND_AU8810 is not set
# CONFIG_SND_AU8820 is not set
# CONFIG_SND_AU8830 is not set
# CONFIG_SND_AW2 is not set
# CONFIG_SND_AZT3328 is not set
# CONFIG_SND_BT87X is not set
# CONFIG_SND_CA0106 is not set
# CONFIG_SND_CMIPCI is not set
# CONFIG_SND_OXYGEN is not set
# CONFIG_SND_CS4281 is not set
# CONFIG_SND_CS46XX is not set
# CONFIG_SND_CTXFI is not set
# CONFIG_SND_DARLA20 is not set
# CONFIG_SND_GINA20 is not set
# CONFIG_SND_LAYLA20 is not set
# CONFIG_SND_DARLA24 is not set
# CONFIG_SND_GINA24 is not set
# CONFIG_SND_LAYLA24 is not set
# CONFIG_SND_MONA is not set
# CONFIG_SND_MIA is not set
# CONFIG_SND_ECHO3G is not set
# CONFIG_SND_INDIGO is not set
# CONFIG_SND_INDIGOIO is not set
# CONFIG_SND_INDIGODJ is not set
# CONFIG_SND_INDIGOIOX is not set
# CONFIG_SND_INDIGODJX is not set
# CONFIG_SND_EMU10K1 is not set
# CONFIG_SND_EMU10K1X is not set
# CONFIG_SND_ENS1370 is not set
# CONFIG_SND_ENS1371 is not set
# CONFIG_SND_ES1938 is not set
# CONFIG_SND_ES1968 is not set
# CONFIG_SND_FM801 is not set
# CONFIG_SND_HDSP is not set
# CONFIG_SND_HDSPM is not set
# CONFIG_SND_ICE1712 is not set
# CONFIG_SND_ICE1724 is not set
# CONFIG_SND_INTEL8X0 is not set
# CONFIG_SND_INTEL8X0M is not set
# CONFIG_SND_KORG1212 is not set
# CONFIG_SND_LOLA is not set
# CONFIG_SND_LX6464ES is not set
# CONFIG_SND_MAESTRO3 is not set
# CONFIG_SND_MIXART is not set
# CONFIG_SND_NM256 is not set
# CONFIG_SND_PCXHR is not set
# CONFIG_SND_RIPTIDE is not set
# CONFIG_SND_RME32 is not set
# CONFIG_SND_RME96 is not set
# CONFIG_SND_RME9652 is not set
# CONFIG_SND_SE6X is not set
# CONFIG_SND_SONICVIBES is not set
# CONFIG_SND_TRIDENT is not set
# CONFIG_SND_VIA82XX is not set
# CONFIG_SND_VIA82XX_MODEM is not set
# CONFIG_SND_VIRTUOSO is not set
# CONFIG_SND_VX222 is not set
# CONFIG_SND_YMFPCI is not set
#
# HD-Audio
#
CONFIG_SND_HDA=y
CONFIG_SND_HDA_GENERIC_LEDS=y
CONFIG_SND_HDA_INTEL=y
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_INPUT_BEEP_MODE=1
CONFIG_SND_HDA_PATCH_LOADER=y
CONFIG_SND_HDA_SCODEC_COMPONENT=y
CONFIG_SND_HDA_CODEC_REALTEK=y
CONFIG_SND_HDA_CODEC_ANALOG=y
CONFIG_SND_HDA_CODEC_SIGMATEL=y
CONFIG_SND_HDA_CODEC_VIA=y
CONFIG_SND_HDA_CODEC_HDMI=y
CONFIG_SND_HDA_CODEC_CIRRUS=y
# CONFIG_SND_HDA_CODEC_CS8409 is not set
CONFIG_SND_HDA_CODEC_CONEXANT=y
CONFIG_SND_HDA_CODEC_CA0110=y
CONFIG_SND_HDA_CODEC_CA0132=y
# CONFIG_SND_HDA_CODEC_CA0132_DSP is not set
CONFIG_SND_HDA_CODEC_CMEDIA=y
CONFIG_SND_HDA_CODEC_SI3054=y
CONFIG_SND_HDA_GENERIC=y
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
# CONFIG_SND_HDA_INTEL_HDMI_SILENT_STREAM is not set
# CONFIG_SND_HDA_CTL_DEV_ID is not set
# end of HD-Audio
CONFIG_SND_HDA_CORE=y
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_I915=y
CONFIG_SND_HDA_PREALLOC_SIZE=0
CONFIG_SND_INTEL_NHLT=y
CONFIG_SND_INTEL_DSP_CONFIG=y
CONFIG_SND_INTEL_SOUNDWIRE_ACPI=y
# CONFIG_SND_SPI is not set
CONFIG_SND_USB=y
CONFIG_SND_USB_AUDIO=y
# CONFIG_SND_USB_AUDIO_MIDI_V2 is not set
CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=y
CONFIG_SND_USB_UA101=y
CONFIG_SND_USB_USX2Y=y
CONFIG_SND_USB_CAIAQ=y
CONFIG_SND_USB_CAIAQ_INPUT=y
CONFIG_SND_USB_US122L=y
CONFIG_SND_USB_6FIRE=y
CONFIG_SND_USB_HIFACE=y
CONFIG_SND_BCD2000=y
CONFIG_SND_USB_LINE6=y
CONFIG_SND_USB_POD=y
CONFIG_SND_USB_PODHD=y
CONFIG_SND_USB_TONEPORT=y
CONFIG_SND_USB_VARIAX=y
# CONFIG_SND_FIREWIRE is not set
CONFIG_SND_PCMCIA=y
# CONFIG_SND_VXPOCKET is not set
# CONFIG_SND_PDAUDIOCF is not set
# CONFIG_SND_SOC is not set
CONFIG_SND_X86=y
# CONFIG_HDMI_LPE_AUDIO is not set
CONFIG_SND_VIRTIO=y
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=y
CONFIG_HID_GENERIC=y
#
# Special HID drivers
#
CONFIG_HID_A4TECH=y
CONFIG_HID_ACCUTOUCH=y
CONFIG_HID_ACRUX=y
CONFIG_HID_ACRUX_FF=y
CONFIG_HID_APPLE=y
CONFIG_HID_APPLEIR=y
CONFIG_HID_ASUS=y
CONFIG_HID_AUREAL=y
CONFIG_HID_BELKIN=y
CONFIG_HID_BETOP_FF=y
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
CONFIG_HID_CORSAIR=y
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_PRODIKEYS=y
CONFIG_HID_CMEDIA=y
CONFIG_HID_CP2112=y
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=y
CONFIG_HID_DRAGONRISE=y
CONFIG_DRAGONRISE_FF=y
CONFIG_HID_EMS_FF=y
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=y
CONFIG_HID_ELO=y
# CONFIG_HID_EVISION is not set
CONFIG_HID_EZKEY=y
# CONFIG_HID_FT260 is not set
CONFIG_HID_GEMBIRD=y
CONFIG_HID_GFRM=y
# CONFIG_HID_GLORIOUS is not set
CONFIG_HID_HOLTEK=y
CONFIG_HOLTEK_FF=y
# CONFIG_HID_GOOGLE_STADIA_FF is not set
# CONFIG_HID_VIVALDI is not set
CONFIG_HID_GT683R=y
CONFIG_HID_KEYTOUCH=y
CONFIG_HID_KYE=y
CONFIG_HID_UCLOGIC=y
CONFIG_HID_WALTOP=y
# CONFIG_HID_VIEWSONIC is not set
# CONFIG_HID_VRC2 is not set
# CONFIG_HID_XIAOMI is not set
CONFIG_HID_GYRATION=y
CONFIG_HID_ICADE=y
CONFIG_HID_ITE=y
# CONFIG_HID_JABRA is not set
CONFIG_HID_TWINHAN=y
CONFIG_HID_KENSINGTON=y
CONFIG_HID_LCPOWER=y
CONFIG_HID_LED=y
CONFIG_HID_LENOVO=y
# CONFIG_HID_LETSKETCH is not set
CONFIG_HID_LOGITECH=y
CONFIG_HID_LOGITECH_DJ=y
CONFIG_HID_LOGITECH_HIDPP=y
CONFIG_LOGITECH_FF=y
CONFIG_LOGIRUMBLEPAD2_FF=y
CONFIG_LOGIG940_FF=y
CONFIG_LOGIWHEELS_FF=y
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
CONFIG_HID_MAYFLASH=y
# CONFIG_HID_MEGAWORLD_FF is not set
CONFIG_HID_REDRAGON=y
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_HID_MULTITOUCH=y
# CONFIG_HID_NINTENDO is not set
CONFIG_HID_NTI=y
CONFIG_HID_NTRIG=y
# CONFIG_HID_NVIDIA_SHIELD is not set
CONFIG_HID_ORTEK=y
CONFIG_HID_PANTHERLORD=y
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PENMOUNT=y
CONFIG_HID_PETALYNX=y
CONFIG_HID_PICOLCD=y
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=y
# CONFIG_HID_PXRC is not set
# CONFIG_HID_RAZER is not set
CONFIG_HID_PRIMAX=y
CONFIG_HID_RETRODE=y
CONFIG_HID_ROCCAT=y
CONFIG_HID_SAITEK=y
CONFIG_HID_SAMSUNG=y
# CONFIG_HID_SEMITEK is not set
# CONFIG_HID_SIGMAMICRO is not set
CONFIG_HID_SONY=y
CONFIG_SONY_FF=y
CONFIG_HID_SPEEDLINK=y
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=y
CONFIG_HID_SUNPLUS=y
CONFIG_HID_RMI=y
CONFIG_HID_GREENASIA=y
CONFIG_GREENASIA_FF=y
CONFIG_HID_SMARTJOYPLUS=y
CONFIG_SMARTJOYPLUS_FF=y
CONFIG_HID_TIVO=y
CONFIG_HID_TOPSEED=y
# CONFIG_HID_TOPRE is not set
CONFIG_HID_THINGM=y
CONFIG_HID_THRUSTMASTER=y
CONFIG_THRUSTMASTER_FF=y
CONFIG_HID_UDRAW_PS3=y
# CONFIG_HID_U2FZERO is not set
CONFIG_HID_WACOM=y
CONFIG_HID_WIIMOTE=y
CONFIG_HID_XINMO=y
CONFIG_HID_ZEROPLUS=y
CONFIG_ZEROPLUS_FF=y
CONFIG_HID_ZYDACRON=y
CONFIG_HID_SENSOR_HUB=y
CONFIG_HID_SENSOR_CUSTOM_SENSOR=y
CONFIG_HID_ALPS=y
# CONFIG_HID_MCP2200 is not set
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers
#
# HID-BPF support
#
# end of HID-BPF support
#
# USB HID support
#
CONFIG_USB_HID=y
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
# end of USB HID support
CONFIG_I2C_HID=y
# CONFIG_I2C_HID_ACPI is not set
# CONFIG_I2C_HID_OF is not set
# CONFIG_I2C_HID_OF_ELAN is not set
# CONFIG_I2C_HID_OF_GOODIX is not set
#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=y
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support
#
# AMD SFH HID Support
#
# CONFIG_AMD_SFH_HID is not set
# end of AMD SFH HID Support
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_LED_TRIG=y
CONFIG_USB_ULPI_BUS=y
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_PCI_AMD=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
CONFIG_USB_FEW_INIT_RETRIES=y
CONFIG_USB_DYNAMIC_MINORS=y
CONFIG_USB_OTG=y
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
CONFIG_USB_OTG_FSM=y
CONFIG_USB_LEDS_TRIGGER_USBPORT=y
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_DEFAULT_AUTHORIZATION_MODE=1
CONFIG_USB_MON=y
#
# USB Host Controller Drivers
#
CONFIG_USB_C67X00_HCD=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_DBGCAP=y
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
CONFIG_USB_XHCI_PLATFORM=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_OXU210HP_HCD=y
CONFIG_USB_ISP116X_HCD=y
CONFIG_USB_MAX3421_HCD=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_SSB is not set
CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_SL811_HCD=y
CONFIG_USB_SL811_HCD_ISO=y
CONFIG_USB_SL811_CS=y
CONFIG_USB_R8A66597_HCD=y
CONFIG_USB_HCD_BCMA=y
CONFIG_USB_HCD_SSB=y
# CONFIG_USB_HCD_TEST_MODE is not set
#
# USB Device Class drivers
#
CONFIG_USB_ACM=y
CONFIG_USB_PRINTER=y
CONFIG_USB_WDM=y
CONFIG_USB_TMC=y
#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=y
# CONFIG_USB_STORAGE_DEBUG is not set
CONFIG_USB_STORAGE_REALTEK=y
CONFIG_REALTEK_AUTOPM=y
CONFIG_USB_STORAGE_DATAFAB=y
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_USBAT=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
CONFIG_USB_STORAGE_ALAUDA=y
CONFIG_USB_STORAGE_ONETOUCH=y
CONFIG_USB_STORAGE_KARMA=y
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
CONFIG_USB_UAS=y
#
# USB Imaging devices
#
CONFIG_USB_MDC800=y
CONFIG_USB_MICROTEK=y
CONFIG_USBIP_CORE=y
CONFIG_USBIP_VHCI_HCD=y
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=16
CONFIG_USBIP_HOST=y
CONFIG_USBIP_VUDC=y
# CONFIG_USBIP_DEBUG is not set
#
# USB dual-mode controller drivers
#
# CONFIG_USB_CDNS_SUPPORT is not set
CONFIG_USB_MUSB_HDRC=y
# CONFIG_USB_MUSB_HOST is not set
# CONFIG_USB_MUSB_GADGET is not set
CONFIG_USB_MUSB_DUAL_ROLE=y
#
# Platform Glue Layer
#
#
# MUSB DMA mode
#
CONFIG_MUSB_PIO_ONLY=y
CONFIG_USB_DWC3=y
CONFIG_USB_DWC3_ULPI=y
# CONFIG_USB_DWC3_HOST is not set
CONFIG_USB_DWC3_GADGET=y
# CONFIG_USB_DWC3_DUAL_ROLE is not set
#
# Platform Glue Driver Support
#
CONFIG_USB_DWC3_PCI=y
# CONFIG_USB_DWC3_HAPS is not set
CONFIG_USB_DWC3_OF_SIMPLE=y
CONFIG_USB_DWC2=y
CONFIG_USB_DWC2_HOST=y
#
# Gadget/Dual-role mode requires USB Gadget support to be enabled
#
# CONFIG_USB_DWC2_PERIPHERAL is not set
# CONFIG_USB_DWC2_DUAL_ROLE is not set
CONFIG_USB_DWC2_PCI=y
# CONFIG_USB_DWC2_DEBUG is not set
# CONFIG_USB_DWC2_TRACK_MISSED_SOFS is not set
CONFIG_USB_CHIPIDEA=y
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
CONFIG_USB_CHIPIDEA_PCI=y
# CONFIG_USB_CHIPIDEA_MSM is not set
CONFIG_USB_CHIPIDEA_NPCM=y
# CONFIG_USB_CHIPIDEA_IMX is not set
# CONFIG_USB_CHIPIDEA_GENERIC is not set
# CONFIG_USB_CHIPIDEA_TEGRA is not set
CONFIG_USB_ISP1760=y
CONFIG_USB_ISP1760_HCD=y
CONFIG_USB_ISP1761_UDC=y
# CONFIG_USB_ISP1760_HOST_ROLE is not set
# CONFIG_USB_ISP1760_GADGET_ROLE is not set
CONFIG_USB_ISP1760_DUAL_ROLE=y
#
# USB port drivers
#
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_CONSOLE=y
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_SIMPLE=y
CONFIG_USB_SERIAL_AIRCABLE=y
CONFIG_USB_SERIAL_ARK3116=y
CONFIG_USB_SERIAL_BELKIN=y
CONFIG_USB_SERIAL_CH341=y
CONFIG_USB_SERIAL_WHITEHEAT=y
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=y
CONFIG_USB_SERIAL_CP210X=y
CONFIG_USB_SERIAL_CYPRESS_M8=y
CONFIG_USB_SERIAL_EMPEG=y
CONFIG_USB_SERIAL_FTDI_SIO=y
CONFIG_USB_SERIAL_VISOR=y
CONFIG_USB_SERIAL_IPAQ=y
CONFIG_USB_SERIAL_IR=y
CONFIG_USB_SERIAL_EDGEPORT=y
CONFIG_USB_SERIAL_EDGEPORT_TI=y
CONFIG_USB_SERIAL_F81232=y
CONFIG_USB_SERIAL_F8153X=y
CONFIG_USB_SERIAL_GARMIN=y
CONFIG_USB_SERIAL_IPW=y
CONFIG_USB_SERIAL_IUU=y
CONFIG_USB_SERIAL_KEYSPAN_PDA=y
CONFIG_USB_SERIAL_KEYSPAN=y
CONFIG_USB_SERIAL_KLSI=y
CONFIG_USB_SERIAL_KOBIL_SCT=y
CONFIG_USB_SERIAL_MCT_U232=y
CONFIG_USB_SERIAL_METRO=y
CONFIG_USB_SERIAL_MOS7720=y
CONFIG_USB_SERIAL_MOS7715_PARPORT=y
CONFIG_USB_SERIAL_MOS7840=y
CONFIG_USB_SERIAL_MXUPORT=y
CONFIG_USB_SERIAL_NAVMAN=y
CONFIG_USB_SERIAL_PL2303=y
CONFIG_USB_SERIAL_OTI6858=y
CONFIG_USB_SERIAL_QCAUX=y
CONFIG_USB_SERIAL_QUALCOMM=y
CONFIG_USB_SERIAL_SPCP8X5=y
CONFIG_USB_SERIAL_SAFE=y
# CONFIG_USB_SERIAL_SAFE_PADDED is not set
CONFIG_USB_SERIAL_SIERRAWIRELESS=y
CONFIG_USB_SERIAL_SYMBOL=y
CONFIG_USB_SERIAL_TI=y
CONFIG_USB_SERIAL_CYBERJACK=y
CONFIG_USB_SERIAL_WWAN=y
CONFIG_USB_SERIAL_OPTION=y
CONFIG_USB_SERIAL_OMNINET=y
CONFIG_USB_SERIAL_OPTICON=y
CONFIG_USB_SERIAL_XSENS_MT=y
CONFIG_USB_SERIAL_WISHBONE=y
CONFIG_USB_SERIAL_SSU100=y
CONFIG_USB_SERIAL_QT2=y
CONFIG_USB_SERIAL_UPD78F0730=y
CONFIG_USB_SERIAL_XR=y
CONFIG_USB_SERIAL_DEBUG=y
#
# USB Miscellaneous drivers
#
CONFIG_USB_USS720=y
CONFIG_USB_EMI62=y
CONFIG_USB_EMI26=y
CONFIG_USB_ADUTUX=y
CONFIG_USB_SEVSEG=y
CONFIG_USB_LEGOTOWER=y
CONFIG_USB_LCD=y
CONFIG_USB_CYPRESS_CY7C63=y
CONFIG_USB_CYTHERM=y
CONFIG_USB_IDMOUSE=y
CONFIG_USB_APPLEDISPLAY=y
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_LJCA is not set
CONFIG_USB_SISUSBVGA=y
CONFIG_USB_LD=y
CONFIG_USB_TRANCEVIBRATOR=y
CONFIG_USB_IOWARRIOR=y
CONFIG_USB_TEST=y
CONFIG_USB_EHSET_TEST_FIXTURE=y
CONFIG_USB_ISIGHTFW=y
CONFIG_USB_YUREX=y
CONFIG_USB_EZUSB_FX2=y
CONFIG_USB_HUB_USB251XB=y
CONFIG_USB_HSIC_USB3503=y
CONFIG_USB_HSIC_USB4604=y
CONFIG_USB_LINK_LAYER_TEST=y
CONFIG_USB_CHAOSKEY=y
# CONFIG_USB_ONBOARD_HUB is not set
CONFIG_USB_ATM=y
CONFIG_USB_SPEEDTOUCH=y
CONFIG_USB_CXACRU=y
CONFIG_USB_UEAGLEATM=y
CONFIG_USB_XUSBATM=y
#
# USB Physical Layer drivers
#
CONFIG_USB_PHY=y
CONFIG_NOP_USB_XCEIV=y
CONFIG_USB_GPIO_VBUS=y
CONFIG_TAHVO_USB=y
CONFIG_TAHVO_USB_HOST_BY_DEFAULT=y
CONFIG_USB_ISP1301=y
# end of USB Physical Layer drivers
CONFIG_USB_GADGET=y
# CONFIG_USB_GADGET_DEBUG is not set
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_GADGET_DEBUG_FS=y
CONFIG_USB_GADGET_VBUS_DRAW=500
CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=2
CONFIG_U_SERIAL_CONSOLE=y
#
# USB Peripheral Controller
#
CONFIG_USB_GR_UDC=y
CONFIG_USB_R8A66597=y
CONFIG_USB_PXA27X=y
CONFIG_USB_MV_UDC=y
CONFIG_USB_MV_U3D=y
CONFIG_USB_SNP_CORE=y
# CONFIG_USB_SNP_UDC_PLAT is not set
# CONFIG_USB_M66592 is not set
CONFIG_USB_BDC_UDC=y
CONFIG_USB_AMD5536UDC=y
CONFIG_USB_NET2272=y
CONFIG_USB_NET2272_DMA=y
CONFIG_USB_NET2280=y
CONFIG_USB_GOKU=y
CONFIG_USB_EG20T=y
# CONFIG_USB_GADGET_XILINX is not set
# CONFIG_USB_MAX3420_UDC is not set
# CONFIG_USB_CDNS2_UDC is not set
CONFIG_USB_DUMMY_HCD=y
# end of USB Peripheral Controller
CONFIG_USB_LIBCOMPOSITE=y
CONFIG_USB_F_ACM=y
CONFIG_USB_F_SS_LB=y
CONFIG_USB_U_SERIAL=y
CONFIG_USB_U_ETHER=y
CONFIG_USB_U_AUDIO=y
CONFIG_USB_F_SERIAL=y
CONFIG_USB_F_OBEX=y
CONFIG_USB_F_NCM=y
CONFIG_USB_F_ECM=y
CONFIG_USB_F_PHONET=y
CONFIG_USB_F_EEM=y
CONFIG_USB_F_SUBSET=y
CONFIG_USB_F_RNDIS=y
CONFIG_USB_F_MASS_STORAGE=y
CONFIG_USB_F_FS=y
CONFIG_USB_F_UAC1=y
CONFIG_USB_F_UAC1_LEGACY=y
CONFIG_USB_F_UAC2=y
CONFIG_USB_F_UVC=y
CONFIG_USB_F_MIDI=y
CONFIG_USB_F_HID=y
CONFIG_USB_F_PRINTER=y
CONFIG_USB_F_TCM=y
CONFIG_USB_CONFIGFS=y
CONFIG_USB_CONFIGFS_SERIAL=y
CONFIG_USB_CONFIGFS_ACM=y
CONFIG_USB_CONFIGFS_OBEX=y
CONFIG_USB_CONFIGFS_NCM=y
CONFIG_USB_CONFIGFS_ECM=y
CONFIG_USB_CONFIGFS_ECM_SUBSET=y
CONFIG_USB_CONFIGFS_RNDIS=y
CONFIG_USB_CONFIGFS_EEM=y
CONFIG_USB_CONFIGFS_PHONET=y
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_LB_SS=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_F_UAC1=y
CONFIG_USB_CONFIGFS_F_UAC1_LEGACY=y
CONFIG_USB_CONFIGFS_F_UAC2=y
CONFIG_USB_CONFIGFS_F_MIDI=y
# CONFIG_USB_CONFIGFS_F_MIDI2 is not set
CONFIG_USB_CONFIGFS_F_HID=y
CONFIG_USB_CONFIGFS_F_UVC=y
CONFIG_USB_CONFIGFS_F_PRINTER=y
CONFIG_USB_CONFIGFS_F_TCM=y
#
# USB Gadget precomposed configurations
#
# CONFIG_USB_ZERO is not set
# CONFIG_USB_AUDIO is not set
# CONFIG_USB_ETH is not set
# CONFIG_USB_G_NCM is not set
CONFIG_USB_GADGETFS=y
# CONFIG_USB_FUNCTIONFS is not set
# CONFIG_USB_MASS_STORAGE is not set
# CONFIG_USB_GADGET_TARGET is not set
# CONFIG_USB_G_SERIAL is not set
# CONFIG_USB_MIDI_GADGET is not set
# CONFIG_USB_G_PRINTER is not set
# CONFIG_USB_CDC_COMPOSITE is not set
# CONFIG_USB_G_NOKIA is not set
# CONFIG_USB_G_ACM_MS is not set
# CONFIG_USB_G_MULTI is not set
# CONFIG_USB_G_HID is not set
# CONFIG_USB_G_DBGP is not set
# CONFIG_USB_G_WEBCAM is not set
CONFIG_USB_RAW_GADGET=y
# end of USB Gadget precomposed configurations
CONFIG_TYPEC=y
CONFIG_TYPEC_TCPM=y
CONFIG_TYPEC_TCPCI=y
# CONFIG_TYPEC_RT1711H is not set
# CONFIG_TYPEC_TCPCI_MAXIM is not set
CONFIG_TYPEC_FUSB302=y
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_UCSI_STM32G0 is not set
CONFIG_TYPEC_TPS6598X=y
# CONFIG_TYPEC_ANX7411 is not set
# CONFIG_TYPEC_RT1719 is not set
# CONFIG_TYPEC_HD3SS3220 is not set
# CONFIG_TYPEC_STUSB160X is not set
# CONFIG_TYPEC_WUSB3801 is not set
#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_FSA4480 is not set
# CONFIG_TYPEC_MUX_GPIO_SBU is not set
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# CONFIG_TYPEC_MUX_IT5205 is not set
# CONFIG_TYPEC_MUX_NB7VPQ904M is not set
# CONFIG_TYPEC_MUX_PTN36502 is not set
# CONFIG_TYPEC_MUX_WCD939X_USBSS is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support
#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers
CONFIG_USB_ROLE_SWITCH=y
# CONFIG_USB_ROLES_INTEL_XHCI is not set
CONFIG_MMC=y
# CONFIG_PWRSEQ_EMMC is not set
# CONFIG_PWRSEQ_SIMPLE is not set
# CONFIG_MMC_BLOCK is not set
# CONFIG_SDIO_UART is not set
# CONFIG_MMC_TEST is not set
# CONFIG_MMC_CRYPTO is not set
#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
# CONFIG_MMC_SDHCI is not set
# CONFIG_MMC_WBSD is not set
# CONFIG_MMC_TIFM_SD is not set
# CONFIG_MMC_SPI is not set
# CONFIG_MMC_SDRICOH_CS is not set
# CONFIG_MMC_CB710 is not set
# CONFIG_MMC_VIA_SDMMC is not set
CONFIG_MMC_VUB300=y
CONFIG_MMC_USHC=y
# CONFIG_MMC_USDHI6ROL0 is not set
CONFIG_MMC_REALTEK_USB=y
# CONFIG_MMC_CQHCI is not set
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_MEMSTICK=y
# CONFIG_MEMSTICK_DEBUG is not set
#
# MemoryStick drivers
#
# CONFIG_MEMSTICK_UNSAFE_RESUME is not set
# CONFIG_MSPRO_BLOCK is not set
# CONFIG_MS_BLOCK is not set
#
# MemoryStick Host Controller Drivers
#
# CONFIG_MEMSTICK_TIFM_MS is not set
# CONFIG_MEMSTICK_JMICRON_38X is not set
# CONFIG_MEMSTICK_R592 is not set
CONFIG_MEMSTICK_REALTEK_USB=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set
#
# LED drivers
#
# CONFIG_LEDS_AN30259A is not set
# CONFIG_LEDS_APU is not set
# CONFIG_LEDS_AW200XX is not set
# CONFIG_LEDS_AW2013 is not set
# CONFIG_LEDS_BCM6328 is not set
# CONFIG_LEDS_BCM6358 is not set
# CONFIG_LEDS_CHT_WCOVE is not set
# CONFIG_LEDS_CR0014114 is not set
# CONFIG_LEDS_EL15203000 is not set
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_LM3692X is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP8860 is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA995X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_REGULATOR is not set
# CONFIG_LEDS_BD2606MVV is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_LT3593 is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_IS31FL319X is not set
# CONFIG_LEDS_IS31FL32XX is not set
#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_SYSCON is not set
# CONFIG_LEDS_MLXCPLD is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_SPI_BYTE is not set
# CONFIG_LEDS_LM3697 is not set
# CONFIG_LEDS_LGM is not set
#
# Flash and Torch LED drivers
#
#
# RGB LED drivers
#
#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
# CONFIG_LEDS_TRIGGER_TIMER is not set
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
# CONFIG_LEDS_TRIGGER_DISK is not set
# CONFIG_LEDS_TRIGGER_MTD is not set
# CONFIG_LEDS_TRIGGER_HEARTBEAT is not set
# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
# CONFIG_LEDS_TRIGGER_GPIO is not set
# CONFIG_LEDS_TRIGGER_DEFAULT_ON is not set
#
# iptables trigger is under Netfilter config (LED target)
#
# CONFIG_LEDS_TRIGGER_TRANSIENT is not set
# CONFIG_LEDS_TRIGGER_CAMERA is not set
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=y
# CONFIG_LEDS_TRIGGER_TTY is not set
#
# Simple LED drivers
#
# CONFIG_ACCESSIBILITY is not set
CONFIG_INFINIBAND=y
CONFIG_INFINIBAND_USER_MAD=y
CONFIG_INFINIBAND_USER_ACCESS=y
CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
CONFIG_INFINIBAND_VIRT_DMA=y
# CONFIG_INFINIBAND_EFA is not set
# CONFIG_INFINIBAND_ERDMA is not set
CONFIG_MLX4_INFINIBAND=y
# CONFIG_INFINIBAND_MTHCA is not set
# CONFIG_INFINIBAND_OCRDMA is not set
# CONFIG_INFINIBAND_USNIC is not set
# CONFIG_INFINIBAND_VMWARE_PVRDMA is not set
# CONFIG_INFINIBAND_RDMAVT is not set
CONFIG_RDMA_RXE=y
CONFIG_RDMA_SIW=y
CONFIG_INFINIBAND_IPOIB=y
CONFIG_INFINIBAND_IPOIB_CM=y
CONFIG_INFINIBAND_IPOIB_DEBUG=y
# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
CONFIG_INFINIBAND_SRP=y
# CONFIG_INFINIBAND_SRPT is not set
CONFIG_INFINIBAND_ISER=y
CONFIG_INFINIBAND_RTRS=y
CONFIG_INFINIBAND_RTRS_CLIENT=y
# CONFIG_INFINIBAND_RTRS_SERVER is not set
# CONFIG_INFINIBAND_OPA_VNIC is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
# CONFIG_EDAC_LEGACY_SYSFS is not set
# CONFIG_EDAC_DEBUG is not set
# CONFIG_EDAC_DECODE_MCE is not set
# CONFIG_EDAC_E752X is not set
# CONFIG_EDAC_I82975X is not set
# CONFIG_EDAC_I3000 is not set
# CONFIG_EDAC_I3200 is not set
# CONFIG_EDAC_IE31200 is not set
# CONFIG_EDAC_X38 is not set
# CONFIG_EDAC_I5400 is not set
# CONFIG_EDAC_I7CORE is not set
# CONFIG_EDAC_I5100 is not set
# CONFIG_EDAC_I7300 is not set
# CONFIG_EDAC_SBRIDGE is not set
# CONFIG_EDAC_SKX is not set
# CONFIG_EDAC_I10NM is not set
# CONFIG_EDAC_PND2 is not set
# CONFIG_EDAC_IGEN6 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set
# CONFIG_RTC_NVMEM is not set
#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set
#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_HYM8563 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_MAX31335 is not set
# CONFIG_RTC_DRV_NCT3018Y is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12026 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_TWL4030 is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8010 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set
#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
# CONFIG_RTC_DRV_RX4581 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y
#
# SPI and I2C RTC drivers
#
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set
# CONFIG_RTC_DRV_RX6110 is not set
#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_DS2404 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_ZYNQMP is not set
#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_CADENCE is not set
# CONFIG_RTC_DRV_FTRTC010 is not set
# CONFIG_RTC_DRV_R7301 is not set
#
# HID Sensor RTC drivers
#
CONFIG_RTC_DRV_HID_SENSOR_TIME=y
# CONFIG_RTC_DRV_GOLDFISH is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set
#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
CONFIG_DMA_OF=y
# CONFIG_ALTERA_MSGDMA is not set
# CONFIG_DW_AXI_DMAC is not set
# CONFIG_FSL_EDMA is not set
CONFIG_INTEL_IDMA64=y
# CONFIG_INTEL_IDXD is not set
# CONFIG_INTEL_IDXD_COMPAT is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_PLX_DMA is not set
# CONFIG_XILINX_DMA is not set
# CONFIG_XILINX_XDMA is not set
# CONFIG_XILINX_ZYNQMP_DPDMA is not set
# CONFIG_AMD_PTDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
# CONFIG_DW_DMAC is not set
# CONFIG_DW_DMAC_PCI is not set
# CONFIG_DW_EDMA is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set
# CONFIG_INTEL_LDMA is not set
#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y
#
# DMABUF options
#
CONFIG_SYNC_FILE=y
CONFIG_SW_SYNC=y
CONFIG_UDMABUF=y
CONFIG_DMABUF_MOVE_NOTIFY=y
# CONFIG_DMABUF_DEBUG is not set
# CONFIG_DMABUF_SELFTESTS is not set
CONFIG_DMABUF_HEAPS=y
# CONFIG_DMABUF_SYSFS_STATS is not set
CONFIG_DMABUF_HEAPS_SYSTEM=y
CONFIG_DMABUF_HEAPS_CMA=y
# end of DMABUF options
CONFIG_DCA=y
# CONFIG_UIO is not set
CONFIG_VFIO=y
CONFIG_VFIO_DEVICE_CDEV=y
# CONFIG_VFIO_GROUP is not set
CONFIG_VFIO_VIRQFD=y
# CONFIG_VFIO_DEBUGFS is not set
#
# VFIO support for PCI devices
#
CONFIG_VFIO_PCI_CORE=y
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
CONFIG_VFIO_PCI=y
# CONFIG_VFIO_PCI_VGA is not set
# CONFIG_VFIO_PCI_IGD is not set
# CONFIG_VIRTIO_VFIO_PCI is not set
# end of VFIO support for PCI devices
CONFIG_IRQ_BYPASS_MANAGER=y
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_ADMIN_LEGACY=y
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_VDPA=y
CONFIG_VIRTIO_PMEM=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MEM=y
CONFIG_VIRTIO_INPUT=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_VIRTIO_DMA_SHARED_BUFFER=y
CONFIG_VDPA=y
CONFIG_VDPA_SIM=y
CONFIG_VDPA_SIM_NET=y
CONFIG_VDPA_SIM_BLOCK=y
CONFIG_VDPA_USER=y
# CONFIG_IFCVF is not set
# CONFIG_MLX5_VDPA_STEERING_DEBUG is not set
CONFIG_VP_VDPA=y
# CONFIG_ALIBABA_ENI_VDPA is not set
# CONFIG_SNET_VDPA is not set
CONFIG_VHOST_IOTLB=y
CONFIG_VHOST_RING=y
CONFIG_VHOST_TASK=y
CONFIG_VHOST=y
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=y
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=y
CONFIG_VHOST_VDPA=y
CONFIG_VHOST_CROSS_ENDIAN_LEGACY=y
#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set
# end of Microsoft Hyper-V guest support
CONFIG_GREYBUS=y
# CONFIG_GREYBUS_BEAGLEPLAY is not set
CONFIG_GREYBUS_ES2=y
CONFIG_COMEDI=y
# CONFIG_COMEDI_DEBUG is not set
CONFIG_COMEDI_DEFAULT_BUF_SIZE_KB=2048
CONFIG_COMEDI_DEFAULT_BUF_MAXSIZE_KB=20480
# CONFIG_COMEDI_MISC_DRIVERS is not set
# CONFIG_COMEDI_PCI_DRIVERS is not set
# CONFIG_COMEDI_PCMCIA_DRIVERS is not set
CONFIG_COMEDI_USB_DRIVERS=y
CONFIG_COMEDI_DT9812=y
CONFIG_COMEDI_NI_USB6501=y
CONFIG_COMEDI_USBDUX=y
CONFIG_COMEDI_USBDUXFAST=y
CONFIG_COMEDI_USBDUXSIGMA=y
CONFIG_COMEDI_VMK80XX=y
# CONFIG_COMEDI_8255_SA is not set
# CONFIG_COMEDI_KCOMEDILIB is not set
# CONFIG_COMEDI_TESTS is not set
CONFIG_STAGING=y
CONFIG_PRISM2_USB=y
# CONFIG_RTLLIB is not set
# CONFIG_RTL8723BS is not set
CONFIG_R8712U=y
# CONFIG_RTS5208 is not set
# CONFIG_VT6655 is not set
# CONFIG_VT6656 is not set
#
# IIO staging drivers
#
#
# Accelerometers
#
# CONFIG_ADIS16203 is not set
# CONFIG_ADIS16240 is not set
# end of Accelerometers
#
# Analog to digital converters
#
# CONFIG_AD7816 is not set
# end of Analog to digital converters
#
# Analog digital bi-direction converters
#
# CONFIG_ADT7316 is not set
# end of Analog digital bi-direction converters
#
# Direct Digital Synthesis
#
# CONFIG_AD9832 is not set
# CONFIG_AD9834 is not set
# end of Direct Digital Synthesis
#
# Network Analyzer, Impedance Converters
#
# CONFIG_AD5933 is not set
# end of Network Analyzer, Impedance Converters
# end of IIO staging drivers
# CONFIG_FB_SM750 is not set
# CONFIG_STAGING_MEDIA is not set
# CONFIG_LTE_GDM724X is not set
# CONFIG_MOST_COMPONENTS is not set
# CONFIG_KS7010 is not set
# CONFIG_GREYBUS_BOOTROM is not set
# CONFIG_GREYBUS_FIRMWARE is not set
CONFIG_GREYBUS_HID=y
# CONFIG_GREYBUS_LOG is not set
# CONFIG_GREYBUS_LOOPBACK is not set
# CONFIG_GREYBUS_POWER is not set
# CONFIG_GREYBUS_RAW is not set
# CONFIG_GREYBUS_VIBRATOR is not set
CONFIG_GREYBUS_BRIDGED_PHY=y
# CONFIG_GREYBUS_GPIO is not set
# CONFIG_GREYBUS_I2C is not set
# CONFIG_GREYBUS_SDIO is not set
# CONFIG_GREYBUS_SPI is not set
# CONFIG_GREYBUS_UART is not set
CONFIG_GREYBUS_USB=y
# CONFIG_PI433 is not set
# CONFIG_XIL_AXIS_FIFO is not set
# CONFIG_FIELDBUS_DEV is not set
# CONFIG_VME_BUS is not set
# CONFIG_GOLDFISH is not set
# CONFIG_CHROME_PLATFORMS is not set
# CONFIG_MELLANOX_PLATFORM is not set
CONFIG_SURFACE_PLATFORMS=y
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_GPE is not set
# CONFIG_SURFACE_HOTPLUG is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
# CONFIG_SURFACE_AGGREGATOR is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=y
CONFIG_WMI_BMOF=y
# CONFIG_HUAWEI_WMI is not set
# CONFIG_MXM_WMI is not set
# CONFIG_NVIDIA_WMI_EC_BACKLIGHT is not set
# CONFIG_XIAOMI_WMI is not set
# CONFIG_GIGABYTE_WMI is not set
# CONFIG_YOGABOOK is not set
# CONFIG_ACERHDF is not set
# CONFIG_ACER_WIRELESS is not set
# CONFIG_ACER_WMI is not set
# CONFIG_AMD_PMC is not set
# CONFIG_AMD_HSMP is not set
# CONFIG_AMD_WBRF is not set
# CONFIG_ADV_SWBUTTON is not set
# CONFIG_APPLE_GMUX is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=y
# CONFIG_ASUS_NB_WMI is not set
# CONFIG_ASUS_TF103C_DOCK is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_EEEPC_WMI is not set
# CONFIG_X86_PLATFORM_DRIVERS_DELL is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_GPD_POCKET_FAN is not set
# CONFIG_X86_PLATFORM_DRIVERS_HP is not set
# CONFIG_WIRELESS_HOTKEY is not set
# CONFIG_IBM_RTL is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_LENOVO_YMC is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_THINKPAD_LMI is not set
# CONFIG_INTEL_ATOMISP2_PM is not set
# CONFIG_INTEL_IFS is not set
# CONFIG_INTEL_SAR_INT1092 is not set
# CONFIG_INTEL_SKL_INT3472 is not set
#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
# CONFIG_INTEL_WMI_THUNDERBOLT is not set
#
# Intel Uncore Frequency Control
#
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
# end of Intel Uncore Frequency Control
# CONFIG_INTEL_HID_EVENT is not set
# CONFIG_INTEL_VBTN is not set
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_INTEL_ISHTP_ECLITE is not set
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_INTEL_TURBO_MAX_3 is not set
# CONFIG_INTEL_VSEC is not set
# CONFIG_MSI_EC is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_MSI_WMI is not set
# CONFIG_PCENGINES_APU2 is not set
# CONFIG_BARCO_P50_GPIO is not set
# CONFIG_SAMSUNG_LAPTOP is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_LG_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_SYSTEM76_ACPI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_SERIAL_MULTI_INSTANTIATE is not set
# CONFIG_MLX_PLATFORM is not set
# CONFIG_INSPUR_PLATFORM_PROFILE is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
# CONFIG_SIEMENS_SIMATIC_IPC is not set
# CONFIG_WINMATE_FM07_KEYS is not set
CONFIG_P2SB=y
CONFIG_HAVE_CLK=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_LMK04832 is not set
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI514 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_SI570 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CDCE925 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_CLK_TWL is not set
# CONFIG_COMMON_CLK_AXI_CLKGEN is not set
# CONFIG_COMMON_CLK_RS9_PCIE is not set
# CONFIG_COMMON_CLK_SI521XX is not set
# CONFIG_COMMON_CLK_VC3 is not set
# CONFIG_COMMON_CLK_VC5 is not set
# CONFIG_COMMON_CLK_VC7 is not set
# CONFIG_COMMON_CLK_FIXED_MMIO is not set
# CONFIG_CLK_LGM_CGU is not set
# CONFIG_XILINX_VCU is not set
# CONFIG_COMMON_CLK_XLNX_CLKWZRD is not set
# CONFIG_HWSPINLOCK is not set
#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers
CONFIG_MAILBOX=y
# CONFIG_PLATFORM_MHU is not set
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
# CONFIG_MAILBOX_TEST is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOMMU_API=y
CONFIG_IOMMUFD_DRIVER=y
CONFIG_IOMMU_SUPPORT=y
#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support
# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set
CONFIG_IOMMU_DEFAULT_DMA_LAZY=y
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_OF_IOMMU=y
CONFIG_IOMMU_DMA=y
CONFIG_IOMMU_SVA=y
CONFIG_IOMMU_IOPF=y
# CONFIG_AMD_IOMMU is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
CONFIG_INTEL_IOMMU_SVM=y
CONFIG_INTEL_IOMMU_DEFAULT_ON=y
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON=y
CONFIG_INTEL_IOMMU_PERF_EVENTS=y
CONFIG_IOMMUFD=y
CONFIG_IOMMUFD_TEST=y
CONFIG_IRQ_REMAP=y
# CONFIG_VIRTIO_IOMMU is not set
#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers
#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers
# CONFIG_SOUNDWIRE is not set
#
# SOC (System On Chip) specific Drivers
#
#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers
#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers
#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers
#
# fujitsu SoC drivers
#
# end of fujitsu SoC drivers
#
# i.MX SoC drivers
#
# end of i.MX SoC drivers
#
# Enable LiteX SoC Builder specific drivers
#
# CONFIG_LITEX_SOC_CONTROLLER is not set
# end of Enable LiteX SoC Builder specific drivers
# CONFIG_WPCM450_SOC is not set
#
# Qualcomm SoC drivers
#
CONFIG_QCOM_QMI_HELPERS=y
# end of Qualcomm SoC drivers
# CONFIG_SOC_TI is not set
#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers
#
# PM Domains
#
#
# Amlogic PM Domains
#
# end of Amlogic PM Domains
#
# Broadcom PM Domains
#
# end of Broadcom PM Domains
#
# i.MX PM Domains
#
# end of i.MX PM Domains
#
# Qualcomm PM Domains
#
# end of Qualcomm PM Domains
# end of PM Domains
# CONFIG_PM_DEVFREQ is not set
CONFIG_EXTCON=y
#
# Extcon Device Drivers
#
# CONFIG_EXTCON_ADC_JACK is not set
# CONFIG_EXTCON_FSA9480 is not set
# CONFIG_EXTCON_GPIO is not set
# CONFIG_EXTCON_INTEL_INT3496 is not set
CONFIG_EXTCON_INTEL_CHT_WC=y
# CONFIG_EXTCON_MAX3355 is not set
# CONFIG_EXTCON_PTN5150 is not set
# CONFIG_EXTCON_RT8973A is not set
# CONFIG_EXTCON_SM5502 is not set
# CONFIG_EXTCON_USB_GPIO is not set
# CONFIG_EXTCON_USBC_TUSB320 is not set
# CONFIG_MEMORY is not set
CONFIG_IIO=y
CONFIG_IIO_BUFFER=y
# CONFIG_IIO_BUFFER_CB is not set
# CONFIG_IIO_BUFFER_DMA is not set
# CONFIG_IIO_BUFFER_DMAENGINE is not set
# CONFIG_IIO_BUFFER_HW_CONSUMER is not set
CONFIG_IIO_KFIFO_BUF=y
CONFIG_IIO_TRIGGERED_BUFFER=y
# CONFIG_IIO_CONFIGFS is not set
CONFIG_IIO_TRIGGER=y
CONFIG_IIO_CONSUMERS_PER_TRIGGER=2
# CONFIG_IIO_SW_DEVICE is not set
# CONFIG_IIO_SW_TRIGGER is not set
# CONFIG_IIO_TRIGGERED_EVENT is not set
#
# Accelerometers
#
# CONFIG_ADIS16201 is not set
# CONFIG_ADIS16209 is not set
# CONFIG_ADXL313_I2C is not set
# CONFIG_ADXL313_SPI is not set
# CONFIG_ADXL345_I2C is not set
# CONFIG_ADXL345_SPI is not set
# CONFIG_ADXL355_I2C is not set
# CONFIG_ADXL355_SPI is not set
# CONFIG_ADXL367_SPI is not set
# CONFIG_ADXL367_I2C is not set
# CONFIG_ADXL372_SPI is not set
# CONFIG_ADXL372_I2C is not set
# CONFIG_BMA180 is not set
# CONFIG_BMA220 is not set
# CONFIG_BMA400 is not set
# CONFIG_BMC150_ACCEL is not set
# CONFIG_BMI088_ACCEL is not set
# CONFIG_DA280 is not set
# CONFIG_DA311 is not set
# CONFIG_DMARD06 is not set
# CONFIG_DMARD09 is not set
# CONFIG_DMARD10 is not set
# CONFIG_FXLS8962AF_I2C is not set
# CONFIG_FXLS8962AF_SPI is not set
CONFIG_HID_SENSOR_ACCEL_3D=y
# CONFIG_IIO_ST_ACCEL_3AXIS is not set
# CONFIG_IIO_KX022A_SPI is not set
# CONFIG_IIO_KX022A_I2C is not set
# CONFIG_KXSD9 is not set
# CONFIG_KXCJK1013 is not set
# CONFIG_MC3230 is not set
# CONFIG_MMA7455_I2C is not set
# CONFIG_MMA7455_SPI is not set
# CONFIG_MMA7660 is not set
# CONFIG_MMA8452 is not set
# CONFIG_MMA9551 is not set
# CONFIG_MMA9553 is not set
# CONFIG_MSA311 is not set
# CONFIG_MXC4005 is not set
# CONFIG_MXC6255 is not set
# CONFIG_SCA3000 is not set
# CONFIG_SCA3300 is not set
# CONFIG_STK8312 is not set
# CONFIG_STK8BA50 is not set
# end of Accelerometers
#
# Analog to digital converters
#
# CONFIG_AD4130 is not set
# CONFIG_AD7091R5 is not set
# CONFIG_AD7091R8 is not set
# CONFIG_AD7124 is not set
# CONFIG_AD7192 is not set
# CONFIG_AD7266 is not set
# CONFIG_AD7280 is not set
# CONFIG_AD7291 is not set
# CONFIG_AD7292 is not set
# CONFIG_AD7298 is not set
# CONFIG_AD7476 is not set
# CONFIG_AD7606_IFACE_PARALLEL is not set
# CONFIG_AD7606_IFACE_SPI is not set
# CONFIG_AD7766 is not set
# CONFIG_AD7768_1 is not set
# CONFIG_AD7780 is not set
# CONFIG_AD7791 is not set
# CONFIG_AD7793 is not set
# CONFIG_AD7887 is not set
# CONFIG_AD7923 is not set
# CONFIG_AD7949 is not set
# CONFIG_AD799X is not set
# CONFIG_AD9467 is not set
# CONFIG_ADI_AXI_ADC is not set
# CONFIG_CC10001_ADC is not set
CONFIG_DLN2_ADC=y
# CONFIG_ENVELOPE_DETECTOR is not set
# CONFIG_HI8435 is not set
# CONFIG_HX711 is not set
# CONFIG_INA2XX_ADC is not set
# CONFIG_LTC2309 is not set
# CONFIG_LTC2471 is not set
# CONFIG_LTC2485 is not set
# CONFIG_LTC2496 is not set
# CONFIG_LTC2497 is not set
# CONFIG_MAX1027 is not set
# CONFIG_MAX11100 is not set
# CONFIG_MAX1118 is not set
# CONFIG_MAX11205 is not set
# CONFIG_MAX11410 is not set
# CONFIG_MAX1241 is not set
# CONFIG_MAX1363 is not set
# CONFIG_MAX34408 is not set
# CONFIG_MAX9611 is not set
# CONFIG_MCP320X is not set
# CONFIG_MCP3422 is not set
# CONFIG_MCP3564 is not set
# CONFIG_MCP3911 is not set
# CONFIG_NAU7802 is not set
# CONFIG_PAC1934 is not set
# CONFIG_RICHTEK_RTQ6056 is not set
# CONFIG_SD_ADC_MODULATOR is not set
# CONFIG_TI_ADC081C is not set
# CONFIG_TI_ADC0832 is not set
# CONFIG_TI_ADC084S021 is not set
# CONFIG_TI_ADC12138 is not set
# CONFIG_TI_ADC108S102 is not set
# CONFIG_TI_ADC128S052 is not set
# CONFIG_TI_ADC161S626 is not set
# CONFIG_TI_ADS1015 is not set
# CONFIG_TI_ADS7924 is not set
# CONFIG_TI_ADS1100 is not set
# CONFIG_TI_ADS1298 is not set
# CONFIG_TI_ADS7950 is not set
# CONFIG_TI_ADS8344 is not set
# CONFIG_TI_ADS8688 is not set
# CONFIG_TI_ADS124S08 is not set
# CONFIG_TI_ADS131E08 is not set
# CONFIG_TI_LMP92064 is not set
# CONFIG_TI_TLC4541 is not set
# CONFIG_TI_TSC2046 is not set
# CONFIG_TWL4030_MADC is not set
# CONFIG_TWL6030_GPADC is not set
# CONFIG_VF610_ADC is not set
CONFIG_VIPERBOARD_ADC=y
# CONFIG_XILINX_XADC is not set
# end of Analog to digital converters
#
# Analog to digital and digital to analog converters
#
# CONFIG_AD74115 is not set
# CONFIG_AD74413R is not set
# end of Analog to digital and digital to analog converters
#
# Analog Front Ends
#
# CONFIG_IIO_RESCALE is not set
# end of Analog Front Ends
#
# Amplifiers
#
# CONFIG_AD8366 is not set
# CONFIG_ADA4250 is not set
# CONFIG_HMC425 is not set
# end of Amplifiers
#
# Capacitance to digital converters
#
# CONFIG_AD7150 is not set
# CONFIG_AD7746 is not set
# end of Capacitance to digital converters
#
# Chemical Sensors
#
# CONFIG_AOSONG_AGS02MA is not set
# CONFIG_ATLAS_PH_SENSOR is not set
# CONFIG_ATLAS_EZO_SENSOR is not set
# CONFIG_BME680 is not set
# CONFIG_CCS811 is not set
# CONFIG_IAQCORE is not set
# CONFIG_PMS7003 is not set
# CONFIG_SCD30_CORE is not set
# CONFIG_SCD4X is not set
# CONFIG_SENSIRION_SGP30 is not set
# CONFIG_SENSIRION_SGP40 is not set
# CONFIG_SPS30_I2C is not set
# CONFIG_SPS30_SERIAL is not set
# CONFIG_SENSEAIR_SUNRISE_CO2 is not set
# CONFIG_VZ89X is not set
# end of Chemical Sensors
#
# Hid Sensor IIO Common
#
CONFIG_HID_SENSOR_IIO_COMMON=y
CONFIG_HID_SENSOR_IIO_TRIGGER=y
# end of Hid Sensor IIO Common
#
# IIO SCMI Sensors
#
# end of IIO SCMI Sensors
#
# SSP Sensor Common
#
# CONFIG_IIO_SSP_SENSORHUB is not set
# end of SSP Sensor Common
#
# Digital to analog converters
#
# CONFIG_AD3552R is not set
# CONFIG_AD5064 is not set
# CONFIG_AD5360 is not set
# CONFIG_AD5380 is not set
# CONFIG_AD5421 is not set
# CONFIG_AD5446 is not set
# CONFIG_AD5449 is not set
# CONFIG_AD5592R is not set
# CONFIG_AD5593R is not set
# CONFIG_AD5504 is not set
# CONFIG_AD5624R_SPI is not set
# CONFIG_LTC2688 is not set
# CONFIG_AD5686_SPI is not set
# CONFIG_AD5696_I2C is not set
# CONFIG_AD5755 is not set
# CONFIG_AD5758 is not set
# CONFIG_AD5761 is not set
# CONFIG_AD5764 is not set
# CONFIG_AD5766 is not set
# CONFIG_AD5770R is not set
# CONFIG_AD5791 is not set
# CONFIG_AD7293 is not set
# CONFIG_AD7303 is not set
# CONFIG_AD8801 is not set
# CONFIG_DPOT_DAC is not set
# CONFIG_DS4424 is not set
# CONFIG_LTC1660 is not set
# CONFIG_LTC2632 is not set
# CONFIG_M62332 is not set
# CONFIG_MAX517 is not set
# CONFIG_MAX5522 is not set
# CONFIG_MAX5821 is not set
# CONFIG_MCP4725 is not set
# CONFIG_MCP4728 is not set
# CONFIG_MCP4821 is not set
# CONFIG_MCP4922 is not set
# CONFIG_TI_DAC082S085 is not set
# CONFIG_TI_DAC5571 is not set
# CONFIG_TI_DAC7311 is not set
# CONFIG_TI_DAC7612 is not set
# CONFIG_VF610_DAC is not set
# end of Digital to analog converters
#
# IIO dummy driver
#
# end of IIO dummy driver
#
# Filters
#
# CONFIG_ADMV8818 is not set
# end of Filters
#
# Frequency Synthesizers DDS/PLL
#
#
# Clock Generator/Distribution
#
# CONFIG_AD9523 is not set
# end of Clock Generator/Distribution
#
# Phase-Locked Loop (PLL) frequency synthesizers
#
# CONFIG_ADF4350 is not set
# CONFIG_ADF4371 is not set
# CONFIG_ADF4377 is not set
# CONFIG_ADMFM2000 is not set
# CONFIG_ADMV1013 is not set
# CONFIG_ADMV1014 is not set
# CONFIG_ADMV4420 is not set
# CONFIG_ADRF6780 is not set
# end of Phase-Locked Loop (PLL) frequency synthesizers
# end of Frequency Synthesizers DDS/PLL
#
# Digital gyroscope sensors
#
# CONFIG_ADIS16080 is not set
# CONFIG_ADIS16130 is not set
# CONFIG_ADIS16136 is not set
# CONFIG_ADIS16260 is not set
# CONFIG_ADXRS290 is not set
# CONFIG_ADXRS450 is not set
# CONFIG_BMG160 is not set
# CONFIG_FXAS21002C is not set
CONFIG_HID_SENSOR_GYRO_3D=y
# CONFIG_MPU3050_I2C is not set
# CONFIG_IIO_ST_GYRO_3AXIS is not set
# CONFIG_ITG3200 is not set
# end of Digital gyroscope sensors
#
# Health Sensors
#
#
# Heart Rate Monitors
#
# CONFIG_AFE4403 is not set
# CONFIG_AFE4404 is not set
# CONFIG_MAX30100 is not set
# CONFIG_MAX30102 is not set
# end of Heart Rate Monitors
# end of Health Sensors
#
# Humidity sensors
#
# CONFIG_AM2315 is not set
# CONFIG_DHT11 is not set
# CONFIG_HDC100X is not set
# CONFIG_HDC2010 is not set
# CONFIG_HDC3020 is not set
CONFIG_HID_SENSOR_HUMIDITY=y
# CONFIG_HTS221 is not set
# CONFIG_HTU21 is not set
# CONFIG_SI7005 is not set
# CONFIG_SI7020 is not set
# end of Humidity sensors
#
# Inertial measurement units
#
# CONFIG_ADIS16400 is not set
# CONFIG_ADIS16460 is not set
# CONFIG_ADIS16475 is not set
# CONFIG_ADIS16480 is not set
# CONFIG_BMI160_I2C is not set
# CONFIG_BMI160_SPI is not set
# CONFIG_BMI323_I2C is not set
# CONFIG_BMI323_SPI is not set
# CONFIG_BOSCH_BNO055_SERIAL is not set
# CONFIG_BOSCH_BNO055_I2C is not set
# CONFIG_FXOS8700_I2C is not set
# CONFIG_FXOS8700_SPI is not set
# CONFIG_KMX61 is not set
# CONFIG_INV_ICM42600_I2C is not set
# CONFIG_INV_ICM42600_SPI is not set
# CONFIG_INV_MPU6050_I2C is not set
# CONFIG_INV_MPU6050_SPI is not set
# CONFIG_IIO_ST_LSM6DSX is not set
# CONFIG_IIO_ST_LSM9DS0 is not set
# end of Inertial measurement units
#
# Light sensors
#
# CONFIG_ACPI_ALS is not set
# CONFIG_ADJD_S311 is not set
# CONFIG_ADUX1020 is not set
# CONFIG_AL3010 is not set
# CONFIG_AL3320A is not set
# CONFIG_APDS9300 is not set
# CONFIG_APDS9960 is not set
# CONFIG_AS73211 is not set
# CONFIG_BH1750 is not set
# CONFIG_BH1780 is not set
# CONFIG_CM32181 is not set
# CONFIG_CM3232 is not set
# CONFIG_CM3323 is not set
# CONFIG_CM3605 is not set
# CONFIG_CM36651 is not set
# CONFIG_GP2AP002 is not set
# CONFIG_GP2AP020A00F is not set
# CONFIG_SENSORS_ISL29018 is not set
# CONFIG_SENSORS_ISL29028 is not set
# CONFIG_ISL29125 is not set
# CONFIG_ISL76682 is not set
CONFIG_HID_SENSOR_ALS=y
CONFIG_HID_SENSOR_PROX=y
# CONFIG_JSA1212 is not set
# CONFIG_ROHM_BU27008 is not set
# CONFIG_ROHM_BU27034 is not set
# CONFIG_RPR0521 is not set
# CONFIG_LTR390 is not set
# CONFIG_LTR501 is not set
# CONFIG_LTRF216A is not set
# CONFIG_LV0104CS is not set
# CONFIG_MAX44000 is not set
# CONFIG_MAX44009 is not set
# CONFIG_NOA1305 is not set
# CONFIG_OPT3001 is not set
# CONFIG_OPT4001 is not set
# CONFIG_PA12203001 is not set
# CONFIG_SI1133 is not set
# CONFIG_SI1145 is not set
# CONFIG_STK3310 is not set
# CONFIG_ST_UVIS25 is not set
# CONFIG_TCS3414 is not set
# CONFIG_TCS3472 is not set
# CONFIG_SENSORS_TSL2563 is not set
# CONFIG_TSL2583 is not set
# CONFIG_TSL2591 is not set
# CONFIG_TSL2772 is not set
# CONFIG_TSL4531 is not set
# CONFIG_US5182D is not set
# CONFIG_VCNL4000 is not set
# CONFIG_VCNL4035 is not set
# CONFIG_VEML6030 is not set
# CONFIG_VEML6070 is not set
# CONFIG_VEML6075 is not set
# CONFIG_VL6180 is not set
# CONFIG_ZOPT2201 is not set
# end of Light sensors
#
# Magnetometer sensors
#
# CONFIG_AF8133J is not set
# CONFIG_AK8974 is not set
# CONFIG_AK8975 is not set
# CONFIG_AK09911 is not set
# CONFIG_BMC150_MAGN_I2C is not set
# CONFIG_BMC150_MAGN_SPI is not set
# CONFIG_MAG3110 is not set
CONFIG_HID_SENSOR_MAGNETOMETER_3D=y
# CONFIG_MMC35240 is not set
# CONFIG_IIO_ST_MAGN_3AXIS is not set
# CONFIG_SENSORS_HMC5843_I2C is not set
# CONFIG_SENSORS_HMC5843_SPI is not set
# CONFIG_SENSORS_RM3100_I2C is not set
# CONFIG_SENSORS_RM3100_SPI is not set
# CONFIG_TI_TMAG5273 is not set
# CONFIG_YAMAHA_YAS530 is not set
# end of Magnetometer sensors
#
# Multiplexers
#
# CONFIG_IIO_MUX is not set
# end of Multiplexers
#
# Inclinometer sensors
#
CONFIG_HID_SENSOR_INCLINOMETER_3D=y
CONFIG_HID_SENSOR_DEVICE_ROTATION=y
# end of Inclinometer sensors
#
# Triggers - standalone
#
# CONFIG_IIO_INTERRUPT_TRIGGER is not set
# CONFIG_IIO_SYSFS_TRIGGER is not set
# end of Triggers - standalone
#
# Linear and angular position sensors
#
# CONFIG_HID_SENSOR_CUSTOM_INTEL_HINGE is not set
# end of Linear and angular position sensors
#
# Digital potentiometers
#
# CONFIG_AD5110 is not set
# CONFIG_AD5272 is not set
# CONFIG_DS1803 is not set
# CONFIG_MAX5432 is not set
# CONFIG_MAX5481 is not set
# CONFIG_MAX5487 is not set
# CONFIG_MCP4018 is not set
# CONFIG_MCP4131 is not set
# CONFIG_MCP4531 is not set
# CONFIG_MCP41010 is not set
# CONFIG_TPL0102 is not set
# CONFIG_X9250 is not set
# end of Digital potentiometers
#
# Digital potentiostats
#
# CONFIG_LMP91000 is not set
# end of Digital potentiostats
#
# Pressure sensors
#
# CONFIG_ABP060MG is not set
# CONFIG_ROHM_BM1390 is not set
# CONFIG_BMP280 is not set
# CONFIG_DLHL60D is not set
# CONFIG_DPS310 is not set
CONFIG_HID_SENSOR_PRESS=y
# CONFIG_HP03 is not set
# CONFIG_HSC030PA is not set
# CONFIG_ICP10100 is not set
# CONFIG_MPL115_I2C is not set
# CONFIG_MPL115_SPI is not set
# CONFIG_MPL3115 is not set
# CONFIG_MPRLS0025PA is not set
# CONFIG_MS5611 is not set
# CONFIG_MS5637 is not set
# CONFIG_IIO_ST_PRESS is not set
# CONFIG_T5403 is not set
# CONFIG_HP206C is not set
# CONFIG_ZPA2326 is not set
# end of Pressure sensors
#
# Lightning sensors
#
# CONFIG_AS3935 is not set
# end of Lightning sensors
#
# Proximity and distance sensors
#
# CONFIG_IRSD200 is not set
# CONFIG_ISL29501 is not set
# CONFIG_LIDAR_LITE_V2 is not set
# CONFIG_MB1232 is not set
# CONFIG_PING is not set
# CONFIG_RFD77402 is not set
# CONFIG_SRF04 is not set
# CONFIG_SX9310 is not set
# CONFIG_SX9324 is not set
# CONFIG_SX9360 is not set
# CONFIG_SX9500 is not set
# CONFIG_SRF08 is not set
# CONFIG_VCNL3020 is not set
# CONFIG_VL53L0X_I2C is not set
# end of Proximity and distance sensors
#
# Resolver to digital converters
#
# CONFIG_AD2S90 is not set
# CONFIG_AD2S1200 is not set
# CONFIG_AD2S1210 is not set
# end of Resolver to digital converters
#
# Temperature sensors
#
# CONFIG_LTC2983 is not set
# CONFIG_MAXIM_THERMOCOUPLE is not set
CONFIG_HID_SENSOR_TEMP=y
# CONFIG_MLX90614 is not set
# CONFIG_MLX90632 is not set
# CONFIG_MLX90635 is not set
# CONFIG_TMP006 is not set
# CONFIG_TMP007 is not set
# CONFIG_TMP117 is not set
# CONFIG_TSYS01 is not set
# CONFIG_TSYS02D is not set
# CONFIG_MAX30208 is not set
# CONFIG_MAX31856 is not set
# CONFIG_MAX31865 is not set
# CONFIG_MCP9600 is not set
# end of Temperature sensors
# CONFIG_NTB is not set
# CONFIG_PWM is not set
#
# IRQ chip support
#
CONFIG_IRQCHIP=y
# CONFIG_AL_FIC is not set
# CONFIG_XILINX_INTC is not set
# end of IRQ chip support
# CONFIG_IPACK_BUS is not set
CONFIG_RESET_CONTROLLER=y
# CONFIG_RESET_GPIO is not set
# CONFIG_RESET_INTEL_GW is not set
# CONFIG_RESET_SIMPLE is not set
# CONFIG_RESET_TI_SYSCON is not set
# CONFIG_RESET_TI_TPS380X is not set
#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_USB_LGM_PHY is not set
# CONFIG_PHY_CAN_TRANSCEIVER is not set
#
# PHY drivers for Broadcom platforms
#
# CONFIG_BCM_KONA_USB2_PHY is not set
# end of PHY drivers for Broadcom platforms
# CONFIG_PHY_CADENCE_TORRENT is not set
# CONFIG_PHY_CADENCE_DPHY is not set
# CONFIG_PHY_CADENCE_DPHY_RX is not set
# CONFIG_PHY_CADENCE_SIERRA is not set
# CONFIG_PHY_CADENCE_SALVO is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_LAN966X_SERDES is not set
CONFIG_PHY_CPCAP_USB=y
# CONFIG_PHY_MAPPHONE_MDM6600 is not set
# CONFIG_PHY_OCELOT_SERDES is not set
CONFIG_PHY_QCOM_USB_HS=y
CONFIG_PHY_QCOM_USB_HSIC=y
CONFIG_PHY_SAMSUNG_USB2=y
CONFIG_PHY_TUSB1210=y
# CONFIG_PHY_INTEL_LGM_COMBO is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem
# CONFIG_POWERCAP is not set
# CONFIG_MCB is not set
#
# Performance monitor support
#
# CONFIG_DWC_PCIE_PMU is not set
# end of Performance monitor support
CONFIG_RAS=y
CONFIG_USB4=y
# CONFIG_USB4_DEBUGFS_WRITE is not set
# CONFIG_USB4_DMA_TEST is not set
#
# Android
#
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDERFS=y
CONFIG_ANDROID_BINDER_DEVICES="binder0,binder1"
# CONFIG_ANDROID_BINDER_IPC_SELFTEST is not set
# end of Android
CONFIG_LIBNVDIMM=y
CONFIG_BLK_DEV_PMEM=y
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=y
CONFIG_BTT=y
CONFIG_ND_PFN=y
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_OF_PMEM=y
CONFIG_NVDIMM_KEYS=y
# CONFIG_NVDIMM_SECURITY_TEST is not set
CONFIG_DAX=y
CONFIG_DEV_DAX=y
# CONFIG_DEV_DAX_PMEM is not set
# CONFIG_DEV_DAX_KMEM is not set
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
CONFIG_NVMEM_LAYOUTS=y
#
# Layout Types
#
# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set
# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set
# end of Layout Types
# CONFIG_NVMEM_RMEM is not set
# CONFIG_NVMEM_U_BOOT_ENV is not set
#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support
# CONFIG_FPGA is not set
# CONFIG_FSI is not set
# CONFIG_TEE is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
CONFIG_COUNTER=y
# CONFIG_INTEL_QEP is not set
# CONFIG_INTERRUPT_CNT is not set
CONFIG_MOST=y
# CONFIG_MOST_USB_HDM is not set
# CONFIG_MOST_CDEV is not set
# CONFIG_MOST_SND is not set
# CONFIG_PECI is not set
# CONFIG_HTE is not set
# end of Device Drivers
#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_VALIDATE_FS_PARSER=y
CONFIG_FS_IOMAP=y
CONFIG_FS_STACK=y
CONFIG_BUFFER_HEAD=y
CONFIG_LEGACY_DIRECT_IO=y
# CONFIG_EXT2_FS is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
CONFIG_REISERFS_FS=y
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=y
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
CONFIG_JFS_DEBUG=y
# CONFIG_JFS_STATISTICS is not set
CONFIG_XFS_FS=y
# CONFIG_XFS_SUPPORT_V4 is not set
# CONFIG_XFS_SUPPORT_ASCII_CI is not set
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
# CONFIG_XFS_ONLINE_SCRUB is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
CONFIG_GFS2_FS=y
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=y
CONFIG_OCFS2_FS_O2CB=y
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=y
CONFIG_OCFS2_FS_STATS=y
# CONFIG_OCFS2_DEBUG_MASKLOG is not set
CONFIG_OCFS2_DEBUG_FS=y
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
CONFIG_BTRFS_ASSERT=y
CONFIG_BTRFS_FS_REF_VERIFY=y
CONFIG_NILFS2_FS=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_STAT_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_POSIX_ACL=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_CHECK_FS=y
CONFIG_F2FS_FAULT_INJECTION=y
CONFIG_F2FS_FS_COMPRESSION=y
CONFIG_F2FS_FS_LZO=y
CONFIG_F2FS_FS_LZORLE=y
CONFIG_F2FS_FS_LZ4=y
CONFIG_F2FS_FS_LZ4HC=y
CONFIG_F2FS_FS_ZSTD=y
# CONFIG_F2FS_IOSTAT is not set
# CONFIG_F2FS_UNFAIR_RWSEM is not set
# CONFIG_BCACHEFS_FS is not set
CONFIG_ZONEFS_FS=y
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_ALGS=y
# CONFIG_FS_ENCRYPTION_INLINE_CRYPT is not set
CONFIG_FS_VERITY=y
CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=y
CONFIG_CUSE=y
CONFIG_VIRTIO_FS=y
CONFIG_FUSE_DAX=y
CONFIG_FUSE_PASSTHROUGH=y
CONFIG_OVERLAY_FS=y
CONFIG_OVERLAY_FS_REDIRECT_DIR=y
CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
CONFIG_OVERLAY_FS_INDEX=y
# CONFIG_OVERLAY_FS_NFS_EXPORT is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set
CONFIG_OVERLAY_FS_DEBUG=y
#
# Caches
#
CONFIG_NETFS_SUPPORT=y
# CONFIG_NETFS_STATS is not set
CONFIG_FSCACHE=y
# CONFIG_FSCACHE_STATS is not set
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_CACHEFILES=y
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_ERROR_INJECTION is not set
# CONFIG_CACHEFILES_ONDEMAND is not set
# end of Caches
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=y
# end of CD-ROM/DVD Filesystems
#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_FAT_DEFAULT_UTF8 is not set
CONFIG_EXFAT_FS=y
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
CONFIG_NTFS3_FS=y
# CONFIG_NTFS3_64BIT_CLUSTER is not set
CONFIG_NTFS3_LZX_XPRESS=y
CONFIG_NTFS3_FS_POSIX_ACL=y
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_TMPFS_QUOTA=y
CONFIG_HUGETLBFS=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_HUGETLB_PAGE=y
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
# end of Pseudo filesystems
CONFIG_MISC_FILESYSTEMS=y
CONFIG_ORANGEFS_FS=y
CONFIG_ADFS_FS=y
# CONFIG_ADFS_FS_RW is not set
CONFIG_AFFS_FS=y
CONFIG_ECRYPT_FS=y
CONFIG_ECRYPT_FS_MESSAGING=y
CONFIG_HFS_FS=y
CONFIG_HFSPLUS_FS=y
CONFIG_BEFS_FS=y
# CONFIG_BEFS_DEBUG is not set
CONFIG_BFS_FS=y
CONFIG_EFS_FS=y
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
# CONFIG_JFFS2_FS_WBUF_VERIFY is not set
CONFIG_JFFS2_SUMMARY=y
CONFIG_JFFS2_FS_XATTR=y
CONFIG_JFFS2_FS_POSIX_ACL=y
CONFIG_JFFS2_FS_SECURITY=y
CONFIG_JFFS2_COMPRESSION_OPTIONS=y
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_LZO=y
CONFIG_JFFS2_RTIME=y
CONFIG_JFFS2_RUBIN=y
# CONFIG_JFFS2_CMODE_NONE is not set
CONFIG_JFFS2_CMODE_PRIORITY=y
# CONFIG_JFFS2_CMODE_SIZE is not set
# CONFIG_JFFS2_CMODE_FAVOURLZO is not set
CONFIG_UBIFS_FS=y
CONFIG_UBIFS_FS_ADVANCED_COMPR=y
CONFIG_UBIFS_FS_LZO=y
CONFIG_UBIFS_FS_ZLIB=y
CONFIG_UBIFS_FS_ZSTD=y
CONFIG_UBIFS_ATIME_SUPPORT=y
CONFIG_UBIFS_FS_XATTR=y
CONFIG_UBIFS_FS_SECURITY=y
# CONFIG_UBIFS_FS_AUTHENTICATION is not set
CONFIG_CRAMFS=y
CONFIG_CRAMFS_BLOCKDEV=y
CONFIG_CRAMFS_MTD=y
CONFIG_SQUASHFS=y
# CONFIG_SQUASHFS_FILE_CACHE is not set
CONFIG_SQUASHFS_FILE_DIRECT=y
CONFIG_SQUASHFS_DECOMP_SINGLE=y
# CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT is not set
CONFIG_SQUASHFS_COMPILE_DECOMP_SINGLE=y
# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI is not set
# CONFIG_SQUASHFS_COMPILE_DECOMP_MULTI_PERCPU is not set
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
CONFIG_SQUASHFS_LZ4=y
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
CONFIG_SQUASHFS_ZSTD=y
CONFIG_SQUASHFS_4K_DEVBLK_SIZE=y
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
CONFIG_VXFS_FS=y
CONFIG_MINIX_FS=y
CONFIG_OMFS_FS=y
CONFIG_HPFS_FS=y
CONFIG_QNX4FS_FS=y
CONFIG_QNX6FS_FS=y
# CONFIG_QNX6FS_DEBUG is not set
CONFIG_ROMFS_FS=y
# CONFIG_ROMFS_BACKED_BY_BLOCK is not set
# CONFIG_ROMFS_BACKED_BY_MTD is not set
CONFIG_ROMFS_BACKED_BY_BOTH=y
CONFIG_ROMFS_ON_BLOCK=y
CONFIG_ROMFS_ON_MTD=y
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFAULT_KMSG_BYTES=10240
CONFIG_PSTORE_COMPRESS=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_PSTORE_BLK is not set
CONFIG_SYSV_FS=y
CONFIG_UFS_FS=y
CONFIG_UFS_FS_WRITE=y
# CONFIG_UFS_DEBUG is not set
CONFIG_EROFS_FS=y
# CONFIG_EROFS_FS_DEBUG is not set
CONFIG_EROFS_FS_XATTR=y
CONFIG_EROFS_FS_POSIX_ACL=y
CONFIG_EROFS_FS_SECURITY=y
CONFIG_EROFS_FS_ZIP=y
# CONFIG_EROFS_FS_ZIP_LZMA is not set
# CONFIG_EROFS_FS_ZIP_DEFLATE is not set
# CONFIG_EROFS_FS_ONDEMAND is not set
# CONFIG_EROFS_FS_PCPU_KTHREAD is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=y
CONFIG_PNFS_BLOCK=y
CONFIG_PNFS_FLEXFILE_LAYOUT=y
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
CONFIG_NFS_FSCACHE=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFS_DISABLE_UDP_SUPPORT is not set
CONFIG_NFS_V4_2_READ_PLUS=y
CONFIG_NFSD=y
# CONFIG_NFSD_V2 is not set
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
CONFIG_NFSD_BLOCKLAYOUT=y
CONFIG_NFSD_SCSILAYOUT=y
CONFIG_NFSD_FLEXFILELAYOUT=y
CONFIG_NFSD_V4_2_INTER_SSC=y
CONFIG_NFSD_V4_SECURITY_LABEL=y
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=y
# CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA1 is not set
# CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_CAMELLIA is not set
# CONFIG_RPCSEC_GSS_KRB5_ENCTYPES_AES_SHA2 is not set
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_SUNRPC_XPRT_RDMA is not set
CONFIG_CEPH_FS=y
CONFIG_CEPH_FSCACHE=y
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=y
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_CIFS_SWN_UPCALL=y
CONFIG_CIFS_SMB_DIRECT=y
CONFIG_CIFS_FSCACHE=y
# CONFIG_CIFS_ROOT is not set
# CONFIG_SMB_SERVER is not set
CONFIG_SMBFS=y
# CONFIG_CODA_FS is not set
CONFIG_AFS_FS=y
# CONFIG_AFS_DEBUG is not set
CONFIG_AFS_FSCACHE=y
# CONFIG_AFS_DEBUG_CURSOR is not set
CONFIG_9P_FS=y
CONFIG_9P_FSCACHE=y
CONFIG_9P_FS_POSIX_ACL=y
CONFIG_9P_FS_SECURITY=y
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y
CONFIG_NLS_CODEPAGE_850=y
CONFIG_NLS_CODEPAGE_852=y
CONFIG_NLS_CODEPAGE_855=y
CONFIG_NLS_CODEPAGE_857=y
CONFIG_NLS_CODEPAGE_860=y
CONFIG_NLS_CODEPAGE_861=y
CONFIG_NLS_CODEPAGE_862=y
CONFIG_NLS_CODEPAGE_863=y
CONFIG_NLS_CODEPAGE_864=y
CONFIG_NLS_CODEPAGE_865=y
CONFIG_NLS_CODEPAGE_866=y
CONFIG_NLS_CODEPAGE_869=y
CONFIG_NLS_CODEPAGE_936=y
CONFIG_NLS_CODEPAGE_950=y
CONFIG_NLS_CODEPAGE_932=y
CONFIG_NLS_CODEPAGE_949=y
CONFIG_NLS_CODEPAGE_874=y
CONFIG_NLS_ISO8859_8=y
CONFIG_NLS_CODEPAGE_1250=y
CONFIG_NLS_CODEPAGE_1251=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=y
CONFIG_NLS_ISO8859_3=y
CONFIG_NLS_ISO8859_4=y
CONFIG_NLS_ISO8859_5=y
CONFIG_NLS_ISO8859_6=y
CONFIG_NLS_ISO8859_7=y
CONFIG_NLS_ISO8859_9=y
CONFIG_NLS_ISO8859_13=y
CONFIG_NLS_ISO8859_14=y
CONFIG_NLS_ISO8859_15=y
CONFIG_NLS_KOI8_R=y
CONFIG_NLS_KOI8_U=y
CONFIG_NLS_MAC_ROMAN=y
CONFIG_NLS_MAC_CELTIC=y
CONFIG_NLS_MAC_CENTEURO=y
CONFIG_NLS_MAC_CROATIAN=y
CONFIG_NLS_MAC_CYRILLIC=y
CONFIG_NLS_MAC_GAELIC=y
CONFIG_NLS_MAC_GREEK=y
CONFIG_NLS_MAC_ICELAND=y
CONFIG_NLS_MAC_INUIT=y
CONFIG_NLS_MAC_ROMANIAN=y
CONFIG_NLS_MAC_TURKISH=y
CONFIG_NLS_UTF8=y
CONFIG_NLS_UCS2_UTILS=y
CONFIG_DLM=y
# CONFIG_DLM_DEBUG is not set
CONFIG_UNICODE=y
# CONFIG_UNICODE_NORMALIZATION_SELFTEST is not set
CONFIG_IO_WQ=y
# end of File systems
#
# Security options
#
CONFIG_KEYS=y
CONFIG_KEYS_REQUEST_CACHE=y
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_BIG_KEYS=y
CONFIG_TRUSTED_KEYS=y
# CONFIG_TRUSTED_KEYS_TPM is not set
#
# No trust source selected!
#
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_USER_DECRYPTED_DATA is not set
CONFIG_KEY_DH_OPERATIONS=y
CONFIG_KEY_NOTIFICATIONS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_INFINIBAND=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
# CONFIG_INTEL_TXT is not set
CONFIG_LSM_MMAP_MIN_ADDR=65536
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SELINUX_DEBUG is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
# CONFIG_SECURITY_LANDLOCK is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
# CONFIG_IMA_DEFAULT_HASH_SHA1 is not set
CONFIG_IMA_DEFAULT_HASH_SHA256=y
# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
# CONFIG_IMA_DEFAULT_HASH_WP512 is not set
CONFIG_IMA_DEFAULT_HASH="sha256"
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_READ_POLICY=y
CONFIG_IMA_APPRAISE=y
# CONFIG_IMA_ARCH_POLICY is not set
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
# CONFIG_IMA_APPRAISE_BOOTPARAM is not set
CONFIG_IMA_APPRAISE_MODSIG=y
# CONFIG_IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY is not set
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
# CONFIG_IMA_DISABLE_HTABLE is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
CONFIG_EVM_ADD_XATTRS=y
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="landlock,lockdown,safesetid,integrity,selinux,bpf"
#
# Kernel hardening options
#
#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y
# CONFIG_ZERO_CALL_USED_REGS is not set
# end of Memory initialization
#
# Hardening of kernel data structures
#
CONFIG_LIST_HARDENED=y
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Hardening of kernel data structures
CONFIG_RANDSTRUCT_NONE=y
# end of Kernel hardening options
# end of Security options
CONFIG_XOR_BLOCKS=y
CONFIG_ASYNC_CORE=y
CONFIG_ASYNC_MEMCPY=y
CONFIG_ASYNC_XOR=y
CONFIG_ASYNC_PQ=y
CONFIG_ASYNC_RAID6_RECOV=y
CONFIG_CRYPTO=y
#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SIG2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=y
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=y
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_SIMD=y
CONFIG_CRYPTO_ENGINE=y
# end of Crypto core or helper
#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=y
# CONFIG_CRYPTO_DH_RFC7919_GROUPS is not set
CONFIG_CRYPTO_ECC=y
CONFIG_CRYPTO_ECDH=y
# CONFIG_CRYPTO_ECDSA is not set
CONFIG_CRYPTO_ECRDSA=y
CONFIG_CRYPTO_SM2=y
CONFIG_CRYPTO_CURVE25519=y
# end of Public-key cryptography
#
# Block ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_TI=y
CONFIG_CRYPTO_ANUBIS=y
CONFIG_CRYPTO_ARIA=y
CONFIG_CRYPTO_BLOWFISH=y
CONFIG_CRYPTO_BLOWFISH_COMMON=y
CONFIG_CRYPTO_CAMELLIA=y
CONFIG_CRYPTO_CAST_COMMON=y
CONFIG_CRYPTO_CAST5=y
CONFIG_CRYPTO_CAST6=y
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_FCRYPT=y
CONFIG_CRYPTO_KHAZAD=y
CONFIG_CRYPTO_SEED=y
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_SM4=y
CONFIG_CRYPTO_SM4_GENERIC=y
CONFIG_CRYPTO_TEA=y
CONFIG_CRYPTO_TWOFISH=y
CONFIG_CRYPTO_TWOFISH_COMMON=y
# end of Block ciphers
#
# Length-preserving ciphers and modes
#
CONFIG_CRYPTO_ADIANTUM=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_CHACHA20=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_HCTR2=y
CONFIG_CRYPTO_KEYWRAP=y
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_PCBC=y
CONFIG_CRYPTO_XCTR=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_NHPOLY1305=y
# end of Length-preserving ciphers and modes
#
# AEAD (authenticated encryption with associated data) ciphers
#
CONFIG_CRYPTO_AEGIS128=y
CONFIG_CRYPTO_CHACHA20POLY1305=y
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_GENIV=y
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=y
CONFIG_CRYPTO_ESSIV=y
# end of AEAD (authenticated encryption with associated data) ciphers
#
# Hashes, digests, and MACs
#
CONFIG_CRYPTO_BLAKE2B=y
CONFIG_CRYPTO_CMAC=y
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=y
CONFIG_CRYPTO_POLYVAL=y
CONFIG_CRYPTO_POLY1305=y
CONFIG_CRYPTO_RMD160=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_SHA3=y
CONFIG_CRYPTO_SM3=y
# CONFIG_CRYPTO_SM3_GENERIC is not set
CONFIG_CRYPTO_STREEBOG=y
CONFIG_CRYPTO_VMAC=y
CONFIG_CRYPTO_WP512=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_XXHASH=y
# end of Hashes, digests, and MACs
#
# CRCs (cyclic redundancy checks)
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32=y
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRC64_ROCKSOFT=y
# end of CRCs (cyclic redundancy checks)
#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_842=y
CONFIG_CRYPTO_LZ4=y
CONFIG_CRYPTO_LZ4HC=y
CONFIG_CRYPTO_ZSTD=y
# end of Compression
#
# Random number generation
#
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS=64
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE=32
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
CONFIG_CRYPTO_KDF800108_CTR=y
# end of Random number generation
#
# Userspace interface
#
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
# end of Userspace interface
CONFIG_CRYPTO_HASH_INFO=y
#
# Accelerated Cryptographic Algorithms for CPU (x86)
#
CONFIG_CRYPTO_CURVE25519_X86=y
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_BLOWFISH_X86_64=y
CONFIG_CRYPTO_CAMELLIA_X86_64=y
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=y
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=y
CONFIG_CRYPTO_CAST5_AVX_X86_64=y
CONFIG_CRYPTO_CAST6_AVX_X86_64=y
CONFIG_CRYPTO_DES3_EDE_X86_64=y
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=y
CONFIG_CRYPTO_SERPENT_AVX_X86_64=y
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=y
CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64=y
CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64=y
CONFIG_CRYPTO_TWOFISH_X86_64=y
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=y
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=y
CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64=y
# CONFIG_CRYPTO_ARIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_ARIA_GFNI_AVX512_X86_64 is not set
CONFIG_CRYPTO_CHACHA20_X86_64=y
CONFIG_CRYPTO_AEGIS128_AESNI_SSE2=y
CONFIG_CRYPTO_NHPOLY1305_SSE2=y
CONFIG_CRYPTO_NHPOLY1305_AVX2=y
CONFIG_CRYPTO_BLAKE2S_X86=y
CONFIG_CRYPTO_POLYVAL_CLMUL_NI=y
CONFIG_CRYPTO_POLY1305_X86_64=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=y
CONFIG_CRYPTO_SM3_AVX_X86_64=y
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=y
CONFIG_CRYPTO_CRC32C_INTEL=y
CONFIG_CRYPTO_CRC32_PCLMUL=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=y
# end of Accelerated Cryptographic Algorithms for CPU (x86)
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=y
CONFIG_CRYPTO_DEV_PADLOCK_AES=y
CONFIG_CRYPTO_DEV_PADLOCK_SHA=y
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=y
# CONFIG_CRYPTO_DEV_SP_CCP is not set
# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
CONFIG_CRYPTO_DEV_QAT=y
CONFIG_CRYPTO_DEV_QAT_DH895xCC=y
CONFIG_CRYPTO_DEV_QAT_C3XXX=y
CONFIG_CRYPTO_DEV_QAT_C62X=y
# CONFIG_CRYPTO_DEV_QAT_4XXX is not set
# CONFIG_CRYPTO_DEV_QAT_420XX is not set
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=y
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=y
CONFIG_CRYPTO_DEV_QAT_C62XVF=y
# CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION is not set
CONFIG_CRYPTO_DEV_VIRTIO=y
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_CCREE is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS8_PRIVATE_KEY_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
CONFIG_PKCS7_TEST_KEY=y
CONFIG_SIGNED_PE_FILE_VERIFICATION=y
# CONFIG_FIPS_SIGNATURE_SELFTEST is not set
#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_MODULE_SIG_KEY_TYPE_RSA=y
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
CONFIG_SECONDARY_TRUSTED_KEYRING=y
# CONFIG_SECONDARY_TRUSTED_KEYRING_SIGNED_BY_BUILTIN is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
# end of Certificates for signature checking
CONFIG_BINARY_PRINTF=y
#
# Library routines
#
CONFIG_RAID6_PQ=y
# CONFIG_RAID6_PQ_BENCHMARK is not set
CONFIG_LINEAR_RANGES=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
# CONFIG_CORDIC is not set
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_UTILS=y
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=y
CONFIG_CRYPTO_LIB_GF128MUL=y
CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S=y
CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y
CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=y
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=y
CONFIG_CRYPTO_LIB_CHACHA=y
CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519=y
CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=y
CONFIG_CRYPTO_LIB_CURVE25519=y
CONFIG_CRYPTO_LIB_DES=y
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=y
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=y
CONFIG_CRYPTO_LIB_POLY1305=y
CONFIG_CRYPTO_LIB_CHACHA20POLY1305=y
CONFIG_CRYPTO_LIB_SHA1=y
CONFIG_CRYPTO_LIB_SHA256=y
# end of Crypto library routines
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC64_ROCKSOFT=y
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC64=y
CONFIG_CRC4=y
CONFIG_CRC7=y
CONFIG_LIBCRC32C=y
CONFIG_CRC8=y
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_842_COMPRESS=y
CONFIG_842_DECOMPRESS=y
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_COMPRESS=y
CONFIG_LZ4HC_COMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMMON=y
CONFIG_ZSTD_COMPRESS=y
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
# CONFIG_XZ_DEC_MICROLZMA is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=y
CONFIG_TEXTSEARCH_BM=y
CONFIG_TEXTSEARCH_FSM=y
CONFIG_INTERVAL_TREE=y
CONFIG_INTERVAL_TREE_SPAN_ITER=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_CLOSURES=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_FLAGS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DMA_DECLARE_COHERENT=y
CONFIG_SWIOTLB=y
# CONFIG_SWIOTLB_DYNAMIC is not set
# CONFIG_DMA_RESTRICTED_POOL is not set
CONFIG_DMA_CMA=y
# CONFIG_DMA_NUMA_CMA is not set
#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=0
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
CONFIG_CHECK_SIGNATURE=y
# CONFIG_CPUMASK_OFFSTACK is not set
# CONFIG_FORCE_NR_CPUS is not set
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_LIBFDT=y
CONFIG_OID_REGISTRY=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_STACKDEPOT=y
CONFIG_STACKDEPOT_ALWAYS_INIT=y
CONFIG_STACKDEPOT_MAX_FRAMES=64
CONFIG_REF_TRACKER=y
CONFIG_SBITMAP=y
# CONFIG_LWQ_TEST is not set
# end of Library routines
CONFIG_FIRMWARE_TABLE=y
#
# Kernel hacking
#
#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_PRINTK_CALLER=y
# CONFIG_STACKTRACE_BUILD_ID is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
# CONFIG_BOOT_PRINTK_DELAY is not set
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y
#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_AS_HAS_NON_CONST_ULEB128=y
# CONFIG_DEBUG_INFO_NONE is not set
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_DEBUG_INFO_DWARF5 is not set
# CONFIG_DEBUG_INFO_REDUCED is not set
CONFIG_DEBUG_INFO_COMPRESSED_NONE=y
# CONFIG_DEBUG_INFO_COMPRESSED_ZLIB is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
# CONFIG_DEBUG_INFO_BTF is not set
CONFIG_PAHOLE_HAS_SPLIT_BTF=y
CONFIG_PAHOLE_HAS_LANG_EXCLUDE=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set
CONFIG_OBJTOOL=y
# CONFIG_VMLINUX_MAP is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options
#
# Generic Kernel Debugging Instruments
#
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN=y
CONFIG_UBSAN=y
# CONFIG_UBSAN_TRAP is not set
CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
CONFIG_UBSAN_BOUNDS=y
CONFIG_UBSAN_BOUNDS_STRICT=y
CONFIG_UBSAN_SHIFT=y
# CONFIG_UBSAN_DIV_ZERO is not set
CONFIG_UBSAN_SIGNED_WRAP=y
# CONFIG_UBSAN_BOOL is not set
# CONFIG_UBSAN_ENUM is not set
# CONFIG_UBSAN_ALIGNMENT is not set
# CONFIG_TEST_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_HAVE_KCSAN_COMPILER=y
# end of Generic Kernel Debugging Instruments
#
# Networking Debugging
#
CONFIG_NET_DEV_REFCNT_TRACKER=y
CONFIG_NET_NS_REFCNT_TRACKER=y
CONFIG_DEBUG_NET=y
# end of Networking Debugging
#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_DEBUG_ON is not set
CONFIG_PAGE_OWNER=y
CONFIG_PAGE_TABLE_CHECK=y
CONFIG_PAGE_TABLE_CHECK_ENFORCED=y
CONFIG_PAGE_POISONING=y
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
CONFIG_DEBUG_WX=y
CONFIG_GENERIC_PTDUMP=y
CONFIG_PTDUMP_CORE=y
CONFIG_PTDUMP_DEBUGFS=y
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_PER_VMA_LOCK_STATS is not set
CONFIG_DEBUG_OBJECTS=y
# CONFIG_DEBUG_OBJECTS_SELFTEST is not set
CONFIG_DEBUG_OBJECTS_FREE=y
CONFIG_DEBUG_OBJECTS_TIMERS=y
CONFIG_DEBUG_OBJECTS_WORK=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_DEBUG_OBJECTS_PERCPU_COUNTER=y
CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT=1
# CONFIG_SHRINKER_DEBUG is not set
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_SCHED_STACK_END_CHECK=y
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
CONFIG_DEBUG_VM_IRQSOFF=y
CONFIG_DEBUG_VM=y
CONFIG_DEBUG_VM_MAPLE_TREE=y
CONFIG_DEBUG_VM_RB=y
CONFIG_DEBUG_VM_PGFLAGS=y
CONFIG_DEBUG_VM_PGTABLE=y
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_PER_CPU_MAPS=y
CONFIG_DEBUG_KMAP_LOCAL=y
CONFIG_ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP=y
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
CONFIG_KASAN_VMALLOC=y
# CONFIG_KASAN_MODULE_TEST is not set
# CONFIG_KASAN_EXTRA_INFO is not set
CONFIG_HAVE_ARCH_KFENCE=y
CONFIG_KFENCE=y
CONFIG_KFENCE_SAMPLE_INTERVAL=100
CONFIG_KFENCE_NUM_OBJECTS=255
# CONFIG_KFENCE_DEFERRABLE is not set
CONFIG_KFENCE_STATIC_KEYS=y
CONFIG_KFENCE_STRESS_TEST_FAULTS=0
CONFIG_HAVE_ARCH_KMSAN=y
# end of Memory Debugging
# CONFIG_DEBUG_SHIRQ is not set
#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=86400
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_BUDDY=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_HARDLOCKUP_DETECTOR_PREFER_BUDDY is not set
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
# CONFIG_HARDLOCKUP_DETECTOR_BUDDY is not set
# CONFIG_HARDLOCKUP_DETECTOR_ARCH is not set
CONFIG_HARDLOCKUP_DETECTOR_COUNTS_HRTIMER=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=140
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_WQ_WATCHDOG=y
# CONFIG_WQ_CPU_INTENSIVE_REPORT is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs
#
# Scheduler Debugging
#
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging
CONFIG_DEBUG_TIMEKEEPING=y
CONFIG_DEBUG_PREEMPT=y
#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)
CONFIG_NMI_CHECK_CPU=y
CONFIG_DEBUG_IRQFLAGS=y
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_KOBJECT_RELEASE is not set
#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_PLIST=y
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
# CONFIG_DEBUG_CLOSURES is not set
CONFIG_DEBUG_MAPLE_TREE=y
# end of Debug kernel data structures
#
# RCU Debugging
#
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=100
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=21000
# CONFIG_RCU_CPU_STALL_CPUTIME is not set
# CONFIG_RCU_TRACE is not set
CONFIG_RCU_EQS_DEBUG=y
# end of RCU Debugging
# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
# CONFIG_LATENCYTOP is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_RETHOOK=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_OBJTOOL_NOP_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_HAVE_BUILDTIME_MCOUNT_SORT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_STACK_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_PREEMPT_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_HWLAT_TRACER is not set
# CONFIG_OSNOISE_TRACER is not set
# CONFIG_TIMERLAT_TRACER is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_FTRACE_SYSCALLS is not set
# CONFIG_TRACER_SNAPSHOT is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_SYNTH_EVENTS is not set
# CONFIG_USER_EVENTS is not set
# CONFIG_HIST_TRIGGERS is not set
CONFIG_TRACE_EVENT_INJECT=y
# CONFIG_TRACEPOINT_BENCHMARK is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS=y
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_RV is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
# CONFIG_STRICT_DEVMEM is not set
#
# x86 Debugging
#
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_USB_XDBC is not set
# CONFIG_DEBUG_TLBFLUSH is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
CONFIG_X86_DEBUG_FPU=y
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# end of x86 Debugging
#
# Kernel Testing and Coverage
#
# CONFIG_KUNIT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
CONFIG_FAILSLAB=y
CONFIG_FAIL_PAGE_ALLOC=y
CONFIG_FAULT_INJECTION_USERCOPY=y
CONFIG_FAIL_MAKE_REQUEST=y
CONFIG_FAIL_IO_TIMEOUT=y
CONFIG_FAIL_FUTEX=y
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_FAIL_MMC_REQUEST is not set
CONFIG_FAULT_INJECTION_CONFIGFS=y
# CONFIG_FAULT_INJECTION_STACKTRACE_FILTER is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
CONFIG_KCOV=y
CONFIG_KCOV_ENABLE_COMPARISONS=y
CONFIG_KCOV_INSTRUMENT_ALL=y
CONFIG_KCOV_IRQ_AREA_SIZE=0x40000
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_TEST_DHRY is not set
# CONFIG_LKDTM is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_DIV64 is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_TEST_REF_TRACKER is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_SCANF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_MAPLE_TREE is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_TEST_BPF is not set
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_DYNAMIC_DEBUG is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_DEBUG_VIRTUAL is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_TEST_HMM is not set
# CONFIG_TEST_FREE_PAGES is not set
# CONFIG_TEST_CLOCKSOURCE_WATCHDOG is not set
# CONFIG_TEST_OBJPOOL is not set
CONFIG_ARCH_USE_MEMTEST=y
# CONFIG_MEMTEST is not set
# end of Kernel Testing and Coverage
#
# Rust hacking
#
# end of Rust hacking
# end of Kernel hacking
* Re: [syzbot] [crypto?] KMSAN: uninit-value in aes_encrypt (5)
2024-04-28 10:32 4% [syzbot] [crypto?] KMSAN: uninit-value in aes_encrypt (5) syzbot
@ 2024-05-10 4:02 4% ` syzbot
From: syzbot @ 2024-05-10 4:02 UTC (permalink / raw)
To: davem, herbert, linux-crypto, linux-kernel, syzkaller-bugs
syzbot has found a reproducer for the following issue on:
HEAD commit: 45db3ab70092 Merge tag '6.9-rc7-ksmbd-fixes' of git://git...
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=14d9bfdf180000
kernel config: https://syzkaller.appspot.com/x/.config?x=617171361dd3cd47
dashboard link: https://syzkaller.appspot.com/bug?extid=aeb14e2539ffb6d21130
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1617adb8980000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=112f45d4980000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/232e7c2a73a5/disk-45db3ab7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7e9bf7c936ab/vmlinux-45db3ab7.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5e8f98ee02d8/bzImage-45db3ab7.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/fcc88c919ed9/mount_1.gz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+aeb14e2539ffb6d21130@syzkaller.appspotmail.com
fscrypt: AES-256-XTS using implementation "xts(ecb(aes-fixed-time))"
=====================================================
BUG: KMSAN: uninit-value in subshift lib/crypto/aes.c:149 [inline]
BUG: KMSAN: uninit-value in aes_encrypt+0x15cc/0x1db0 lib/crypto/aes.c:282
subshift lib/crypto/aes.c:149 [inline]
aes_encrypt+0x15cc/0x1db0 lib/crypto/aes.c:282
aesti_encrypt+0x7d/0xf0 crypto/aes_ti.c:31
crypto_ecb_crypt crypto/ecb.c:23 [inline]
crypto_ecb_encrypt2+0x18a/0x300 crypto/ecb.c:40
crypto_lskcipher_crypt_sg+0x36b/0x7f0 crypto/lskcipher.c:228
crypto_lskcipher_encrypt_sg+0x8a/0xc0 crypto/lskcipher.c:247
crypto_skcipher_encrypt+0x119/0x1e0 crypto/skcipher.c:669
xts_encrypt+0x3c4/0x550 crypto/xts.c:269
crypto_skcipher_encrypt+0x1a0/0x1e0 crypto/skcipher.c:671
fscrypt_crypt_data_unit+0x4ee/0x8f0 fs/crypto/crypto.c:144
fscrypt_encrypt_pagecache_blocks+0x422/0x900 fs/crypto/crypto.c:207
ext4_bio_write_folio+0x13db/0x2e40 fs/ext4/page-io.c:526
mpage_submit_folio+0x351/0x4a0 fs/ext4/inode.c:1869
mpage_process_page_bufs+0xb92/0xe30 fs/ext4/inode.c:1982
mpage_prepare_extent_to_map+0x1702/0x22c0 fs/ext4/inode.c:2490
ext4_do_writepages+0x1117/0x62e0 fs/ext4/inode.c:2632
ext4_writepages+0x312/0x830 fs/ext4/inode.c:2768
do_writepages+0x427/0xc30 mm/page-writeback.c:2612
filemap_fdatawrite_wbc+0x1d8/0x270 mm/filemap.c:397
__filemap_fdatawrite_range mm/filemap.c:430 [inline]
file_write_and_wait_range+0x1bf/0x370 mm/filemap.c:788
generic_buffers_fsync_noflush+0x84/0x3e0 fs/buffer.c:602
ext4_fsync_nojournal fs/ext4/fsync.c:88 [inline]
ext4_sync_file+0x5ba/0x13a0 fs/ext4/fsync.c:151
vfs_fsync_range+0x20d/0x270 fs/sync.c:188
generic_write_sync include/linux/fs.h:2795 [inline]
ext4_buffered_write_iter+0x9ad/0xaa0 fs/ext4/file.c:305
ext4_file_write_iter+0x208/0x3450
call_write_iter include/linux/fs.h:2110 [inline]
new_sync_write fs/read_write.c:497 [inline]
vfs_write+0xb63/0x1520 fs/read_write.c:590
ksys_write+0x20f/0x4c0 fs/read_write.c:643
__do_sys_write fs/read_write.c:655 [inline]
__se_sys_write fs/read_write.c:652 [inline]
__x64_sys_write+0x93/0xe0 fs/read_write.c:652
x64_sys_call+0x3062/0x3b50 arch/x86/include/generated/asm/syscalls_64.h:2
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Uninit was stored to memory at:
le128_xor include/crypto/b128ops.h:69 [inline]
xts_xor_tweak+0x4ae/0xbf0 crypto/xts.c:123
xts_xor_tweak_pre crypto/xts.c:135 [inline]
xts_encrypt+0x296/0x550 crypto/xts.c:268
crypto_skcipher_encrypt+0x1a0/0x1e0 crypto/skcipher.c:671
fscrypt_crypt_data_unit+0x4ee/0x8f0 fs/crypto/crypto.c:144
fscrypt_encrypt_pagecache_blocks+0x422/0x900 fs/crypto/crypto.c:207
ext4_bio_write_folio+0x13db/0x2e40 fs/ext4/page-io.c:526
mpage_submit_folio+0x351/0x4a0 fs/ext4/inode.c:1869
mpage_process_page_bufs+0xb92/0xe30 fs/ext4/inode.c:1982
mpage_prepare_extent_to_map+0x1702/0x22c0 fs/ext4/inode.c:2490
ext4_do_writepages+0x1117/0x62e0 fs/ext4/inode.c:2632
ext4_writepages+0x312/0x830 fs/ext4/inode.c:2768
do_writepages+0x427/0xc30 mm/page-writeback.c:2612
filemap_fdatawrite_wbc+0x1d8/0x270 mm/filemap.c:397
__filemap_fdatawrite_range mm/filemap.c:430 [inline]
file_write_and_wait_range+0x1bf/0x370 mm/filemap.c:788
generic_buffers_fsync_noflush+0x84/0x3e0 fs/buffer.c:602
ext4_fsync_nojournal fs/ext4/fsync.c:88 [inline]
ext4_sync_file+0x5ba/0x13a0 fs/ext4/fsync.c:151
vfs_fsync_range+0x20d/0x270 fs/sync.c:188
generic_write_sync include/linux/fs.h:2795 [inline]
ext4_buffered_write_iter+0x9ad/0xaa0 fs/ext4/file.c:305
ext4_file_write_iter+0x208/0x3450
call_write_iter include/linux/fs.h:2110 [inline]
new_sync_write fs/read_write.c:497 [inline]
vfs_write+0xb63/0x1520 fs/read_write.c:590
ksys_write+0x20f/0x4c0 fs/read_write.c:643
__do_sys_write fs/read_write.c:655 [inline]
__se_sys_write fs/read_write.c:652 [inline]
__x64_sys_write+0x93/0xe0 fs/read_write.c:652
x64_sys_call+0x3062/0x3b50 arch/x86/include/generated/asm/syscalls_64.h:2
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Uninit was created at:
__alloc_pages+0x9d6/0xe70 mm/page_alloc.c:4598
alloc_pages_mpol+0x299/0x990 mm/mempolicy.c:2264
alloc_pages mm/mempolicy.c:2335 [inline]
folio_alloc+0x1d0/0x230 mm/mempolicy.c:2342
filemap_alloc_folio+0xa6/0x440 mm/filemap.c:984
__filemap_get_folio+0xa10/0x14b0 mm/filemap.c:1926
ext4_write_begin+0x3e5/0x2230 fs/ext4/inode.c:1159
generic_perform_write+0x400/0xc60 mm/filemap.c:3974
ext4_buffered_write_iter+0x564/0xaa0 fs/ext4/file.c:299
ext4_file_write_iter+0x208/0x3450
call_write_iter include/linux/fs.h:2110 [inline]
new_sync_write fs/read_write.c:497 [inline]
vfs_write+0xb63/0x1520 fs/read_write.c:590
ksys_write+0x20f/0x4c0 fs/read_write.c:643
__do_sys_write fs/read_write.c:655 [inline]
__se_sys_write fs/read_write.c:652 [inline]
__x64_sys_write+0x93/0xe0 fs/read_write.c:652
x64_sys_call+0x3062/0x3b50 arch/x86/include/generated/asm/syscalls_64.h:2
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
CPU: 0 PID: 5048 Comm: syz-executor132 Not tainted 6.9.0-rc7-syzkaller-00056-g45db3ab70092 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
=====================================================
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
^ permalink raw reply [relevance 4%]
* [PATCHES part 2 03/10] grow_dev_folio(): we only want ->bd_inode->i_mapping there
@ 2024-05-08 6:44 ` Al Viro
2024-05-08 6:44 ` [PATCHES part 2 05/10] fs/buffer.c: massage the remaining users of ->bd_inode to ->bd_mapping Al Viro
From: Al Viro @ 2024-05-08 6:44 UTC (permalink / raw)
To: linux-fsdevel; +Cc: axboe, brauner, hch
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/20240411145346.2516848-3-viro@zeniv.linux.org.uk
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
fs/buffer.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index d5a0932ae68d..78a4e95ba2f2 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1034,12 +1034,12 @@ static sector_t folio_init_buffers(struct folio *folio,
static bool grow_dev_folio(struct block_device *bdev, sector_t block,
pgoff_t index, unsigned size, gfp_t gfp)
{
- struct inode *inode = bdev->bd_inode;
+ struct address_space *mapping = bdev->bd_mapping;
struct folio *folio;
struct buffer_head *bh;
sector_t end_block = 0;
- folio = __filemap_get_folio(inode->i_mapping, index,
+ folio = __filemap_get_folio(mapping, index,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
if (IS_ERR(folio))
return false;
@@ -1073,10 +1073,10 @@ static bool grow_dev_folio(struct block_device *bdev, sector_t block,
* lock to be atomic wrt __find_get_block(), which does not
* run under the folio lock.
*/
- spin_lock(&inode->i_mapping->i_private_lock);
+ spin_lock(&mapping->i_private_lock);
link_dev_buffers(folio, bh);
end_block = folio_init_buffers(folio, bdev, size);
- spin_unlock(&inode->i_mapping->i_private_lock);
+ spin_unlock(&mapping->i_private_lock);
unlock:
folio_unlock(folio);
folio_put(folio);
--
2.39.2
* [PATCHES part 2 05/10] fs/buffer.c: massage the remaining users of ->bd_inode to ->bd_mapping
2024-05-08 6:44 ` [PATCHES part 2 03/10] grow_dev_folio(): we only want ->bd_inode->i_mapping there Al Viro
@ 2024-05-08 6:44 ` Al Viro
From: Al Viro @ 2024-05-08 6:44 UTC (permalink / raw)
To: linux-fsdevel; +Cc: axboe, brauner, hch
Both remaining users are only after ->i_blkbits, and both want the address_space in question anyway.
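Purely as an illustration of the arithmetic the two converted call sites share (this sketch is not part of the patch, and the helper name is made up), the block-number-to-page-index conversion done with `bd_mapping->host->i_blkbits` amounts to:

```c
#include <assert.h>

/* Illustrative sketch only: convert a device block number to a
 * page-cache index, given log2 of the block size (blkbits) and
 * log2 of the page size (page_shift). */
static unsigned long block_to_index(unsigned long long block,
				    unsigned int blkbits,
				    unsigned int page_shift)
{
	/* byte offset of the block, then divide by the page size */
	return (unsigned long)((block << blkbits) >> page_shift);
}
```

With 512-byte blocks (blkbits = 9) and 4 KiB pages (page_shift = 12), blocks 0-7 all land in page index 0 and block 8 lands in index 1.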
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
fs/buffer.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 78a4e95ba2f2..ac29e0f221bc 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -189,8 +189,8 @@ EXPORT_SYMBOL(end_buffer_write_sync);
static struct buffer_head *
__find_get_block_slow(struct block_device *bdev, sector_t block)
{
- struct inode *bd_inode = bdev->bd_inode;
- struct address_space *bd_mapping = bd_inode->i_mapping;
+ struct address_space *bd_mapping = bdev->bd_mapping;
+ const int blkbits = bd_mapping->host->i_blkbits;
struct buffer_head *ret = NULL;
pgoff_t index;
struct buffer_head *bh;
@@ -199,7 +199,7 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
int all_mapped = 1;
static DEFINE_RATELIMIT_STATE(last_warned, HZ, 1);
- index = ((loff_t)block << bd_inode->i_blkbits) / PAGE_SIZE;
+ index = ((loff_t)block << blkbits) / PAGE_SIZE;
folio = __filemap_get_folio(bd_mapping, index, FGP_ACCESSED, 0);
if (IS_ERR(folio))
goto out;
@@ -233,7 +233,7 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
(unsigned long long)block,
(unsigned long long)bh->b_blocknr,
bh->b_state, bh->b_size, bdev,
- 1 << bd_inode->i_blkbits);
+ 1 << blkbits);
}
out_unlock:
spin_unlock(&bd_mapping->i_private_lock);
@@ -1696,16 +1696,16 @@ EXPORT_SYMBOL(create_empty_buffers);
*/
void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
{
- struct inode *bd_inode = bdev->bd_inode;
- struct address_space *bd_mapping = bd_inode->i_mapping;
+ struct address_space *bd_mapping = bdev->bd_mapping;
+ const int blkbits = bd_mapping->host->i_blkbits;
struct folio_batch fbatch;
- pgoff_t index = ((loff_t)block << bd_inode->i_blkbits) / PAGE_SIZE;
+ pgoff_t index = ((loff_t)block << blkbits) / PAGE_SIZE;
pgoff_t end;
int i, count;
struct buffer_head *bh;
struct buffer_head *head;
- end = ((loff_t)(block + len - 1) << bd_inode->i_blkbits) / PAGE_SIZE;
+ end = ((loff_t)(block + len - 1) << blkbits) / PAGE_SIZE;
folio_batch_init(&fbatch);
while (filemap_get_folios(bd_mapping, &index, end, &fbatch)) {
count = folio_batch_count(&fbatch);
--
2.39.2
* Re: [PATCH] ext4: remove the redundant folio_wait_stable()
2024-04-19 2:30 [PATCH] ext4: remove the redundant folio_wait_stable() Zhang Yi
2024-04-19 9:28 ` Jan Kara
@ 2024-05-07 23:03 ` Theodore Ts'o
From: Theodore Ts'o @ 2024-05-07 23:03 UTC (permalink / raw)
To: linux-ext4, Zhang Yi
Cc: Theodore Ts'o, linux-fsdevel, adilger.kernel, jack, yi.zhang,
chengzhihao1, yukuai3
On Fri, 19 Apr 2024 10:30:05 +0800, Zhang Yi wrote:
> __filemap_get_folio() with the FGP_WRITEBEGIN parameter already waits
> for a stable folio, so remove the redundant folio_wait_stable() in
> ext4_da_write_begin(). It was left over from commit cc883236b792
> ("ext4: drop unnecessary journal handle in delalloc write"), which
> removed the logic that retried getting the page.
>
>
> [...]
Applied, thanks!
[1/1] ext4: remove the redundant folio_wait_stable()
commit: df0b5afc62f3368d657a8fe4a8d393ac481474c2
Best regards,
--
Theodore Ts'o <tytso@mit.edu>
* CVE-2022-48689: tcp: TX zerocopy should not sense pfmemalloc status
@ 2024-05-03 17:45 Greg Kroah-Hartman
From: Greg Kroah-Hartman @ 2024-05-03 17:45 UTC (permalink / raw)
To: linux-cve-announce; +Cc: Greg Kroah-Hartman
Description
===========
In the Linux kernel, the following vulnerability has been resolved:
tcp: TX zerocopy should not sense pfmemalloc status
We got a recent syzbot report [1] showing a possible misuse
of pfmemalloc page status in TCP zerocopy paths.
Indeed, for pages coming from user space or other layers,
using page_is_pfmemalloc() is moot, and could give
false positives.
There have been attempts to make page_is_pfmemalloc() more robust,
but not using it in the first place in this context is probably better,
and saves cpu cycles as well.
Note to stable teams:
You need to backport 84ce071e38a6 ("net: introduce
__skb_fill_page_desc_noacc") as a prereq.
The race is more probable after commit c07aea3ef4d4
("mm: add a signature in struct page") because page_is_pfmemalloc()
now uses the low-order bit of page->lru.next, which can change
more often than page->index.
The low-order bit should never be set for lru.next (when used as an
anchor in an LRU list), so the KCSAN report is mostly a false positive.
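As a user-space illustration of the encoding involved (this is a sketch of the general bit trick, not kernel code), a boolean flag packed into the low-order bit of a pointer-sized word is only meaningful while that encoding is in effect; if the same storage is later reused for another purpose, as page->lru.next is, reading the bit out of context can report a flag that was never set for this purpose:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: store a flag in the low-order bit of a word. */
static uintptr_t pack_flag(uintptr_t word, bool flag)
{
	/* clear the low bit, then set it when the flag is on */
	return (word & ~(uintptr_t)1) | (flag ? 1 : 0);
}

/* Read the flag back; meaningless if the word now holds other data. */
static bool read_flag(uintptr_t word)
{
	return word & 1;
}
```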
Backporting to older kernel versions seems not necessary.
[1]
BUG: KCSAN: data-race in lru_add_fn / tcp_build_frag
write to 0xffffea0004a1d2c8 of 8 bytes by task 18600 on cpu 0:
__list_add include/linux/list.h:73 [inline]
list_add include/linux/list.h:88 [inline]
lruvec_add_folio include/linux/mm_inline.h:105 [inline]
lru_add_fn+0x440/0x520 mm/swap.c:228
folio_batch_move_lru+0x1e1/0x2a0 mm/swap.c:246
folio_batch_add_and_move mm/swap.c:263 [inline]
folio_add_lru+0xf1/0x140 mm/swap.c:490
filemap_add_folio+0xf8/0x150 mm/filemap.c:948
__filemap_get_folio+0x510/0x6d0 mm/filemap.c:1981
pagecache_get_page+0x26/0x190 mm/folio-compat.c:104
grab_cache_page_write_begin+0x2a/0x30 mm/folio-compat.c:116
ext4_da_write_begin+0x2dd/0x5f0 fs/ext4/inode.c:2988
generic_perform_write+0x1d4/0x3f0 mm/filemap.c:3738
ext4_buffered_write_iter+0x235/0x3e0 fs/ext4/file.c:270
ext4_file_write_iter+0x2e3/0x1210
call_write_iter include/linux/fs.h:2187 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x468/0x760 fs/read_write.c:578
ksys_write+0xe8/0x1a0 fs/read_write.c:631
__do_sys_write fs/read_write.c:643 [inline]
__se_sys_write fs/read_write.c:640 [inline]
__x64_sys_write+0x3e/0x50 fs/read_write.c:640
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
read to 0xffffea0004a1d2c8 of 8 bytes by task 18611 on cpu 1:
page_is_pfmemalloc include/linux/mm.h:1740 [inline]
__skb_fill_page_desc include/linux/skbuff.h:2422 [inline]
skb_fill_page_desc include/linux/skbuff.h:2443 [inline]
tcp_build_frag+0x613/0xb20 net/ipv4/tcp.c:1018
do_tcp_sendpages+0x3e8/0xaf0 net/ipv4/tcp.c:1075
tcp_sendpage_locked net/ipv4/tcp.c:1140 [inline]
tcp_sendpage+0x89/0xb0 net/ipv4/tcp.c:1150
inet_sendpage+0x7f/0xc0 net/ipv4/af_inet.c:833
kernel_sendpage+0x184/0x300 net/socket.c:3561
sock_sendpage+0x5a/0x70 net/socket.c:1054
pipe_to_sendpage+0x128/0x160 fs/splice.c:361
splice_from_pipe_feed fs/splice.c:415 [inline]
__splice_from_pipe+0x222/0x4d0 fs/splice.c:559
splice_from_pipe fs/splice.c:594 [inline]
generic_splice_sendpage+0x89/0xc0 fs/splice.c:743
do_splice_from fs/splice.c:764 [inline]
direct_splice_actor+0x80/0xa0 fs/splice.c:931
splice_direct_to_actor+0x305/0x620 fs/splice.c:886
do_splice_direct+0xfb/0x180 fs/splice.c:974
do_sendfile+0x3bf/0x910 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64 fs/read_write.c:1303 [inline]
__x64_sys_sendfile64+0x10c/0x150 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
value changed: 0x0000000000000000 -> 0xffffea0004a1d288
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 18611 Comm: syz-executor.4 Not tainted 6.0.0-rc2-syzkaller-00248-ge022620b5d05-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
The Linux kernel CVE team has assigned CVE-2022-48689 to this issue.
Affected and fixed versions
===========================
Issue introduced in 5.14 with commit c07aea3ef4d4 and fixed in 5.15.68 with commit 8527c9a6bf8e
Issue introduced in 5.14 with commit c07aea3ef4d4 and fixed in 5.19.9 with commit 6730c48ed6b0
Issue introduced in 5.14 with commit c07aea3ef4d4 and fixed in 6.0 with commit 326140063946
Please see https://www.kernel.org for a full list of currently supported
kernel versions by the kernel community.
Unaffected versions might change over time as fixes are backported to
older supported kernel versions. The official CVE entry at
https://cve.org/CVERecord/?id=CVE-2022-48689
will be updated if fixes are backported, please check that for the most
up to date information about this issue.
Affected files
==============
The file(s) affected by this issue are:
include/linux/skbuff.h
net/core/datagram.c
net/ipv4/tcp.c
Mitigation
==========
The Linux kernel CVE team recommends that you update to the latest
stable kernel version for this, and many other bugfixes. Individual
changes are never tested alone, but rather are part of a larger kernel
release. Cherry-picking individual commits is not recommended or
supported by the Linux kernel community at all. If however, updating to
the latest release is impossible, the individual changes to resolve this
issue can be found at these commits:
https://git.kernel.org/stable/c/8527c9a6bf8e54fef0a8d3d7d8874a48c725c915
https://git.kernel.org/stable/c/6730c48ed6b0cd939fc9b30b2d621ce0b89bea83
https://git.kernel.org/stable/c/3261400639463a853ba2b3be8bd009c2a8089775
* [PATCH v5 03/11] filemap: allocate mapping_min_order folios in the page cache
@ 2024-05-03 9:53 ` Luis Chamberlain
From: Luis Chamberlain @ 2024-05-03 9:53 UTC (permalink / raw)
To: akpm, willy, djwong, brauner, david, chandan.babu
Cc: hare, ritesh.list, john.g.garry, ziy, linux-fsdevel, linux-xfs,
linux-mm, linux-block, gost.dev, p.raghav, kernel, mcgrof
filemap_create_folio() and do_read_cache_folio() always allocated
folios of order 0. __filemap_get_folio() tried to allocate higher
order folios when fgp_flags had a higher order hint set, but it would
fall back to an order-0 folio if the higher order memory allocation
failed.
Supporting mapping_min_order means we guarantee that each folio in the
page cache has an order of at least mapping_min_order. When adding new
folios to the page cache we must also ensure the index used is aligned
to mapping_min_order, as the page cache requires the index to be
aligned to the order of the folio.
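The alignment requirement described above amounts to rounding the page index down to a multiple of 1 << min_order. A minimal sketch of that arithmetic (the helper name here is hypothetical, mirroring what mapping_align_start_index() does in this series):

```c
#include <assert.h>

/* Hypothetical helper: round a page-cache index down to the
 * min_order boundary, so the index is aligned to the folio order. */
static unsigned long align_index_to_order(unsigned long index,
					  unsigned int min_order)
{
	/* mask off the low min_order bits of the index */
	return index & ~((1UL << min_order) - 1);
}
```

For example, with min_order = 2 (four pages per folio), index 13 aligns down to 12; an already-aligned index is unchanged.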
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/filemap.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 30de18c4fd28..f0c0cfbbd134 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -858,6 +858,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
mapping_set_update(&xas, mapping);
if (!huge) {
@@ -1895,8 +1897,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
+ index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
gfp |= __GFP_WRITE;
@@ -1936,7 +1940,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2425,13 +2429,16 @@ static int filemap_update_page(struct kiocb *iocb,
}
static int filemap_create_folio(struct file *file,
- struct address_space *mapping, pgoff_t index,
+ struct address_space *mapping, loff_t pos,
struct folio_batch *fbatch)
{
struct folio *folio;
int error;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t index;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ min_order);
if (!folio)
return -ENOMEM;
@@ -2449,6 +2456,8 @@ static int filemap_create_folio(struct file *file,
* well to keep locking rules simple.
*/
filemap_invalidate_lock_shared(mapping);
+ /* index in PAGE units but aligned to min_order number of pages. */
+ index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
error = filemap_add_folio(mapping, folio, index,
mapping_gfp_constraint(mapping, GFP_KERNEL));
if (error == -EEXIST)
@@ -2509,8 +2518,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- err = filemap_create_folio(filp, mapping,
- iocb->ki_pos >> PAGE_SHIFT, fbatch);
+ err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
if (err == AOP_TRUNCATED_PAGE)
goto retry;
return err;
@@ -3708,9 +3716,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
+ index = mapping_align_start_index(mapping, index);
err = filemap_add_folio(mapping, folio, index, gfp);
if (unlikely(err)) {
folio_put(folio);
--
2.43.0
* Re: [PATCH 04/18] fsverity: support block-based Merkle tree caching
@ 2024-05-02 4:42 ` Christoph Hellwig
From: Christoph Hellwig @ 2024-05-02 4:42 UTC (permalink / raw)
To: Darrick J. Wong
Cc: Christoph Hellwig, aalbersh, ebiggers, linux-xfs, alexl, walters,
fsverity, linux-fsdevel
On Wed, May 01, 2024 at 03:35:19PM -0700, Darrick J. Wong wrote:
> Got a link? This is the first I've heard of this, but TBH I've been
> ignoring a /lot/ of things trying to get online repair merged (thank
> you!) over the past months...
This was long before I got involved with repair :)
Below is what I found in my local tree. It doesn't have a proper commit
log, so I probably only sent it out as a RFC in reply to a patch series
posting, most likely untested:
commit c11dcbe101a240c7a9e9bae7efaff2779d88b292
Author: Christoph Hellwig <hch@lst.de>
Date: Mon Oct 16 14:14:11 2023 +0200
fsverity block interface
diff --git a/Documentation/filesystems/fsverity.rst b/Documentation/filesystems/fsverity.rst
index af889512c6ac99..c616d530a89086 100644
--- a/Documentation/filesystems/fsverity.rst
+++ b/Documentation/filesystems/fsverity.rst
@@ -648,7 +648,7 @@ which verifies data that has been read into the pagecache of a verity
inode. The containing folio must still be locked and not Uptodate, so
it's not yet readable by userspace. As needed to do the verification,
fsverity_verify_blocks() will call back into the filesystem to read
-hash blocks via fsverity_operations::read_merkle_tree_page().
+hash blocks via fsverity_operations::read_merkle_tree_block().
fsverity_verify_blocks() returns false if verification failed; in this
case, the filesystem must not set the folio Uptodate. Following this,
diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
index 2b34796f68d349..4b6134923232e7 100644
--- a/fs/btrfs/verity.c
+++ b/fs/btrfs/verity.c
@@ -713,20 +713,20 @@ int btrfs_get_verity_descriptor(struct inode *inode, void *buf, size_t buf_size)
*
* Returns the page we read, or an ERR_PTR on error.
*/
-static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages,
- u8 log_blocksize)
+static int btrfs_read_merkle_tree_block(struct inode *inode,
+ unsigned int offset, struct fsverity_block *block,
+ unsigned long num_ra_pages)
{
struct folio *folio;
+ pgoff_t index = offset >> PAGE_SHIFT;
u64 off = (u64)index << PAGE_SHIFT;
loff_t merkle_pos = merkle_file_pos(inode);
int ret;
if (merkle_pos < 0)
- return ERR_PTR(merkle_pos);
+ return merkle_pos;
if (merkle_pos > inode->i_sb->s_maxbytes - off - PAGE_SIZE)
- return ERR_PTR(-EFBIG);
+ return -EFBIG;
index += merkle_pos >> PAGE_SHIFT;
again:
folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
@@ -739,7 +739,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
if (!folio_test_uptodate(folio)) {
folio_unlock(folio);
folio_put(folio);
- return ERR_PTR(-EIO);
+ return -EIO;
}
folio_unlock(folio);
goto out;
@@ -748,7 +748,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
folio = filemap_alloc_folio(mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS),
0);
if (!folio)
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
ret = filemap_add_folio(inode->i_mapping, folio, index, GFP_NOFS);
if (ret) {
@@ -756,7 +756,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
/* Did someone else insert a folio here? */
if (ret == -EEXIST)
goto again;
- return ERR_PTR(ret);
+ return ret;
}
/*
@@ -769,7 +769,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
folio_address(folio), PAGE_SIZE, &folio->page);
if (ret < 0) {
folio_put(folio);
- return ERR_PTR(ret);
+ return ret;
}
if (ret < PAGE_SIZE)
folio_zero_segment(folio, ret, PAGE_SIZE);
@@ -778,7 +778,8 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
folio_unlock(folio);
out:
- return folio_file_page(folio, index);
+ return fsverity_set_block_page(block, folio_file_page(folio, index),
+ offset);
}
/*
@@ -809,6 +810,7 @@ const struct fsverity_operations btrfs_verityops = {
.begin_enable_verity = btrfs_begin_enable_verity,
.end_enable_verity = btrfs_end_enable_verity,
.get_verity_descriptor = btrfs_get_verity_descriptor,
- .read_merkle_tree_page = btrfs_read_merkle_tree_page,
+ .read_merkle_tree_block = btrfs_read_merkle_tree_block,
.write_merkle_tree_block = btrfs_write_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merke_tree_block,
};
diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index 4e2f01f048c09b..5623e2c1c302e8 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -358,15 +358,13 @@ static int ext4_get_verity_descriptor(struct inode *inode, void *buf,
return desc_size;
}
-static struct page *ext4_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages,
- u8 log_blocksize)
+static int ext4_read_merkle_tree_block(struct inode *inode, unsigned int offset,
+ struct fsverity_block *block, unsigned long num_ra_pages)
{
struct folio *folio;
+ pgoff_t index;
- index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
-
+ index = (ext4_verity_metadata_pos(inode) + offset) >> PAGE_SHIFT;
folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
@@ -377,9 +375,10 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
folio = read_mapping_folio(inode->i_mapping, index, NULL);
if (IS_ERR(folio))
- return ERR_CAST(folio);
+ return PTR_ERR(folio);
}
- return folio_file_page(folio, index);
+ return fsverity_set_block_page(block, folio_file_page(folio, index),
+ offset);
}
static int ext4_write_merkle_tree_block(struct inode *inode, const void *buf,
@@ -394,6 +393,7 @@ const struct fsverity_operations ext4_verityops = {
.begin_enable_verity = ext4_begin_enable_verity,
.end_enable_verity = ext4_end_enable_verity,
.get_verity_descriptor = ext4_get_verity_descriptor,
- .read_merkle_tree_page = ext4_read_merkle_tree_page,
+ .read_merkle_tree_block = ext4_read_merkle_tree_block,
.write_merkle_tree_block = ext4_write_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merke_tree_block,
};
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index 601ab9f0c02492..aac9281e9c4565 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -255,15 +255,13 @@ static int f2fs_get_verity_descriptor(struct inode *inode, void *buf,
return size;
}
-static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages,
- u8 log_blocksize)
+static int f2fs_read_merkle_tree_block(struct inode *inode, unsigned int offset,
+ struct fsverity_block *block, unsigned long num_ra_pages)
{
struct page *page;
+ pgoff_t index;
- index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
-
+ index = (f2fs_verity_metadata_pos(inode) + offset) >> PAGE_SHIFT;
page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
if (!page || !PageUptodate(page)) {
DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
@@ -274,7 +272,7 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
page = read_mapping_page(inode->i_mapping, index, NULL);
}
- return page;
+ return fsverity_set_block_page(block, page, offset);
}
static int f2fs_write_merkle_tree_block(struct inode *inode, const void *buf,
@@ -289,6 +287,7 @@ const struct fsverity_operations f2fs_verityops = {
.begin_enable_verity = f2fs_begin_enable_verity,
.end_enable_verity = f2fs_end_enable_verity,
.get_verity_descriptor = f2fs_get_verity_descriptor,
- .read_merkle_tree_page = f2fs_read_merkle_tree_page,
+ .read_merkle_tree_block = f2fs_read_merkle_tree_block,
.write_merkle_tree_block = f2fs_write_merkle_tree_block,
+ .drop_merkle_tree_block = fsverity_drop_page_merke_tree_block,
};
diff --git a/fs/verity/read_metadata.c b/fs/verity/read_metadata.c
index 182bddf5dec54c..5e362f8562bd5d 100644
--- a/fs/verity/read_metadata.c
+++ b/fs/verity/read_metadata.c
@@ -12,10 +12,33 @@
#include <linux/sched/signal.h>
#include <linux/uaccess.h>
+int fsverity_set_block_page(struct fsverity_block *block,
+ struct page *page, unsigned int index)
+{
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+ block->kaddr = page_address(page) + (index % PAGE_SIZE);
+ block->cached = PageChecked(page);
+ block->context = page;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(fsverity_set_block_page);
+
+void fsverity_drop_page_merke_tree_block(struct fsverity_block *block)
+{
+ struct page *page = block->context;
+
+ if (block->verified)
+ SetPageChecked(page);
+ put_page(page);
+}
+EXPORT_SYMBOL_GPL(fsverity_drop_page_merke_tree_block);
+
static int fsverity_read_merkle_tree(struct inode *inode,
const struct fsverity_info *vi,
void __user *buf, u64 offset, int length)
{
+ const struct fsverity_operations *vop = inode->i_sb->s_vop;
u64 end_offset;
unsigned int offs_in_block;
unsigned int block_size = vi->tree_params.block_size;
@@ -45,20 +68,19 @@ static int fsverity_read_merkle_tree(struct inode *inode,
struct fsverity_block block;
block.len = block_size;
- if (fsverity_read_merkle_tree_block(inode,
- index << vi->tree_params.log_blocksize,
- &block, num_ra_pages)) {
- fsverity_drop_block(inode, &block);
+ if (vop->read_merkle_tree_block(inode,
+ index << vi->tree_params.log_blocksize,
+ &block, num_ra_pages)) {
err = -EFAULT;
break;
}
if (copy_to_user(buf, block.kaddr + offs_in_block, bytes_to_copy)) {
- fsverity_drop_block(inode, &block);
+ vop->drop_merkle_tree_block(&block);
err = -EFAULT;
break;
}
- fsverity_drop_block(inode, &block);
+ vop->drop_merkle_tree_block(&block);
block.kaddr = NULL;
retval += bytes_to_copy;
diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index dfe01f12184341..9b84262a6fa413 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -42,6 +42,7 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
const void *data, u64 data_pos, unsigned long max_ra_pages)
{
const struct merkle_tree_params *params = &vi->tree_params;
+ const struct fsverity_operations *vop = inode->i_sb->s_vop;
const unsigned int hsize = params->digest_size;
int level;
int err;
@@ -115,9 +116,9 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
block->len = params->block_size;
num_ra_pages = level == 0 ?
min(max_ra_pages, params->tree_pages - hpage_idx) : 0;
- err = fsverity_read_merkle_tree_block(
- inode, hblock_idx << params->log_blocksize, block,
- num_ra_pages);
+ err = vop->read_merkle_tree_block(inode,
+ hblock_idx << params->log_blocksize, block,
+ num_ra_pages);
if (err) {
fsverity_err(inode,
"Error %d reading Merkle tree block %lu",
@@ -127,7 +128,7 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
if (is_hash_block_verified(vi, hblock_idx, block->cached)) {
memcpy(_want_hash, block->kaddr + hoffset, hsize);
want_hash = _want_hash;
- fsverity_drop_block(inode, block);
+ vop->drop_merkle_tree_block(block);
goto descend;
}
hblocks[level].index = hblock_idx;
@@ -157,7 +158,7 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
block->verified = true;
memcpy(_want_hash, haddr + hoffset, hsize);
want_hash = _want_hash;
- fsverity_drop_block(inode, block);
+ vop->drop_merkle_tree_block(block);
}
/* Finally, verify the data block. */
@@ -174,9 +175,8 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
params->hash_alg->name, hsize, want_hash,
params->hash_alg->name, hsize, real_hash);
error:
- for (; level > 0; level--) {
- fsverity_drop_block(inode, &hblocks[level - 1].block);
- }
+ for (; level > 0; level--)
+ vop->drop_merkle_tree_block(&hblocks[level - 1].block);
return false;
}
diff --git a/include/linux/fsverity.h b/include/linux/fsverity.h
index ce37a430bc97f2..ae9ae7719af558 100644
--- a/include/linux/fsverity.h
+++ b/include/linux/fsverity.h
@@ -104,27 +104,6 @@ struct fsverity_operations {
int (*get_verity_descriptor)(struct inode *inode, void *buf,
size_t bufsize);
- /**
- * Read a Merkle tree page of the given inode.
- *
- * @inode: the inode
- * @index: 0-based index of the page within the Merkle tree
- * @num_ra_pages: The number of Merkle tree pages that should be
- * prefetched starting at @index if the page at @index
- * isn't already cached. Implementations may ignore this
- * argument; it's only a performance optimization.
- *
- * This can be called at any time on an open verity file. It may be
- * called by multiple processes concurrently, even with the same page.
- *
- * Note that this must retrieve a *page*, not necessarily a *block*.
- *
- * Return: the page on success, ERR_PTR() on failure
- */
- struct page *(*read_merkle_tree_page)(struct inode *inode,
- pgoff_t index,
- unsigned long num_ra_pages,
- u8 log_blocksize);
/**
* Read a Merkle tree block of the given inode.
* @inode: the inode
@@ -162,13 +141,12 @@ struct fsverity_operations {
/**
* Release the reference to a Merkle tree block
- *
- * @page: the block to release
+ * @block: the block to release
*
* This is called when fs-verity is done with a block obtained with
* ->read_merkle_tree_block().
*/
- void (*drop_block)(struct fsverity_block *block);
+ void (*drop_merkle_tree_block)(struct fsverity_block *block);
};
#ifdef CONFIG_FS_VERITY
@@ -217,74 +195,16 @@ static inline void fsverity_cleanup_inode(struct inode *inode)
int fsverity_ioctl_read_metadata(struct file *filp, const void __user *uarg);
+int fsverity_set_block_page(struct fsverity_block *block,
+ struct page *page, unsigned int index);
+void fsverity_drop_page_merkle_tree_block(struct fsverity_block *block);
+
/* verify.c */
bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset);
void fsverity_verify_bio(struct bio *bio);
void fsverity_enqueue_verify_work(struct work_struct *work);
-/**
- * fsverity_drop_block() - drop block obtained with ->read_merkle_tree_block()
- * @inode: inode in use for verification or metadata reading
- * @block: block to be dropped
- *
- * Generic put_page() method. Calls out back to filesystem if ->drop_block() is
- * set, otherwise do nothing.
- *
- */
-static inline void fsverity_drop_block(struct inode *inode,
- struct fsverity_block *block)
-{
- if (inode->i_sb->s_vop->drop_block)
- inode->i_sb->s_vop->drop_block(block);
- else {
- struct page *page = (struct page *)block->context;
-
- if (block->verified)
- SetPageChecked(page);
-
- put_page(page);
- }
-}
-
-/**
- * fsverity_read_block_from_page() - layer between fs using read page
- * and read block
- * @inode: inode in use for verification or metadata reading
- * @index: index of the block in the tree (offset into the tree)
- * @block: block to be read
- * @num_ra_pages: number of pages to readahead, may be ignored
- *
- * Depending on fs implementation use read_merkle_tree_block or
- * read_merkle_tree_page.
- */
-static inline int fsverity_read_merkle_tree_block(struct inode *inode,
- unsigned int index,
- struct fsverity_block *block,
- unsigned long num_ra_pages)
-{
- struct page *page;
-
- if (inode->i_sb->s_vop->read_merkle_tree_block)
- return inode->i_sb->s_vop->read_merkle_tree_block(
- inode, index, block, num_ra_pages);
-
- page = inode->i_sb->s_vop->read_merkle_tree_page(
- inode, index >> PAGE_SHIFT, num_ra_pages,
- block->len);
-
- block->kaddr = page_address(page) + (index % PAGE_SIZE);
- block->cached = PageChecked(page);
- block->context = page;
-
- if (IS_ERR(page))
- return PTR_ERR(page);
- else
- return 0;
-}
-
-
-
#else /* !CONFIG_FS_VERITY */
static inline struct fsverity_info *fsverity_get_info(const struct inode *inode)
@@ -362,20 +282,6 @@ static inline void fsverity_enqueue_verify_work(struct work_struct *work)
WARN_ON_ONCE(1);
}
-static inline void fsverity_drop_page(struct inode *inode, struct page *page)
-{
- WARN_ON_ONCE(1);
-}
-
-static inline int fsverity_read_merkle_tree_block(struct inode *inode,
- unsigned int index,
- struct fsverity_block *block,
- unsigned long num_ra_pages)
-{
- WARN_ON_ONCE(1);
- return -EOPNOTSUPP;
-}
-
#endif /* !CONFIG_FS_VERITY */
static inline bool fsverity_verify_folio(struct folio *folio)
^ permalink raw reply related [relevance 5%]
* Re: [PATCH v9 00/36] tracing: fprobe: function_graph: Multi-function graph and fprobe on fgraph
@ 2024-04-30 13:32 1% ` Masami Hiramatsu
0 siblings, 0 replies; 200+ results
From: Masami Hiramatsu @ 2024-04-30 13:32 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Alexei Starovoitov, Steven Rostedt, Florent Revest,
linux-trace-kernel, LKML, Martin KaFai Lau, bpf, Sven Schnelle,
Alexei Starovoitov, Jiri Olsa, Arnaldo Carvalho de Melo,
Daniel Borkmann, Alan Maguire, Mark Rutland, Peter Zijlstra,
Thomas Gleixner, Guo Ren
[-- Attachment #1: Type: text/plain, Size: 5166 bytes --]
On Mon, 29 Apr 2024 13:25:04 -0700
Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> On Mon, Apr 29, 2024 at 6:51 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > Hi Andrii,
> >
> > On Thu, 25 Apr 2024 13:31:53 -0700
> > Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > > Hey Masami,
> > >
> > > I can't really review most of that code as I'm completely unfamiliar
> > > with all those inner workings of fprobe/ftrace/function_graph. I left
> > > a few comments where there were somewhat more obvious BPF-related
> > > pieces.
> > >
> > > But I also did run our BPF benchmarks on probes/for-next as a baseline
> > > and then with your series applied on top. Just to see if there are any
> > > regressions. I think it will be a useful data point for you.
> >
> > Thanks for testing!
> >
> > >
> > > You should be already familiar with the bench tool we have in BPF
> > > selftests (I used it on some other patches for your tree).
> >
> > What patches we need?
> >
>
> You mean for this `bench` tool? They are part of BPF selftests (under
> tools/testing/selftests/bpf), you can build them by running:
>
> $ make RELEASE=1 -j$(nproc) bench
>
> After that you'll get a self-container `bench` binary, which has all
> the self-contained benchmarks.
>
> You might also find a small script (benchs/run_bench_trigger.sh inside
> BPF selftests directory) helpful, it collects final summary of the
> benchmark run and optionally accepts a specific set of benchmarks. So
> you can use it like this:
>
> $ benchs/run_bench_trigger.sh kprobe kprobe-multi
> kprobe : 18.731 ± 0.639M/s
> kprobe-multi : 23.938 ± 0.612M/s
>
> By default it will run a wider set of benchmarks (no uprobes, but a
> bunch of extra fentry/fexit tests and stuff like this).
origin:
# benchs/run_bench_trigger.sh
kretprobe : 1.329 ± 0.007M/s
kretprobe-multi: 1.341 ± 0.004M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.288 ± 0.014M/s
kretprobe-multi: 1.365 ± 0.002M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.329 ± 0.002M/s
kretprobe-multi: 1.331 ± 0.011M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.311 ± 0.003M/s
kretprobe-multi: 1.318 ± 0.002M/s
patched:
# benchs/run_bench_trigger.sh
kretprobe : 1.274 ± 0.003M/s
kretprobe-multi: 1.397 ± 0.002M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.307 ± 0.002M/s
kretprobe-multi: 1.406 ± 0.004M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.279 ± 0.004M/s
kretprobe-multi: 1.330 ± 0.014M/s
# benchs/run_bench_trigger.sh
kretprobe : 1.256 ± 0.010M/s
kretprobe-multi: 1.412 ± 0.003M/s
Hmm, in my case, the differences seem smaller (~3%?).
I attached perf report results for those runs, but I don't see a large difference.
> > >
> > > BASELINE
> > > ========
> > > kprobe : 24.634 ± 0.205M/s
> > > kprobe-multi : 28.898 ± 0.531M/s
> > > kretprobe : 10.478 ± 0.015M/s
> > > kretprobe-multi: 11.012 ± 0.063M/s
> > >
> > > THIS PATCH SET ON TOP
> > > =====================
> > > kprobe : 25.144 ± 0.027M/s (+2%)
> > > kprobe-multi : 28.909 ± 0.074M/s
> > > kretprobe : 9.482 ± 0.008M/s (-9.5%)
> > > kretprobe-multi: 13.688 ± 0.027M/s (+24%)
> >
> > This looks good. Kretprobe should also use kretprobe-multi (fprobe)
> > eventually because it should be a single callback version of
> > kretprobe-multi.
I ran another benchmark (prctl loop, attached); the origin kernel result is:
# sh ./benchmark.sh
count = 10000000, took 6.748133 sec
And the patched kernel result:
# sh ./benchmark.sh
count = 10000000, took 6.644095 sec
I confirmed that the perf result shows no big difference.
Thank you,
> >
> > >
> > > These numbers are pretty stable and look to be more or less representative.
> > >
> > > As you can see, kprobes got a bit faster, kprobe-multi seems to be
> > > about the same, though.
> > >
> > > Then (I suppose they are "legacy") kretprobes got quite noticeably
> > > slower, almost by 10%. Not sure why, but looks real after re-running
> > > benchmarks a bunch of times and getting stable results.
> >
> > Hmm, kretprobe on x86 should use ftrace + rethook even with my series.
> > So nothing should be changed. Maybe cache access pattern has been
> > changed?
> > I'll check it with tracefs (to remove the effect from bpf related changes)
> >
> > >
> > > On the other hand, multi-kretprobes got significantly faster (+24%!).
> > > Again, I don't know if it is expected or not, but it's a nice
> > > improvement.
> >
> > Thanks!
> >
> > >
> > > If you have any idea why kretprobes would get so much slower, it would
> > > be nice to look into that and see if you can mitigate the regression
> > > somehow. Thanks!
> >
> > OK, let me check it.
> >
> > Thank you!
> >
> > >
> > >
> > > > 51 files changed, 2325 insertions(+), 882 deletions(-)
> > > > create mode 100644 tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc
> > > >
> > > > --
> > > > Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > >
> >
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
[-- Attachment #2: prctl_loop.c --]
[-- Type: text/x-csrc, Size: 555 bytes --]
#include <sys/prctl.h>
#include <unistd.h>
#include <sys/time.h>
#include <stdio.h>
int main(void)
{
struct timeval tv1, tv2;
unsigned long count = 0;
gettimeofday(&tv1, NULL);
do {
prctl(PR_GET_DUMPABLE, 0, 0, 0, 0);
count++;
} while (count < 10000000);
gettimeofday(&tv2, NULL);
tv2.tv_sec -= tv1.tv_sec;
if (tv2.tv_usec >= tv1.tv_usec) {
tv2.tv_usec -= tv1.tv_usec;
} else {
tv2.tv_usec = tv2.tv_usec + 1000000 - tv1.tv_usec;
tv2.tv_sec--;
}
printf("count = %lu, took %ld.%06ld \n", count, tv2.tv_sec, tv2.tv_usec);
return 0;
}
[-- Attachment #3: perf-out-kretprobe-nopatch.txt --]
[-- Type: text/plain, Size: 65382 bytes --]
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'task-clock:ppp'
# Event count (approx.): 8035250000
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.56% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
|--97.95%--syscall
| |
| |--58.91%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.61%--__x64_sys_getpgid
| | | |
| | | |--11.69%--0xffffffffa02050de
| | | | kprobe_ftrace_handler
| | | | |
| | | | |--6.26%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.29%--objpool_pop
| | | | | |
| | | | | --1.97%--rethook_try_get
| | | | |
| | | | |--2.41%--rcu_is_watching
| | | | |
| | | | --0.93%--get_kprobe
| | | |
| | | --5.59%--do_getpgid
| | | |
| | | --4.85%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.42%--__radix_tree_lookup
| | |
| | |--14.68%--arch_rethook_trampoline
| | | |
| | | --12.96%--arch_rethook_trampoline_callback
| | | |
| | | --12.69%--rethook_trampoline_handler
| | | |
| | | |--10.89%--kretprobe_rethook_handler
| | | | |
| | | | --9.80%--kretprobe_dispatcher
| | | | |
| | | | --6.85%--kretprobe_perf_func
| | | | |
| | | | --6.57%--trace_call_bpf
| | | | |
| | | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.67%--migrate_disable
| | | |
| | | --0.88%--objpool_push
| | |
| | --0.56%--syscall_exit_to_user_mode
| |
| --4.50%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.50%--syscall@plt
98.00% 34.25% bench libc.so.6 [.] syscall
|
|--63.76%--syscall
| |
| |--58.97%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.61%--__x64_sys_getpgid
| | | |
| | | |--11.69%--0xffffffffa02050de
| | | | kprobe_ftrace_handler
| | | | |
| | | | |--6.26%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.29%--objpool_pop
| | | | | |
| | | | | --1.97%--rethook_try_get
| | | | |
| | | | |--2.41%--rcu_is_watching
| | | | |
| | | | --0.93%--get_kprobe
| | | |
| | | --5.59%--do_getpgid
| | | |
| | | --4.85%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.42%--__radix_tree_lookup
| | |
| | |--14.68%--arch_rethook_trampoline
| | | |
| | | --12.96%--arch_rethook_trampoline_callback
| | | |
| | | --12.69%--rethook_trampoline_handler
| | | |
| | | |--10.89%--kretprobe_rethook_handler
| | | | |
| | | | --9.80%--kretprobe_dispatcher
| | | | |
| | | | --6.85%--kretprobe_perf_func
| | | | |
| | | | --6.57%--trace_call_bpf
| | | | |
| | | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.67%--migrate_disable
| | | |
| | | --0.88%--objpool_push
| | |
| | --0.56%--syscall_exit_to_user_mode
| |
| --4.50%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--34.25%--start_thread
syscall
59.08% 0.00% bench [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
|
---entry_SYSCALL_64_after_hwframe
do_syscall_64
|
|--19.61%--__x64_sys_getpgid
| |
| |--11.69%--0xffffffffa02050de
| | kprobe_ftrace_handler
| | |
| | |--6.26%--pre_handler_kretprobe
| | | |
| | | |--3.29%--objpool_pop
| | | |
| | | --1.97%--rethook_try_get
| | |
| | |--2.41%--rcu_is_watching
| | |
| | --0.93%--get_kprobe
| |
| --5.59%--do_getpgid
| |
| --4.85%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.42%--__radix_tree_lookup
|
|--14.68%--arch_rethook_trampoline
| |
| --12.96%--arch_rethook_trampoline_callback
| |
| --12.69%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--0.56%--syscall_exit_to_user_mode
59.08% 24.07% bench [kernel.kallsyms] [k] do_syscall_64
|
|--35.01%--do_syscall_64
| |
| |--19.61%--__x64_sys_getpgid
| | |
| | |--11.69%--0xffffffffa02050de
| | | kprobe_ftrace_handler
| | | |
| | | |--6.26%--pre_handler_kretprobe
| | | | |
| | | | |--3.29%--objpool_pop
| | | | |
| | | | --1.97%--rethook_try_get
| | | |
| | | |--2.41%--rcu_is_watching
| | | |
| | | --0.93%--get_kprobe
| | |
| | --5.59%--do_getpgid
| | |
| | --4.85%--find_task_by_vpid
| | |
| | |--2.01%--idr_find
| | |
| | --1.42%--__radix_tree_lookup
| |
| |--14.68%--arch_rethook_trampoline
| | |
| | --12.96%--arch_rethook_trampoline_callback
| | |
| | --12.69%--rethook_trampoline_handler
| | |
| | |--10.89%--kretprobe_rethook_handler
| | | |
| | | --9.80%--kretprobe_dispatcher
| | | |
| | | --6.85%--kretprobe_perf_func
| | | |
| | | --6.57%--trace_call_bpf
| | | |
| | | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --0.67%--migrate_disable
| | |
| | --0.88%--objpool_push
| |
| --0.56%--syscall_exit_to_user_mode
|
--24.06%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
19.66% 0.21% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
--19.44%--__x64_sys_getpgid
|
|--11.74%--0xffffffffa02050de
| kprobe_ftrace_handler
| |
| |--6.30%--pre_handler_kretprobe
| | |
| | |--3.29%--objpool_pop
| | |
| | --1.97%--rethook_try_get
| |
| |--2.41%--rcu_is_watching
| |
| --0.93%--get_kprobe
|
--5.59%--do_getpgid
|
--4.85%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.42%--__radix_tree_lookup
14.71% 1.75% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--12.96%--arch_rethook_trampoline
| |
| --12.96%--arch_rethook_trampoline_callback
| |
| --12.69%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--1.75%--start_thread
syscall
|
--1.71%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
12.96% 0.27% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--12.69%--arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--10.89%--kretprobe_rethook_handler
| |
| --9.80%--kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--0.88%--objpool_push
12.69% 0.88% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--11.81%--rethook_trampoline_handler
| |
| |--10.89%--kretprobe_rethook_handler
| | |
| | --9.80%--kretprobe_dispatcher
| | |
| | --6.85%--kretprobe_perf_func
| | |
| | --6.57%--trace_call_bpf
| | |
| | |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.67%--migrate_disable
| |
| --0.88%--objpool_push
|
--0.88%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
11.74% 2.10% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--9.64%--kprobe_ftrace_handler
| |
| |--6.30%--pre_handler_kretprobe
| | |
| | |--3.29%--objpool_pop
| | |
| | --1.97%--rethook_try_get
| |
| |--2.41%--rcu_is_watching
| |
| --0.93%--get_kprobe
|
--2.10%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
11.74% 0.00% bench [unknown] [k] 0xffffffffa02050de
|
---0xffffffffa02050de
kprobe_ftrace_handler
|
|--6.30%--pre_handler_kretprobe
| |
| |--3.29%--objpool_pop
| |
| --1.97%--rethook_try_get
|
|--2.41%--rcu_is_watching
|
--0.93%--get_kprobe
10.89% 1.09% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--9.80%--kretprobe_rethook_handler
| kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--1.09%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
9.80% 2.94% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--6.86%--kretprobe_dispatcher
| |
| --6.85%--kretprobe_perf_func
| |
| --6.57%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--2.94%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
6.94% 6.93% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--6.93%--start_thread
syscall
|
|--4.49%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--2.44%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
bpf_prog_21856463590f61f1_bench_trigger_kretprobe
6.85% 0.28% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
--6.57%--kretprobe_perf_func
trace_call_bpf
|
|--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.67%--migrate_disable
6.57% 2.91% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--3.67%--trace_call_bpf
| |
| |--2.44%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.67%--migrate_disable
|
--2.91%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
6.30% 0.81% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--5.49%--pre_handler_kretprobe
| |
| |--3.29%--objpool_pop
| |
| --1.97%--rethook_try_get
|
--0.81%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
5.59% 0.27% bench [kernel.kallsyms] [k] do_getpgid
|
--5.32%--do_getpgid
|
--4.85%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.42%--__radix_tree_lookup
4.85% 1.39% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--3.46%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.42%--__radix_tree_lookup
|
--1.39%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
3.29% 3.29% bench [kernel.kallsyms] [k] objpool_pop
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
objpool_pop
2.55% 2.55% bench [kernel.kallsyms] [k] rcu_is_watching
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
|
--2.41%--rcu_is_watching
2.01% 2.01% bench [kernel.kallsyms] [k] idr_find
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
idr_find
1.97% 1.83% bench [kernel.kallsyms] [k] rethook_try_get
|
--1.83%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
pre_handler_kretprobe
rethook_try_get
1.50% 1.50% bench bench [.] syscall@plt
|
---start_thread
syscall@plt
1.42% 1.42% bench [kernel.kallsyms] [k] __radix_tree_lookup
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
__radix_tree_lookup
0.93% 0.93% bench [kernel.kallsyms] [k] get_kprobe
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
0xffffffffa02050de
kprobe_ftrace_handler
get_kprobe
0.88% 0.88% bench [kernel.kallsyms] [k] objpool_push
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
objpool_push
0.67% 0.67% bench [kernel.kallsyms] [k] migrate_disable
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
migrate_disable
0.56% 0.56% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
syscall_exit_to_user_mode
0.45% 0.45% bench [kernel.kallsyms] [k] __rcu_read_lock
0.44% 0.44% bench [unknown] [k] 0xffffffffa0205005
0.39% 0.39% bench [kernel.kallsyms] [k] migrate_enable
0.36% 0.36% bench [unknown] [k] 0xffffffffa020515d
0.30% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.30% 0.00% bench bench [.] main
0.30% 0.00% bench bench [.] setup_benchmark
0.30% 0.00% bench bench [.] trigger_kretprobe_setup
0.27% 0.27% bench [unknown] [k] 0xffffffffa0205011
0.27% 0.00% bench bench [.] trigger_bench__open_and_load
0.27% 0.00% bench bench [.] bpf_object__load_skeleton
0.27% 0.00% bench bench [.] bpf_object__load
0.27% 0.00% bench bench [.] bpf_object_load
0.23% 0.15% bench [kernel.kallsyms] [k] rethook_hook
0.22% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.22% 0.00% bench bench [.] libbpf_find_kernel_btf
0.22% 0.00% bench bench [.] btf__parse
0.22% 0.00% bench bench [.] btf_parse
0.22% 0.00% bench bench [.] btf_parse_raw
0.21% 0.21% bench [kernel.kallsyms] [k] __x86_indirect_thunk_array
0.20% 0.20% bench [unknown] [k] 0xffffffffa0205000
0.18% 0.18% bench [kernel.kallsyms] [k] __rcu_read_unlock
0.16% 0.16% bench [unknown] [k] 0xffffffffa020506c
0.16% 0.01% bench [kernel.kallsyms] [k] do_user_addr_fault
0.16% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.16% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.14% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.14% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.14% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.13% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.13% 0.02% bench libc.so.6 [.] __memmove_sse2_unaligned_erms
0.12% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.12% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.12% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.12% 0.12% bench [kernel.kallsyms] [k] clear_page_orig
0.11% 0.11% bench bench [.] trigger_producer
0.10% 0.00% bench bench [.] btf_new
0.08% 0.08% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.07% 0.00% bench [unknown] [k] 0000000000000000
0.07% 0.00% bench bench [.] btf_sanity_check
0.07% 0.07% bench [unknown] [k] 0xffffffffa020508e
0.07% 0.01% bench libc.so.6 [.] read
0.06% 0.02% bench bench [.] btf_validate_type
0.05% 0.05% bench [unknown] [k] 0xffffffffa02050e6
0.05% 0.05% bench [unknown] [k] 0xffffffffa020507f
0.05% 0.05% bench [unknown] [k] 0xffffffffa0205150
0.05% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.05% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.05% 0.00% bench [kernel.kallsyms] [k] kernfs_file_read_iter
0.04% 0.04% bench [unknown] [k] 0xffffffffa0205016
0.04% 0.04% bench [unknown] [k] 0xffffffffa020513c
0.04% 0.00% bench [kernel.kallsyms] [k] _copy_to_iter
0.04% 0.01% bench [kernel.kallsyms] [k] rep_movs_alternative
0.04% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.04% 0.04% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.04% 0.04% bench [unknown] [k] 0xffffffffa02050ad
0.04% 0.03% bench [kernel.kallsyms] [k] radix_tree_lookup
0.04% 0.04% bench [unknown] [k] 0xffffffffa0205116
0.04% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8108da38
0.04% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.04% 0.00% bench [kernel.kallsyms] [k] do_exit
0.03% 0.03% bench [kernel.kallsyms] [k] __do_softirq
0.03% 0.03% bench [unknown] [k] 0xffffffffa0205020
0.03% 0.03% bench [unknown] [k] 0xffffffffa02050cc
0.03% 0.00% bench [kernel.kallsyms] [k] asm_sysvec_apic_timer_interrupt
0.03% 0.00% bench [kernel.kallsyms] [k] sysvec_apic_timer_interrupt
0.03% 0.00% bench [kernel.kallsyms] [k] irq_exit_rcu
0.03% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.03% 0.00% bench [kernel.kallsyms] [k] __fput
0.03% 0.03% bench [unknown] [k] 0xffffffffa020512b
0.03% 0.00% bench bench [.] feat_supported
0.03% 0.00% bench bench [.] sys_bpf_fd
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.03% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.03% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.03% 0.00% bench bench [.] btf_parse_type_sec
0.03% 0.03% bench [unknown] [k] 0xffffffffa020509e
0.03% 0.03% bench [unknown] [k] 0xffffffffa0205102
0.02% 0.02% bench [unknown] [k] 0xffffffffa020502a
0.02% 0.02% bench [unknown] [k] 0xffffffffa02050bc
0.02% 0.02% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.02% 0.00% bench bench [.] bpf_program__attach
0.02% 0.00% bench bench [.] attach_kprobe
0.02% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.02% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] perf_release
0.02% 0.01% bench bench [.] btf__type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.02% 0.00% bench [kernel.kallsyms] [k] perf_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] _free_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_select_runtime
0.02% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_int_jit_compile
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_jit_binary_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] __disarm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] alloc_new_pack
0.02% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.02% 0.01% bench [kernel.kallsyms] [k] ftrace_replace_code
0.02% 0.00% bench bench [.] bpf_object__probe_loading
0.02% 0.00% bench bench [.] bump_rlimit_memlock
0.02% 0.01% bench bench [.] btf_validate_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_reg
0.02% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __arm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function_nolock
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.02% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.02% 0.02% bench [kernel.kallsyms] [k] memset_orig
0.02% 0.00% bench bench [.] probe_memcg_account
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.02% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.02% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205004
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205038
0.02% 0.02% bench [unknown] [k] 0xffffffffa0205050
0.02% 0.00% bench bench [.] bpf_object__load_progs
0.02% 0.00% bench bench [.] bpf_object_load_prog
0.02% 0.00% bench bench [.] btf_add_type_idx_entry
0.01% 0.01% bench bench [.] btf_kind
0.01% 0.01% bench [unknown] [k] 0xffffffffa020503b
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205058
0.01% 0.00% bench bench [.] sys_bpf_prog_load
0.01% 0.00% bench bench [.] btf_add_type_offs_mem
0.01% 0.00% bench bench [.] btf_validate_str
0.01% 0.01% bench bench [.] btf_type_size
0.01% 0.01% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.01% 0.00% bench bench [.] libbpf_prepare_prog_load
0.01% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.01% 0.00% bench bench [.] find_kernel_btf_id
0.01% 0.00% bench bench [.] find_attach_btf_id
0.01% 0.00% bench bench [.] find_btf_by_prefix_kind
0.01% 0.00% bench bench [.] btf__find_by_name_kind
0.01% 0.00% bench bench [.] btf_find_by_name_kind
0.01% 0.01% bench [unknown] [k] 0xffffffffa020502f
0.01% 0.00% bench bench [.] kernel_supports
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_map_object
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___open64_nocancel
0.01% 0.00% bench libc.so.6 [.] __memset_sse2_unaligned_erms
0.01% 0.00% bench [kernel.kallsyms] [k] __x64_sys_openat
0.01% 0.00% bench [kernel.kallsyms] [k] do_sys_openat2
0.01% 0.00% bench [kernel.kallsyms] [k] do_filp_open
0.01% 0.00% bench [kernel.kallsyms] [k] path_openat
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_rpc
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.01% bench [kernel.kallsyms] [k] kmem_cache_alloc
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205025
0.01% 0.01% bench [unknown] [k] 0xffffffffa0205040
0.01% 0.00% bench bench [.] bpf_object__sanitize_maps
0.01% 0.00% bench [kernel.kallsyms] [k] _raw_spin_lock
0.01% 0.00% bench bench [.] probe_kern_array_mmap
0.01% 0.00% bench bench [.] bpf_map_create
0.01% 0.00% bench bench [.] probe_kern_prog_name
0.01% 0.01% bench bench [.] btf__str_by_offset
0.01% 0.01% bench bench [.] btf_vlen
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench [kernel.kallsyms] [k] lock_mm_and_find_vma
0.01% 0.00% bench [kernel.kallsyms] [k] zap_pte_range
0.01% 0.00% bench [kernel.kallsyms] [k] set_memory_rox
0.01% 0.00% bench [kernel.kallsyms] [k] change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] do_open
0.01% 0.00% bench [kernel.kallsyms] [k] do_dentry_open
0.01% 0.00% bench [kernel.kallsyms] [k] v9fs_file_open
0.01% 0.00% bench [kernel.kallsyms] [k] __change_page_attr
0.01% 0.00% bench [kernel.kallsyms] [k] __pte_offset_map_lock
0.01% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.01% 0.00% bench [kernel.kallsyms] [k] __vmalloc_area_node
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.01% 0.00% bench [kernel.kallsyms] [k] p9_virtio_request
0.00% 0.00% bench [kernel.kallsyms] [k] lookup_address_in_pgd
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_pages_pte_range
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.00% 0.00% bench [kernel.kallsyms] [k] __alloc_pages_bulk
0.00% 0.00% bench [kernel.kallsyms] [k] default_send_IPI_allbutself
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_lookup_ip
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_prefixes.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] rmqueue
0.00% 0.00% bench [kernel.kallsyms] [k] in_lock_functions
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_check_record
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_test_record
0.00% 0.00% bench [kernel.kallsyms] [k] mas_walk
0.00% 0.00% bench [kernel.kallsyms] [k] __mod_memcg_lruvec_state
0.00% 0.00% bench [kernel.kallsyms] [k] kmem_cache_alloc_lru
0.00% 0.00% bench [kernel.kallsyms] [k] perf_output_begin
0.00% 0.00% bench [kernel.kallsyms] [k] iput
0.00% 0.00% bench [kernel.kallsyms] [k] mas_alloc_nodes
0.00% 0.00% bench [kernel.kallsyms] [k] memcpy_orig
0.00% 0.00% bench [kernel.kallsyms] [k] tmigr_handle_remote
0.00% 0.00% bench bench [.] bpf_object__relocate
0.00% 0.00% bench bench [.] btf_strs_data
0.00% 0.00% bench bench [.] bpf_program_fixup_func_info
0.00% 0.00% bench bench [.] libbpf_add_mem
0.00% 0.00% bench bench [.] probe_kern_arg_ctx_tag
0.00% 0.00% bench [kernel.kallsyms] [k] zap_present_ptes
0.00% 0.00% bench [unknown] [k] 0xffffffffa0205089
0.00% 0.00% bench libc.so.6 [.] _int_realloc
0.00% 0.00% bench [unknown] [k] 0x31392e3033202820
0.00% 0.00% bench libc.so.6 [.] __GI___libc_write
0.00% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.00% 0.00% bench libc.so.6 [.] __munmap
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench libc.so.6 [.] __brk
0.00% 0.00% bench libc.so.6 [.] clone3
0.00% 0.00% bench libc.so.6 [.] __strcmp_sse2
0.00% 0.00% bench [kernel.kallsyms] [k] ksys_write
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_release
0.00% 0.00% bench [kernel.kallsyms] [k] check_kprobe_address_safe
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mm
0.00% 0.00% bench [kernel.kallsyms] [k] vfs_write
0.00% 0.00% bench [kernel.kallsyms] [k] __do_sys_brk
0.00% 0.00% bench [kernel.kallsyms] [k] __do_sys_clone3
0.00% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_alloc_no_stats
0.00% 0.00% bench [kernel.kallsyms] [k] bpf_prog_put_deferred
0.00% 0.00% bench [kernel.kallsyms] [k] cpa_process_alias
0.00% 0.00% bench [kernel.kallsyms] [k] file_tty_write.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] jump_label_text_reserved
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench [kernel.kallsyms] [k] open_last_lookups
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node
0.00% 0.00% bench [kernel.kallsyms] [k] arch_jump_entry_size
0.00% 0.00% bench [kernel.kallsyms] [k] do_brk_flags
0.00% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] idr_alloc_cyclic
0.00% 0.00% bench [kernel.kallsyms] [k] iterate_tty_write
0.00% 0.00% bench [kernel.kallsyms] [k] kernel_clone
0.00% 0.00% bench [kernel.kallsyms] [k] lookup_open.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] module_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_event
0.00% 0.00% bench [kernel.kallsyms] [k] v9fs_dir_release
0.00% 0.00% bench bench [.] collect_measurements
0.00% 0.00% bench libc.so.6 [.] __strnlen_ifunc
0.00% 0.00% bench [kernel.kallsyms] [k] copy_process
0.00% 0.00% bench [kernel.kallsyms] [k] d_alloc_parallel
0.00% 0.00% bench [kernel.kallsyms] [k] idr_alloc_u32
0.00% 0.00% bench [kernel.kallsyms] [k] insn_decode
0.00% 0.00% bench [kernel.kallsyms] [k] mas_store_gfp
0.00% 0.00% bench [kernel.kallsyms] [k] n_tty_write
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_clunk
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_open
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_walk
0.00% 0.00% bench [kernel.kallsyms] [k] perf_iterate_sb
0.00% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.00% 0.00% bench [unknown] [.] 0x0000000000000040
0.00% 0.00% bench libc.so.6 [.] __mpn_extract_double
0.00% 0.00% bench [kernel.kallsyms] [k] __split_large_page
0.00% 0.00% bench [kernel.kallsyms] [k] d_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] dput
0.00% 0.00% bench [kernel.kallsyms] [k] dup_task_struct
0.00% 0.00% bench [kernel.kallsyms] [k] find_vma
0.00% 0.00% bench [kernel.kallsyms] [k] idr_get_free
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_displacement
0.00% 0.00% bench [kernel.kallsyms] [k] mas_wr_bnode
0.00% 0.00% bench [kernel.kallsyms] [k] perf_iterate_ctx
0.00% 0.00% bench [kernel.kallsyms] [k] process_output_block
0.00% 0.00% bench [kernel.kallsyms] [k] wp_page_copy
0.00% 0.00% bench bench [.] bpf_object__open_skeleton
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_sysdep_start
0.00% 0.00% bench libc.so.6 [.] __restore_rt
0.00% 0.00% bench [unknown] [.] 0x000055e503ff7c50
0.00% 0.00% bench libc.so.6 [.] _IO_file_xsgetn
0.00% 0.00% bench [kernel.kallsyms] [k] __anon_vma_prepare
0.00% 0.00% bench [kernel.kallsyms] [k] __d_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] __dentry_kill
0.00% 0.00% bench [kernel.kallsyms] [k] __ftrace_hash_rec_update.part.0
0.00% 0.00% bench [kernel.kallsyms] [k] __lruvec_stat_mod_folio
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_pages_bulk_array_mempolicy
0.00% 0.00% bench [kernel.kallsyms] [k] alloc_thread_stack_node
0.00% 0.00% bench [kernel.kallsyms] [k] btf_vmlinux_read
0.00% 0.00% bench [kernel.kallsyms] [k] insn_get_modrm
0.00% 0.00% bench [kernel.kallsyms] [k] lock_vma_under_rcu
0.00% 0.00% bench [kernel.kallsyms] [k] mas_split.isra.0
0.00% 0.00% bench [kernel.kallsyms] [k] mt_find
0.00% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_output
0.00% 0.00% bench [kernel.kallsyms] [k] preempt_count_add
0.00% 0.00% bench [kernel.kallsyms] [k] prepare_to_wait_event
0.00% 0.00% bench [kernel.kallsyms] [k] radix_tree_node_alloc.constprop.0
0.00% 0.00% bench [kernel.kallsyms] [k] smp_call_function
0.00% 0.00% bench [kernel.kallsyms] [k] uart_write
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_small_pages_range_noflush
0.00% 0.00% bench bench [.] populate_skeleton_progs
0.00% 0.00% bench bench [.] sigalarm_handler
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] dl_main
0.00% 0.00% bench libc.so.6 [.] __vfprintf_internal
#
# (Tip: Show individual samples with: perf script)
#
[-- Attachment #4: perf-out-kretprobe-patched.txt --]
[-- Type: text/plain, Size: 66042 bytes --]
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 32K of event 'task-clock:ppp'
# Event count (approx.): 8042250000
#
# Children Self Command Shared Object Symbol
# ........ ........ ....... ................................................. .....................................................
#
99.52% 0.00% bench libc.so.6 [.] start_thread
|
---start_thread
|
|--97.57%--syscall
| |
| |--59.31%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.37%--__x64_sys_getpgid
| | | |
| | | |--12.79%--ftrace_trampoline
| | | | |
| | | | --10.73%--kprobe_ftrace_handler
| | | | |
| | | | |--6.03%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.10%--objpool_pop
| | | | | |
| | | | | --1.86%--rethook_try_get
| | | | |
| | | | |--2.00%--rcu_is_watching
| | | | |
| | | | --0.50%--get_kprobe
| | | |
| | | --6.29%--do_getpgid
| | | |
| | | --5.54%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.52%--__radix_tree_lookup
| | |
| | |--13.87%--arch_rethook_trampoline
| | | |
| | | --12.14%--arch_rethook_trampoline_callback
| | | |
| | | --11.91%--rethook_trampoline_handler
| | | |
| | | |--10.24%--kretprobe_rethook_handler
| | | | |
| | | | --9.28%--kretprobe_dispatcher
| | | | |
| | | | --6.35%--kretprobe_perf_func
| | | | |
| | | | --5.99%--trace_call_bpf
| | | | |
| | | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.95%--migrate_disable
| | | |
| | | --0.95%--objpool_push
| | |
| | --0.53%--syscall_exit_to_user_mode
| |
| --4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.75%--syscall@plt
97.63% 33.61% bench libc.so.6 [.] syscall
|
|--64.02%--syscall
| |
| |--59.37%--entry_SYSCALL_64_after_hwframe
| | do_syscall_64
| | |
| | |--19.37%--__x64_sys_getpgid
| | | |
| | | |--12.79%--ftrace_trampoline
| | | | |
| | | | --10.73%--kprobe_ftrace_handler
| | | | |
| | | | |--6.03%--pre_handler_kretprobe
| | | | | |
| | | | | |--3.10%--objpool_pop
| | | | | |
| | | | | --1.86%--rethook_try_get
| | | | |
| | | | |--2.00%--rcu_is_watching
| | | | |
| | | | --0.50%--get_kprobe
| | | |
| | | --6.29%--do_getpgid
| | | |
| | | --5.54%--find_task_by_vpid
| | | |
| | | |--2.01%--idr_find
| | | |
| | | --1.52%--__radix_tree_lookup
| | |
| | |--13.87%--arch_rethook_trampoline
| | | |
| | | --12.14%--arch_rethook_trampoline_callback
| | | |
| | | --11.91%--rethook_trampoline_handler
| | | |
| | | |--10.24%--kretprobe_rethook_handler
| | | | |
| | | | --9.28%--kretprobe_dispatcher
| | | | |
| | | | --6.35%--kretprobe_perf_func
| | | | |
| | | | --5.99%--trace_call_bpf
| | | | |
| | | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | | |
| | | | --0.95%--migrate_disable
| | | |
| | | --0.95%--objpool_push
| | |
| | --0.53%--syscall_exit_to_user_mode
| |
| --4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--33.61%--start_thread
syscall
59.54% 0.00% bench [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
|
---entry_SYSCALL_64_after_hwframe
do_syscall_64
|
|--19.37%--__x64_sys_getpgid
| |
| |--12.79%--ftrace_trampoline
| | |
| | --10.73%--kprobe_ftrace_handler
| | |
| | |--6.03%--pre_handler_kretprobe
| | | |
| | | |--3.10%--objpool_pop
| | | |
| | | --1.86%--rethook_try_get
| | |
| | |--2.00%--rcu_is_watching
| | |
| | --0.50%--get_kprobe
| |
| --6.29%--do_getpgid
| |
| --5.54%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.52%--__radix_tree_lookup
|
|--13.87%--arch_rethook_trampoline
| |
| --12.14%--arch_rethook_trampoline_callback
| |
| --11.91%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--0.53%--syscall_exit_to_user_mode
59.54% 25.54% bench [kernel.kallsyms] [k] do_syscall_64
|
|--34.00%--do_syscall_64
| |
| |--19.37%--__x64_sys_getpgid
| | |
| | |--12.79%--ftrace_trampoline
| | | |
| | | --10.73%--kprobe_ftrace_handler
| | | |
| | | |--6.03%--pre_handler_kretprobe
| | | | |
| | | | |--3.10%--objpool_pop
| | | | |
| | | | --1.86%--rethook_try_get
| | | |
| | | |--2.00%--rcu_is_watching
| | | |
| | | --0.50%--get_kprobe
| | |
| | --6.29%--do_getpgid
| | |
| | --5.54%--find_task_by_vpid
| | |
| | |--2.01%--idr_find
| | |
| | --1.52%--__radix_tree_lookup
| |
| |--13.87%--arch_rethook_trampoline
| | |
| | --12.14%--arch_rethook_trampoline_callback
| | |
| | --11.91%--rethook_trampoline_handler
| | |
| | |--10.24%--kretprobe_rethook_handler
| | | |
| | | --9.28%--kretprobe_dispatcher
| | | |
| | | --6.35%--kretprobe_perf_func
| | | |
| | | --5.99%--trace_call_bpf
| | | |
| | | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | | |
| | | --0.95%--migrate_disable
| | |
| | --0.95%--objpool_push
| |
| --0.53%--syscall_exit_to_user_mode
|
--25.54%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
19.40% 0.29% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
--19.11%--__x64_sys_getpgid
|
|--12.82%--ftrace_trampoline
| |
| --10.76%--kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--6.29%--do_getpgid
|
--5.54%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.52%--__radix_tree_lookup
13.91% 1.77% bench [kernel.kallsyms] [k] arch_rethook_trampoline
|
|--12.14%--arch_rethook_trampoline
| arch_rethook_trampoline_callback
| |
| --11.91%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--1.77%--start_thread
syscall
|
--1.73%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
12.82% 2.06% bench ftrace_trampoline [k] ftrace_trampoline
|
|--10.76%--ftrace_trampoline
| kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--2.06%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
12.14% 0.23% bench [kernel.kallsyms] [k] arch_rethook_trampoline_callback
|
--11.91%--arch_rethook_trampoline_callback
rethook_trampoline_handler
|
|--10.24%--kretprobe_rethook_handler
| |
| --9.28%--kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--0.95%--objpool_push
11.91% 0.69% bench [kernel.kallsyms] [k] rethook_trampoline_handler
|
|--11.22%--rethook_trampoline_handler
| |
| |--10.24%--kretprobe_rethook_handler
| | |
| | --9.28%--kretprobe_dispatcher
| | |
| | --6.35%--kretprobe_perf_func
| | |
| | --5.99%--trace_call_bpf
| | |
| | |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| | |
| | --0.95%--migrate_disable
| |
| --0.95%--objpool_push
|
--0.69%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
10.76% 2.20% bench [kernel.kallsyms] [k] kprobe_ftrace_handler
|
|--8.55%--kprobe_ftrace_handler
| |
| |--6.06%--pre_handler_kretprobe
| | |
| | |--3.10%--objpool_pop
| | |
| | --1.86%--rethook_try_get
| |
| |--2.00%--rcu_is_watching
| |
| --0.50%--get_kprobe
|
--2.20%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
10.24% 0.96% bench [kernel.kallsyms] [k] kretprobe_rethook_handler
|
|--9.28%--kretprobe_rethook_handler
| kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--0.96%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
9.28% 2.85% bench [kernel.kallsyms] [k] kretprobe_dispatcher
|
|--6.43%--kretprobe_dispatcher
| |
| --6.35%--kretprobe_perf_func
| |
| --5.99%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--2.85%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
6.35% 0.36% bench [kernel.kallsyms] [k] kretprobe_perf_func
|
--5.99%--kretprobe_perf_func
trace_call_bpf
|
|--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--0.95%--migrate_disable
6.29% 0.27% bench [kernel.kallsyms] [k] do_getpgid
|
--6.02%--do_getpgid
|
--5.54%--find_task_by_vpid
|
|--2.01%--idr_find
|
--1.52%--__radix_tree_lookup
6.23% 6.23% bench bpf_prog_21856463590f61f1_bench_trigger_kretprobe [k] bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
---start_thread
syscall
|
|--4.37%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
|
--1.86%--entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
bpf_prog_21856463590f61f1_bench_trigger_kretprobe
6.06% 0.89% bench [kernel.kallsyms] [k] pre_handler_kretprobe
|
|--5.17%--pre_handler_kretprobe
| |
| |--3.10%--objpool_pop
| |
| --1.86%--rethook_try_get
|
--0.89%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
5.99% 2.67% bench [kernel.kallsyms] [k] trace_call_bpf
|
|--3.32%--trace_call_bpf
| |
| |--1.86%--bpf_prog_21856463590f61f1_bench_trigger_kretprobe
| |
| --0.95%--migrate_disable
|
--2.67%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
5.54% 1.97% bench [kernel.kallsyms] [k] find_task_by_vpid
|
|--3.57%--find_task_by_vpid
| |
| |--2.01%--idr_find
| |
| --1.52%--__radix_tree_lookup
|
--1.97%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
3.10% 3.10% bench [kernel.kallsyms] [k] objpool_pop
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
objpool_pop
2.08% 2.08% bench [kernel.kallsyms] [k] rcu_is_watching
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
|
--2.00%--rcu_is_watching
2.01% 2.01% bench [kernel.kallsyms] [k] idr_find
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
idr_find
1.86% 1.78% bench [kernel.kallsyms] [k] rethook_try_get
|
--1.78%--start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
pre_handler_kretprobe
rethook_try_get
1.75% 1.75% bench bench [.] syscall@plt
|
---start_thread
syscall@plt
1.52% 1.52% bench [kernel.kallsyms] [k] __radix_tree_lookup
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
do_getpgid
find_task_by_vpid
__radix_tree_lookup
0.95% 0.95% bench [kernel.kallsyms] [k] objpool_push
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
objpool_push
0.95% 0.95% bench [kernel.kallsyms] [k] migrate_disable
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
arch_rethook_trampoline
arch_rethook_trampoline_callback
rethook_trampoline_handler
kretprobe_rethook_handler
kretprobe_dispatcher
kretprobe_perf_func
trace_call_bpf
migrate_disable
0.53% 0.53% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
syscall_exit_to_user_mode
0.50% 0.50% bench [kernel.kallsyms] [k] get_kprobe
|
---start_thread
syscall
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_getpgid
ftrace_trampoline
kprobe_ftrace_handler
get_kprobe
0.44% 0.44% bench [kernel.kallsyms] [k] __rcu_read_lock
0.35% 0.35% bench [kernel.kallsyms] [k] migrate_enable
0.29% 0.29% bench [kernel.kallsyms] [k] __x86_indirect_thunk_array
0.28% 0.00% bench libc.so.6 [.] __libc_start_call_main
0.28% 0.00% bench bench [.] main
0.28% 0.00% bench bench [.] setup_benchmark
0.27% 0.00% bench bench [.] trigger_kretprobe_setup
0.25% 0.00% bench bench [.] trigger_bench__open_and_load
0.25% 0.00% bench bench [.] bpf_object__load_skeleton
0.25% 0.00% bench bench [.] bpf_object__load
0.25% 0.00% bench bench [.] bpf_object_load
0.21% 0.21% bench [kernel.kallsyms] [k] __rcu_read_unlock
0.20% 0.14% bench [kernel.kallsyms] [k] rethook_hook
0.20% 0.20% bench bench [.] trigger_producer
0.14% 0.00% bench [kernel.kallsyms] [k] asm_exc_page_fault
0.14% 0.00% bench [kernel.kallsyms] [k] exc_page_fault
0.14% 0.01% bench [kernel.kallsyms] [k] do_user_addr_fault
0.14% 0.00% bench bench [.] libbpf_find_kernel_btf
0.13% 0.00% bench bench [.] bpf_object__load_vmlinux_btf
0.13% 0.00% bench bench [.] btf__parse
0.13% 0.00% bench bench [.] btf_parse
0.13% 0.00% bench bench [.] btf_parse_raw
0.13% 0.00% bench [kernel.kallsyms] [k] handle_mm_fault
0.13% 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.11% 0.00% bench bench [.] btf_new
0.10% 0.00% bench [unknown] [k] 0000000000000000
0.10% 0.00% bench [kernel.kallsyms] [k] do_anonymous_page
0.10% 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.10% 0.00% bench libc.so.6 [.] read
0.10% 0.00% bench [kernel.kallsyms] [k] ksys_read
0.10% 0.00% bench [kernel.kallsyms] [k] vfs_read
0.10% 0.00% bench [kernel.kallsyms] [k] rep_movs_alternative
0.10% 0.00% bench [kernel.kallsyms] [k] __alloc_pages
0.09% 0.00% bench [kernel.kallsyms] [k] kernfs_file_read_iter
0.09% 0.00% bench [kernel.kallsyms] [k] _copy_to_iter
0.09% 0.09% bench [kernel.kallsyms] [k] clear_page_orig
0.09% 0.00% bench [kernel.kallsyms] [k] alloc_pages_mpol
0.08% 0.00% bench [kernel.kallsyms] [k] vma_alloc_folio
0.07% 0.00% bench bench [.] bpf_object__load_progs
0.07% 0.00% bench bench [.] bpf_object_load_prog
0.07% 0.01% bench bench [.] btf_sanity_check
0.07% 0.00% bench bench [.] libbpf_prepare_prog_load
0.07% 0.00% bench bench [.] libbpf_find_attach_btf_id
0.07% 0.00% bench bench [.] find_kernel_btf_id
0.07% 0.00% bench bench [.] find_attach_btf_id
0.07% 0.00% bench bench [.] find_btf_by_prefix_kind
0.07% 0.00% bench bench [.] btf__find_by_name_kind
0.07% 0.07% bench [kernel.kallsyms] [k] arch_rethook_prepare
0.06% 0.01% bench bench [.] btf_validate_type
0.06% 0.02% bench bench [.] btf_find_by_name_kind
0.05% 0.05% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.04% 0.00% bench libc.so.6 [.] __GI___libc_write
0.04% 0.00% bench [kernel.kallsyms] [k] ksys_write
0.04% 0.00% bench [kernel.kallsyms] [k] vfs_write
0.04% 0.00% bench [kernel.kallsyms] [k] file_tty_write.constprop.0
0.04% 0.00% bench [kernel.kallsyms] [k] iterate_tty_write
0.04% 0.00% bench [kernel.kallsyms] [k] n_tty_write
0.04% 0.00% bench [kernel.kallsyms] [k] uart_write
0.04% 0.00% bench [kernel.kallsyms] [k] process_output_block
0.03% 0.00% bench [kernel.kallsyms] [k] __x64_sys_bpf
0.03% 0.02% bench bench [.] btf__type_by_id
0.03% 0.00% bench [kernel.kallsyms] [k] __sys_bpf
0.03% 0.03% bench [kernel.kallsyms] [k] arch_rethook_fixup_return
0.03% 0.00% bench bench [.] feat_supported
0.03% 0.00% bench bench [.] sys_bpf_fd
0.03% 0.00% bench [kernel.kallsyms] [k] bpf_prog_load
0.03% 0.02% bench bench [.] btf_parse_type_sec
0.03% 0.03% bench [kernel.kallsyms] [k] radix_tree_lookup
0.03% 0.00% bench bench [.] bpf_program__attach
0.03% 0.00% bench [kernel.kallsyms] [k] do_pte_missing
0.03% 0.02% bench libc.so.6 [.] __memmove_sse2_unaligned_erms
0.03% 0.00% bench [kernel.kallsyms] [k] ftrace_modify_all_code
0.02% 0.00% bench bench [.] attach_kprobe
0.02% 0.00% bench bench [.] bpf_program__attach_kprobe_opts
0.02% 0.00% bench [kernel.kallsyms] [k] do_read_fault
0.02% 0.00% bench [kernel.kallsyms] [k] __do_fault
0.02% 0.00% bench [kernel.kallsyms] [k] 0xffffffff8108dbc8
0.02% 0.00% bench [kernel.kallsyms] [k] __do_sys_perf_event_open
0.02% 0.00% bench [kernel.kallsyms] [k] do_group_exit
0.02% 0.00% bench [kernel.kallsyms] [k] do_exit
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_replace_code
0.02% 0.00% bench [kernel.kallsyms] [k] perf_init_event
0.02% 0.00% bench [kernel.kallsyms] [k] __fput
0.02% 0.00% bench [kernel.kallsyms] [k] perf_try_init_event
0.02% 0.02% bench bench [.] btf_type_by_id
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_event_init
0.02% 0.02% bench bench [.] btf_kind
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_init
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_select_runtime
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_int_jit_compile
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_jit_binary_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] bpf_prog_pack_alloc
0.02% 0.00% bench [kernel.kallsyms] [k] filemap_fault
0.02% 0.00% bench [kernel.kallsyms] [k] alloc_new_pack
0.02% 0.01% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.02% 0.00% bench bench [.] bpf_object__probe_loading
0.02% 0.00% bench bench [.] sys_bpf_prog_load
0.02% 0.00% bench [kernel.kallsyms] [k] task_work_run
0.02% 0.01% bench bench [.] btf_validate_str
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_init
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_reg
0.02% 0.00% bench [kernel.kallsyms] [k] filemap_read_folio
0.02% 0.00% bench [kernel.kallsyms] [k] netfs_read_folio
0.02% 0.00% bench [kernel.kallsyms] [k] netfs_begin_read
0.02% 0.00% bench [kernel.kallsyms] [k] on_each_cpu_cond_mask
0.02% 0.00% bench bench [.] kernel_supports
0.02% 0.01% bench bench [.] btf__str_by_offset
0.02% 0.01% bench libc.so.6 [.] __strcmp_sse2
0.02% 0.00% bench [kernel.kallsyms] [k] perf_release
0.02% 0.00% bench [kernel.kallsyms] [k] perf_event_release_kernel
0.02% 0.00% bench [kernel.kallsyms] [k] _free_event
0.02% 0.00% bench [kernel.kallsyms] [k] perf_kprobe_destroy
0.02% 0.00% bench [kernel.kallsyms] [k] enable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] perf_trace_event_unreg.isra.0
0.02% 0.00% bench [kernel.kallsyms] [k] disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] enable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] __arm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_trace_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] __disable_kprobe
0.02% 0.00% bench [kernel.kallsyms] [k] register_ftrace_function_nolock
0.02% 0.00% bench [kernel.kallsyms] [k] __disarm_kprobe_ftrace
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_startup
0.02% 0.00% bench [kernel.kallsyms] [k] unregister_ftrace_function
0.02% 0.00% bench [kernel.kallsyms] [k] ftrace_shutdown.part.0
0.02% 0.00% bench [kernel.kallsyms] [k] v9fs_issue_read
0.02% 0.00% bench [kernel.kallsyms] [k] p9_client_read
0.02% 0.00% bench [kernel.kallsyms] [k] p9_client_read_once
0.01% 0.01% bench [kernel.kallsyms] [k] memset_orig
0.01% 0.00% bench bench [.] bump_rlimit_memlock
0.01% 0.00% bench bench [.] probe_memcg_account
0.01% 0.01% bench bench [.] btf_vlen
0.01% 0.00% bench [kernel.kallsyms] [k] set_memory_rox
0.01% 0.00% bench [kernel.kallsyms] [k] change_page_attr_set_clr
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_zc_rpc.constprop.0
0.01% 0.00% bench [kernel.kallsyms] [k] p9_virtio_zc_request
0.01% 0.00% bench bench [.] bpf_object__sanitize_maps
0.01% 0.00% bench bench [.] bpf_prog_load
0.01% 0.00% bench bench [.] probe_kern_array_mmap
0.01% 0.00% bench bench [.] bpf_map_create
0.01% 0.00% bench bench [.] probe_kern_prog_name
0.01% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_map_object
0.01% 0.00% bench bench [.] btf_validate_id
0.01% 0.01% bench [kernel.kallsyms] [k] default_send_IPI_allbutself
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_check_record
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.01% 0.01% bench [kernel.kallsyms] [k] default_send_IPI_self
0.01% 0.01% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
0.01% 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_record
0.01% 0.00% bench [kernel.kallsyms] [k] p9_get_mapped_pages.part.0.constprop.0
0.01% 0.01% bench bench [.] btf_strs_data
0.01% 0.01% bench [kernel.kallsyms] [k] mem_cgroup_commit_charge
0.01% 0.01% bench bench [.] btf_add_type_offs_mem
0.01% 0.00% bench bench [.] btf_type_size
0.01% 0.00% bench [unknown] [k] 0x000055fcb8980c50
0.01% 0.00% bench [unknown] [k] 0x32322e3239312820
0.01% 0.00% bench [unknown] [k] 0x35372e30312d2820
0.01% 0.00% bench [unknown] [k] 0x38392e3820202820
0.01% 0.00% bench [kernel.kallsyms] [k] zap_pte_range
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_release
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_prog_put_deferred
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_event
0.01% 0.00% bench [kernel.kallsyms] [k] perf_iterate_sb
0.01% 0.00% bench [kernel.kallsyms] [k] bpf_check
0.01% 0.00% bench [kernel.kallsyms] [k] do_vmi_align_munmap.constprop.0
0.01% 0.00% bench [kernel.kallsyms] [k] perf_event_bpf_output
0.01% 0.00% bench [kernel.kallsyms] [k] kmalloc_large
0.01% 0.00% bench [kernel.kallsyms] [k] perf_output_end
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_region
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_vmas
0.01% 0.00% bench [kernel.kallsyms] [k] __kmalloc_large_node
0.01% 0.00% bench [kernel.kallsyms] [k] p9_client_rpc
0.01% 0.00% bench [kernel.kallsyms] [k] perf_output_put_handle
0.01% 0.00% bench [kernel.kallsyms] [k] rmqueue
0.01% 0.00% bench [kernel.kallsyms] [k] text_poke_bp_batch
0.01% 0.00% bench [kernel.kallsyms] [k] unmap_page_range
0.01% 0.00% bench [kernel.kallsyms] [k] cpa_flush
0.01% 0.00% bench [kernel.kallsyms] [k] do_output_char
0.01% 0.00% bench [kernel.kallsyms] [k] irq_work_queue
0.01% 0.00% bench [kernel.kallsyms] [k] schedule
0.01% 0.01% bench libc.so.6 [.] _IO_file_xsgetn
0.01% 0.00% bench [kernel.kallsyms] [k] __mem_cgroup_charge
0.01% 0.00% bench [kernel.kallsyms] [k] __schedule
0.01% 0.00% bench [kernel.kallsyms] [k] arch_irq_work_raise
0.01% 0.00% bench bench [.] btf__name_by_offset
0.00% 0.00% bench [kernel.kallsyms] [k] allocate_slab
0.00% 0.00% bench [kernel.kallsyms] [k] _raw_spin_trylock
0.00% 0.00% bench [kernel.kallsyms] [k] iter_xarray_get_pages
0.00% 0.00% bench [unknown] [k] 0x0000000000000040
0.00% 0.00% bench bench [.] bpf_object__create_maps
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_sysdep_start
0.00% 0.00% bench [kernel.kallsyms] [k] percpu_counter_add_batch
0.00% 0.00% bench [kernel.kallsyms] [k] within_kprobe_blacklist
0.00% 0.00% bench bench [.] bpf_object__populate_internal_map
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] dl_main
0.00% 0.00% bench libm.so.6 [.] __sqrt
0.00% 0.00% bench bench [.] bpf_map_update_elem
0.00% 0.00% bench bench [.] bpf_object__relocate
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] _dl_relocate_object
0.00% 0.00% bench [unknown] [k] 0x000000000000003f
0.00% 0.00% bench bench [.] bpf_program_fixup_func_info
0.00% 0.00% bench bench [.] bpf_program__attach_perf_event_opts
0.00% 0.00% bench [kernel.kallsyms] [k] folio_add_lru
0.00% 0.00% bench [kernel.kallsyms] [k] iov_iter_advance
0.00% 0.00% bench [kernel.kallsyms] [k] strncpy_from_user
0.00% 0.00% bench bench [.] probe_kern_arg_ctx_tag
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___read_nocancel
0.00% 0.00% bench libc.so.6 [.] __close
0.00% 0.00% bench libc.so.6 [.] __printf_fp
0.00% 0.00% bench [kernel.kallsyms] [k] folio_lruvec_lock_irqsave
0.00% 0.00% bench [kernel.kallsyms] [k] mas_walk
0.00% 0.00% bench [kernel.kallsyms] [k] map_update_elem
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] __GI___open64_nocancel
0.00% 0.00% bench [kernel.kallsyms] [k] folio_mark_accessed
0.00% 0.00% bench [kernel.kallsyms] [k] _copy_from_user
0.00% 0.00% bench ld-linux-x86-64.so.2 [.] mmap64
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_close
0.00% 0.00% bench libc.so.6 [.] _int_realloc
0.00% 0.00% bench [unknown] [k] 0x2020207374696820
0.00% 0.00% bench [unknown] [k] 0x2d6769727427206b
0.00% 0.00% bench [unknown] [k] 0x31342e33312d2820
0.00% 0.00% bench [unknown] [k] 0x31382e3631202820
0.00% 0.00% bench [unknown] [k] 0x33372e34332d2820
0.00% 0.00% bench [unknown] [k] 0x33392e3531202820
0.00% 0.00% bench [unknown] [k] 0x38342e36312d2820
0.00% 0.00% bench [unknown] [k] 0x68636e6562207075
0.00% 0.00% bench [kernel.kallsyms] [k] xas_find
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_openat
0.00% 0.00% bench bench [.] _start
0.00% 0.00% bench [kernel.kallsyms] [k] do_sys_openat2
0.00% 0.00% bench [kernel.kallsyms] [k] ksys_mmap_pgoff
0.00% 0.00% bench [kernel.kallsyms] [k] module_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_unbuffered_read_iter
0.00% 0.00% bench libc.so.6 [.] __munmap
0.00% 0.00% bench [kernel.kallsyms] [k] __register_ftrace_function
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_node_range
0.00% 0.00% bench [kernel.kallsyms] [k] do_filp_open
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_unbuffered_read_iter_locked
0.00% 0.00% bench [kernel.kallsyms] [k] vm_mmap_pgoff
0.00% 0.00% bench libc.so.6 [.] _int_malloc
0.00% 0.00% bench libc.so.6 [.] __strlen_sse2
0.00% 0.00% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
0.00% 0.00% bench [kernel.kallsyms] [k] __vmalloc_area_node
0.00% 0.00% bench [kernel.kallsyms] [k] do_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mm
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] path_openat
0.00% 0.00% bench [kernel.kallsyms] [k] __x64_sys_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] arch_ftrace_update_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] cpa_process_alias
0.00% 0.00% bench [kernel.kallsyms] [k] do_open
0.00% 0.00% bench [kernel.kallsyms] [k] mmap_region
0.00% 0.00% bench [kernel.kallsyms] [k] mmput
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_small_pages_range_noflush
0.00% 0.00% bench [kernel.kallsyms] [k] __vm_munmap
0.00% 0.00% bench [kernel.kallsyms] [k] create_trampoline
0.00% 0.00% bench [kernel.kallsyms] [k] do_dentry_open
0.00% 0.00% bench [kernel.kallsyms] [k] exit_mmap
0.00% 0.00% bench [kernel.kallsyms] [k] ftrace_update_ftrace_func
0.00% 0.00% bench [kernel.kallsyms] [k] vmap_pages_pte_range
0.00% 0.00% bench [kernel.kallsyms] [k] __change_page_attr
0.00% 0.00% bench [kernel.kallsyms] [k] __pte_alloc_kernel
0.00% 0.00% bench [kernel.kallsyms] [k] create_local_trace_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_bp
0.00% 0.00% bench [kernel.kallsyms] [k] v9fs_file_open
0.00% 0.00% bench [kernel.kallsyms] [k] __filemap_get_folio
0.00% 0.00% bench [kernel.kallsyms] [k] __split_large_page
0.00% 0.00% bench [kernel.kallsyms] [k] _vm_unmap_aliases
0.00% 0.00% bench [kernel.kallsyms] [k] lru_add_drain
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_alloc_request
0.00% 0.00% bench [kernel.kallsyms] [k] p9_client_open
0.00% 0.00% bench [kernel.kallsyms] [k] register_kretprobe
0.00% 0.00% bench [kernel.kallsyms] [k] text_poke_finish
0.00% 0.00% bench libc.so.6 [.] __GI___printf_fp_l
0.00% 0.00% bench [kernel.kallsyms] [k] __kmalloc
0.00% 0.00% bench [kernel.kallsyms] [k] __purge_vmap_area_lazy
0.00% 0.00% bench [kernel.kallsyms] [k] __rmqueue_pcplist
0.00% 0.00% bench [kernel.kallsyms] [k] filemap_map_pages
0.00% 0.00% bench [kernel.kallsyms] [k] lru_add_drain_cpu
0.00% 0.00% bench [kernel.kallsyms] [k] register_kprobe
0.00% 0.00% bench [kernel.kallsyms] [k] shmem_fault
0.00% 0.00% bench bench [.] bpf_object__open_skeleton
0.00% 0.00% bench libc.so.6 [.] __unregister_atfork
0.00% 0.00% bench [kernel.kallsyms] [k] ___slab_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] check_kprobe_address_safe
0.00% 0.00% bench [kernel.kallsyms] [k] folio_batch_move_lru
0.00% 0.00% bench [kernel.kallsyms] [k] iov_iter_get_pages_alloc2
0.00% 0.00% bench [kernel.kallsyms] [k] lock_vma_under_rcu
0.00% 0.00% bench [kernel.kallsyms] [k] netfs_rreq_prepare_read
0.00% 0.00% bench [kernel.kallsyms] [k] next_uptodate_folio
0.00% 0.00% bench [kernel.kallsyms] [k] p9_virtio_request
0.00% 0.00% bench [kernel.kallsyms] [k] pcpu_alloc
0.00% 0.00% bench [kernel.kallsyms] [k] rmqueue_bulk
0.00% 0.00% bench [kernel.kallsyms] [k] shmem_get_folio_gfp
0.00% 0.00% bench [kernel.kallsyms] [k] zap_present_ptes
0.00% 0.00% bench bench [.] bpf_object__open_mem
0.00% 0.00% bench bench [.] btf_add_type_idx_entry
0.00% 0.00% bench libc.so.6 [.] __vfprintf_internal
0.00% 0.00% bench [unknown] [.] 0xdfac2c2953a319ce
#
# (Tip: To separate samples by time use perf report --sort time,overhead,sym)
#
^ permalink raw reply [relevance 1%]
* Re: [PATCH 04/11] filemap: add FGP_CREAT_ONLY
2024-04-25 5:52 0% ` Paolo Bonzini
@ 2024-04-29 13:26 0% ` Vlastimil Babka
0 siblings, 0 replies; 200+ results
From: Vlastimil Babka @ 2024-04-29 13:26 UTC (permalink / raw)
To: Paolo Bonzini, linux-kernel, kvm, Matthew Wilcox
Cc: seanjc, michael.roth, isaku.yamahata, Yosry Ahmed
On 4/25/24 7:52 AM, Paolo Bonzini wrote:
> On 4/4/24 20:50, Paolo Bonzini wrote:
>> KVM would like to add an ioctl to encrypt and install a page into private
>> memory (i.e. into a guest_memfd), in preparation for launching an
>> encrypted guest.
>>
>> This API should be used only once per page (unless there are failures),
>> so we want to rule out the possibility of operating on a page that is
>> already in the guest_memfd's filemap. Overwriting the page is almost
>> certainly a sign of a bug, so we might as well forbid it.
>>
>> Therefore, introduce a new flag for __filemap_get_folio (to be passed
>> together with FGP_CREAT) that allows *adding* a new page to the filemap
>> but not returning an existing one.
>>
>> An alternative possibility would be to force KVM users to initialize
>> the whole filemap in one go, but that is complicated by the fact that
>> the filemap includes pages of different kinds, including some that are
>> per-vCPU rather than per-VM. Basically the result would be closer to
>> a system call that multiplexes multiple ioctls, than to something
>> cleaner like readv/writev.
>>
>> Races between callers that pass FGP_CREAT_ONLY are uninteresting to
>> the filemap code: one of the racers wins and one fails with EEXIST,
>> similar to calling open(2) with O_CREAT|O_EXCL. It doesn't matter to
>> filemap.c if the missing synchronization is in the kernel or in userspace,
>> and in fact it could even be intentional. (In the case of KVM it turns
>> out that a mutex is taken around these calls for unrelated reasons,
>> so there can be no races.)
>>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Yosry Ahmed <yosryahmed@google.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Matthew, are your objections still valid or could I have your ack?
So per the sub-thread on PATCH 09/11, IIUC this is now moot, right?
Vlastimil
> Thanks,
>
> Paolo
>
>> ---
>> include/linux/pagemap.h | 2 ++
>> mm/filemap.c | 4 ++++
>> 2 files changed, 6 insertions(+)
>>
>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>> index f879c1d54da7..a8c0685e8c08 100644
>> --- a/include/linux/pagemap.h
>> +++ b/include/linux/pagemap.h
>> @@ -587,6 +587,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
>> * * %FGP_CREAT - If no folio is present then a new folio is allocated,
>> * added to the page cache and the VM's LRU list. The folio is
>> * returned locked.
>> + * * %FGP_CREAT_ONLY - Fail if a folio is present
>> * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
>> * folio is already in cache. If the folio was allocated, unlock it
>> * before returning so the caller can do the same dance.
>> @@ -607,6 +608,7 @@ typedef unsigned int __bitwise fgf_t;
>> #define FGP_NOWAIT ((__force fgf_t)0x00000020)
>> #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
>> #define FGP_STABLE ((__force fgf_t)0x00000080)
>> +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
>> #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
>>
>> #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 7437b2bd75c1..e7440e189ebd 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -1863,6 +1863,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
>> folio = NULL;
>> if (!folio)
>> goto no_page;
>> + if (fgp_flags & FGP_CREAT_ONLY) {
>> + folio_put(folio);
>> + return ERR_PTR(-EEXIST);
>> + }
>>
>> if (fgp_flags & FGP_LOCK) {
>> if (fgp_flags & FGP_NOWAIT) {
>
^ permalink raw reply [relevance 0%]
* [syzbot] [nilfs?] possible deadlock in nilfs_dirty_inode (3)
@ 2024-04-29 12:03 4% syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-04-29 12:03 UTC (permalink / raw)
To: konishi.ryusuke, linux-fsdevel, linux-kernel, linux-nilfs,
syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: e88c4cfcb7b8 Merge tag 'for-6.9-rc5-tag' of git://git.kern..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10f92380980000
kernel config: https://syzkaller.appspot.com/x/.config?x=19891bd776e81b8b
dashboard link: https://syzkaller.appspot.com/bug?extid=ca73f5a22aec76875d85
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-e88c4cfc.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3d83e80db525/vmlinux-e88c4cfc.xz
kernel image: https://storage.googleapis.com/syzbot-assets/847604848213/bzImage-e88c4cfc.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ca73f5a22aec76875d85@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc5-syzkaller-00042-ge88c4cfcb7b8 #0 Not tainted
------------------------------------------------------
kswapd0/110 is trying to acquire lock:
ffff88806d060610 (sb_internal#3){.+.+}-{0:0}, at: nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
but task is already holding lock:
ffffffff8d9373c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3712
might_alloc include/linux/sched/mm.h:312 [inline]
prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4346
__alloc_pages+0x194/0x2460 mm/page_alloc.c:4564
alloc_pages_mpol+0x275/0x610 mm/mempolicy.c:2264
folio_alloc+0x1e/0x40 mm/mempolicy.c:2342
filemap_alloc_folio+0x3ba/0x490 mm/filemap.c:984
__filemap_get_folio+0x52b/0xa90 mm/filemap.c:1926
pagecache_get_page+0x2c/0x260 mm/folio-compat.c:93
block_write_begin+0x38/0x4a0 fs/buffer.c:2209
nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
page_symlink+0x356/0x450 fs/namei.c:5229
nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
vfs_symlink fs/namei.c:4481 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4465
do_symlinkat+0x263/0x310 fs/namei.c:4507
__do_sys_symlink fs/namei.c:4528 [inline]
__se_sys_symlink fs/namei.c:4526 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4526
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
-> #1 (&nilfs->ns_segctor_sem){++++}-{3:3}:
down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
vfs_symlink fs/namei.c:4481 [inline]
vfs_symlink+0x3e8/0x630 fs/namei.c:4465
do_symlinkat+0x263/0x310 fs/namei.c:4507
__do_sys_symlink fs/namei.c:4528 [inline]
__se_sys_symlink fs/namei.c:4526 [inline]
__ia32_sys_symlink+0x78/0xa0 fs/namei.c:4526
do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
__do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
entry_SYSENTER_compat_after_hwframe+0x84/0x8e
-> #0 (sb_internal#3){.+.+}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1664 [inline]
sb_start_intwrite include/linux/fs.h:1847 [inline]
nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
__mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2477
mark_inode_dirty_sync include/linux/fs.h:2410 [inline]
iput.part.0+0x5b/0x7f0 fs/inode.c:1764
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Chain exists of:
sb_internal#3 --> &nilfs->ns_segctor_sem --> fs_reclaim
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(fs_reclaim);
lock(&nilfs->ns_segctor_sem);
lock(fs_reclaim);
rlock(sb_internal#3);
*** DEADLOCK ***
2 locks held by kswapd0/110:
#0: ffffffff8d9373c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1a10 mm/vmscan.c:6782
#1: ffff88806d0600e0 (&type->s_umount_key#55){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
#1: ffff88806d0600e0 (&type->s_umount_key#55){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196
stack backtrace:
CPU: 2 PID: 110 Comm: kswapd0 Not tainted 6.9.0-rc5-syzkaller-00042-ge88c4cfcb7b8 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
lock_acquire kernel/locking/lockdep.c:5754 [inline]
lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1664 [inline]
sb_start_intwrite include/linux/fs.h:1847 [inline]
nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
__mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2477
mark_inode_dirty_sync include/linux/fs.h:2410 [inline]
iput.part.0+0x5b/0x7f0 fs/inode.c:1764
iput+0x5c/0x80 fs/inode.c:1757
dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
__dentry_kill+0x1d0/0x600 fs/dcache.c:603
shrink_kill fs/dcache.c:1048 [inline]
shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
super_cache_scan+0x32a/0x550 fs/super.c:221
do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
shrink_slab_memcg mm/shrinker.c:548 [inline]
shrink_slab+0xa87/0x1310 mm/shrinker.c:626
shrink_one+0x493/0x7c0 mm/vmscan.c:4774
shrink_many mm/vmscan.c:4835 [inline]
lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4935
shrink_node mm/vmscan.c:5894 [inline]
kswapd_shrink_node mm/vmscan.c:6704 [inline]
balance_pgdat+0x10d1/0x1a10 mm/vmscan.c:6895
kswapd+0x5ea/0xbf0 mm/vmscan.c:7164
kthread+0x2c1/0x3a0 kernel/kthread.c:388
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply [relevance 4%]
* Re: [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache
2024-04-26 15:12 0% ` Darrick J. Wong
@ 2024-04-28 20:59 0% ` Pankaj Raghav (Samsung)
0 siblings, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-04-28 20:59 UTC (permalink / raw)
To: Darrick J. Wong
Cc: willy, brauner, david, chandan.babu, akpm, linux-fsdevel, hare,
linux-kernel, linux-mm, linux-xfs, mcgrof, gost.dev, p.raghav
On Fri, Apr 26, 2024 at 08:12:43AM -0700, Darrick J. Wong wrote:
> On Thu, Apr 25, 2024 at 01:37:38PM +0200, Pankaj Raghav (Samsung) wrote:
> > From: Luis Chamberlain <mcgrof@kernel.org>
> >
> > filemap_create_folio() and do_read_cache_folio() always allocated
> > folios of order 0. __filemap_get_folio() tried to allocate higher
> > order folios when fgp_flags had a higher order hint set, but it would
> > fall back to an order-0 folio if the higher order allocation failed.
> >
> > Supporting mapping_min_order means we guarantee that each folio in the
> > page cache has an order of at least mapping_min_order. When adding new
> > folios to the page cache, we must also ensure the index used is aligned
> > to mapping_min_order, as the page cache requires the index to be aligned
> > to the order of the folio.
>
> If we cannot find a folio of at least min_order size, what error is sent
> back?
>
> If the answer is "the same error that you get if we cannot allocate a
> base page today (aka ENOMEM)", then I think I understand this enough to
> say
>
> Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Yes. We will get -ENOMEM if we cannot allocate a min_order-sized folio. :)
Thanks!
>
> --D
>
> > Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> > Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
> > Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> > ---
> > mm/filemap.c | 24 +++++++++++++++++-------
> > 1 file changed, 17 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 30de18c4fd28..f0c0cfbbd134 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -858,6 +858,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
> >
> > VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> > VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
> > + VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> > + folio);
> > mapping_set_update(&xas, mapping);
> >
> > if (!huge) {
> > @@ -1895,8 +1897,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > folio_wait_stable(folio);
> > no_page:
> > if (!folio && (fgp_flags & FGP_CREAT)) {
> > - unsigned order = FGF_GET_ORDER(fgp_flags);
> > + unsigned int min_order = mapping_min_folio_order(mapping);
> > + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> > int err;
> > + index = mapping_align_start_index(mapping, index);
> >
> > if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> > gfp |= __GFP_WRITE;
> > @@ -1936,7 +1940,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > break;
> > folio_put(folio);
> > folio = NULL;
> > - } while (order-- > 0);
> > + } while (order-- > min_order);
> >
> > if (err == -EEXIST)
> > goto repeat;
> > @@ -2425,13 +2429,16 @@ static int filemap_update_page(struct kiocb *iocb,
> > }
> >
> > static int filemap_create_folio(struct file *file,
> > - struct address_space *mapping, pgoff_t index,
> > + struct address_space *mapping, loff_t pos,
> > struct folio_batch *fbatch)
> > {
> > struct folio *folio;
> > int error;
> > + unsigned int min_order = mapping_min_folio_order(mapping);
> > + pgoff_t index;
> >
> > - folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
> > + folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
> > + min_order);
> > if (!folio)
> > return -ENOMEM;
> >
> > @@ -2449,6 +2456,8 @@ static int filemap_create_folio(struct file *file,
> > * well to keep locking rules simple.
> > */
> > filemap_invalidate_lock_shared(mapping);
> > + /* index in PAGE units but aligned to min_order number of pages. */
> > + index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
> > error = filemap_add_folio(mapping, folio, index,
> > mapping_gfp_constraint(mapping, GFP_KERNEL));
> > if (error == -EEXIST)
> > @@ -2509,8 +2518,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
> > if (!folio_batch_count(fbatch)) {
> > if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
> > return -EAGAIN;
> > - err = filemap_create_folio(filp, mapping,
> > - iocb->ki_pos >> PAGE_SHIFT, fbatch);
> > + err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
> > if (err == AOP_TRUNCATED_PAGE)
> > goto retry;
> > return err;
> > @@ -3708,9 +3716,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
> > repeat:
> > folio = filemap_get_folio(mapping, index);
> > if (IS_ERR(folio)) {
> > - folio = filemap_alloc_folio(gfp, 0);
> > + folio = filemap_alloc_folio(gfp,
> > + mapping_min_folio_order(mapping));
> > if (!folio)
> > return ERR_PTR(-ENOMEM);
> > + index = mapping_align_start_index(mapping, index);
> > err = filemap_add_folio(mapping, folio, index, gfp);
> > if (unlikely(err)) {
> > folio_put(folio);
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [relevance 0%]
* [syzbot] [crypto?] KMSAN: uninit-value in aes_encrypt (5)
@ 2024-04-28 10:32 4% syzbot
2024-05-10 4:02 4% ` syzbot
0 siblings, 1 reply; 200+ results
From: syzbot @ 2024-04-28 10:32 UTC (permalink / raw)
To: davem, herbert, linux-crypto, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 5d12ed4bea43 Merge tag 'i2c-for-6.9-rc6' of git://git.kern..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16491b80980000
kernel config: https://syzkaller.appspot.com/x/.config?x=1c4a1df36b3414a8
dashboard link: https://syzkaller.appspot.com/bug?extid=aeb14e2539ffb6d21130
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/bb5148c91210/disk-5d12ed4b.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/49a9a8f075f4/vmlinux-5d12ed4b.xz
kernel image: https://storage.googleapis.com/syzbot-assets/1309b451ab44/bzImage-5d12ed4b.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+aeb14e2539ffb6d21130@syzkaller.appspotmail.com
=====================================================
BUG: KMSAN: uninit-value in subshift lib/crypto/aes.c:149 [inline]
BUG: KMSAN: uninit-value in aes_encrypt+0x15cc/0x1db0 lib/crypto/aes.c:282
subshift lib/crypto/aes.c:149 [inline]
aes_encrypt+0x15cc/0x1db0 lib/crypto/aes.c:282
aesti_encrypt+0x7d/0xf0 crypto/aes_ti.c:31
crypto_ecb_crypt crypto/ecb.c:23 [inline]
crypto_ecb_encrypt2+0x18a/0x300 crypto/ecb.c:40
crypto_lskcipher_crypt_sg+0x36b/0x7f0 crypto/lskcipher.c:228
crypto_lskcipher_encrypt_sg+0x8a/0xc0 crypto/lskcipher.c:247
crypto_skcipher_encrypt+0x119/0x1e0 crypto/skcipher.c:669
xts_encrypt+0x3c4/0x550 crypto/xts.c:269
crypto_skcipher_encrypt+0x1a0/0x1e0 crypto/skcipher.c:671
fscrypt_crypt_data_unit+0x4ee/0x8f0 fs/crypto/crypto.c:144
fscrypt_encrypt_pagecache_blocks+0x422/0x900 fs/crypto/crypto.c:207
ext4_bio_write_folio+0x13db/0x2e40 fs/ext4/page-io.c:526
mpage_submit_folio+0x351/0x4a0 fs/ext4/inode.c:1869
mpage_process_page_bufs+0xb92/0xe30 fs/ext4/inode.c:1982
mpage_process_folio fs/ext4/inode.c:2036 [inline]
mpage_map_and_submit_buffers fs/ext4/inode.c:2105 [inline]
mpage_map_and_submit_extent fs/ext4/inode.c:2254 [inline]
ext4_do_writepages+0x353e/0x62e0 fs/ext4/inode.c:2679
ext4_writepages+0x312/0x830 fs/ext4/inode.c:2768
do_writepages+0x427/0xc30 mm/page-writeback.c:2612
__writeback_single_inode+0x10d/0x12c0 fs/fs-writeback.c:1650
writeback_sb_inodes+0xb48/0x1be0 fs/fs-writeback.c:1941
wb_writeback+0x4a1/0xdf0 fs/fs-writeback.c:2117
wb_do_writeback fs/fs-writeback.c:2264 [inline]
wb_workfn+0x40b/0x1940 fs/fs-writeback.c:2304
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa81/0x1bd0 kernel/workqueue.c:3335
worker_thread+0xea5/0x1560 kernel/workqueue.c:3416
kthread+0x3e2/0x540 kernel/kthread.c:388
ret_from_fork+0x6d/0x90 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Uninit was stored to memory at:
le128_xor include/crypto/b128ops.h:69 [inline]
xts_xor_tweak+0x4ae/0xbf0 crypto/xts.c:123
xts_xor_tweak_pre crypto/xts.c:135 [inline]
xts_encrypt+0x296/0x550 crypto/xts.c:268
crypto_skcipher_encrypt+0x1a0/0x1e0 crypto/skcipher.c:671
fscrypt_crypt_data_unit+0x4ee/0x8f0 fs/crypto/crypto.c:144
fscrypt_encrypt_pagecache_blocks+0x422/0x900 fs/crypto/crypto.c:207
ext4_bio_write_folio+0x13db/0x2e40 fs/ext4/page-io.c:526
mpage_submit_folio+0x351/0x4a0 fs/ext4/inode.c:1869
mpage_process_page_bufs+0xb92/0xe30 fs/ext4/inode.c:1982
mpage_process_folio fs/ext4/inode.c:2036 [inline]
mpage_map_and_submit_buffers fs/ext4/inode.c:2105 [inline]
mpage_map_and_submit_extent fs/ext4/inode.c:2254 [inline]
ext4_do_writepages+0x353e/0x62e0 fs/ext4/inode.c:2679
ext4_writepages+0x312/0x830 fs/ext4/inode.c:2768
do_writepages+0x427/0xc30 mm/page-writeback.c:2612
__writeback_single_inode+0x10d/0x12c0 fs/fs-writeback.c:1650
writeback_sb_inodes+0xb48/0x1be0 fs/fs-writeback.c:1941
wb_writeback+0x4a1/0xdf0 fs/fs-writeback.c:2117
wb_do_writeback fs/fs-writeback.c:2264 [inline]
wb_workfn+0x40b/0x1940 fs/fs-writeback.c:2304
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa81/0x1bd0 kernel/workqueue.c:3335
worker_thread+0xea5/0x1560 kernel/workqueue.c:3416
kthread+0x3e2/0x540 kernel/kthread.c:388
ret_from_fork+0x6d/0x90 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Uninit was created at:
__alloc_pages+0x9d6/0xe70 mm/page_alloc.c:4598
alloc_pages_mpol+0x299/0x990 mm/mempolicy.c:2264
alloc_pages mm/mempolicy.c:2335 [inline]
folio_alloc+0x1d0/0x230 mm/mempolicy.c:2342
filemap_alloc_folio+0xa6/0x440 mm/filemap.c:984
__filemap_get_folio+0xa10/0x14b0 mm/filemap.c:1926
ext4_write_begin+0x3e5/0x2230 fs/ext4/inode.c:1159
ext4_da_write_begin+0x4cd/0xec0 fs/ext4/inode.c:2869
generic_perform_write+0x400/0xc60 mm/filemap.c:3974
ext4_buffered_write_iter+0x564/0xaa0 fs/ext4/file.c:299
ext4_file_write_iter+0x208/0x3450
__kernel_write_iter+0x68b/0xc40 fs/read_write.c:523
__kernel_write+0xca/0x100 fs/read_write.c:543
__dump_emit fs/coredump.c:813 [inline]
dump_emit+0x3aa/0x5d0 fs/coredump.c:850
writenote+0x2ad/0x480 fs/binfmt_elf.c:1422
write_note_info fs/binfmt_elf.c:1912 [inline]
elf_core_dump+0x4f77/0x59c0 fs/binfmt_elf.c:2064
do_coredump+0x32d5/0x4920 fs/coredump.c:764
get_signal+0x267e/0x2d00 kernel/signal.c:2896
arch_do_signal_or_restart+0x53/0xcb0 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
irqentry_exit_to_user_mode+0xa6/0x160 kernel/entry/common.c:231
irqentry_exit+0x16/0x60 kernel/entry/common.c:334
exc_general_protection+0x2e6/0x4b0 arch/x86/kernel/traps.c:644
asm_exc_general_protection+0x2b/0x30 arch/x86/include/asm/idtentry.h:617
CPU: 0 PID: 57 Comm: kworker/u8:3 Not tainted 6.9.0-rc5-syzkaller-00329-g5d12ed4bea43 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: writeback wb_workfn (flush-7:1)
=====================================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply [relevance 4%]
* Re: [PATCH 09/11] KVM: guest_memfd: Add interface for populating gmem pages with user data
@ 2024-04-26 15:17 3% ` Sean Christopherson
0 siblings, 0 replies; 200+ results
From: Sean Christopherson @ 2024-04-26 15:17 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Isaku Yamahata, linux-kernel, kvm, michael.roth, isaku.yamahata
On Fri, Apr 26, 2024, Paolo Bonzini wrote:
> On Thu, Apr 25, 2024 at 6:00 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Thu, Apr 25, 2024, Paolo Bonzini wrote:
> > > On Thu, Apr 25, 2024 at 3:12 AM Isaku Yamahata <isaku.yamahata@intel.com> wrote:
> > > > > > get_user_pages_fast(source addr)
> > > > > > read_lock(mmu_lock)
> > > > > > kvm_tdp_mmu_get_walk_private_pfn(vcpu, gpa, &pfn);
> > > > > > if the page table doesn't map gpa, error.
> > > > > > TDH.MEM.PAGE.ADD()
> > > > > > TDH.MR.EXTEND()
> > > > > > read_unlock(mmu_lock)
> > > > > > put_page()
> > > > >
> > > > > Hmm, KVM doesn't _need_ to use invalidate_lock to protect against guest_memfd
> > > > > invalidation, but I also don't see why it would cause problems.
> > >
> > > The invalidate_lock is only needed to operate on the guest_memfd, but
> > > it's a rwsem so there are no risks of lock inversion.
> > >
> > > > > I.e. why not
> > > > > take mmu_lock() in TDX's post_populate() implementation?
> > > >
> > > > We can take the lock. Because we have already populated the GFN of guest_memfd,
> > > > we need to make kvm_gmem_populate() not pass FGP_CREAT_ONLY. Otherwise we'll
> > > > get -EEXIST.
> > >
> > > I don't understand why TDH.MEM.PAGE.ADD() cannot be called from the
> > > post-populate hook. Can the code for TDH.MEM.PAGE.ADD be shared
> > > between the memory initialization ioctl and the page fault hook in
> > > kvm_x86_ops?
> >
> > Ah, because TDX is required to pre-fault the memory to establish the S-EPT walk,
> > and pre-faulting means guest_memfd()
> >
> > Requiring that guest_memfd not have a page when initializing the guest image
> > seems wrong, i.e. I don't think we want FGP_CREAT_ONLY. And not just because I
> > am a fan of pre-faulting, I think the semantics are bad.
>
> Ok, fair enough. I wanted to do the once-only test in common code, but since
> the SEV code checks the RMP I can remove that. One less headache.
I definitely don't object to having a check in common code, and I'd be in favor
of removing the RMP checks if possible, but tracking needs to be something more
explicit in guest_memfd.
*sigh*
I even left behind a TODO for this exact thing, and y'all didn't even wave at it
as you flew by :-)
/*
* Use the up-to-date flag to track whether or not the memory has been
* zeroed before being handed off to the guest. There is no backing
* storage for the memory, so the folio will remain up-to-date until
* it's removed.
*
* TODO: Skip clearing pages when trusted firmware will do it when <==========================
* assigning memory to the guest.
*/
if (!folio_test_uptodate(folio)) {
unsigned long nr_pages = folio_nr_pages(folio);
unsigned long i;
for (i = 0; i < nr_pages; i++)
clear_highpage(folio_page(folio, i));
folio_mark_uptodate(folio);
}
if (prepare) {
int r = kvm_gmem_prepare_folio(inode, index, folio);
if (r < 0) {
folio_unlock(folio);
folio_put(folio);
return ERR_PTR(r);
}
}
Compile tested only (and not even fully as I didn't bother defining
CONFIG_HAVE_KVM_GMEM_INITIALIZE), but I think this is the basic gist.
8< --------------------------------
// SPDX-License-Identifier: GPL-2.0
#include <linux/backing-dev.h>
#include <linux/falloc.h>
#include <linux/kvm_host.h>
#include <linux/pagemap.h>
#include <linux/anon_inodes.h>
#include "kvm_mm.h"
struct kvm_gmem {
struct kvm *kvm;
struct xarray bindings;
struct list_head entry;
};
static int kvm_gmem_initialize_folio(struct kvm *kvm, struct folio *folio,
pgoff_t index, void __user *src,
void *opaque)
{
#ifdef CONFIG_HAVE_KVM_GMEM_INITIALIZE
return kvm_arch_gmem_initialize(kvm, folio, index, src, opaque);
#else
unsigned long nr_pages = folio_nr_pages(folio);
unsigned long i;
if (WARN_ON_ONCE(src))
return -EIO;
for (i = 0; i < nr_pages; i++)
clear_highpage(folio_file_page(folio, index + i));
#endif
return 0;
}
static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
{
return gfn - slot->base_gfn + slot->gmem.pgoff;
}
static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
{
/*
* Do not return slot->gmem.file if it has already been closed;
* there might be some time between the last fput() and when
* kvm_gmem_release() clears slot->gmem.file, and you do not
* want to spin in the meanwhile.
*/
return get_file_active(&slot->gmem.file);
}
static struct folio *__kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
{
fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;
struct folio *folio;
/*
* The caller is responsible for managing the up-to-date flag (or not),
* as the memory doesn't need to be initialized until it's actually
* mapped into the guest. Waiting to initialize memory is necessary
* for VM types where the memory can be initialized exactly once.
*
* Ignore accessed, referenced, and dirty flags. The memory is
* unevictable and there is no storage to write back to.
*
* TODO: Support huge pages.
*/
folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
mapping_gfp_mask(inode->i_mapping));
if (IS_ERR(folio))
return folio;
if (folio_test_hwpoison(folio)) {
folio_unlock(folio);
folio_put(folio);
return ERR_PTR(-EHWPOISON);
}
return folio;
}
static struct folio *kvm_gmem_get_folio(struct file *file,
struct kvm_memory_slot *slot,
gfn_t gfn)
{
pgoff_t index = kvm_gmem_get_index(slot, gfn);
struct kvm_gmem *gmem = file->private_data;
struct inode *inode;
if (file != slot->gmem.file) {
WARN_ON_ONCE(slot->gmem.file);
return ERR_PTR(-EFAULT);
}
if (xa_load(&gmem->bindings, index) != slot) {
WARN_ON_ONCE(xa_load(&gmem->bindings, index));
return ERR_PTR(-EIO);
}
inode = file_inode(file);
return __kvm_gmem_get_folio(inode, index);
}
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
{
pgoff_t index = kvm_gmem_get_index(slot, gfn);
struct file *file = kvm_gmem_get_file(slot);
struct folio *folio;
struct page *page;
if (!file)
return -EFAULT;
folio = kvm_gmem_get_folio(file, slot, gfn);
if (IS_ERR(folio))
goto out;
if (!folio_test_uptodate(folio)) {
kvm_gmem_initialize_folio(kvm, folio, index, NULL, NULL);
folio_mark_uptodate(folio);
}
page = folio_file_page(folio, index);
*pfn = page_to_pfn(page);
if (max_order)
*max_order = 0;
folio_unlock(folio);
out:
fput(file);
return IS_ERR(folio) ? PTR_ERR(folio) : 0;
}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
long kvm_gmem_populate(struct kvm *kvm, gfn_t base_gfn, void __user *base_src,
long npages, void *opaque)
{
struct kvm_memory_slot *slot;
struct file *file;
int ret = 0, max_order;
long i;
lockdep_assert_held(&kvm->slots_lock);
if (npages < 0)
return -EINVAL;
slot = gfn_to_memslot(kvm, base_gfn);
if (!kvm_slot_can_be_private(slot))
return -EINVAL;
file = kvm_gmem_get_file(slot);
if (!file)
return -EFAULT;
filemap_invalidate_lock(file->f_mapping);
npages = min_t(ulong, slot->npages - (base_gfn - slot->base_gfn), npages);
for (i = 0; i < npages; i += (1 << max_order)) {
void __user *src = base_src + i * PAGE_SIZE;
gfn_t gfn = base_gfn + i;
pgoff_t index = kvm_gmem_get_index(slot, gfn);
struct folio *folio;
folio = kvm_gmem_get_folio(file, slot, gfn);
if (IS_ERR(folio)) {
ret = PTR_ERR(folio);
break;
}
if (folio_test_uptodate(folio)) {
folio_put(folio);
ret = -EEXIST;
break;
}
kvm_gmem_initialize_folio(kvm, folio, index, src, opaque);
folio_unlock(folio);
folio_put(folio);
}
filemap_invalidate_unlock(file->f_mapping);
fput(file);
return ret && !i ? ret : i;
}
EXPORT_SYMBOL_GPL(kvm_gmem_populate);
static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
pgoff_t end)
{
bool flush = false, found_memslot = false;
struct kvm_memory_slot *slot;
struct kvm *kvm = gmem->kvm;
unsigned long index;
xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
pgoff_t pgoff = slot->gmem.pgoff;
struct kvm_gfn_range gfn_range = {
.start = slot->base_gfn + max(pgoff, start) - pgoff,
.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
.slot = slot,
.may_block = true,
};
if (!found_memslot) {
found_memslot = true;
KVM_MMU_LOCK(kvm);
kvm_mmu_invalidate_begin(kvm);
}
flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
}
if (flush)
kvm_flush_remote_tlbs(kvm);
if (found_memslot)
KVM_MMU_UNLOCK(kvm);
}
static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
pgoff_t end)
{
struct kvm *kvm = gmem->kvm;
if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
KVM_MMU_LOCK(kvm);
kvm_mmu_invalidate_end(kvm);
KVM_MMU_UNLOCK(kvm);
}
}
static long __kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
struct list_head *gmem_list = &inode->i_mapping->i_private_list;
pgoff_t start = offset >> PAGE_SHIFT;
pgoff_t end = (offset + len) >> PAGE_SHIFT;
struct kvm_gmem *gmem;
list_for_each_entry(gmem, gmem_list, entry)
kvm_gmem_invalidate_begin(gmem, start, end);
truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
list_for_each_entry(gmem, gmem_list, entry)
kvm_gmem_invalidate_end(gmem, start, end);
return 0;
}
static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
int r;
/*
* Bindings must be stable across invalidation to ensure the start+end
* are balanced.
*/
filemap_invalidate_lock(inode->i_mapping);
r = __kvm_gmem_punch_hole(inode, offset, len);
filemap_invalidate_unlock(inode->i_mapping);
return r;
}
static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
{
struct address_space *mapping = inode->i_mapping;
pgoff_t start, index, end;
int r;
/* Dedicated guest is immutable by default. */
if (offset + len > i_size_read(inode))
return -EINVAL;
filemap_invalidate_lock_shared(mapping);
start = offset >> PAGE_SHIFT;
end = (offset + len) >> PAGE_SHIFT;
r = 0;
for (index = start; index < end; ) {
struct folio *folio;
if (signal_pending(current)) {
r = -EINTR;
break;
}
folio = __kvm_gmem_get_folio(inode, index);
if (IS_ERR(folio)) {
r = PTR_ERR(folio);
break;
}
index = folio_next_index(folio);
folio_unlock(folio);
folio_put(folio);
/* 64-bit only, wrapping the index should be impossible. */
if (WARN_ON_ONCE(!index))
break;
cond_resched();
}
filemap_invalidate_unlock_shared(mapping);
return r;
}
static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset,
loff_t len)
{
int ret;
if (!(mode & FALLOC_FL_KEEP_SIZE))
return -EOPNOTSUPP;
if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
return -EOPNOTSUPP;
if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
return -EINVAL;
if (mode & FALLOC_FL_PUNCH_HOLE)
ret = kvm_gmem_punch_hole(file_inode(file), offset, len);
else
ret = kvm_gmem_allocate(file_inode(file), offset, len);
if (!ret)
file_modified(file);
return ret;
}
static int kvm_gmem_release(struct inode *inode, struct file *file)
{
struct kvm_gmem *gmem = file->private_data;
struct kvm_memory_slot *slot;
struct kvm *kvm = gmem->kvm;
unsigned long index;
/*
* Prevent concurrent attempts to *unbind* a memslot. This is the last
* reference to the file and thus no new bindings can be created, but
* dereferencing the slot for existing bindings needs to be protected
* against memslot updates, specifically so that unbind doesn't race
* and free the memslot (kvm_gmem_get_file() will return NULL).
*/
mutex_lock(&kvm->slots_lock);
filemap_invalidate_lock(inode->i_mapping);
xa_for_each(&gmem->bindings, index, slot)
rcu_assign_pointer(slot->gmem.file, NULL);
synchronize_rcu();
/*
* All in-flight operations are gone and new bindings can be created.
* Zap all SPTEs pointed at by this file. Do not free the backing
* memory, as its lifetime is associated with the inode, not the file.
*/
kvm_gmem_invalidate_begin(gmem, 0, -1ul);
kvm_gmem_invalidate_end(gmem, 0, -1ul);
list_del(&gmem->entry);
filemap_invalidate_unlock(inode->i_mapping);
mutex_unlock(&kvm->slots_lock);
xa_destroy(&gmem->bindings);
kfree(gmem);
kvm_put_kvm(kvm);
return 0;
}
static struct file_operations kvm_gmem_fops = {
.open = generic_file_open,
.release = kvm_gmem_release,
.fallocate = kvm_gmem_fallocate,
};
void kvm_gmem_init(struct module *module)
{
kvm_gmem_fops.owner = module;
}
static int kvm_gmem_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
WARN_ON_ONCE(1);
return -EINVAL;
}
static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *folio)
{
struct list_head *gmem_list = &mapping->i_private_list;
struct kvm_gmem *gmem;
pgoff_t start, end;
filemap_invalidate_lock_shared(mapping);
start = folio->index;
end = start + folio_nr_pages(folio);
list_for_each_entry(gmem, gmem_list, entry)
kvm_gmem_invalidate_begin(gmem, start, end);
/*
* Do not truncate the range, what action is taken in response to the
* error is userspace's decision (assuming the architecture supports
* gracefully handling memory errors). If/when the guest attempts to
* access a poisoned page, kvm_gmem_get_pfn() will return -EHWPOISON,
* at which point KVM can either terminate the VM or propagate the
* error to userspace.
*/
list_for_each_entry(gmem, gmem_list, entry)
kvm_gmem_invalidate_end(gmem, start, end);
filemap_invalidate_unlock_shared(mapping);
return MF_DELAYED;
}
#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
static void kvm_gmem_free_folio(struct folio *folio)
{
struct page *page = folio_page(folio, 0);
kvm_pfn_t pfn = page_to_pfn(page);
int order = folio_order(folio);
kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
}
#endif
static const struct address_space_operations kvm_gmem_aops = {
.dirty_folio = noop_dirty_folio,
.migrate_folio = kvm_gmem_migrate_folio,
.error_remove_folio = kvm_gmem_error_folio,
#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
.free_folio = kvm_gmem_free_folio,
#endif
};
static int kvm_gmem_getattr(struct mnt_idmap *idmap, const struct path *path,
struct kstat *stat, u32 request_mask,
unsigned int query_flags)
{
struct inode *inode = path->dentry->d_inode;
generic_fillattr(idmap, request_mask, inode, stat);
return 0;
}
static int kvm_gmem_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
struct iattr *attr)
{
return -EINVAL;
}
static const struct inode_operations kvm_gmem_iops = {
.getattr = kvm_gmem_getattr,
.setattr = kvm_gmem_setattr,
};
static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
{
const char *anon_name = "[kvm-gmem]";
struct kvm_gmem *gmem;
struct inode *inode;
struct file *file;
int fd, err;
fd = get_unused_fd_flags(0);
if (fd < 0)
return fd;
gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
if (!gmem) {
err = -ENOMEM;
goto err_fd;
}
file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
O_RDWR, NULL);
if (IS_ERR(file)) {
err = PTR_ERR(file);
goto err_gmem;
}
file->f_flags |= O_LARGEFILE;
inode = file->f_inode;
WARN_ON(file->f_mapping != inode->i_mapping);
inode->i_private = (void *)(unsigned long)flags;
inode->i_op = &kvm_gmem_iops;
inode->i_mapping->a_ops = &kvm_gmem_aops;
inode->i_mapping->flags |= AS_INACCESSIBLE;
inode->i_mode |= S_IFREG;
inode->i_size = size;
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
mapping_set_unmovable(inode->i_mapping);
/* Unmovable mappings are supposed to be marked unevictable as well. */
WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
kvm_get_kvm(kvm);
gmem->kvm = kvm;
xa_init(&gmem->bindings);
list_add(&gmem->entry, &inode->i_mapping->i_private_list);
fd_install(fd, file);
return fd;
err_gmem:
kfree(gmem);
err_fd:
put_unused_fd(fd);
return err;
}
int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
{
loff_t size = args->size;
u64 flags = args->flags;
u64 valid_flags = 0;
if (flags & ~valid_flags)
return -EINVAL;
if (size <= 0 || !PAGE_ALIGNED(size))
return -EINVAL;
return __kvm_gmem_create(kvm, size, flags);
}
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
unsigned int fd, loff_t offset)
{
loff_t size = slot->npages << PAGE_SHIFT;
unsigned long start, end;
struct kvm_gmem *gmem;
struct inode *inode;
struct file *file;
int r = -EINVAL;
BUILD_BUG_ON(sizeof(gfn_t) != sizeof(slot->gmem.pgoff));
file = fget(fd);
if (!file)
return -EBADF;
if (file->f_op != &kvm_gmem_fops)
goto err;
gmem = file->private_data;
if (gmem->kvm != kvm)
goto err;
inode = file_inode(file);
if (offset < 0 || !PAGE_ALIGNED(offset) ||
offset + size > i_size_read(inode))
goto err;
filemap_invalidate_lock(inode->i_mapping);
start = offset >> PAGE_SHIFT;
end = start + slot->npages;
if (!xa_empty(&gmem->bindings) &&
xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
filemap_invalidate_unlock(inode->i_mapping);
goto err;
}
/*
* No synchronize_rcu() needed, any in-flight readers are guaranteed to
* see either a NULL file or this new file, no need for them to go
* away.
*/
rcu_assign_pointer(slot->gmem.file, file);
slot->gmem.pgoff = start;
xa_store_range(&gmem->bindings, start, end - 1, slot, GFP_KERNEL);
filemap_invalidate_unlock(inode->i_mapping);
/*
* Drop the reference to the file, even on success. The file pins KVM,
* not the other way 'round. Active bindings are invalidated if the
* file is closed before memslots are destroyed.
*/
r = 0;
err:
fput(file);
return r;
}
void kvm_gmem_unbind(struct kvm_memory_slot *slot)
{
unsigned long start = slot->gmem.pgoff;
unsigned long end = start + slot->npages;
struct kvm_gmem *gmem;
struct file *file;
/*
* Nothing to do if the underlying file was already closed (or is being
* closed right now), kvm_gmem_release() invalidates all bindings.
*/
file = kvm_gmem_get_file(slot);
if (!file)
return;
gmem = file->private_data;
filemap_invalidate_lock(file->f_mapping);
xa_store_range(&gmem->bindings, start, end - 1, NULL, GFP_KERNEL);
rcu_assign_pointer(slot->gmem.file, NULL);
synchronize_rcu();
filemap_invalidate_unlock(file->f_mapping);
fput(file);
}
* Re: [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache
2024-04-25 11:37 14% ` [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache Pankaj Raghav (Samsung)
2024-04-25 19:04 0% ` Hannes Reinecke
@ 2024-04-26 15:12 0% ` Darrick J. Wong
2024-04-28 20:59 0% ` Pankaj Raghav (Samsung)
1 sibling, 1 reply; 200+ results
From: Darrick J. Wong @ 2024-04-26 15:12 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: willy, brauner, david, chandan.babu, akpm, linux-fsdevel, hare,
linux-kernel, linux-mm, linux-xfs, mcgrof, gost.dev, p.raghav
On Thu, Apr 25, 2024 at 01:37:38PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
>
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folios of order 0. __filemap_get_folio() would try to allocate
> higher-order folios when fgp_flags had a higher-order hint set, but it
> would fall back to an order-0 folio if the higher-order allocation
> failed.
>
> Supporting mapping_min_order implies that we guarantee each folio in the
> page cache has at least an order of mapping_min_order. When adding new
> folios to the page cache we must also ensure the index used is aligned to
> the mapping_min_order as the page cache requires the index to be aligned
> to the order of the folio.
If we cannot find a folio of at least min_order size, what error is sent
back?
If the answer is "the same error that you get if we cannot allocate a
base page today (aka ENOMEM)", then I think I understand this enough to
say
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
> mm/filemap.c | 24 +++++++++++++++++-------
> 1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 30de18c4fd28..f0c0cfbbd134 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -858,6 +858,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
> + VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> + folio);
> mapping_set_update(&xas, mapping);
>
> if (!huge) {
> @@ -1895,8 +1897,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio_wait_stable(folio);
> no_page:
> if (!folio && (fgp_flags & FGP_CREAT)) {
> - unsigned order = FGF_GET_ORDER(fgp_flags);
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> int err;
> + index = mapping_align_start_index(mapping, index);
>
> if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> gfp |= __GFP_WRITE;
> @@ -1936,7 +1940,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> break;
> folio_put(folio);
> folio = NULL;
> - } while (order-- > 0);
> + } while (order-- > min_order);
>
> if (err == -EEXIST)
> goto repeat;
> @@ -2425,13 +2429,16 @@ static int filemap_update_page(struct kiocb *iocb,
> }
>
> static int filemap_create_folio(struct file *file,
> - struct address_space *mapping, pgoff_t index,
> + struct address_space *mapping, loff_t pos,
> struct folio_batch *fbatch)
> {
> struct folio *folio;
> int error;
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + pgoff_t index;
>
> - folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
> + folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
> + min_order);
> if (!folio)
> return -ENOMEM;
>
> @@ -2449,6 +2456,8 @@ static int filemap_create_folio(struct file *file,
> * well to keep locking rules simple.
> */
> filemap_invalidate_lock_shared(mapping);
> + /* index in PAGE units but aligned to min_order number of pages. */
> + index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
> error = filemap_add_folio(mapping, folio, index,
> mapping_gfp_constraint(mapping, GFP_KERNEL));
> if (error == -EEXIST)
> @@ -2509,8 +2518,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
> if (!folio_batch_count(fbatch)) {
> if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
> return -EAGAIN;
> - err = filemap_create_folio(filp, mapping,
> - iocb->ki_pos >> PAGE_SHIFT, fbatch);
> + err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
> if (err == AOP_TRUNCATED_PAGE)
> goto retry;
> return err;
> @@ -3708,9 +3716,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
> repeat:
> folio = filemap_get_folio(mapping, index);
> if (IS_ERR(folio)) {
> - folio = filemap_alloc_folio(gfp, 0);
> + folio = filemap_alloc_folio(gfp,
> + mapping_min_folio_order(mapping));
> if (!folio)
> return ERR_PTR(-ENOMEM);
> + index = mapping_align_start_index(mapping, index);
> err = filemap_add_folio(mapping, folio, index, gfp);
> if (unlikely(err)) {
> folio_put(folio);
> --
> 2.34.1
>
>
* Re: [PATCH RFC 2/7] filemap: Change mapping_set_folio_min_order() -> mapping_set_folio_orders()
2024-04-25 14:47 0% ` Pankaj Raghav (Samsung)
@ 2024-04-26 8:02 0% ` John Garry
0 siblings, 0 replies; 200+ results
From: John Garry @ 2024-04-26 8:02 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: axboe, brauner, djwong, viro, jack, akpm, willy, dchinner, tytso,
hch, martin.petersen, nilay, ritesh.list, mcgrof, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, ojaswin,
p.raghav, jbongio, okiselev
On 25/04/2024 15:47, Pankaj Raghav (Samsung) wrote:
> On Mon, Apr 22, 2024 at 02:39:18PM +0000, John Garry wrote:
>> Borrowed from:
>>
>> https://lore.kernel.org/linux-fsdevel/20240213093713.1753368-2-kernel@pankajraghav.com/
>> (credit given in due course)
>>
>> We will need to be able to only use a single folio order for buffered
>> atomic writes, so allow the mapping folio order min and max be set.
>
>>
>> We still have the restriction of not being able to support order-1
>> folios - it will be required to lift this limit at some stage.
>
> This is already supported upstream for file-backed folios:
> commit: 8897277acfef7f70fdecc054073bea2542fc7a1b
ok
>
>> index fc8eb9c94e9c..c22455fa28a1 100644
>> --- a/include/linux/pagemap.h
>> +++ b/include/linux/pagemap.h
>> @@ -363,9 +363,10 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
>> #endif
>>
>> /*
>> - * mapping_set_folio_min_order() - Set the minimum folio order
>> + * mapping_set_folio_orders() - Set the minimum and max folio order
>
> In the new series (sorry forgot to CC you),
no worries, I saw it
> I added a new helper called
> mapping_set_folio_order_range() which does something similar to avoid
> confusion based on willy's suggestion:
> https://lore.kernel.org/linux-xfs/20240425113746.335530-3-kernel@pankajraghav.com/
>
Fine, I can include that
> mapping_set_folio_min_order() also sets max folio order to be
> MAX_PAGECACHE_ORDER order anyway. So no need of explicitly calling it
> here?
>
Here mapping_set_folio_min_order() is being replaced with
mapping_set_folio_order_range(), so not sure why you mention that.
Regardless, I'll use your mapping_set_folio_order_range().
>> /**
>> @@ -400,7 +406,7 @@ static inline void mapping_set_folio_min_order(struct address_space *mapping,
>> */
>> static inline void mapping_set_large_folios(struct address_space *mapping)
>> {
>> - mapping_set_folio_min_order(mapping, 0);
>> + mapping_set_folio_orders(mapping, 0, MAX_PAGECACHE_ORDER);
>> }
>>
>> static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index d81530b0aac0..d5effe50ddcb 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -1898,9 +1898,15 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
>> no_page:
>> if (!folio && (fgp_flags & FGP_CREAT)) {
>> unsigned int min_order = mapping_min_folio_order(mapping);
>> - unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
>> + unsigned int max_order = mapping_max_folio_order(mapping);
>> + unsigned int order = FGF_GET_ORDER(fgp_flags);
>> int err;
>>
>> + if (order > max_order)
>> + order = max_order;
>> + else if (order < min_order)
>> + order = max_order;
>
> order = min_order; ?
right
Thanks,
John
* [merged mm-stable] memprofiling-documentation.patch removed from -mm tree
@ 2024-04-26 3:58 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-04-26 3:58 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, surenb, peterz,
pasha.tatashin, ojeda, keescook, gary, dennis, cl, boqun.feng,
bjorn3_gh, benno.lossin, aliceryhl, alex.gaynor, a.hindborg,
kent.overstreet, akpm
The quilt patch titled
Subject: memprofiling: documentation
has been removed from the -mm tree. Its filename was
memprofiling-documentation.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Kent Overstreet <kent.overstreet@linux.dev>
Subject: memprofiling: documentation
Date: Thu, 21 Mar 2024 09:36:59 -0700
Provide documentation for memory allocation profiling.
Link: https://lkml.kernel.org/r/20240321163705.3067592-38-surenb@google.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
Documentation/mm/allocation-profiling.rst | 100 ++++++++++++++++++++
Documentation/mm/index.rst | 1
2 files changed, 101 insertions(+)
--- /dev/null
+++ a/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,100 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+- CONFIG_MEM_ALLOC_PROFILING
+
+- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+
+- CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling=0|1|never
+
+ When set to "never", memory allocation profiling overhead is minimized and it
+ cannot be enabled at runtime (sysctl becomes read-only).
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output::
+
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+
+===================
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at
+runtime, i.e. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+- declares a code tag
+- stashes a pointer to it in task_struct
+- calls the real allocation function
+- and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+- switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+
+- rename it to foo_noprof()
+
+- define a macro version of foo() like so:
+
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+- Hook your data structure's init function, like any other allocation function.
+
+- Within your init function, use the convenience macro alloc_tag_record() to
+ record alloc tag in your data structure.
+
+- Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
--- a/Documentation/mm/index.rst~memprofiling-documentation
+++ a/Documentation/mm/index.rst
@@ -26,6 +26,7 @@ see the :doc:`admin guide <../admin-guid
page_cache
shmfs
oom
+ allocation-profiling
Legacy Documentation
====================
_
Patches currently in -mm which might be from kent.overstreet@linux.dev are
* [merged mm-stable] lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch removed from -mm tree
@ 2024-04-26 3:58 3% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-04-26 3:58 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, peterz, pasha.tatashin,
ojeda, klarasmodin, kent.overstreet, keescook, gary, dennis, cl,
boqun.feng, bjorn3_gh, benno.lossin, aliceryhl, alex.gaynor,
a.hindborg, surenb, akpm
The quilt patch titled
Subject: lib: add allocation tagging support for memory allocation profiling
has been removed from the -mm tree. Its filename was
lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Suren Baghdasaryan <surenb@google.com>
Subject: lib: add allocation tagging support for memory allocation profiling
Date: Thu, 21 Mar 2024 09:36:35 -0700
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
[surenb@google.com: Documentation/filesystems/proc.rst: fix allocinfo title]
Link: https://lkml.kernel.org/r/20240326073813.727090-1-surenb@google.com
[surenb@google.com: do limited memory accounting for modules with ARCH_NEEDS_WEAK_PER_CPU]
Link: https://lkml.kernel.org/r/20240402180933.1663992-2-surenb@google.com
[klarasmodin@gmail.com: explicitly include irqflags.h in alloc_tag.h]
Link: https://lkml.kernel.org/r/20240407133252.173636-1-klarasmodin@gmail.com
[surenb@google.com: fix alloc_tag_init() to prevent passing NULL to PTR_ERR()]
Link: https://lkml.kernel.org/r/20240417003349.2520094-1-surenb@google.com
Link: https://lkml.kernel.org/r/20240321163705.3067592-14-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Klara Modin <klarasmodin@gmail.com>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
Documentation/admin-guide/sysctl/vm.rst | 16 ++
Documentation/filesystems/proc.rst | 29 ++++
include/asm-generic/codetag.lds.h | 14 +
include/asm-generic/vmlinux.lds.h | 3
include/linux/alloc_tag.h | 156 ++++++++++++++++++++++
include/linux/sched.h | 24 +++
lib/Kconfig.debug | 25 +++
lib/Makefile | 2
lib/alloc_tag.c | 152 +++++++++++++++++++++
scripts/module.lds.S | 7
10 files changed, 428 insertions(+)
--- a/Documentation/admin-guide/sysctl/vm.rst~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocati
The default value is 65530.
+mem_profiling
+==============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
memory_failure_early_kill:
==========================
--- a/Documentation/filesystems/proc.rst~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
============ ===============================================================
File Content
============ ===============================================================
+ allocinfo Memory allocation profiling information
apm Advanced power management info
bootconfig Kernel command line obtained from boot config,
and, if there were kernel parameters from the
@@ -953,6 +954,34 @@ also be allocatable although a lot of fi
reclaimed to achieve this.
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module (if it originates from a loadable module) and the function calling
+the allocation. The number of bytes allocated and number of calls at each
+location are reported.
+
+Example output.
+
+::
+
+ > sort -rn /proc/allocinfo
+ 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
+ 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
+ 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
+ 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
+ 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
+ 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ ...
+
+
meminfo
~~~~~~~
--- /dev/null
+++ a/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name) \
+ . = ALIGN(8); \
+ __start_##_name = .; \
+ KEEP(*(_name)) \
+ __stop_##_name = .;
+
+#define CODETAG_SECTIONS() \
+ SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
--- a/include/asm-generic/vmlinux.lds.h~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
* [__nosave_begin, __nosave_end] for the nosave data
*/
+#include <asm-generic/codetag.lds.h>
+
#ifndef LOAD_OFFSET
#define LOAD_OFFSET 0
#endif
@@ -366,6 +368,7 @@
. = ALIGN(8); \
BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \
BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \
+ CODETAG_SECTIONS() \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
TRACE_PRINTKS() \
--- /dev/null
+++ a/include/linux/alloc_tag.h
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include <linux/bug.h>
+#include <linux/codetag.h>
+#include <linux/container_of.h>
+#include <linux/preempt.h>
+#include <asm/percpu.h>
+#include <linux/cpumask.h>
+#include <linux/static_key.h>
+#include <linux/irqflags.h>
+
+struct alloc_tag_counters {
+ u64 bytes;
+ u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. The embedded codetag uses the codetag framework.
+ */
+struct alloc_tag {
+ struct codetag ct;
+ struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+ return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ * Instead we will account all module allocations to a single counter.
+ */
+DECLARE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_shared_alloc_tag };
+
+#else /* ARCH_NEEDS_WEAK_PER_CPU */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
+#endif /* ARCH_NEEDS_WEAK_PER_CPU */
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+ return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+ struct alloc_tag_counters v = { 0, 0 };
+ struct alloc_tag_counters *counter;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ counter = per_cpu_ptr(tag->counters, cpu);
+ v.bytes += counter->bytes;
+ v.calls += counter->calls;
+ }
+
+ return v;
+}
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ WARN_ONCE(ref && ref->ct,
+ "alloc_tag was not cleared (got tag for %s:%u)\n",
+ ref->ct->filename, ref->ct->lineno);
+
+ WARN_ONCE(!tag, "current->alloc_tag not set");
+}
+
+static inline void alloc_tag_sub_check(union codetag_ref *ref)
+{
+ WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+}
+#else
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) {}
+static inline void alloc_tag_sub_check(union codetag_ref *ref) {}
+#endif
+
+/* Caller should verify both ref and tag to be valid */
+static inline void __alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ ref->ct = &tag->ct;
+ /*
+ * We need to increment the call counter every time we have a new
+ * allocation or when we split a large allocation into smaller ones.
+ * Each new reference for every sub-allocation needs to increment call
+ * counter because when we free each part the counter will be decremented.
+ */
+ this_cpu_inc(tag->counters->calls);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+ alloc_tag_add_check(ref, tag);
+ if (!ref || !tag)
+ return;
+
+ __alloc_tag_ref_set(ref, tag);
+ this_cpu_add(tag->counters->bytes, bytes);
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ struct alloc_tag *tag;
+
+ alloc_tag_sub_check(ref);
+ if (!ref || !ref->ct)
+ return;
+
+ tag = ct_to_alloc_tag(ref->ct);
+
+ this_cpu_sub(tag->counters->bytes, bytes);
+ this_cpu_dec(tag->counters->calls);
+
+ ref->ct = NULL;
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag)
+static inline bool mem_alloc_profiling_enabled(void) { return false; }
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+ size_t bytes) {}
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
--- a/include/linux/sched.h~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
unsigned int flags;
unsigned int ptrace;
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+ struct alloc_tag *alloc_tag;
+#endif
+
#ifdef CONFIG_SMP
int on_cpu;
struct __call_single_node wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
struct task_group *sched_task_group;
#endif
+
#ifdef CONFIG_UCLAMP_TASK
/*
* Clamp values requested for a scheduling entity.
@@ -2187,4 +2192,23 @@ static inline int sched_core_idle_cpu(in
extern void sched_set_stop_task(int cpu, struct task_struct *stop);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+ swap(current->alloc_tag, tag);
+ return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+ current->alloc_tag = old;
+}
+#else
+#define alloc_tag_save(_tag) NULL
+#define alloc_tag_restore(_tag, _old) do {} while (0)
+#endif
+
#endif
--- /dev/null
+++ a/lib/alloc_tag.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/alloc_tag.h>
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_buf.h>
+#include <linux/seq_file.h>
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
+EXPORT_SYMBOL(_shared_alloc_tag);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+ struct codetag_iterator *iter;
+ struct codetag *ct;
+ loff_t node = *pos;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ m->private = iter;
+ if (!iter)
+ return NULL;
+
+ codetag_lock_module_list(alloc_tag_cttype, true);
+ *iter = codetag_get_ct_iter(alloc_tag_cttype);
+ while ((ct = codetag_next_ct(iter)) != NULL && node)
+ node--;
+
+ return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ struct codetag *ct = codetag_next_ct(iter);
+
+ (*pos)++;
+ if (!ct)
+ return NULL;
+
+ return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+ if (iter) {
+ codetag_lock_module_list(alloc_tag_cttype, false);
+ kfree(iter);
+ }
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+ struct alloc_tag *tag = ct_to_alloc_tag(ct);
+ struct alloc_tag_counters counter = alloc_tag_read(tag);
+ s64 bytes = counter.bytes;
+
+ seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
+ codetag_to_text(out, ct);
+ seq_buf_putc(out, ' ');
+ seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ char *bufp;
+ size_t n = seq_get_buf(m, &bufp);
+ struct seq_buf buf;
+
+ seq_buf_init(&buf, bufp, n);
+ alloc_tag_to_text(&buf, iter->ct);
+ seq_commit(m, seq_buf_used(&buf));
+ return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+ .start = allocinfo_start,
+ .next = allocinfo_next,
+ .stop = allocinfo_stop,
+ .show = allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+ proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+ struct codetag_module *cmod)
+{
+ struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+ struct alloc_tag_counters counter;
+ bool module_unused = true;
+ struct alloc_tag *tag;
+ struct codetag *ct;
+
+ for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+ if (iter.cmod != cmod)
+ continue;
+
+ tag = ct_to_alloc_tag(ct);
+ counter = alloc_tag_read(tag);
+
+ if (WARN(counter.bytes,
+ "%s:%u module %s func:%s has %llu allocated at module unload",
+ ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+ module_unused = false;
+ }
+
+ return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+ {
+ .procname = "mem_profiling",
+ .data = &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = proc_do_static_key,
+ },
+ { }
+};
+
+static int __init alloc_tag_init(void)
+{
+ const struct codetag_type_desc desc = {
+ .section = "alloc_tags",
+ .tag_size = sizeof(struct alloc_tag),
+ .module_unload = alloc_tag_module_unload,
+ };
+
+ alloc_tag_cttype = codetag_register_type(&desc);
+ if (IS_ERR(alloc_tag_cttype))
+ return PTR_ERR(alloc_tag_cttype);
+
+ register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+ procfs_init();
+
+ return 0;
+}
+module_init(alloc_tag_init);
--- a/lib/Kconfig.debug~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
bool
select KALLSYMS
+config MEM_ALLOC_PROFILING
+ bool "Enable memory allocation profiling"
+ default n
+ depends on PROC_FS
+ depends on !DEBUG_FORCE_WEAK_PER_CPU
+ select CODE_TAGGING
+ help
+ Track allocation source locations and record the total size of
+ allocations initiated at each location. The mechanism can be used to
+ track memory leaks with low performance and memory overhead.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ bool "Enable memory allocation profiling by default"
+ default y
+ depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+ bool "Memory allocation profiler debugging"
+ default n
+ depends on MEM_ALLOC_PROFILING
+ select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ help
+ Adds warnings with helpful error messages for memory allocation
+ profiling.
+
source "lib/Kconfig.kasan"
source "lib/Kconfig.kfence"
source "lib/Kconfig.kmsan"
--- a/lib/Makefile~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/lib/Makefile
@@ -234,6 +234,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
lib-$(CONFIG_GENERIC_BUG) += bug.o
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
--- a/scripts/module.lds.S~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/scripts/module.lds.S
@@ -9,6 +9,8 @@
#define DISCARD_EH_FRAME *(.eh_frame)
#endif
+#include <asm-generic/codetag.lds.h>
+
SECTIONS {
/DISCARD/ : {
*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
.data : {
*(.data .data.[0-9a-zA-Z_]*)
*(.data..L*)
+ CODETAG_SECTIONS()
}
.rodata : {
*(.rodata .rodata.[0-9a-zA-Z_]*)
*(.rodata..L*)
}
+#else
+ .data : {
+ CODETAG_SECTIONS()
+ }
#endif
}
_
Patches currently in -mm which might be from surenb@google.com are
userfaultfd-remove-write_once-when-setting-folio-index-during-uffdio_move.patch
* [merged mm-stable] fix-missing-vmalloch-includes.patch removed from -mm tree
@ 2024-04-26 3:57 2% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-04-26 3:57 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, surenb, peterz,
pasha.tatashin, ojeda, keescook, gary, dennis, cl, boqun.feng,
bjorn3_gh, benno.lossin, arnd, aliceryhl, alex.gaynor,
a.hindborg, kent.overstreet, akpm
The quilt patch titled
Subject: fix missing vmalloc.h includes
has been removed from the -mm tree. Its filename was
fix-missing-vmalloch-includes.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Kent Overstreet <kent.overstreet@linux.dev>
Subject: fix missing vmalloc.h includes
Date: Thu, 21 Mar 2024 09:36:23 -0700
Patch series "Memory allocation profiling", v6.
Overview:
Low overhead [1] per-callsite memory allocation profiling. Not just for
debug kernels, overhead low enough to be deployed in production.
Example output:
root@moria-kvm:~# sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Usage:
kconfig options:
- CONFIG_MEM_ALLOC_PROFILING
- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- CONFIG_MEM_ALLOC_PROFILING_DEBUG
adds warnings for allocations that weren't accounted because of a
missing annotation
sysctl:
/proc/sys/vm/mem_profiling
Runtime info:
/proc/allocinfo
Notes:
[1]: Overhead
To measure the overhead we are comparing the following configurations:
(1) Baseline with CONFIG_MEMCG_KMEM=n
(2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n)
(3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y)
(4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
(5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
(6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
(7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
Performance overhead:
To evaluate performance we implemented an in-kernel test executing
multiple get_free_page/free_page and kmalloc/kfree calls with allocation
sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
affinity set to a specific CPU to minimize the noise. Below are results
from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
56 core Intel Xeon:
kmalloc pgalloc
(1 baseline) 6.764s 16.902s
(2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
(3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
(4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
(5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
(6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
(7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
Memory overhead:
Kernel size:
text data bss dec diff
(1) 26515311 18890222 17018880 62424413
(2) 26524728 19423818 16740352 62688898 264485
(3) 26524724 19423818 16740352 62688894 264481
(4) 26524728 19423818 16740352 62688898 264485
(5) 26541782 18964374 16957440 62463596 39183
Memory consumption on a 56 core Intel CPU with 125GB of memory:
Code tags: 192 kB
PageExts: 262144 kB (256MB)
SlabExts: 9876 kB (9.6MB)
PcpuExts: 512 kB (0.5MB)
Total overhead is 0.2% of total memory.
Benchmarks:
Hackbench tests run 100 times:
hackbench -s 512 -l 200 -g 15 -f 25 -P
baseline disabled profiling enabled profiling
avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
stdev 0.0137 0.0188 0.0077
hackbench -l 10000
baseline disabled profiling enabled profiling
avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
stdev 0.0933 0.0286 0.0489
stress-ng tests:
stress-ng --class memory --seq 4 -t 60
stress-ng --class cpu --seq 4 -t 60
Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
[2] https://lore.kernel.org/all/20240306182440.2003814-1-surenb@google.com/
This patch (of 37):
The next patch drops vmalloc.h from a system header in order to fix a
circular dependency; this adds it to all the files that were pulling it in
implicitly.
[kent.overstreet@linux.dev: fix arch/alpha/lib/memcpy.c]
Link: https://lkml.kernel.org/r/20240327002152.3339937-1-kent.overstreet@linux.dev
[surenb@google.com: fix arch/x86/mm/numa_32.c]
Link: https://lkml.kernel.org/r/20240402180933.1663992-1-surenb@google.com
[kent.overstreet@linux.dev: a few places were depending on sizes.h]
Link: https://lkml.kernel.org/r/20240404034744.1664840-1-kent.overstreet@linux.dev
[arnd@arndb.de: fix mm/kasan/hw_tags.c]
Link: https://lkml.kernel.org/r/20240404124435.3121534-1-arnd@kernel.org
[surenb@google.com: fix arc build]
Link: https://lkml.kernel.org/r/20240405225115.431056-1-surenb@google.com
Link: https://lkml.kernel.org/r/20240321163705.3067592-1-surenb@google.com
Link: https://lkml.kernel.org/r/20240321163705.3067592-2-surenb@google.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/alpha/lib/checksum.c | 1 +
arch/alpha/lib/fpreg.c | 1 +
arch/arc/include/asm/mmu-arcv2.h | 2 ++
arch/arm/kernel/irq.c | 1 +
arch/arm/kernel/traps.c | 1 +
arch/arm64/kernel/efi.c | 1 +
arch/loongarch/include/asm/kfence.h | 1 +
arch/powerpc/kernel/iommu.c | 1 +
arch/powerpc/mm/mem.c | 1 +
arch/riscv/kernel/elf_kexec.c | 1 +
arch/riscv/kernel/probes/kprobes.c | 1 +
arch/s390/kernel/cert_store.c | 1 +
arch/s390/kernel/ipl.c | 1 +
arch/x86/include/asm/io.h | 1 +
arch/x86/kernel/cpu/sgx/main.c | 1 +
arch/x86/kernel/irq_64.c | 1 +
arch/x86/mm/fault.c | 1 +
arch/x86/mm/numa_32.c | 1 +
drivers/accel/ivpu/ivpu_mmu_context.c | 1 +
drivers/gpu/drm/gma500/mmu.c | 1 +
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 1 +
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c | 1 +
drivers/gpu/drm/i915/gt/shmem_utils.c | 1 +
drivers/gpu/drm/i915/gvt/firmware.c | 1 +
drivers/gpu/drm/i915/gvt/gtt.c | 1 +
drivers/gpu/drm/i915/gvt/handlers.c | 1 +
drivers/gpu/drm/i915/gvt/mmio.c | 1 +
drivers/gpu/drm/i915/gvt/vgpu.c | 1 +
drivers/gpu/drm/i915/intel_gvt.c | 1 +
drivers/gpu/drm/imagination/pvr_vm_mips.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 1 +
drivers/gpu/drm/omapdrm/omap_gem.c | 1 +
drivers/gpu/drm/v3d/v3d_bo.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 1 +
drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c | 1 +
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c | 1 +
drivers/net/ethernet/microsoft/mana/hw_channel.c | 1 +
drivers/platform/x86/uv_sysfs.c | 1 +
drivers/scsi/mpi3mr/mpi3mr_transport.c | 2 ++
drivers/vfio/pci/pds/dirty.c | 1 +
drivers/virt/acrn/mm.c | 1 +
drivers/virtio/virtio_mem.c | 1 +
include/asm-generic/io.h | 1 +
include/linux/io.h | 1 +
include/linux/pds/pds_common.h | 2 ++
include/rdma/rdmavt_qp.h | 1 +
mm/debug_vm_pgtable.c | 1 +
mm/kasan/hw_tags.c | 1 +
sound/pci/hda/cs35l41_hda.c | 1 +
56 files changed, 59 insertions(+)
--- a/arch/alpha/lib/checksum.c~fix-missing-vmalloch-includes
+++ a/arch/alpha/lib/checksum.c
@@ -14,6 +14,7 @@
#include <linux/string.h>
#include <asm/byteorder.h>
+#include <asm/checksum.h>
static inline unsigned short from64to16(unsigned long x)
{
--- a/arch/alpha/lib/fpreg.c~fix-missing-vmalloch-includes
+++ a/arch/alpha/lib/fpreg.c
@@ -8,6 +8,7 @@
#include <linux/compiler.h>
#include <linux/export.h>
#include <linux/preempt.h>
+#include <asm/fpu.h>
#include <asm/thread_info.h>
#if defined(CONFIG_ALPHA_EV6) || defined(CONFIG_ALPHA_EV67)
--- a/arch/arc/include/asm/mmu-arcv2.h~fix-missing-vmalloch-includes
+++ a/arch/arc/include/asm/mmu-arcv2.h
@@ -9,6 +9,8 @@
#ifndef _ASM_ARC_MMU_ARCV2_H
#define _ASM_ARC_MMU_ARCV2_H
+#include <soc/arc/aux.h>
+
/*
* TLB Management regs
*/
--- a/arch/arm64/kernel/efi.c~fix-missing-vmalloch-includes
+++ a/arch/arm64/kernel/efi.c
@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/screen_info.h>
+#include <linux/vmalloc.h>
#include <asm/efi.h>
#include <asm/stacktrace.h>
--- a/arch/arm/kernel/irq.c~fix-missing-vmalloch-includes
+++ a/arch/arm/kernel/irq.c
@@ -32,6 +32,7 @@
#include <linux/kallsyms.h>
#include <linux/proc_fs.h>
#include <linux/export.h>
+#include <linux/vmalloc.h>
#include <asm/hardware/cache-l2x0.h>
#include <asm/hardware/cache-uniphier.h>
--- a/arch/arm/kernel/traps.c~fix-missing-vmalloch-includes
+++ a/arch/arm/kernel/traps.c
@@ -26,6 +26,7 @@
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
#include <linux/irq.h>
+#include <linux/vmalloc.h>
#include <linux/atomic.h>
#include <asm/cacheflush.h>
--- a/arch/loongarch/include/asm/kfence.h~fix-missing-vmalloch-includes
+++ a/arch/loongarch/include/asm/kfence.h
@@ -10,6 +10,7 @@
#define _ASM_LOONGARCH_KFENCE_H
#include <linux/kfence.h>
+#include <linux/vmalloc.h>
#include <asm/pgtable.h>
#include <asm/tlb.h>
--- a/arch/powerpc/kernel/iommu.c~fix-missing-vmalloch-includes
+++ a/arch/powerpc/kernel/iommu.c
@@ -26,6 +26,7 @@
#include <linux/iommu.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
+#include <linux/vmalloc.h>
#include <asm/io.h>
#include <asm/iommu.h>
#include <asm/pci-bridge.h>
--- a/arch/powerpc/mm/mem.c~fix-missing-vmalloch-includes
+++ a/arch/powerpc/mm/mem.c
@@ -16,6 +16,7 @@
#include <linux/highmem.h>
#include <linux/suspend.h>
#include <linux/dma-direct.h>
+#include <linux/vmalloc.h>
#include <asm/swiotlb.h>
#include <asm/machdep.h>
--- a/arch/riscv/kernel/elf_kexec.c~fix-missing-vmalloch-includes
+++ a/arch/riscv/kernel/elf_kexec.c
@@ -19,6 +19,7 @@
#include <linux/libfdt.h>
#include <linux/types.h>
#include <linux/memblock.h>
+#include <linux/vmalloc.h>
#include <asm/setup.h>
int arch_kimage_file_post_load_cleanup(struct kimage *image)
--- a/arch/riscv/kernel/probes/kprobes.c~fix-missing-vmalloch-includes
+++ a/arch/riscv/kernel/probes/kprobes.c
@@ -6,6 +6,7 @@
#include <linux/extable.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
+#include <linux/vmalloc.h>
#include <asm/ptrace.h>
#include <linux/uaccess.h>
#include <asm/sections.h>
--- a/arch/s390/kernel/cert_store.c~fix-missing-vmalloch-includes
+++ a/arch/s390/kernel/cert_store.c
@@ -21,6 +21,7 @@
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
+#include <linux/vmalloc.h>
#include <crypto/sha2.h>
#include <keys/user-type.h>
#include <asm/debug.h>
--- a/arch/s390/kernel/ipl.c~fix-missing-vmalloch-includes
+++ a/arch/s390/kernel/ipl.c
@@ -20,6 +20,7 @@
#include <linux/gfp.h>
#include <linux/crash_dump.h>
#include <linux/debug_locks.h>
+#include <linux/vmalloc.h>
#include <asm/asm-extable.h>
#include <asm/diag.h>
#include <asm/ipl.h>
--- a/arch/x86/include/asm/io.h~fix-missing-vmalloch-includes
+++ a/arch/x86/include/asm/io.h
@@ -42,6 +42,7 @@
#include <asm/early_ioremap.h>
#include <asm/pgtable_types.h>
#include <asm/shared/io.h>
+#include <asm/special_insns.h>
#define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \
--- a/arch/x86/kernel/cpu/sgx/main.c~fix-missing-vmalloch-includes
+++ a/arch/x86/kernel/cpu/sgx/main.c
@@ -13,6 +13,7 @@
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
+#include <linux/vmalloc.h>
#include <asm/sgx.h>
#include "driver.h"
#include "encl.h"
--- a/arch/x86/kernel/irq_64.c~fix-missing-vmalloch-includes
+++ a/arch/x86/kernel/irq_64.c
@@ -18,6 +18,7 @@
#include <linux/uaccess.h>
#include <linux/smp.h>
#include <linux/sched/task_stack.h>
+#include <linux/vmalloc.h>
#include <asm/cpu_entry_area.h>
#include <asm/softirq_stack.h>
--- a/arch/x86/mm/fault.c~fix-missing-vmalloch-includes
+++ a/arch/x86/mm/fault.c
@@ -20,6 +20,7 @@
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
#include <linux/mm_types.h>
#include <linux/mm.h> /* find_and_lock_vma() */
+#include <linux/vmalloc.h>
#include <asm/cpufeature.h> /* boot_cpu_has, ... */
#include <asm/traps.h> /* dotraplinkage, ... */
--- a/arch/x86/mm/numa_32.c~fix-missing-vmalloch-includes
+++ a/arch/x86/mm/numa_32.c
@@ -24,6 +24,7 @@
#include <linux/memblock.h>
#include <linux/init.h>
+#include <linux/vmalloc.h>
#include <asm/pgtable_areas.h>
#include "numa_internal.h"
--- a/drivers/accel/ivpu/ivpu_mmu_context.c~fix-missing-vmalloch-includes
+++ a/drivers/accel/ivpu/ivpu_mmu_context.c
@@ -6,6 +6,7 @@
#include <linux/bitfield.h>
#include <linux/highmem.h>
#include <linux/set_memory.h>
+#include <linux/vmalloc.h>
#include <drm/drm_cache.h>
--- a/drivers/gpu/drm/gma500/mmu.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/gma500/mmu.c
@@ -5,6 +5,7 @@
**************************************************************************/
#include <linux/highmem.h>
+#include <linux/vmalloc.h>
#include "mmu.h"
#include "psb_drv.h"
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -5,6 +5,7 @@
*/
#include <drm/drm_cache.h>
+#include <linux/vmalloc.h>
#include "gt/intel_gt.h"
#include "gt/intel_tlb.h"
--- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
@@ -4,6 +4,7 @@
* Copyright © 2016 Intel Corporation
*/
+#include <linux/vmalloc.h>
#include "mock_dmabuf.h"
static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attachment,
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -7,6 +7,7 @@
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "gem/i915_gem_object.h"
--- a/drivers/gpu/drm/i915/gvt/firmware.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/firmware.c
@@ -30,6 +30,7 @@
#include <linux/firmware.h>
#include <linux/crc32.h>
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "gvt.h"
--- a/drivers/gpu/drm/i915/gvt/gtt.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/gtt.c
@@ -39,6 +39,7 @@
#include "trace.h"
#include "gt/intel_gt_regs.h"
+#include <linux/vmalloc.h>
#if defined(VERBOSE_DEBUG)
#define gvt_vdbg_mm(fmt, args...) gvt_dbg_mm(fmt, ##args)
--- a/drivers/gpu/drm/i915/gvt/handlers.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/handlers.c
@@ -52,6 +52,7 @@
#include "display/skl_watermark_regs.h"
#include "display/vlv_dsi_pll_regs.h"
#include "gt/intel_gt_regs.h"
+#include <linux/vmalloc.h>
/* XXX FIXME i915 has changed PP_XXX definition */
#define PCH_PP_STATUS _MMIO(0xc7200)
--- a/drivers/gpu/drm/i915/gvt/mmio.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/mmio.c
@@ -33,6 +33,7 @@
*
*/
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "i915_reg.h"
#include "gvt.h"
--- a/drivers/gpu/drm/i915/gvt/vgpu.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -34,6 +34,7 @@
#include "i915_drv.h"
#include "gvt.h"
#include "i915_pvinfo.h"
+#include <linux/vmalloc.h>
void populate_pvinfo_page(struct intel_vgpu *vgpu)
{
--- a/drivers/gpu/drm/i915/intel_gvt.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/intel_gvt.c
@@ -28,6 +28,7 @@
#include "gt/intel_context.h"
#include "gt/intel_ring.h"
#include "gt/shmem_utils.h"
+#include <linux/vmalloc.h>
/**
* DOC: Intel GVT-g host support
--- a/drivers/gpu/drm/imagination/pvr_vm_mips.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/imagination/pvr_vm_mips.c
@@ -14,6 +14,7 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <linux/vmalloc.h>
/**
* pvr_vm_mips_init() - Initialise MIPS FW pagetable
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -4,6 +4,7 @@
*/
#include <linux/dma-buf.h>
+#include <linux/vmalloc.h>
#include <drm/drm.h>
#include <drm/drm_device.h>
--- a/drivers/gpu/drm/omapdrm/omap_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -9,6 +9,7 @@
#include <linux/shmem_fs.h>
#include <linux/spinlock.h>
#include <linux/pfn_t.h>
+#include <linux/vmalloc.h>
#include <drm/drm_prime.h>
#include <drm/drm_vma_manager.h>
--- a/drivers/gpu/drm/v3d/v3d_bo.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/v3d/v3d_bo.c
@@ -21,6 +21,7 @@
#include <linux/dma-buf.h>
#include <linux/pfn_t.h>
+#include <linux/vmalloc.h>
#include "v3d_drv.h"
#include "uapi/drm/v3d_drm.h"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
@@ -54,6 +54,7 @@
#include "vmwgfx_drv.h"
#include "vmwgfx_binding.h"
#include "device_include/svga3d_reg.h"
+#include <linux/vmalloc.h>
#define VMW_BINDING_RT_BIT 0
#define VMW_BINDING_PS_BIT 1
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
@@ -31,6 +31,7 @@
#include <drm/ttm/ttm_placement.h>
#include <linux/sched/signal.h>
+#include <linux/vmalloc.h>
bool vmw_supports_3d(struct vmw_private *dev_priv)
{
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c
@@ -25,6 +25,7 @@
*
**************************************************************************/
+#include <linux/vmalloc.h>
#include "vmwgfx_devcaps.h"
#include "vmwgfx_drv.h"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -53,6 +53,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/version.h>
+#include <linux/vmalloc.h>
#define VMWGFX_DRIVER_DESC "Linux drm driver for VMware graphics devices"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -35,6 +35,7 @@
#include <linux/sync_file.h>
#include <linux/hashtable.h>
+#include <linux/vmalloc.h>
/*
* Helper macro to get dx_ctx_node if available otherwise print an error
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
@@ -31,6 +31,7 @@
#include <drm/vmwgfx_drm.h>
#include <linux/pci.h>
+#include <linux/vmalloc.h>
int vmw_getparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -11,6 +11,7 @@
#include <linux/dma-buf.h>
#include <linux/scatterlist.h>
#include <linux/shmem_fs.h>
+#include <linux/vmalloc.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>
--- a/drivers/hwtracing/coresight/coresight-trbe.c~fix-missing-vmalloch-includes
+++ a/drivers/hwtracing/coresight/coresight-trbe.c
@@ -17,6 +17,7 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
+#include <linux/vmalloc.h>
#include "coresight-self-hosted-trace.h"
#include "coresight-trbe.h"
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c
@@ -15,6 +15,7 @@
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/etherdevice.h>
+#include <linux/vmalloc.h>
#include "octep_config.h"
#include "octep_main.h"
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c
@@ -7,6 +7,7 @@
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/netdevice.h>
+#include <linux/vmalloc.h>
#include "octep_vf_config.h"
#include "octep_vf_main.h"
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -3,6 +3,7 @@
#include <net/mana/gdma.h>
#include <net/mana/hw_channel.h>
+#include <linux/vmalloc.h>
static int mana_hwc_get_msg_index(struct hw_channel_context *hwc, u16 *msg_id)
{
--- a/drivers/platform/x86/uv_sysfs.c~fix-missing-vmalloch-includes
+++ a/drivers/platform/x86/uv_sysfs.c
@@ -11,6 +11,7 @@
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/kobject.h>
+#include <linux/vmalloc.h>
#include <asm/uv/bios.h>
#include <asm/uv/uv.h>
#include <asm/uv/uv_hub.h>
--- a/drivers/scsi/mpi3mr/mpi3mr_transport.c~fix-missing-vmalloch-includes
+++ a/drivers/scsi/mpi3mr/mpi3mr_transport.c
@@ -7,6 +7,8 @@
*
*/
+#include <linux/vmalloc.h>
+
#include "mpi3mr.h"
/**
--- a/drivers/vfio/pci/pds/dirty.c~fix-missing-vmalloch-includes
+++ a/drivers/vfio/pci/pds/dirty.c
@@ -3,6 +3,7 @@
#include <linux/interval_tree.h>
#include <linux/vfio.h>
+#include <linux/vmalloc.h>
#include <linux/pds/pds_common.h>
#include <linux/pds/pds_core_if.h>
--- a/drivers/virt/acrn/mm.c~fix-missing-vmalloch-includes
+++ a/drivers/virt/acrn/mm.c
@@ -12,6 +12,7 @@
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>
#include "acrn_drv.h"
--- a/drivers/virtio/virtio_mem.c~fix-missing-vmalloch-includes
+++ a/drivers/virtio/virtio_mem.c
@@ -21,6 +21,7 @@
#include <linux/bitmap.h>
#include <linux/lockdep.h>
#include <linux/log2.h>
+#include <linux/vmalloc.h>
#include <acpi/acpi_numa.h>
--- a/include/asm-generic/io.h~fix-missing-vmalloch-includes
+++ a/include/asm-generic/io.h
@@ -9,6 +9,7 @@
#include <asm/page.h> /* I/O is all done through memory accesses */
#include <linux/string.h> /* for memset() and memcpy() */
+#include <linux/sizes.h>
#include <linux/types.h>
#include <linux/instruction_pointer.h>
--- a/include/linux/io.h~fix-missing-vmalloch-includes
+++ a/include/linux/io.h
@@ -6,6 +6,7 @@
#ifndef _LINUX_IO_H
#define _LINUX_IO_H
+#include <linux/sizes.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/bug.h>
--- a/include/linux/pds/pds_common.h~fix-missing-vmalloch-includes
+++ a/include/linux/pds/pds_common.h
@@ -4,6 +4,8 @@
#ifndef _PDS_COMMON_H_
#define _PDS_COMMON_H_
+#include <linux/notifier.h>
+
#define PDS_CORE_DRV_NAME "pds_core"
/* the device's internal addressing uses up to 52 bits */
--- a/include/rdma/rdmavt_qp.h~fix-missing-vmalloch-includes
+++ a/include/rdma/rdmavt_qp.h
@@ -11,6 +11,7 @@
#include <rdma/ib_verbs.h>
#include <rdma/rdmavt_cq.h>
#include <rdma/rvt-abi.h>
+#include <linux/vmalloc.h>
/*
* Atomic bit definitions for r_aflags.
*/
--- a/mm/debug_vm_pgtable.c~fix-missing-vmalloch-includes
+++ a/mm/debug_vm_pgtable.c
@@ -30,6 +30,7 @@
#include <linux/start_kernel.h>
#include <linux/sched/mm.h>
#include <linux/io.h>
+#include <linux/vmalloc.h>
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
--- a/mm/kasan/hw_tags.c~fix-missing-vmalloch-includes
+++ a/mm/kasan/hw_tags.c
@@ -16,6 +16,7 @@
#include <linux/static_key.h>
#include <linux/string.h>
#include <linux/types.h>
+#include <linux/vmalloc.h>
#include "kasan.h"
--- a/sound/pci/hda/cs35l41_hda.c~fix-missing-vmalloch-includes
+++ a/sound/pci/hda/cs35l41_hda.c
@@ -13,6 +13,7 @@
#include <sound/soc.h>
#include <linux/pm_runtime.h>
#include <linux/spi/spi.h>
+#include <linux/vmalloc.h>
#include "hda_local.h"
#include "hda_auto_parser.h"
#include "hda_jack.h"
_
Patches currently in -mm which might be from kent.overstreet@linux.dev are
^ permalink raw reply [relevance 2%]
* Re: [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache
2024-04-25 11:37 14% ` [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache Pankaj Raghav (Samsung)
@ 2024-04-25 19:04 0% ` Hannes Reinecke
2024-04-26 15:12 0% ` Darrick J. Wong
1 sibling, 0 replies; 200+ results
From: Hannes Reinecke @ 2024-04-25 19:04 UTC (permalink / raw)
To: Pankaj Raghav (Samsung),
willy, djwong, brauner, david, chandan.babu, akpm
Cc: linux-fsdevel, linux-kernel, linux-mm, linux-xfs, mcgrof,
gost.dev, p.raghav
On 4/25/24 13:37, Pankaj Raghav (Samsung) wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
>
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folio of order 0. __filemap_get_folio was trying to allocate higher
> order folios when fgp_flags had higher order hint set but it will default
> to order 0 folio if higher order memory allocation fails.
>
> Supporting mapping_min_order implies that we guarantee each folio in the
> page cache has at least an order of mapping_min_order. When adding new
> folios to the page cache we must also ensure the index used is aligned to
> the mapping_min_order as the page cache requires the index to be aligned
> to the order of the folio.
>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
> mm/filemap.c | 24 +++++++++++++++++-------
> 1 file changed, 17 insertions(+), 7 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
^ permalink raw reply [relevance 0%]
* Re: Kernel RIP 0010:cifs_flush_folio
[not found] ` <CA+EPQ664FHmSU-XW2e63jz1hEYNYVS-RdY6309g7-hvUMdt5Ew@mail.gmail.com>
@ 2024-04-25 16:52 0% ` Shyam Prasad N
0 siblings, 0 replies; 200+ results
From: Shyam Prasad N @ 2024-04-25 16:52 UTC (permalink / raw)
To: Ritvik Budhiraja; +Cc: Steve French, linux-cifs, sprasad, David Howells
On Thu, Apr 25, 2024 at 12:05 PM Ritvik Budhiraja
<budhirajaritviksmb@gmail.com> wrote:
>
> The test that failed was generic/074, with: Output mismatch;
> Write failed at offset 9933824, Write failed at offset 9961472,
> Write failed at offset 9950208. The kernel version for the machine
> was Ubuntu 22.04, 6.5.0-1018-azure
>
> The target server was Azure Files XNFS
Correction: the server for the test that generated this stack was
Azure Files SMB, not NFS.
>
> On Thu, 25 Apr 2024 at 11:53, Steve French <smfrench@gmail.com> wrote:
>>
>> That is plausible that it is the same bug as in the report. What
>> kernel version is the xfstest failure on (and which xfstest)?
>>
>> Presumably this does not fail with recent kernels (e.g. 6.7 or later) correct?
>>
>> Since this is clone range (which not all servers support), what is the
>> target server (ksmbd? Samba on btrfs? Windows on REFS?)
>>
>> On Thu, Apr 25, 2024 at 1:14 AM Ritvik Budhiraja
>> <budhirajaritviksmb@gmail.com> wrote:
>> >
>> > Hi Steve,
>> > While investigating xnfstest results I came across the below kernel oops. I have seen this in some of the xfstest failures. I wanted to know if this is a known issue?
>> >
>> > I have identified a similar ubuntu bug: Bug #2060919 “cifs: Copying file to same directory results in pa...” : Bugs : linux package : Ubuntu (launchpad.net)
>> >
>> > Reference dmesg logs:
>> > BUG: unable to handle page fault for address: fffffffffffffffe
>> > [Tue Apr 23 09:22:02 2024] #PF: supervisor read access in kernel mode
>> > [Tue Apr 23 09:22:02 2024] #PF: error_code(0x0000) - not-present page
>> > [Tue Apr 23 09:22:02 2024] PGD 19d43b067 P4D 19d43b067 PUD 19d43d067 PMD 0
>> > [Tue Apr 23 09:22:02 2024] Oops: 0000 [#68] SMP NOPTI
>> > [Tue Apr 23 09:22:02 2024] CPU: 1 PID: 3856364 Comm: fsstress Tainted: G D 6.5.0-1018-azure #19~22.04.2-Ubuntu
>> > [Tue Apr 23 09:22:03 2024] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
>> > [Tue Apr 23 09:22:03 2024] RIP: 0010:cifs_flush_folio+0x41/0xe0 [cifs]
>> > [Tue Apr 23 09:22:03 2024] Code: 49 89 cd 31 c9 41 54 53 48 89 f3 48 c1 ee 0c 48 83 ec 10 48 8b 7f 30 44 89 45 d4 e8 29 61 8e c6 49 89 c4 31 c0 4d 85 e4 74 7d <49> 8b 14 24 b8 00 10 00 00 f7 c2 00 00 01 00 74 12 41 0f b6 4c 24
>> > [Tue Apr 23 09:22:03 2024] RSP: 0018:ffffb182c3d3fcc0 EFLAGS: 00010282
>> > [Tue Apr 23 09:22:03 2024] RAX: 0000000000000000 RBX: 0000000011d00000 RCX: 0000000000000000
>> > [Tue Apr 23 09:22:03 2024] RDX: 0000000000000000 RSI: 0000000000011d00 RDI: ffffb182c3d3fc10
>> > [Tue Apr 23 09:22:03 2024] RBP: ffffb182c3d3fcf8 R08: 0000000000000001 R09: 0000000000000000
>> > [Tue Apr 23 09:22:03 2024] R10: 0000000011cfffff R11: 0000000000000000 R12: fffffffffffffffe
>> > [Tue Apr 23 09:22:03 2024] R13: ffffb182c3d3fd48 R14: ffff994311023c30 R15: ffffb182c3d3fd40
>> > [Tue Apr 23 09:22:03 2024] FS: 00007c82b3e10740(0000) GS:ffff9944b7d00000(0000) knlGS:0000000000000000
>> > [Tue Apr 23 09:22:03 2024] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> > [Tue Apr 23 09:22:03 2024] CR2: fffffffffffffffe CR3: 00000001acb52000 CR4: 0000000000350ee0
>> > [Tue Apr 23 09:22:03 2024] Call Trace:
>> > [Tue Apr 23 09:22:03 2024] <TASK>
>> > [Tue Apr 23 09:22:03 2024] ? show_regs+0x6a/0x80
>> > [Tue Apr 23 09:22:03 2024] ? __die+0x25/0x70
>> > [Tue Apr 23 09:22:03 2024] ? page_fault_oops+0x79/0x180
>> > [Tue Apr 23 09:22:03 2024] ? srso_return_thunk+0x5/0x10
>> > [Tue Apr 23 09:22:03 2024] ? search_exception_tables+0x61/0x70
>> > [Tue Apr 23 09:22:03 2024] ? srso_return_thunk+0x5/0x10
>> > [Tue Apr 23 09:22:03 2024] ? kernelmode_fixup_or_oops+0xa2/0x120
>> > [Tue Apr 23 09:22:03 2024] ? __bad_area_nosemaphore+0x16f/0x280
>> > [Tue Apr 23 09:22:03 2024] ? terminate_walk+0x97/0xf0
>> > [Tue Apr 23 09:22:03 2024] ? bad_area_nosemaphore+0x16/0x20
>> > [Tue Apr 23 09:22:03 2024] ? do_kern_addr_fault+0x62/0x80
>> > [Tue Apr 23 09:22:03 2024] ? exc_page_fault+0xdb/0x160
>> > [Tue Apr 23 09:22:03 2024] ? asm_exc_page_fault+0x27/0x30
>> > [Tue Apr 23 09:22:03 2024] ? cifs_flush_folio+0x41/0xe0 [cifs]
>> > [Tue Apr 23 09:22:03 2024] cifs_remap_file_range+0x16c/0x5e0 [cifs]
>> > [Tue Apr 23 09:22:03 2024] do_clone_file_range+0x107/0x290
>> > [Tue Apr 23 09:22:03 2024] vfs_clone_file_range+0x3f/0x120
>> > [Tue Apr 23 09:22:03 2024] ioctl_file_clone+0x4d/0xa0
>> > [Tue Apr 23 09:22:03 2024] do_vfs_ioctl+0x35c/0x860
>> > [Tue Apr 23 09:22:03 2024] __x64_sys_ioctl+0x73/0xd0
>> > [Tue Apr 23 09:22:03 2024] do_syscall_64+0x5c/0x90
>> > [Tue Apr 23 09:22:03 2024] ? srso_return_thunk+0x5/0x10
>> > [Tue Apr 23 09:22:03 2024] ? exc_page_fault+0x80/0x160
>> > [Tue Apr 23 09:22:03 2024] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
>>
>>
>>
>> --
>> Thanks,
>>
>> Steve
I reviewed the launchpad bug. The problem seems to be well understood:
>>> Since the Ubuntu mantic kernel consumes both 6.1.y and 6.7.y / 6.8.y stable patches, this patch was applied to mantic's 6.5 kernel by mistake, and contains the wrong logic for how __filemap_get_folio() works in 6.5.
So the order of backport application seems to have led to this problem.
--
Regards,
Shyam
^ permalink raw reply [relevance 0%]
* Re: [PATCH RFC 2/7] filemap: Change mapping_set_folio_min_order() -> mapping_set_folio_orders()
2024-04-22 14:39 6% ` [PATCH RFC 2/7] filemap: Change mapping_set_folio_min_order() -> mapping_set_folio_orders() John Garry
@ 2024-04-25 14:47 0% ` Pankaj Raghav (Samsung)
2024-04-26 8:02 0% ` John Garry
0 siblings, 1 reply; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-04-25 14:47 UTC (permalink / raw)
To: John Garry
Cc: axboe, brauner, djwong, viro, jack, akpm, willy, dchinner, tytso,
hch, martin.petersen, nilay, ritesh.list, mcgrof, linux-block,
linux-kernel, linux-xfs, linux-fsdevel, linux-mm, ojaswin,
p.raghav, jbongio, okiselev
On Mon, Apr 22, 2024 at 02:39:18PM +0000, John Garry wrote:
> Borrowed from:
>
> https://lore.kernel.org/linux-fsdevel/20240213093713.1753368-2-kernel@pankajraghav.com/
> (credit given in due course)
>
> We will need to be able to only use a single folio order for buffered
> atomic writes, so allow the mapping folio order min and max be set.
>
> We still have the restriction of not being able to support order-1
> folios - it will be required to lift this limit at some stage.
This is already supported upstream for file-backed folios:
commit: 8897277acfef7f70fdecc054073bea2542fc7a1b
> index fc8eb9c94e9c..c22455fa28a1 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -363,9 +363,10 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
> #endif
>
> /*
> - * mapping_set_folio_min_order() - Set the minimum folio order
> + * mapping_set_folio_orders() - Set the minimum and max folio order
In the new series (sorry, I forgot to CC you), I added a new helper
called mapping_set_folio_order_range() which does something similar;
based on willy's suggestion, it is named that way to avoid confusion:
https://lore.kernel.org/linux-xfs/20240425113746.335530-3-kernel@pankajraghav.com/
mapping_set_folio_min_order() also sets the max folio order to
MAX_PAGECACHE_ORDER anyway. So is there any need to call it explicitly
here?
> /**
> @@ -400,7 +406,7 @@ static inline void mapping_set_folio_min_order(struct address_space *mapping,
> */
> static inline void mapping_set_large_folios(struct address_space *mapping)
> {
> - mapping_set_folio_min_order(mapping, 0);
> + mapping_set_folio_orders(mapping, 0, MAX_PAGECACHE_ORDER);
> }
>
> static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index d81530b0aac0..d5effe50ddcb 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1898,9 +1898,15 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> no_page:
> if (!folio && (fgp_flags & FGP_CREAT)) {
> unsigned int min_order = mapping_min_folio_order(mapping);
> - unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> + unsigned int max_order = mapping_max_folio_order(mapping);
> + unsigned int order = FGF_GET_ORDER(fgp_flags);
> int err;
>
> + if (order > max_order)
> + order = max_order;
> + else if (order < min_order)
> + order = max_order;
order = min_order; ?
--
Pankaj
^ permalink raw reply [relevance 0%]
* [PATCH v4 03/11] filemap: allocate mapping_min_order folios in the page cache
@ 2024-04-25 11:37 14% ` Pankaj Raghav (Samsung)
2024-04-25 19:04 0% ` Hannes Reinecke
2024-04-26 15:12 0% ` Darrick J. Wong
0 siblings, 2 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-04-25 11:37 UTC (permalink / raw)
To: willy, djwong, brauner, david, chandan.babu, akpm
Cc: linux-fsdevel, hare, linux-kernel, linux-mm, linux-xfs, mcgrof,
gost.dev, p.raghav
From: Luis Chamberlain <mcgrof@kernel.org>
filemap_create_folio() and do_read_cache_folio() were always allocating
folio of order 0. __filemap_get_folio was trying to allocate higher
order folios when fgp_flags had higher order hint set but it will default
to order 0 folio if higher order memory allocation fails.
Supporting mapping_min_order implies that we guarantee each folio in the
page cache has at least an order of mapping_min_order. When adding new
folios to the page cache we must also ensure the index used is aligned to
the mapping_min_order as the page cache requires the index to be aligned
to the order of the folio.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
mm/filemap.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 30de18c4fd28..f0c0cfbbd134 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -858,6 +858,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
mapping_set_update(&xas, mapping);
if (!huge) {
@@ -1895,8 +1897,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
+ index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
gfp |= __GFP_WRITE;
@@ -1936,7 +1940,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2425,13 +2429,16 @@ static int filemap_update_page(struct kiocb *iocb,
}
static int filemap_create_folio(struct file *file,
- struct address_space *mapping, pgoff_t index,
+ struct address_space *mapping, loff_t pos,
struct folio_batch *fbatch)
{
struct folio *folio;
int error;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t index;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ min_order);
if (!folio)
return -ENOMEM;
@@ -2449,6 +2456,8 @@ static int filemap_create_folio(struct file *file,
* well to keep locking rules simple.
*/
filemap_invalidate_lock_shared(mapping);
+ /* index in PAGE units but aligned to min_order number of pages. */
+ index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
error = filemap_add_folio(mapping, folio, index,
mapping_gfp_constraint(mapping, GFP_KERNEL));
if (error == -EEXIST)
@@ -2509,8 +2518,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- err = filemap_create_folio(filp, mapping,
- iocb->ki_pos >> PAGE_SHIFT, fbatch);
+ err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
if (err == AOP_TRUNCATED_PAGE)
goto retry;
return err;
@@ -3708,9 +3716,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
+ index = mapping_align_start_index(mapping, index);
err = filemap_add_folio(mapping, folio, index, gfp);
if (unlikely(err)) {
folio_put(folio);
--
2.34.1
^ permalink raw reply related [relevance 14%]
* Re: [PATCH 04/11] filemap: add FGP_CREAT_ONLY
2024-04-04 18:50 12% ` [PATCH 04/11] filemap: add FGP_CREAT_ONLY Paolo Bonzini
@ 2024-04-25 5:52 0% ` Paolo Bonzini
2024-04-29 13:26 0% ` Vlastimil Babka
0 siblings, 1 reply; 200+ results
From: Paolo Bonzini @ 2024-04-25 5:52 UTC (permalink / raw)
To: linux-kernel, kvm, Matthew Wilcox, Vlastimil Babka
Cc: seanjc, michael.roth, isaku.yamahata, Yosry Ahmed
On 4/4/24 20:50, Paolo Bonzini wrote:
> KVM would like to add a ioctl to encrypt and install a page into private
> memory (i.e. into a guest_memfd), in preparation for launching an
> encrypted guest.
>
> This API should be used only once per page (unless there are failures),
> so we want to rule out the possibility of operating on a page that is
> already in the guest_memfd's filemap. Overwriting the page is almost
> certainly a sign of a bug, so we might as well forbid it.
>
> Therefore, introduce a new flag for __filemap_get_folio (to be passed
> together with FGP_CREAT) that allows *adding* a new page to the filemap
> but not returning an existing one.
>
> An alternative possibility would be to force KVM users to initialize
> the whole filemap in one go, but that is complicated by the fact that
> the filemap includes pages of different kinds, including some that are
> per-vCPU rather than per-VM. Basically the result would be closer to
> a system call that multiplexes multiple ioctls, than to something
> cleaner like readv/writev.
>
> Races between callers that pass FGP_CREAT_ONLY are uninteresting to
> the filemap code: one of the racers wins and one fails with EEXIST,
> similar to calling open(2) with O_CREAT|O_EXCL. It doesn't matter to
> filemap.c if the missing synchronization is in the kernel or in userspace,
> and in fact it could even be intentional. (In the case of KVM it turns
> out that a mutex is taken around these calls for unrelated reasons,
> so there can be no races.)
>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Matthew, are your objections still valid or could I have your ack?
Thanks,
Paolo
> ---
> include/linux/pagemap.h | 2 ++
> mm/filemap.c | 4 ++++
> 2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index f879c1d54da7..a8c0685e8c08 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -587,6 +587,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> * added to the page cache and the VM's LRU list. The folio is
> * returned locked.
> + * * %FGP_CREAT_ONLY - Fail if a folio is present
> * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> * folio is already in cache. If the folio was allocated, unlock it
> * before returning so the caller can do the same dance.
> @@ -607,6 +608,7 @@ typedef unsigned int __bitwise fgf_t;
> #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> #define FGP_STABLE ((__force fgf_t)0x00000080)
> +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
>
> #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 7437b2bd75c1..e7440e189ebd 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1863,6 +1863,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio = NULL;
> if (!folio)
> goto no_page;
> + if (fgp_flags & FGP_CREAT_ONLY) {
> + folio_put(folio);
> + return ERR_PTR(-EEXIST);
> + }
>
> if (fgp_flags & FGP_LOCK) {
> if (fgp_flags & FGP_NOWAIT) {
^ permalink raw reply [relevance 0%]
* [PATCH v2 06/11] ntfs3: Convert attr_make_nonresident to use a folio
2024-04-22 19:31 7% ` [PATCH v2 02/11] ntfs3: Convert ntfs_write_begin to use a folio Matthew Wilcox (Oracle)
@ 2024-04-22 19:31 7% ` Matthew Wilcox (Oracle)
2024-04-22 19:32 7% ` [PATCH v2 10/11] ntfs3: Convert ntfs_get_frame_pages() " Matthew Wilcox (Oracle)
2 siblings, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-22 19:31 UTC (permalink / raw)
To: Konstantin Komarov; +Cc: Matthew Wilcox (Oracle), ntfs3, linux-fsdevel
Fetch a folio from the page cache instead of a page and operate on it.
Take advantage of the new helpers to avoid handling highmem ourselves,
and combine the uptodate + unlock operations into folio_end_read().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/attrib.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
index 02fa3245850a..d253840c26cf 100644
--- a/fs/ntfs3/attrib.c
+++ b/fs/ntfs3/attrib.c
@@ -302,22 +302,20 @@ int attr_make_nonresident(struct ntfs_inode *ni, struct ATTRIB *attr,
if (err)
goto out2;
} else if (!page) {
- char *kaddr;
-
- page = grab_cache_page(ni->vfs_inode.i_mapping, 0);
- if (!page) {
- err = -ENOMEM;
+ struct address_space *mapping = ni->vfs_inode.i_mapping;
+ struct folio *folio;
+
+ folio = __filemap_get_folio(mapping, 0,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ mapping_gfp_mask(mapping));
+ if (IS_ERR(folio)) {
+ err = PTR_ERR(folio);
goto out2;
}
- kaddr = kmap_atomic(page);
- memcpy(kaddr, data, rsize);
- memset(kaddr + rsize, 0, PAGE_SIZE - rsize);
- kunmap_atomic(kaddr);
- flush_dcache_page(page);
- SetPageUptodate(page);
- set_page_dirty(page);
- unlock_page(page);
- put_page(page);
+ folio_fill_tail(folio, 0, data, rsize);
+ folio_mark_dirty(folio);
+ folio_end_read(folio, true);
+ folio_put(folio);
}
}
--
2.43.0
^ permalink raw reply related [relevance 7%]
* [PATCH v2 10/11] ntfs3: Convert ntfs_get_frame_pages() to use a folio
2024-04-22 19:31 7% ` [PATCH v2 02/11] ntfs3: Convert ntfs_write_begin to use a folio Matthew Wilcox (Oracle)
2024-04-22 19:31 7% ` [PATCH v2 06/11] ntfs3: Convert attr_make_nonresident " Matthew Wilcox (Oracle)
@ 2024-04-22 19:32 7% ` Matthew Wilcox (Oracle)
2 siblings, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-22 19:32 UTC (permalink / raw)
To: Konstantin Komarov; +Cc: Matthew Wilcox (Oracle), ntfs3, linux-fsdevel
The function still takes an array of pages, but use a folio internally.
This function would deadlock against itself if used with large folios
(as it locks each page), so we can be a little sloppy with the conversion
back from folio to page for now.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/file.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
index 2f903b6ce157..40d3e7d0567a 100644
--- a/fs/ntfs3/file.c
+++ b/fs/ntfs3/file.c
@@ -824,23 +824,24 @@ static int ntfs_get_frame_pages(struct address_space *mapping, pgoff_t index,
*frame_uptodate = true;
for (npages = 0; npages < pages_per_frame; npages++, index++) {
- struct page *page;
+ struct folio *folio;
- page = find_or_create_page(mapping, index, gfp_mask);
- if (!page) {
+ folio = __filemap_get_folio(mapping, index,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp_mask);
+ if (IS_ERR(folio)) {
while (npages--) {
- page = pages[npages];
- unlock_page(page);
- put_page(page);
+ folio = page_folio(pages[npages]);
+ folio_unlock(folio);
+ folio_put(folio);
}
return -ENOMEM;
}
- if (!PageUptodate(page))
+ if (!folio_test_uptodate(folio))
*frame_uptodate = false;
- pages[npages] = page;
+ pages[npages] = &folio->page;
}
return 0;
--
2.43.0
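The error path in the patch above walks backwards over pages[], unlocking and dropping every folio already grabbed before returning -ENOMEM. A userspace sketch of that unwind idiom (locking replaced by a simple held-flag; all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS 8

static int held[NSLOTS];	/* 1 while a slot is "locked and referenced" */

static int grab(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulate __filemap_get_folio() failing */
	held[i] = 1;
	return 0;
}

static void release(int i)
{
	held[i] = 0;
}

/*
 * Grab n slots in order; on failure, release the ones already taken --
 * the `while (npages--)` unwind loop in ntfs_get_frame_pages().
 */
static int grab_all(int n, int fail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		if (grab(i, fail_at)) {
			while (i--)
				release(i);
			return -1;
		}
	}
	return 0;
}
```

Either every slot is held on success, or none are on failure; there is no partially-held state visible to the caller.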
* [PATCH v2 02/11] ntfs3: Convert ntfs_write_begin to use a folio
@ 2024-04-22 19:31 7% ` Matthew Wilcox (Oracle)
2024-04-22 19:31 7% ` [PATCH v2 06/11] ntfs3: Convert attr_make_nonresident " Matthew Wilcox (Oracle)
2024-04-22 19:32 7% ` [PATCH v2 10/11] ntfs3: Convert ntfs_get_frame_pages() " Matthew Wilcox (Oracle)
2 siblings, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-22 19:31 UTC (permalink / raw)
To: Konstantin Komarov; +Cc: Matthew Wilcox (Oracle), ntfs3, linux-fsdevel
Retrieve a folio from the page cache instead of a precise page.
This function is now large-folio safe, but the function it calls is not.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/inode.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index bdaea9c783ad..794d2aa3a5ab 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -913,24 +913,25 @@ int ntfs_write_begin(struct file *file, struct address_space *mapping,
*pagep = NULL;
if (is_resident(ni)) {
- struct page *page =
- grab_cache_page_write_begin(mapping, pos >> PAGE_SHIFT);
+ struct folio *folio = __filemap_get_folio(mapping,
+ pos >> PAGE_SHIFT, FGP_WRITEBEGIN,
+ mapping_gfp_mask(mapping));
- if (!page) {
- err = -ENOMEM;
+ if (IS_ERR(folio)) {
+ err = PTR_ERR(folio);
goto out;
}
ni_lock(ni);
- err = attr_data_read_resident(ni, page);
+ err = attr_data_read_resident(ni, &folio->page);
ni_unlock(ni);
if (!err) {
- *pagep = page;
+ *pagep = &folio->page;
goto out;
}
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
if (err != E_NTFS_NONRESIDENT)
goto out;
--
2.43.0
* Re: [PATCH 09/11] KVM: guest_memfd: Add interface for populating gmem pages with user data
@ 2024-04-22 14:44 6% ` Xu Yilun
1 sibling, 0 replies; 200+ results
From: Xu Yilun @ 2024-04-22 14:44 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm, seanjc, michael.roth, isaku.yamahata
On Thu, Apr 04, 2024 at 02:50:31PM -0400, Paolo Bonzini wrote:
> During guest run-time, kvm_arch_gmem_prepare() is issued as needed to
> prepare newly-allocated gmem pages prior to mapping them into the guest.
> In the case of SEV-SNP, this mainly involves setting the pages to
> private in the RMP table.
>
> However, for the GPA ranges comprising the initial guest payload, which
> are encrypted/measured prior to starting the guest, the gmem pages need
> to be accessed prior to setting them to private in the RMP table so they
> can be initialized with the userspace-provided data. Additionally, an
> SNP firmware call is needed afterward to encrypt them in-place and
> measure the contents into the guest's launch digest.
>
> While it is possible to bypass the kvm_arch_gmem_prepare() hooks so that
> this handling can be done in an open-coded/vendor-specific manner, this
> may expose more gmem-internal state/dependencies to external callers
> than necessary. Try to avoid this by implementing an interface that
> tries to handle as much of the common functionality inside gmem as
> possible, while also making it generic enough to potentially be
> usable/extensible for TDX as well.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> Co-developed-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> include/linux/kvm_host.h | 26 ++++++++++++++
> virt/kvm/guest_memfd.c | 78 ++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 104 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 33ed3b884a6b..97d57ec59789 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2450,4 +2450,30 @@ int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_ord
> bool kvm_arch_gmem_prepare_needed(struct kvm *kvm);
> #endif
>
> +/**
> + * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
> + *
> + * @kvm: KVM instance
> + * @gfn: starting GFN to be populated
> + * @src: userspace-provided buffer containing data to copy into GFN range
> + * (passed to @post_populate, and incremented on each iteration
> + * if not NULL)
> + * @npages: number of pages to copy from userspace-buffer
> + * @post_populate: callback to issue for each gmem page that backs the GPA
> + * range
> + * @opaque: opaque data to pass to @post_populate callback
> + *
> + * This is primarily intended for cases where a gmem-backed GPA range needs
> + * to be initialized with userspace-provided data prior to being mapped into
> + * the guest as a private page. This should be called with the slots->lock
> + * held so that caller-enforced invariants regarding the expected memory
> + * attributes of the GPA range do not race with KVM_SET_MEMORY_ATTRIBUTES.
> + *
> + * Returns the number of pages that were populated.
> + */
> +long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
> + int (*post_populate)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
> + void __user *src, int order, void *opaque),
> + void *opaque);
> +
> #endif
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 51c99667690a..e7de97382a67 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -602,3 +602,81 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> return r;
> }
> EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
> +
> +static int kvm_gmem_undo_get_pfn(struct file *file, struct kvm_memory_slot *slot,
> + gfn_t gfn, int order)
> +{
> + pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
> + struct kvm_gmem *gmem = file->private_data;
> +
> + /*
> + * Races with kvm_gmem_unbind() must have been detected by
> + * __kvm_gmem_get_gfn(), because the invalidate_lock is
> + * taken between __kvm_gmem_get_gfn() and kvm_gmem_undo_get_pfn().
> + */
> + if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot))
> + return -EIO;
> +
> + return __kvm_gmem_punch_hole(file_inode(file), index << PAGE_SHIFT, PAGE_SIZE << order);
> +}
> +
> +long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
> + int (*post_populate)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
> + void __user *src, int order, void *opaque),
> + void *opaque)
> +{
> + struct file *file;
> + struct kvm_memory_slot *slot;
> +
> + int ret = 0, max_order;
> + long i;
> +
> + lockdep_assert_held(&kvm->slots_lock);
> + if (npages < 0)
> + return -EINVAL;
> +
> + slot = gfn_to_memslot(kvm, gfn);
> + if (!kvm_slot_can_be_private(slot))
> + return -EINVAL;
> +
> + file = kvm_gmem_get_file(slot);
> + if (!file)
> + return -EFAULT;
> +
> + filemap_invalidate_lock(file->f_mapping);
> +
> + npages = min_t(ulong, slot->npages - (gfn - slot->base_gfn), npages);
> + for (i = 0; i < npages; i += (1 << max_order)) {
> + gfn_t this_gfn = gfn + i;
> + kvm_pfn_t pfn;
> +
> + ret = __kvm_gmem_get_pfn(file, slot, this_gfn, &pfn, &max_order, false);
> + if (ret)
> + break;
> +
> + if (!IS_ALIGNED(this_gfn, (1 << max_order)) ||
> + (npages - i) < (1 << max_order))
> + max_order = 0;
> +
> + if (post_populate) {
> + void __user *p = src ? src + i * PAGE_SIZE : NULL;
> + ret = post_populate(kvm, this_gfn, pfn, p, max_order, opaque);
I don't see the main difference between gmem_prepare() and post_populate()
from gmem's point of view. They are both vendor callbacks invoked after
__filemap_get_folio(). Is it possible for gmem to choose to call
gmem_prepare() or post_populate() outside __kvm_gmem_get_pfn()? Or even
to pass all the parameters to a single gmem_prepare() and let the vendor
code decide what to do.
> + }
> +
> + put_page(pfn_to_page(pfn));
> + if (ret) {
> + /*
> + * Punch a hole so that FGP_CREAT_ONLY can succeed
> + * again.
> + */
> + kvm_gmem_undo_get_pfn(file, slot, this_gfn, max_order);
> + break;
> + }
> + }
> +
> + filemap_invalidate_unlock(file->f_mapping);
> +
> + fput(file);
> + return ret && !i ? ret : i;
> +}
> +EXPORT_SYMBOL_GPL(kvm_gmem_populate);
> --
> 2.43.0
>
>
>
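The per-iteration clamp in the kvm_gmem_populate() loop quoted above drops to order 0 whenever the current gfn is not aligned to a 2^max_order boundary or fewer than 2^max_order pages remain. That pure check can be sketched in userspace as (names are illustrative):

```c
#include <assert.h>

typedef unsigned long long gfn_t;

/*
 * Sketch of the clamp in kvm_gmem_populate(): keep max_order only when
 * this_gfn is aligned to a 2^max_order boundary and at least that many
 * pages remain in the request; otherwise fall back to a single page
 * (order 0).
 */
static int clamp_order(gfn_t this_gfn, long remaining, int max_order)
{
	if ((this_gfn & ((1ULL << max_order) - 1)) ||
	    remaining < (1L << max_order))
		return 0;
	return max_order;
}
```

This is what keeps the loop's stride (`i += 1 << max_order`) from stepping past the end of the populated range or handing the callback a misaligned huge mapping.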
* [PATCH RFC 2/7] filemap: Change mapping_set_folio_min_order() -> mapping_set_folio_orders()
@ 2024-04-22 14:39 6% ` John Garry
2024-04-25 14:47 0% ` Pankaj Raghav (Samsung)
2024-04-22 14:39 10% ` [PATCH RFC 5/7] fs: iomap: buffered atomic write support John Garry
1 sibling, 1 reply; 200+ results
From: John Garry @ 2024-04-22 14:39 UTC (permalink / raw)
To: axboe, brauner, djwong, viro, jack, akpm, willy, dchinner, tytso,
hch, martin.petersen, nilay, ritesh.list, mcgrof
Cc: linux-block, linux-kernel, linux-xfs, linux-fsdevel, linux-mm,
ojaswin, p.raghav, jbongio, okiselev, John Garry
Borrowed from:
https://lore.kernel.org/linux-fsdevel/20240213093713.1753368-2-kernel@pankajraghav.com/
(credit given in due course)
We will need to be able to use only a single folio order for buffered
atomic writes, so allow both the minimum and maximum mapping folio order
to be set. We still have the restriction of not being able to support
order-1 folios; this limit will need to be lifted at some stage.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
fs/xfs/xfs_icache.c | 10 ++++++----
include/linux/pagemap.h | 20 +++++++++++++-------
mm/filemap.c | 8 +++++++-
3 files changed, 26 insertions(+), 12 deletions(-)
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 8fb5cf0f5a09..6186887bd6ff 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -89,8 +89,9 @@ xfs_inode_alloc(
/* VFS doesn't initialise i_mode or i_state! */
VFS_I(ip)->i_mode = 0;
VFS_I(ip)->i_state = 0;
- mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
- M_IGEO(mp)->min_folio_order);
+ mapping_set_folio_orders(VFS_I(ip)->i_mapping,
+ M_IGEO(mp)->min_folio_order,
+ MAX_PAGECACHE_ORDER);
XFS_STATS_INC(mp, vn_active);
ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -325,8 +326,9 @@ xfs_reinit_inode(
inode->i_rdev = dev;
inode->i_uid = uid;
inode->i_gid = gid;
- mapping_set_folio_min_order(inode->i_mapping,
- M_IGEO(mp)->min_folio_order);
+ mapping_set_folio_orders(inode->i_mapping,
+ M_IGEO(mp)->min_folio_order,
+ MAX_PAGECACHE_ORDER);
return error;
}
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index fc8eb9c94e9c..c22455fa28a1 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -363,9 +363,10 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
#endif
/*
- * mapping_set_folio_min_order() - Set the minimum folio order
+ * mapping_set_folio_orders() - Set the minimum and max folio order
* @mapping: The address_space.
* @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
*
* The filesystem should call this function in its inode constructor to
* indicate which base size of folio the VFS can use to cache the contents
@@ -376,15 +377,20 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
* Context: This should not be called while the inode is active as it
* is non-atomic.
*/
-static inline void mapping_set_folio_min_order(struct address_space *mapping,
- unsigned int min)
+
+static inline void mapping_set_folio_orders(struct address_space *mapping,
+ unsigned int min, unsigned int max)
{
- if (min > MAX_PAGECACHE_ORDER)
- min = MAX_PAGECACHE_ORDER;
+ if (min == 1)
+ min = 2;
+ if (max < min)
+ max = min;
+ if (max > MAX_PAGECACHE_ORDER)
+ max = MAX_PAGECACHE_ORDER;
mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
(min << AS_FOLIO_ORDER_MIN) |
- (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
+ (max << AS_FOLIO_ORDER_MAX);
}
/**
@@ -400,7 +406,7 @@ static inline void mapping_set_folio_min_order(struct address_space *mapping,
*/
static inline void mapping_set_large_folios(struct address_space *mapping)
{
- mapping_set_folio_min_order(mapping, 0);
+ mapping_set_folio_orders(mapping, 0, MAX_PAGECACHE_ORDER);
}
static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
diff --git a/mm/filemap.c b/mm/filemap.c
index d81530b0aac0..d5effe50ddcb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1898,9 +1898,15 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
unsigned int min_order = mapping_min_folio_order(mapping);
- unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
+ unsigned int max_order = mapping_max_folio_order(mapping);
+ unsigned int order = FGF_GET_ORDER(fgp_flags);
int err;
+ if (order > max_order)
+ order = max_order;
+ else if (order < min_order)
+ order = min_order;
+
index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
--
2.31.1
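The clamping rules in mapping_set_folio_orders() above (no order-1 folios, max never below min, max capped at MAX_PAGECACHE_ORDER) reduce to a small pure function. A userspace sketch, with MAX_PAGECACHE_ORDER set to an illustrative value (it is config-dependent in the kernel):

```c
#include <assert.h>

#define MAX_PAGECACHE_ORDER 8	/* illustrative; config-dependent in the kernel */

struct orders {
	unsigned int min, max;
};

/*
 * Sketch of the clamping in mapping_set_folio_orders(): order-1 folios
 * are not supported (bump min to 2), max may not be below min, and max
 * is capped at MAX_PAGECACHE_ORDER.
 */
static struct orders clamp_orders(unsigned int min, unsigned int max)
{
	if (min == 1)
		min = 2;
	if (max < min)
		max = min;
	if (max > MAX_PAGECACHE_ORDER)
		max = MAX_PAGECACHE_ORDER;
	return (struct orders){ min, max };
}
```

The resulting pair is what gets packed into mapping->flags via the AS_FOLIO_ORDER_MIN/MAX bit fields.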
* [PATCH RFC 5/7] fs: iomap: buffered atomic write support
2024-04-22 14:39 6% ` [PATCH RFC 2/7] filemap: Change mapping_set_folio_min_order() -> mapping_set_folio_orders() John Garry
@ 2024-04-22 14:39 10% ` John Garry
1 sibling, 0 replies; 200+ results
From: John Garry @ 2024-04-22 14:39 UTC (permalink / raw)
To: axboe, brauner, djwong, viro, jack, akpm, willy, dchinner, tytso,
hch, martin.petersen, nilay, ritesh.list, mcgrof
Cc: linux-block, linux-kernel, linux-xfs, linux-fsdevel, linux-mm,
ojaswin, p.raghav, jbongio, okiselev, John Garry
Add special handling of PG_atomic flag to iomap buffered write path.
To flag an iomap iter for an atomic write, set IOMAP_ATOMIC.
For a folio associated with a write which has IOMAP_ATOMIC set, set
PG_atomic.
Otherwise, when IOMAP_ATOMIC is unset, clear PG_atomic.
This means that an "atomic" folio which has not yet been written back
loses its "atomicity". So if userspace issues a write with RWF_ATOMIC set
and then another write with RWF_ATOMIC unset which fully or partially
overwrites the same region as the first write, that folio is not written
back atomically. Such a scenario would be considered a userspace usage
error.
To ensure that a buffered atomic write is written back atomically when
the write syscall returns, RWF_SYNC or similar needs to be used (in
conjunction with RWF_ATOMIC).
As a safety check, when getting a folio for an atomic write in
iomap_get_folio(), ensure that the length matches the inode mapping folio
order-limit.
Only a single BIO should ever be submitted for an atomic write. So modify
iomap_add_to_ioend() to ensure that we don't try to write back an atomic
folio as part of a larger mixed-atomicity BIO.
In iomap_alloc_ioend(), handle an atomic write by setting REQ_ATOMIC for
the allocated BIO.
When a folio is written back, again clear PG_atomic, as it is no longer
required. I assume it will not be needlessly written back a second time...
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
fs/iomap/buffered-io.c | 53 ++++++++++++++++++++++++++++++++++++------
fs/iomap/trace.h | 3 ++-
include/linux/iomap.h | 1 +
3 files changed, 49 insertions(+), 8 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..ac2a014c91a9 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -586,13 +586,25 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
*/
struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos, size_t len)
{
+ struct address_space *mapping = iter->inode->i_mapping;
fgf_t fgp = FGP_WRITEBEGIN | FGP_NOFS;
if (iter->flags & IOMAP_NOWAIT)
fgp |= FGP_NOWAIT;
fgp |= fgf_set_order(len);
- return __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT,
+ if (iter->flags & IOMAP_ATOMIC) {
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int max_order = mapping_max_folio_order(mapping);
+ unsigned int order = FGF_GET_ORDER(fgp);
+
+ if (order != min_order)
+ return ERR_PTR(-EINVAL);
+ if (order != max_order)
+ return ERR_PTR(-EINVAL);
+ }
+
+ return __filemap_get_folio(mapping, pos >> PAGE_SHIFT,
fgp, mapping_gfp_mask(iter->inode->i_mapping));
}
EXPORT_SYMBOL_GPL(iomap_get_folio);
@@ -769,6 +781,7 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
{
const struct iomap_folio_ops *folio_ops = iter->iomap.folio_ops;
const struct iomap *srcmap = iomap_iter_srcmap(iter);
+ bool is_atomic = iter->flags & IOMAP_ATOMIC;
struct folio *folio;
int status = 0;
@@ -786,6 +799,11 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
if (IS_ERR(folio))
return PTR_ERR(folio);
+ if (is_atomic)
+ folio_set_atomic(folio);
+ else
+ folio_clear_atomic(folio);
+
/*
* Now we have a locked folio, before we do anything with it we need to
* check that the iomap we have cached is not stale. The inode extent
@@ -1010,6 +1028,8 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
if (iocb->ki_flags & IOCB_NOWAIT)
iter.flags |= IOMAP_NOWAIT;
+ if (iocb->ki_flags & IOCB_ATOMIC)
+ iter.flags |= IOMAP_ATOMIC;
while ((ret = iomap_iter(&iter, ops)) > 0)
iter.processed = iomap_write_iter(&iter, i);
@@ -1499,8 +1519,10 @@ static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !ifs);
WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) <= 0);
- if (!ifs || atomic_sub_and_test(len, &ifs->write_bytes_pending))
+ if (!ifs || atomic_sub_and_test(len, &ifs->write_bytes_pending)) {
+ folio_clear_atomic(folio);
folio_end_writeback(folio);
+ }
}
/*
@@ -1679,14 +1701,18 @@ static int iomap_submit_ioend(struct iomap_writepage_ctx *wpc, int error)
}
static struct iomap_ioend *iomap_alloc_ioend(struct iomap_writepage_ctx *wpc,
- struct writeback_control *wbc, struct inode *inode, loff_t pos)
+ struct writeback_control *wbc, struct inode *inode, loff_t pos,
+ bool atomic)
{
+ blk_opf_t opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
struct iomap_ioend *ioend;
struct bio *bio;
+ if (atomic)
+ opf |= REQ_ATOMIC;
+
bio = bio_alloc_bioset(wpc->iomap.bdev, BIO_MAX_VECS,
- REQ_OP_WRITE | wbc_to_write_flags(wbc),
- GFP_NOFS, &iomap_ioend_bioset);
+ opf, GFP_NOFS, &iomap_ioend_bioset);
bio->bi_iter.bi_sector = iomap_sector(&wpc->iomap, pos);
bio->bi_end_io = iomap_writepage_end_bio;
wbc_init_bio(wbc, bio);
@@ -1744,14 +1770,27 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
{
struct iomap_folio_state *ifs = folio->private;
size_t poff = offset_in_folio(folio, pos);
+ bool is_atomic = folio_test_atomic(folio);
int error;
- if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos)) {
+ if (!wpc->ioend || is_atomic || !iomap_can_add_to_ioend(wpc, pos)) {
new_ioend:
error = iomap_submit_ioend(wpc, 0);
if (error)
return error;
- wpc->ioend = iomap_alloc_ioend(wpc, wbc, inode, pos);
+ wpc->ioend = iomap_alloc_ioend(wpc, wbc, inode, pos, is_atomic);
+ }
+
+ /* We must not append anything later if atomic, so submit now */
+ if (is_atomic) {
+ if (!bio_add_folio(&wpc->ioend->io_bio, folio, len, poff))
+ return -EINVAL;
+ wpc->ioend->io_size = len;
+ wbc_account_cgroup_owner(wbc, &folio->page, len);
+ if (ifs)
+ atomic_add(len, &ifs->write_bytes_pending);
+
+ return iomap_submit_ioend(wpc, 0);
}
if (!bio_add_folio(&wpc->ioend->io_bio, folio, len, poff))
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 0a991c4ce87d..4118a42cdab0 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -98,7 +98,8 @@ DEFINE_RANGE_EVENT(iomap_dio_rw_queued);
{ IOMAP_REPORT, "REPORT" }, \
{ IOMAP_FAULT, "FAULT" }, \
{ IOMAP_DIRECT, "DIRECT" }, \
- { IOMAP_NOWAIT, "NOWAIT" }
+ { IOMAP_NOWAIT, "NOWAIT" }, \
+ { IOMAP_ATOMIC, "ATOMIC" }
#define IOMAP_F_FLAGS_STRINGS \
{ IOMAP_F_NEW, "NEW" }, \
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index f726f0058fd6..2f50abe06f27 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -179,6 +179,7 @@ struct iomap_folio_ops {
#else
#define IOMAP_DAX 0
#endif /* CONFIG_FS_DAX */
+#define IOMAP_ATOMIC (1 << 9)
struct iomap_ops {
/*
--
2.31.1
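The safety check added to iomap_get_folio() in the patch above insists that, for an atomic write, the order derived from the write length equals both the mapping's minimum and maximum folio order — i.e. the mapping must be pinned to exactly one folio size. A userspace sketch of that check (the helper name is illustrative):

```c
#include <assert.h>
#include <errno.h>

/*
 * Sketch of the IOMAP_ATOMIC check in iomap_get_folio(): a buffered
 * atomic write is only allowed when the requested order matches both
 * the minimum and maximum folio order of the mapping, so only a single
 * folio size is possible and the write cannot straddle folios.
 */
static int atomic_order_ok(unsigned int order, unsigned int min_order,
			   unsigned int max_order)
{
	if (order != min_order || order != max_order)
		return -EINVAL;
	return 0;
}
```

This is why the earlier patch in the series lets the filesystem set min and max folio order to the same value for atomic-write-capable inodes.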
* Re: [PATCH 06/11] KVM: guest_memfd: Add hook for initializing memory
2024-04-04 18:50 5% ` [PATCH 06/11] KVM: guest_memfd: Add hook for initializing memory Paolo Bonzini
@ 2024-04-22 10:53 0% ` Xu Yilun
0 siblings, 0 replies; 200+ results
From: Xu Yilun @ 2024-04-22 10:53 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm, seanjc, michael.roth, isaku.yamahata
On Thu, Apr 04, 2024 at 02:50:28PM -0400, Paolo Bonzini wrote:
> guest_memfd pages are generally expected to be in some arch-defined
> initial state prior to using them for guest memory. For SEV-SNP this
> initial state is 'private', or 'guest-owned', and requires additional
> operations to move these pages into a 'private' state by updating the
> corresponding entries in the RMP table.
>
> Allow for an arch-defined hook to handle updates of this sort, and go
> ahead and implement one for x86 so KVM implementations like AMD SVM can
> register a kvm_x86_ops callback to handle these updates for SEV-SNP
> guests.
>
> The preparation callback is always called when allocating/grabbing
> folios via gmem, and it is up to the architecture to keep track of
> whether or not the pages are already in the expected state (e.g. the RMP
> table in the case of SEV-SNP).
>
> In some cases, it is necessary to defer the preparation of the pages to
> handle things like in-place encryption of initial guest memory payloads
> before marking these pages as 'private'/'guest-owned'. Add an argument
> (always true for now) to kvm_gmem_get_folio() that allows for the
> preparation callback to be bypassed. To detect possible issues in
IIUC, we have 2 dedicated flows.
1 kvm_gmem_get_pfn() or kvm_gmem_allocate()
a. kvm_gmem_get_folio()
b. gmem_prepare() for RMP
2 in-place encryption or whatever
a. kvm_gmem_get_folio(FGP_CREAT_ONLY)
b. in-place encryption
c. gmem_prepare() for RMP
Could we move gmem_prepare() out of kvm_gmem_get_folio()? Then we would
have a straightforward flow for each case and wouldn't need an argument
to postpone gmem_prepare().
> the way userspace initializes memory, it is only possible to add an
> unprepared page if it is not already included in the filemap.
>
> Link: https://lore.kernel.org/lkml/ZLqVdvsF11Ddo7Dq@google.com/
> Co-developed-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> Message-Id: <20231230172351.574091-5-michael.roth@amd.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/kvm-x86-ops.h | 1 +
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/x86.c | 6 +++
> include/linux/kvm_host.h | 5 +++
> virt/kvm/Kconfig | 4 ++
> virt/kvm/guest_memfd.c | 65 ++++++++++++++++++++++++++++--
> 6 files changed, 78 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 5187fcf4b610..d26fcad13e36 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -139,6 +139,7 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
> KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
> KVM_X86_OP_OPTIONAL(get_untagged_addr)
> KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
> +KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
>
> #undef KVM_X86_OP
> #undef KVM_X86_OP_OPTIONAL
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 01c69840647e..f101fab0040e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1809,6 +1809,7 @@ struct kvm_x86_ops {
>
> gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
> void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
> + int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
> };
>
> struct kvm_x86_nested_ops {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2d2619d3eee4..972524ddcfdb 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13598,6 +13598,12 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
>
> +#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> +int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order)
> +{
> + return static_call(kvm_x86_gmem_prepare)(kvm, pfn, gfn, max_order);
> +}
> +#endif
>
> int kvm_spec_ctrl_test_value(u64 value)
> {
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 48f31dcd318a..33ed3b884a6b 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2445,4 +2445,9 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
> }
> #endif /* CONFIG_KVM_PRIVATE_MEM */
>
> +#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> +int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
> +bool kvm_arch_gmem_prepare_needed(struct kvm *kvm);
> +#endif
> +
> #endif
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 29b73eedfe74..ca870157b2ed 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -109,3 +109,7 @@ config KVM_GENERIC_PRIVATE_MEM
> select KVM_GENERIC_MEMORY_ATTRIBUTES
> select KVM_PRIVATE_MEM
> bool
> +
> +config HAVE_KVM_GMEM_PREPARE
> + bool
> + depends on KVM_PRIVATE_MEM
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index e5b3cd02b651..486748e65f36 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,12 +13,60 @@ struct kvm_gmem {
> struct list_head entry;
> };
>
> -static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
> +#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> +bool __weak kvm_arch_gmem_prepare_needed(struct kvm *kvm)
> +{
> + return false;
> +}
> +#endif
In which case is HAVE_KVM_GMEM_PREPARE selected but
gmem_prepare_needed() never implemented? Then all the gmem_prepare stuff
is actually dead code. Maybe we don't need this weak stub?
> +
> +static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
> +{
> +#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> + struct list_head *gmem_list = &inode->i_mapping->i_private_list;
> + struct kvm_gmem *gmem;
> +
> + list_for_each_entry(gmem, gmem_list, entry) {
> + struct kvm_memory_slot *slot;
> + struct kvm *kvm = gmem->kvm;
> + struct page *page;
> + kvm_pfn_t pfn;
> + gfn_t gfn;
> + int rc;
> +
> + if (!kvm_arch_gmem_prepare_needed(kvm))
> + continue;
> +
> + slot = xa_load(&gmem->bindings, index);
> + if (!slot)
> + continue;
> +
> + page = folio_file_page(folio, index);
> + pfn = page_to_pfn(page);
> + gfn = slot->base_gfn + index - slot->gmem.pgoff;
> + rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, compound_order(compound_head(page)));
> + if (rc) {
> + pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx, error %d.\n",
> + index, rc);
> + return rc;
> + }
> + }
> +
> +#endif
> + return 0;
> +}
> +
> +static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
> {
> struct folio *folio;
> + fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;
> +
> + if (!prepare)
> + fgp_flags |= FGP_CREAT_ONLY;
>
> /* TODO: Support huge pages. */
> - folio = filemap_grab_folio(inode->i_mapping, index);
> + folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
> + mapping_gfp_mask(inode->i_mapping));
> if (IS_ERR_OR_NULL(folio))
> return folio;
>
> @@ -41,6 +89,15 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
> folio_mark_uptodate(folio);
> }
>
> + if (prepare) {
> + int r = kvm_gmem_prepare_folio(inode, index, folio);
> + if (r < 0) {
> + folio_unlock(folio);
> + folio_put(folio);
> + return ERR_PTR(r);
> + }
> + }
> +
Do we still need to prepare the page if it is hwpoisoned? I see the
hwpoisoned check is outside, in kvm_gmem_get_pfn().
Thanks,
Yilun
> /*
> * Ignore accessed, referenced, and dirty flags. The memory is
> * unevictable and there is no storage to write back to.
> @@ -145,7 +202,7 @@ static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
> break;
> }
>
> - folio = kvm_gmem_get_folio(inode, index);
> + folio = kvm_gmem_get_folio(inode, index, true);
> if (IS_ERR_OR_NULL(folio)) {
> r = folio ? PTR_ERR(folio) : -ENOMEM;
> break;
> @@ -505,7 +562,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> goto out_fput;
> }
>
> - folio = kvm_gmem_get_folio(file_inode(file), index);
> + folio = kvm_gmem_get_folio(file_inode(file), index, true);
> if (!folio) {
> r = -ENOMEM;
> goto out_fput;
> --
> 2.43.0
>
>
>
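The distinction driving the `prepare` argument discussed above is FGP_CREAT_ONLY: with it set, the page-cache lookup fails with -EEXIST if the index is already populated, which is how gmem can detect pages initialized in the wrong order. A tiny userspace analogue of that lookup semantic (flag values and array size here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

#define NIDX 16

static int populated[NIDX];	/* 1 if "the page cache" holds this index */

#define FGP_CREAT	0x1	/* illustrative values; real flags live in pagemap.h */
#define FGP_CREAT_ONLY	0x2

/* Sketch of get-or-create vs. create-only page-cache lookup semantics. */
static int get_folio(unsigned int index, unsigned int fgp_flags)
{
	if (populated[index]) {
		if (fgp_flags & FGP_CREAT_ONLY)
			return -EEXIST;	/* caller insisted on a fresh page */
		return 0;		/* found the existing page */
	}
	if (!(fgp_flags & FGP_CREAT))
		return -ENOENT;		/* lookup only, nothing there */
	populated[index] = 1;		/* allocate and insert */
	return 0;
}
```

The in-place-encryption flow wants the -EEXIST failure mode, because finding an already-present page there means userspace touched the range before populating it.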
* [PATCH 02/30] btrfs: Use a folio in write_dev_supers()
@ 2024-04-20 2:49 6% ` Matthew Wilcox (Oracle)
0 siblings, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-20 2:49 UTC (permalink / raw)
To: linux-fsdevel
Cc: Matthew Wilcox (Oracle),
Chris Mason, Josef Bacik, David Sterba, linux-btrfs
Remove some calls to obsolete APIs and some hidden calls to
compound_head().
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/btrfs/disk-io.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 32cf64ccd761..8fa7c526093c 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3760,9 +3760,10 @@ static int write_dev_supers(struct btrfs_device *device,
shash->tfm = fs_info->csum_shash;
for (i = 0; i < max_mirrors; i++) {
- struct page *page;
+ struct folio *folio;
struct bio *bio;
struct btrfs_super_block *disk_super;
+ size_t offset;
bytenr_orig = btrfs_sb_offset(i);
ret = btrfs_sb_log_location(device, i, WRITE, &bytenr);
@@ -3785,9 +3786,9 @@ static int write_dev_supers(struct btrfs_device *device,
BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE,
sb->csum);
- page = find_or_create_page(mapping, bytenr >> PAGE_SHIFT,
- GFP_NOFS);
- if (!page) {
+ folio = __filemap_get_folio(mapping, bytenr >> PAGE_SHIFT,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS);
+ if (IS_ERR(folio)) {
btrfs_err(device->fs_info,
"couldn't get super block page for bytenr %llu",
bytenr);
@@ -3796,9 +3797,10 @@ static int write_dev_supers(struct btrfs_device *device,
}
/* Bump the refcount for wait_dev_supers() */
- get_page(page);
+ folio_get(folio);
- disk_super = page_address(page);
+ offset = offset_in_folio(folio, bytenr);
+ disk_super = folio_address(folio) + offset;
memcpy(disk_super, sb, BTRFS_SUPER_INFO_SIZE);
/*
@@ -3812,8 +3814,7 @@ static int write_dev_supers(struct btrfs_device *device,
bio->bi_iter.bi_sector = bytenr >> SECTOR_SHIFT;
bio->bi_private = device;
bio->bi_end_io = btrfs_end_super_write;
- __bio_add_page(bio, page, BTRFS_SUPER_INFO_SIZE,
- offset_in_page(bytenr));
+ bio_add_folio_nofail(bio, folio, BTRFS_SUPER_INFO_SIZE, offset);
/*
* We FUA only the first super block. The others we allow to
--
2.43.0
* Re: [PATCH v13 04/26] KVM: guest_memfd: Fix PTR_ERR() handling in __kvm_gmem_get_pfn()
2024-04-19 15:11 6% ` Michael Roth
@ 2024-04-19 16:17 0% ` Paolo Bonzini
0 siblings, 0 replies; 200+ results
From: Paolo Bonzini @ 2024-04-19 16:17 UTC (permalink / raw)
To: Michael Roth
Cc: David Hildenbrand, kvm, linux-coco, linux-mm, linux-crypto, x86,
linux-kernel, tglx, mingo, jroedel, thomas.lendacky, hpa, ardb,
seanjc, vkuznets, jmattson, luto, dave.hansen, slp, pgonda,
peterz, srinivas.pandruvada, rientjes, dovmurik, tobin, bp,
vbabka, kirill, ak, tony.luck, sathyanarayanan.kuppuswamy,
alpergun, jarkko, ashish.kalra, nikunj.dadhania, pankaj.gupta,
liam.merwick
On Fri, Apr 19, 2024 at 5:11 PM Michael Roth <michael.roth@amd.com> wrote:
>
> On Fri, Apr 19, 2024 at 02:58:43PM +0200, David Hildenbrand wrote:
> > On 18.04.24 21:41, Michael Roth wrote:
> > > kvm_gmem_get_folio() may return an ERR_PTR() rather than just NULL. In
> > > particular, it returns ERR_PTR(-EEXIST) when the FGP_CREAT_ONLY
> > > flag is used. Handle this properly in __kvm_gmem_get_pfn().
> > >
> > > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > > ---
> > > virt/kvm/guest_memfd.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > > index ccf22e44f387..9d7c6a70c547 100644
> > > --- a/virt/kvm/guest_memfd.c
> > > +++ b/virt/kvm/guest_memfd.c
> > > @@ -580,8 +580,8 @@ static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
> > > }
> > > folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
> > > - if (!folio)
> > > - return -ENOMEM;
> > > + if (IS_ERR_OR_NULL(folio))
> > > + return folio ? PTR_ERR(folio) : -ENOMEM;
> >
> > Will it even return NULL? Staring at other filemap_grab_folio() users, they
> > all check for IS_ERR().
>
> Looks like the NULL case is handled with ERR_PTR(-ENOENT), so IS_ERR()
> would be sufficient. I think in the past kvm_gmem_get_folio() itself
> would return NULL in some cases, but as of commit 2b01b7e994e95 that's
> no longer the case.
>
> I'll fix this up to expect only PTR_ERR() when I re-spin v14, and also
> address the other kvm_gmem_get_folio() / __filemap_get_folio() call
> sites.
>
> >
> > > if (folio_test_hwpoison(folio)) {
> > > r = -EHWPOISON;
> >
> > Do we have a Fixes: tag?
>
> Fixes: 2b01b7e994e95 ("KVM: guest_memfd: pass error up from filemap_grab_folio")
I'll squash it so when you rebase on the new kvm-coco-queue it will go
away. Thanks to both!
Paolo
^ permalink raw reply [relevance 0%]
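The exchange above turns on the kernel's ERR_PTR encoding: NULL is deliberately not classified as an error by IS_ERR(), which is the gap IS_ERR_OR_NULL() papers over, and once kvm_gmem_get_folio() can no longer return NULL a plain IS_ERR() check suffices. A minimal userspace sketch of that encoding, mirroring include/linux/err.h, with errno values hard-coded for the demo:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095
#define ENOENT 2
#define EEXIST 17

/*
 * Sketch of the kernel's pointer/errno encoding: error codes occupy the
 * top MAX_ERRNO values of the address space, a range no valid pointer
 * (and not NULL) ever falls into.
 */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}
```

Because IS_ERR(NULL) is false, a caller that only checks IS_ERR() silently misses a NULL return; mapping "not found" to ERR_PTR(-ENOENT) at the source, as discussed above, removes that hazard.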
* Re: [PATCH v13 04/26] KVM: guest_memfd: Fix PTR_ERR() handling in __kvm_gmem_get_pfn()
@ 2024-04-19 15:11 6% ` Michael Roth
2024-04-19 16:17 0% ` Paolo Bonzini
0 siblings, 1 reply; 200+ results
From: Michael Roth @ 2024-04-19 15:11 UTC (permalink / raw)
To: David Hildenbrand
Cc: kvm, linux-coco, linux-mm, linux-crypto, x86, linux-kernel, tglx,
mingo, jroedel, thomas.lendacky, hpa, ardb, pbonzini, seanjc,
vkuznets, jmattson, luto, dave.hansen, slp, pgonda, peterz,
srinivas.pandruvada, rientjes, dovmurik, tobin, bp, vbabka,
kirill, ak, tony.luck, sathyanarayanan.kuppuswamy, alpergun,
jarkko, ashish.kalra, nikunj.dadhania, pankaj.gupta,
liam.merwick
On Fri, Apr 19, 2024 at 02:58:43PM +0200, David Hildenbrand wrote:
> On 18.04.24 21:41, Michael Roth wrote:
> > kvm_gmem_get_folio() may return an ERR_PTR() rather than just NULL. In
> > particular, it returns ERR_PTR(-EEXIST) when the FGP_CREAT_ONLY
> > flag is used. Handle this properly in __kvm_gmem_get_pfn().
> >
> > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > ---
> > virt/kvm/guest_memfd.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index ccf22e44f387..9d7c6a70c547 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -580,8 +580,8 @@ static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
> > }
> > folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
> > - if (!folio)
> > - return -ENOMEM;
> > + if (IS_ERR_OR_NULL(folio))
> > + return folio ? PTR_ERR(folio) : -ENOMEM;
>
> Will it even return NULL? Staring at other filemap_grab_folio() users, they
> all check for IS_ERR().
Looks like the NULL case is handled with ERR_PTR(-ENOENT), so IS_ERR()
would be sufficient. I think in the past kvm_gmem_get_folio() itself
would return NULL in some cases, but as of commit 2b01b7e994e95 that's
no longer the case.
I'll fix this up to expect only PTR_ERR() when I re-spin v14, and also
address the other kvm_gmem_get_folio() / __filemap_get_folio() call
sites.
>
> > if (folio_test_hwpoison(folio)) {
> > r = -EHWPOISON;
>
> Do we have a Fixes: tag?
Fixes: 2b01b7e994e95 ("KVM: guest_memfd: pass error up from filemap_grab_folio")
Will add that in the re-spin as well.
Thanks!
-Mike
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [relevance 6%]
* Re: [PATCH] ext4: remove the redundant folio_wait_stable()
2024-04-19 2:30 6% [PATCH] ext4: remove the redundant folio_wait_stable() Zhang Yi
@ 2024-04-19 9:28 0% ` Jan Kara
2024-05-07 23:03 0% ` Theodore Ts'o
1 sibling, 0 replies; 200+ results
From: Jan Kara @ 2024-04-19 9:28 UTC (permalink / raw)
To: Zhang Yi
Cc: linux-ext4, linux-fsdevel, tytso, adilger.kernel, jack, yi.zhang,
chengzhihao1, yukuai3
On Fri 19-04-24 10:30:05, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
>
> __filemap_get_folio() with the FGP_WRITEBEGIN parameter already waits
> for a stable folio, so remove the redundant folio_wait_stable() in
> ext4_da_write_begin(). It was left over from commit cc883236b792
> ("ext4: drop unnecessary journal handle in delalloc write"), which
> removed the retry-on-page-get logic.
>
> Fixes: cc883236b792 ("ext4: drop unnecessary journal handle in delalloc write")
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
> ---
> fs/ext4/inode.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 537803250ca9..6de6bf57699b 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2887,9 +2887,6 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
> if (IS_ERR(folio))
> return PTR_ERR(folio);
>
> - /* In case writeback began while the folio was unlocked */
> - folio_wait_stable(folio);
> -
> #ifdef CONFIG_FS_ENCRYPTION
> ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
> #else
> --
> 2.39.2
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
^ permalink raw reply [relevance 0%]
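The review above rests on FGP_WRITEBEGIN already implying the stable-folio wait. A userspace sketch of that flag composition; the combination mirrors include/linux/pagemap.h, but treat the individual bit values here as illustrative rather than the kernel's exact ones:

```c
#include <assert.h>

/* Illustrative bit values; the composite below mirrors pagemap.h. */
#define FGP_ACCESSED	0x0001
#define FGP_LOCK	0x0002
#define FGP_CREAT	0x0004
#define FGP_WRITE	0x0008
#define FGP_STABLE	0x0010

/* FGP_WRITEBEGIN bundles everything a write_begin path needs, including
 * the wait for writeback to finish (FGP_STABLE). */
#define FGP_WRITEBEGIN	(FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
```

Since FGP_STABLE is part of the bundle, a caller passing FGP_WRITEBEGIN gets the folio_wait_stable() behavior inside __filemap_get_folio(), making a second explicit call redundant.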
* [PATCH] ext4: remove the redundant folio_wait_stable()
@ 2024-04-19 2:30 6% Zhang Yi
2024-04-19 9:28 0% ` Jan Kara
2024-05-07 23:03 0% ` Theodore Ts'o
0 siblings, 2 replies; 200+ results
From: Zhang Yi @ 2024-04-19 2:30 UTC (permalink / raw)
To: linux-ext4
Cc: linux-fsdevel, tytso, adilger.kernel, jack, yi.zhang, yi.zhang,
chengzhihao1, yukuai3
From: Zhang Yi <yi.zhang@huawei.com>
__filemap_get_folio() with the FGP_WRITEBEGIN parameter already waits
for a stable folio, so remove the redundant folio_wait_stable() in
ext4_da_write_begin(). It was left over from commit cc883236b792
("ext4: drop unnecessary journal handle in delalloc write"), which
removed the retry-on-page-get logic.
Fixes: cc883236b792 ("ext4: drop unnecessary journal handle in delalloc write")
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
fs/ext4/inode.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 537803250ca9..6de6bf57699b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2887,9 +2887,6 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
if (IS_ERR(folio))
return PTR_ERR(folio);
- /* In case writeback began while the folio was unlocked */
- folio_wait_stable(folio);
-
#ifdef CONFIG_FS_ENCRYPTION
ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
#else
--
2.39.2
^ permalink raw reply related [relevance 6%]
* Re: Removing PG_error use from btrfs
2024-04-18 17:41 5% Removing PG_error use from btrfs Matthew Wilcox
@ 2024-04-18 18:00 0% ` David Sterba
0 siblings, 0 replies; 200+ results
From: David Sterba @ 2024-04-18 18:00 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Chris Mason, Josef Bacik, David Sterba, linux-btrfs,
linux-fsdevel, Jan Kara
On Thu, Apr 18, 2024 at 06:41:47PM +0100, Matthew Wilcox wrote:
> We're down to just JFS and btrfs using the PG_error flag. I sent a
> patch earlier to remove PG_error from JFS, so now it's your turn ...
>
> btrfs currently uses it to indicate superblock writeback errors.
> This proposal moves that information to a counter in the btrfs_device.
> Maybe this isn't the best approach. What do you think?
Tracking the number of errors in the device is a good approach. The
superblock write is asynchronous, but it's not necessary to track the
error in the page; we have the device structure in the end_io callback.
It's also guaranteed that this runs from only one place, so not
even the atomics are needed.
> I'm currently running fstests against it and it hasn't blown up yet.
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 3d512b041977..5f6f8472ecec 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -3627,28 +3627,24 @@ ALLOW_ERROR_INJECTION(open_ctree, ERRNO);
> static void btrfs_end_super_write(struct bio *bio)
> {
> struct btrfs_device *device = bio->bi_private;
> - struct bio_vec *bvec;
> - struct bvec_iter_all iter_all;
> - struct page *page;
> -
> - bio_for_each_segment_all(bvec, bio, iter_all) {
> - page = bvec->bv_page;
> + struct folio_iter fi;
I'd rather make the conversion from pages to folios a separate patch
from the error counting change. I haven't seen anything obviously wrong,
but the superblock write is a critical action, so this is a matter of
precaution.
> + bio_for_each_folio_all(fi, bio) {
> if (bio->bi_status) {
> btrfs_warn_rl_in_rcu(device->fs_info,
> - "lost page write due to IO error on %s (%d)",
> + "lost sb write due to IO error on %s (%d)",
> btrfs_dev_name(device),
> blk_status_to_errno(bio->bi_status));
> - ClearPageUptodate(page);
> - SetPageError(page);
> btrfs_dev_stat_inc_and_print(device,
> BTRFS_DEV_STAT_WRITE_ERRS);
> - } else {
> - SetPageUptodate(page);
> + /* Ensure failure if a primary sb fails */
> + if (bio->bi_opf & REQ_FUA)
> + atomic_set(&device->sb_wb_errors, INT_MAX / 2);
This uses a magic constant, so it would be better to define it
separately and document what it means.
> + else
> + atomic_inc(&device->sb_wb_errors);
> }
> -
> - put_page(page);
> - unlock_page(page);
> + folio_unlock(fi.folio);
> + folio_put(fi.folio);
> }
>
> bio_put(bio);
> @@ -3750,19 +3746,21 @@ static int write_dev_supers(struct btrfs_device *device,
> struct address_space *mapping = device->bdev->bd_mapping;
> SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
> int i;
> - int errors = 0;
> int ret;
> u64 bytenr, bytenr_orig;
>
> + atomic_set(&device->sb_wb_errors, 0);
> +
> if (max_mirrors == 0)
> max_mirrors = BTRFS_SUPER_MIRROR_MAX;
>
> shash->tfm = fs_info->csum_shash;
>
> for (i = 0; i < max_mirrors; i++) {
> - struct page *page;
> + struct folio *folio;
> struct bio *bio;
> struct btrfs_super_block *disk_super;
> + size_t offset;
>
> bytenr_orig = btrfs_sb_offset(i);
> ret = btrfs_sb_log_location(device, i, WRITE, &bytenr);
> @@ -3772,7 +3770,7 @@ static int write_dev_supers(struct btrfs_device *device,
> btrfs_err(device->fs_info,
> "couldn't get super block location for mirror %d",
> i);
> - errors++;
> + atomic_inc(&device->sb_wb_errors);
> continue;
> }
> if (bytenr + BTRFS_SUPER_INFO_SIZE >=
> @@ -3785,20 +3783,18 @@ static int write_dev_supers(struct btrfs_device *device,
> BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE,
> sb->csum);
>
> - page = find_or_create_page(mapping, bytenr >> PAGE_SHIFT,
> - GFP_NOFS);
> - if (!page) {
> + folio = __filemap_get_folio(mapping, bytenr >> PAGE_SHIFT,
> + FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS);
> + if (IS_ERR(folio)) {
> btrfs_err(device->fs_info,
> "couldn't get super block page for bytenr %llu",
> bytenr);
> - errors++;
> + atomic_inc(&device->sb_wb_errors);
> continue;
> }
>
> - /* Bump the refcount for wait_dev_supers() */
> - get_page(page);
> -
> - disk_super = page_address(page);
> + offset = offset_in_folio(folio, bytenr);
> + disk_super = folio_address(folio) + offset;
> memcpy(disk_super, sb, BTRFS_SUPER_INFO_SIZE);
>
> /*
> @@ -3812,8 +3808,7 @@ static int write_dev_supers(struct btrfs_device *device,
> bio->bi_iter.bi_sector = bytenr >> SECTOR_SHIFT;
> bio->bi_private = device;
> bio->bi_end_io = btrfs_end_super_write;
> - __bio_add_page(bio, page, BTRFS_SUPER_INFO_SIZE,
> - offset_in_page(bytenr));
> + bio_add_folio_nofail(bio, folio, BTRFS_SUPER_INFO_SIZE, offset);
>
> /*
> * We FUA only the first super block. The others we allow to
> @@ -3825,9 +3820,9 @@ static int write_dev_supers(struct btrfs_device *device,
> submit_bio(bio);
>
> if (btrfs_advance_sb_log(device, i))
> - errors++;
> + atomic_inc(&device->sb_wb_errors);
> }
> - return errors < i ? 0 : -1;
> + return atomic_read(&device->sb_wb_errors) < i ? 0 : -1;
> }
>
> /*
> @@ -3849,7 +3844,7 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
> max_mirrors = BTRFS_SUPER_MIRROR_MAX;
>
> for (i = 0; i < max_mirrors; i++) {
> - struct page *page;
> + struct folio *folio;
>
> ret = btrfs_sb_log_location(device, i, READ, &bytenr);
> if (ret == -ENOENT) {
> @@ -3864,29 +3859,19 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
> device->commit_total_bytes)
> break;
>
> - page = find_get_page(device->bdev->bd_mapping,
> + folio = filemap_get_folio(device->bdev->bd_mapping,
> bytenr >> PAGE_SHIFT);
> - if (!page) {
> - errors++;
> - if (i == 0)
> - primary_failed = true;
> + /* If the folio has been removed, then we know it completed */
> + if (IS_ERR(folio))
> continue;
> - }
> - /* Page is submitted locked and unlocked once the IO completes */
> - wait_on_page_locked(page);
> - if (PageError(page)) {
> - errors++;
> - if (i == 0)
> - primary_failed = true;
> - }
> -
> - /* Drop our reference */
> - put_page(page);
> -
> - /* Drop the reference from the writing run */
> - put_page(page);
> + /* Folio is unlocked once the IO completes */
> + folio_wait_locked(folio);
> + folio_put(folio);
> }
>
> + errors += atomic_read(&device->sb_wb_errors);
> + if (errors >= INT_MAX / 2)
> + primary_failed = true;
Alternatively, a flag could be set in the device if the primary superblock
write fails, but I think encoding that in the error count also works, as
long as it's a named constant.
^ permalink raw reply [relevance 0%]
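David's two remarks above — name the INT_MAX / 2 sentinel and document it — could take a shape like the following userspace sketch. The constant name, the struct, and the helpers are hypothetical (a plain int stands in for the atomic_t of the real patch); only the scheme itself comes from the thread:

```c
#include <assert.h>
#include <limits.h>

/*
 * Hypothetical named constant per the review: a sentinel error count
 * meaning "the primary super block write failed", chosen so large that
 * no realistic number of mirror failures can reach it by increments.
 */
#define BTRFS_SB_PRIMARY_WRITE_FAILED	(INT_MAX / 2)

/* Hypothetical stand-in for the relevant part of struct btrfs_device. */
struct btrfs_device_sketch { int sb_wb_errors; };

static void record_sb_write_error(struct btrfs_device_sketch *dev, int primary)
{
	if (primary)	/* the REQ_FUA write in the patch */
		dev->sb_wb_errors = BTRFS_SB_PRIMARY_WRITE_FAILED;
	else
		dev->sb_wb_errors++;
}

static int primary_sb_failed(const struct btrfs_device_sketch *dev)
{
	return dev->sb_wb_errors >= BTRFS_SB_PRIMARY_WRITE_FAILED;
}
```

This keeps the single-counter design while making the "primary failed" encoding self-documenting at both the set and test sites.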
* Removing PG_error use from btrfs
@ 2024-04-18 17:41 5% Matthew Wilcox
2024-04-18 18:00 0% ` David Sterba
0 siblings, 1 reply; 200+ results
From: Matthew Wilcox @ 2024-04-18 17:41 UTC (permalink / raw)
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs, linux-fsdevel, Jan Kara
We're down to just JFS and btrfs using the PG_error flag. I sent a
patch earlier to remove PG_error from JFS, so now it's your turn ...
btrfs currently uses it to indicate superblock writeback errors.
This proposal moves that information to a counter in the btrfs_device.
Maybe this isn't the best approach. What do you think?
I'm currently running fstests against it and it hasn't blown up yet.
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 3d512b041977..5f6f8472ecec 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3627,28 +3627,24 @@ ALLOW_ERROR_INJECTION(open_ctree, ERRNO);
static void btrfs_end_super_write(struct bio *bio)
{
struct btrfs_device *device = bio->bi_private;
- struct bio_vec *bvec;
- struct bvec_iter_all iter_all;
- struct page *page;
-
- bio_for_each_segment_all(bvec, bio, iter_all) {
- page = bvec->bv_page;
+ struct folio_iter fi;
+ bio_for_each_folio_all(fi, bio) {
if (bio->bi_status) {
btrfs_warn_rl_in_rcu(device->fs_info,
- "lost page write due to IO error on %s (%d)",
+ "lost sb write due to IO error on %s (%d)",
btrfs_dev_name(device),
blk_status_to_errno(bio->bi_status));
- ClearPageUptodate(page);
- SetPageError(page);
btrfs_dev_stat_inc_and_print(device,
BTRFS_DEV_STAT_WRITE_ERRS);
- } else {
- SetPageUptodate(page);
+ /* Ensure failure if a primary sb fails */
+ if (bio->bi_opf & REQ_FUA)
+ atomic_set(&device->sb_wb_errors, INT_MAX / 2);
+ else
+ atomic_inc(&device->sb_wb_errors);
}
-
- put_page(page);
- unlock_page(page);
+ folio_unlock(fi.folio);
+ folio_put(fi.folio);
}
bio_put(bio);
@@ -3750,19 +3746,21 @@ static int write_dev_supers(struct btrfs_device *device,
struct address_space *mapping = device->bdev->bd_mapping;
SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
int i;
- int errors = 0;
int ret;
u64 bytenr, bytenr_orig;
+ atomic_set(&device->sb_wb_errors, 0);
+
if (max_mirrors == 0)
max_mirrors = BTRFS_SUPER_MIRROR_MAX;
shash->tfm = fs_info->csum_shash;
for (i = 0; i < max_mirrors; i++) {
- struct page *page;
+ struct folio *folio;
struct bio *bio;
struct btrfs_super_block *disk_super;
+ size_t offset;
bytenr_orig = btrfs_sb_offset(i);
ret = btrfs_sb_log_location(device, i, WRITE, &bytenr);
@@ -3772,7 +3770,7 @@ static int write_dev_supers(struct btrfs_device *device,
btrfs_err(device->fs_info,
"couldn't get super block location for mirror %d",
i);
- errors++;
+ atomic_inc(&device->sb_wb_errors);
continue;
}
if (bytenr + BTRFS_SUPER_INFO_SIZE >=
@@ -3785,20 +3783,18 @@ static int write_dev_supers(struct btrfs_device *device,
BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE,
sb->csum);
- page = find_or_create_page(mapping, bytenr >> PAGE_SHIFT,
- GFP_NOFS);
- if (!page) {
+ folio = __filemap_get_folio(mapping, bytenr >> PAGE_SHIFT,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_NOFS);
+ if (IS_ERR(folio)) {
btrfs_err(device->fs_info,
"couldn't get super block page for bytenr %llu",
bytenr);
- errors++;
+ atomic_inc(&device->sb_wb_errors);
continue;
}
- /* Bump the refcount for wait_dev_supers() */
- get_page(page);
-
- disk_super = page_address(page);
+ offset = offset_in_folio(folio, bytenr);
+ disk_super = folio_address(folio) + offset;
memcpy(disk_super, sb, BTRFS_SUPER_INFO_SIZE);
/*
@@ -3812,8 +3808,7 @@ static int write_dev_supers(struct btrfs_device *device,
bio->bi_iter.bi_sector = bytenr >> SECTOR_SHIFT;
bio->bi_private = device;
bio->bi_end_io = btrfs_end_super_write;
- __bio_add_page(bio, page, BTRFS_SUPER_INFO_SIZE,
- offset_in_page(bytenr));
+ bio_add_folio_nofail(bio, folio, BTRFS_SUPER_INFO_SIZE, offset);
/*
* We FUA only the first super block. The others we allow to
@@ -3825,9 +3820,9 @@ static int write_dev_supers(struct btrfs_device *device,
submit_bio(bio);
if (btrfs_advance_sb_log(device, i))
- errors++;
+ atomic_inc(&device->sb_wb_errors);
}
- return errors < i ? 0 : -1;
+ return atomic_read(&device->sb_wb_errors) < i ? 0 : -1;
}
/*
@@ -3849,7 +3844,7 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
max_mirrors = BTRFS_SUPER_MIRROR_MAX;
for (i = 0; i < max_mirrors; i++) {
- struct page *page;
+ struct folio *folio;
ret = btrfs_sb_log_location(device, i, READ, &bytenr);
if (ret == -ENOENT) {
@@ -3864,29 +3859,19 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
device->commit_total_bytes)
break;
- page = find_get_page(device->bdev->bd_mapping,
+ folio = filemap_get_folio(device->bdev->bd_mapping,
bytenr >> PAGE_SHIFT);
- if (!page) {
- errors++;
- if (i == 0)
- primary_failed = true;
+ /* If the folio has been removed, then we know it completed */
+ if (IS_ERR(folio))
continue;
- }
- /* Page is submitted locked and unlocked once the IO completes */
- wait_on_page_locked(page);
- if (PageError(page)) {
- errors++;
- if (i == 0)
- primary_failed = true;
- }
-
- /* Drop our reference */
- put_page(page);
-
- /* Drop the reference from the writing run */
- put_page(page);
+ /* Folio is unlocked once the IO completes */
+ folio_wait_locked(folio);
+ folio_put(folio);
}
+ errors += atomic_read(&device->sb_wb_errors);
+ if (errors >= INT_MAX / 2)
+ primary_failed = true;
/* log error, force error return */
if (primary_failed) {
btrfs_err(device->fs_info, "error writing primary super block to device %llu",
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index cf555f5b47ce..44c639720426 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -142,6 +142,8 @@ struct btrfs_device {
/* type and info about this device */
u64 type;
+ atomic_t sb_wb_errors;
+
/* minimal io size for this device */
u32 sector_size;
^ permalink raw reply related [relevance 5%]
* [PATCH 06/10] ntfs3: Convert attr_make_nonresident to use a folio
2024-04-17 17:09 7% ` [PATCH 02/10] ntfs3: Convert ntfs_write_begin to use a folio Matthew Wilcox (Oracle)
@ 2024-04-17 17:09 7% ` Matthew Wilcox (Oracle)
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-17 17:09 UTC (permalink / raw)
To: Konstantin Komarov; +Cc: Matthew Wilcox (Oracle), ntfs3, linux-fsdevel
Fetch a folio from the page cache instead of a page and operate on it.
Take advantage of the new helpers to avoid handling highmem ourselves,
and combine the uptodate + unlock operations into folio_end_read().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/attrib.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
index 64b526fd2dbc..1972213a663e 100644
--- a/fs/ntfs3/attrib.c
+++ b/fs/ntfs3/attrib.c
@@ -285,22 +285,20 @@ int attr_make_nonresident(struct ntfs_inode *ni, struct ATTRIB *attr,
if (err)
goto out2;
} else if (!page) {
- char *kaddr;
-
- page = grab_cache_page(ni->vfs_inode.i_mapping, 0);
- if (!page) {
- err = -ENOMEM;
+ struct address_space *mapping = ni->vfs_inode.i_mapping;
+ struct folio *folio;
+
+ folio = __filemap_get_folio(mapping, 0,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ mapping_gfp_mask(mapping));
+ if (IS_ERR(folio)) {
+ err = PTR_ERR(folio);
goto out2;
}
- kaddr = kmap_atomic(page);
- memcpy(kaddr, data, rsize);
- memset(kaddr + rsize, 0, PAGE_SIZE - rsize);
- kunmap_atomic(kaddr);
- flush_dcache_page(page);
- SetPageUptodate(page);
- set_page_dirty(page);
- unlock_page(page);
- put_page(page);
+ folio_fill_tail(folio, 0, data, rsize);
+ folio_mark_dirty(folio);
+ folio_end_read(folio, true);
+ folio_put(folio);
}
}
--
2.43.0
^ permalink raw reply related [relevance 7%]
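The folio_fill_tail() call in the patch above collapses the open-coded kmap_atomic/memcpy/memset sequence: copy rsize bytes of data to the start of the folio and zero everything after it. A userspace sketch of just that copy-and-zero semantic over a flat buffer; the real helper additionally maps highmem folios and flushes the data cache, which a flat buffer does not need:

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of the semantics folio_fill_tail() provides at this call site:
 * copy `len` bytes to the start of the buffer, zero the remainder.
 */
static void fill_tail(unsigned char *buf, size_t bufsize,
		      const void *data, size_t len)
{
	memcpy(buf, data, len);
	memset(buf + len, 0, bufsize - len);
}
```

Pairing this with folio_end_read(folio, true), which marks the folio uptodate and unlocks it in one call, is what lets the patch drop four separate page operations.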
* [PATCH 02/10] ntfs3: Convert ntfs_write_begin to use a folio
@ 2024-04-17 17:09 7% ` Matthew Wilcox (Oracle)
2024-04-17 17:09 7% ` [PATCH 06/10] ntfs3: Convert attr_make_nonresident " Matthew Wilcox (Oracle)
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-17 17:09 UTC (permalink / raw)
To: Konstantin Komarov; +Cc: Matthew Wilcox (Oracle), ntfs3, linux-fsdevel
Retrieve a folio from the page cache instead of a precise page.
This function is now large-folio safe, but the function it calls is not.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/inode.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index f833d9cd383d..25be12e68d6e 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -901,24 +901,25 @@ int ntfs_write_begin(struct file *file, struct address_space *mapping,
*pagep = NULL;
if (is_resident(ni)) {
- struct page *page =
- grab_cache_page_write_begin(mapping, pos >> PAGE_SHIFT);
+ struct folio *folio = __filemap_get_folio(mapping,
+ pos >> PAGE_SHIFT, FGP_WRITEBEGIN,
+ mapping_gfp_mask(mapping));
- if (!page) {
- err = -ENOMEM;
+ if (IS_ERR(folio)) {
+ err = PTR_ERR(folio);
goto out;
}
ni_lock(ni);
- err = attr_data_read_resident(ni, page);
+ err = attr_data_read_resident(ni, &folio->page);
ni_unlock(ni);
if (!err) {
- *pagep = page;
+ *pagep = &folio->page;
goto out;
}
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
if (err != E_NTFS_NONRESIDENT)
goto out;
--
2.43.0
^ permalink raw reply related [relevance 7%]
* [PATCH 2/7] udf: Convert udf_write_begin() to use a folio
@ 2024-04-17 15:04 7% ` Matthew Wilcox (Oracle)
2024-04-17 15:04 7% ` [PATCH 3/7] udf: Convert udf_expand_file_adinicb() " Matthew Wilcox (Oracle)
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-17 15:04 UTC (permalink / raw)
To: Jan Kara; +Cc: Matthew Wilcox (Oracle), linux-fsdevel
Use the folio APIs throughout instead of the deprecated page APIs.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/udf/inode.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index 2f831a3a91af..5146b9d7aba3 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -254,7 +254,7 @@ static int udf_write_begin(struct file *file, struct address_space *mapping,
struct page **pagep, void **fsdata)
{
struct udf_inode_info *iinfo = UDF_I(file_inode(file));
- struct page *page;
+ struct folio *folio;
int ret;
if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) {
@@ -266,12 +266,13 @@ static int udf_write_begin(struct file *file, struct address_space *mapping,
}
if (WARN_ON_ONCE(pos >= PAGE_SIZE))
return -EIO;
- page = grab_cache_page_write_begin(mapping, 0);
- if (!page)
- return -ENOMEM;
- *pagep = page;
- if (!PageUptodate(page))
- udf_adinicb_readpage(page);
+ folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN,
+ mapping_gfp_mask(mapping));
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+ *pagep = &folio->page;
+ if (!folio_test_uptodate(folio))
+ udf_adinicb_readpage(&folio->page);
return 0;
}
--
2.43.0
^ permalink raw reply related [relevance 7%]
* [PATCH 3/7] udf: Convert udf_expand_file_adinicb() to use a folio
2024-04-17 15:04 7% ` [PATCH 2/7] udf: Convert udf_write_begin() to use a folio Matthew Wilcox (Oracle)
@ 2024-04-17 15:04 7% ` Matthew Wilcox (Oracle)
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox (Oracle) @ 2024-04-17 15:04 UTC (permalink / raw)
To: Jan Kara; +Cc: Matthew Wilcox (Oracle), linux-fsdevel
Use the folio APIs throughout this function.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/udf/inode.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index 5146b9d7aba3..59215494e6f6 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -342,7 +342,7 @@ const struct address_space_operations udf_aops = {
*/
int udf_expand_file_adinicb(struct inode *inode)
{
- struct page *page;
+ struct folio *folio;
struct udf_inode_info *iinfo = UDF_I(inode);
int err;
@@ -358,12 +358,13 @@ int udf_expand_file_adinicb(struct inode *inode)
return 0;
}
- page = find_or_create_page(inode->i_mapping, 0, GFP_KERNEL);
- if (!page)
- return -ENOMEM;
+ folio = __filemap_get_folio(inode->i_mapping, 0,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, GFP_KERNEL);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
- if (!PageUptodate(page))
- udf_adinicb_readpage(page);
+ if (!folio_test_uptodate(folio))
+ udf_adinicb_readpage(&folio->page);
down_write(&iinfo->i_data_sem);
memset(iinfo->i_data + iinfo->i_lenEAttr, 0x00,
iinfo->i_lenAlloc);
@@ -372,22 +373,22 @@ int udf_expand_file_adinicb(struct inode *inode)
iinfo->i_alloc_type = ICBTAG_FLAG_AD_SHORT;
else
iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
- set_page_dirty(page);
- unlock_page(page);
+ folio_mark_dirty(folio);
+ folio_unlock(folio);
up_write(&iinfo->i_data_sem);
err = filemap_fdatawrite(inode->i_mapping);
if (err) {
/* Restore everything back so that we don't lose data... */
- lock_page(page);
+ folio_lock(folio);
down_write(&iinfo->i_data_sem);
- memcpy_to_page(page, 0, iinfo->i_data + iinfo->i_lenEAttr,
- inode->i_size);
- unlock_page(page);
+ memcpy_from_folio(iinfo->i_data + iinfo->i_lenEAttr,
+ folio, 0, inode->i_size);
+ folio_unlock(folio);
iinfo->i_alloc_type = ICBTAG_FLAG_AD_IN_ICB;
iinfo->i_lenAlloc = inode->i_size;
up_write(&iinfo->i_data_sem);
}
- put_page(page);
+ folio_put(folio);
mark_inode_dirty(inode);
return err;
--
2.43.0
^ permalink raw reply related [relevance 7%]
* Re: BUG: Bad page map in process init pte:c0ab684c pmd:01182000 (on a PowerMac G4 DP)
@ 2024-04-17 0:56 1% ` Erhard Furtner
0 siblings, 0 replies; 200+ results
From: Erhard Furtner @ 2024-04-17 0:56 UTC (permalink / raw)
To: Christophe Leroy; +Cc: linux-mm, Rohan McLure, linuxppc-dev, Nicholas Piggin
On Thu, 29 Feb 2024 17:11:28 +0000
Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
> > Revisited the issue on kernel v6.8-rc6 and I can still reproduce it.
> >
> > Short summary as my last post was over a year ago:
> > (x) I get this memory corruption only when CONFIG_VMAP_STACK=y and CONFIG_SMP=y are both enabled.
> > (x) I don't get this memory corruption when only one of the above is enabled.
> > (x) memtester says the 2 GiB RAM in my G4 DP are fine.
> > (x) I don't get this issue on my G5 11,2 or Talos II.
> > (x) "stress -m 2 --vm-bytes 965M" provokes the issue in < 10 secs. (https://salsa.debian.org/debian/stress)
> >
> > For the test I used CONFIG_KASAN_INLINE=y for v6.8-rc6 and debug_pagealloc=on, page_owner=on and got this dmesg:
> >
> > [...]
> > pagealloc: memory corruption
> > f5fcfff0: 00 00 00 00 ....
> > CPU: 1 PID: 1788 Comm: stress Tainted: G B 6.8.0-rc6-PMacG4 #15
> > Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
> > Call Trace:
> > [f3bfbac0] [c162a8e8] dump_stack_lvl+0x60/0x94 (unreliable)
> > [f3bfbae0] [c04edf9c] __kernel_unpoison_pages+0x1e0/0x1f0
> > [f3bfbb30] [c04a8aa0] post_alloc_hook+0xe0/0x174
> > [f3bfbb60] [c04a8b58] prep_new_page+0x24/0xbc
> > [f3bfbb80] [c04abcc4] get_page_from_freelist+0xcd0/0xf10
> > [f3bfbc50] [c04aecd8] __alloc_pages+0x204/0xe2c
> > [f3bfbda0] [c04b07a8] __folio_alloc+0x18/0x88
> > [f3bfbdc0] [c0461a10] vma_alloc_zeroed_movable_folio.isra.0+0x2c/0x6c
> > [f3bfbde0] [c046bb90] handle_mm_fault+0x91c/0x19ac
> > [f3bfbec0] [c0047b8c] ___do_page_fault+0x93c/0xc14
> > [f3bfbf10] [c0048278] do_page_fault+0x28/0x60
> > [f3bfbf30] [c000433c] DataAccess_virt+0x124/0x17c
> > --- interrupt: 300 at 0xbe30d8
> > NIP: 00be30d8 LR: 00be30b4 CTR: 00000000
> > REGS: f3bfbf40 TRAP: 0300 Tainted: G B (6.8.0-rc6-PMacG4)
> > MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
> > DAR: 88c7a010 DSISR: 42000000
> > GPR00: 00be30b4 af8397d0 a78436c0 6b2ee010 3c500000 20224462 fe77f7e1 00b00264
> > GPR08: 1d98d000 1d98c000 00000000 40ae256a 20882262 00bffff4 00000000 00000000
> > GPR16: 00000000 00000002 00000000 0000005a 40802262 80002262 40002262 00c000a4
> > GPR24: ffffffff ffffffff 3c500000 00000000 00000000 6b2ee010 00c07d64 00001000
> > NIP [00be30d8] 0xbe30d8
> > LR [00be30b4] 0xbe30b4
> > --- interrupt: 300
> > page:ef4bd92c refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x310b3
> > flags: 0x80000000(zone=2)
> > page_type: 0xffffffff()
> > raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
> > raw: 00000000
> > page dumped because: pagealloc: corrupted page details
> > page_owner info is not present (never set?)
> > swapper/1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
> > CPU: 1 PID: 0 Comm: swapper/1 Tainted: G B 6.8.0-rc6-PMacG4 #15
> > Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
> > Call Trace:
> > [f101b9d0] [c162a8e8] dump_stack_lvl+0x60/0x94 (unreliable)
> > [f101b9f0] [c04ae948] warn_alloc+0x154/0x2e0
> > [f101bab0] [c04af030] __alloc_pages+0x55c/0xe2c
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > [...]
> >
> > New findings:
> > (x) The page corruption only shows up the first time I run "stress -m 2 --vm-bytes 965M". When I quit and restart stress, no additional page corruption shows up.
> > (x) The page corruption shows up shortly after I start "stress -m 2 --vm-bytes 965M", but no additional page corruption appears afterwards, even if left running for 30 min.
> >
> >
> > For additional testing I thought it would be a good idea to try "modprobe test_vmalloc", but this remained inconclusive. Sometimes a 'BUG: Unable to handle kernel data access on read at 0xe0000000' like the following shows up, but not always:
> >
>
> Interesting.
>
> I guess 0xe0000000 is where linear RAM starts to be mapped with pages ?
> Can you confirm with a dump of
> /sys/kernel/debug/powerpc/block_address_translation ?
>
> Do we have a problem of race with hash table ?
>
> Would KCSAN help with that ?
I revisited the issue on kernel v6.9-rc4 and can still reproduce it. I did some runs with KCSAN_EARLY_ENABLE=y (plus KCSAN_SKIP_WATCH=4000 and KCSAN_STRICT=y), which made KCSAN a lot more verbose.
On v6.9-rc4 I have not seen the "SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)" messages I reported some time ago, and there were no other KASAN hits at boot or afterwards, so I disabled KASAN. The general memory corruption issue remains, however.
When running "stress -m 2 --vm-bytes 965M" I get some "BUG: KCSAN: data-race in list_add / lru_gen_look_around" and "BUG: KCSAN: data-race in zswap_store / zswap_update_total_size" reports which I don't get otherwise:
[...]
BUG: KCSAN: data-race in list_add / lru_gen_look_around
read (marked) to 0xefa6fa40 of 4 bytes by task 1619 on cpu 0:
lru_gen_look_around+0x320/0x634
folio_referenced_one+0x32c/0x404
rmap_walk_anon+0x1c4/0x24c
rmap_walk+0x70/0x7c
folio_referenced+0x194/0x1ec
shrink_folio_list+0x6a8/0xd28
evict_folios+0xcc0/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
do_try_to_free_pages+0x500/0x7e4
try_to_free_pages+0x150/0x18c
__alloc_pages+0x460/0x8dc
folio_prealloc.isra.0+0x44/0xec
handle_mm_fault+0x488/0xed0
___do_page_fault+0x4d8/0x630
do_page_fault+0x28/0x40
DataAccess_virt+0x124/0x17c
write to 0xefa6fa40 of 4 bytes by task 40 on cpu 1:
list_add+0x58/0x94
evict_folios+0xb04/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
balance_pgdat+0x498/0x914
kswapd+0x304/0x398
kthread+0x174/0x178
start_kernel_thread+0x10/0x14
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[...]
BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
zswap_update_total_size+0x58/0xe8
zswap_entry_free+0xdc/0x1c0
zswap_load+0x190/0x19c
swap_read_folio+0xbc/0x450
swap_cluster_readahead+0x2f8/0x338
swapin_readahead+0x430/0x438
do_swap_page+0x1e0/0x9bc
handle_mm_fault+0xecc/0xed0
___do_page_fault+0x4d8/0x630
do_page_fault+0x28/0x40
DataAccess_virt+0x124/0x17c
write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
zswap_update_total_size+0x58/0xe8
zswap_store+0x5a8/0xa18
swap_writepage+0x4c/0xe8
pageout+0x1dc/0x304
shrink_folio_list+0xa70/0xd28
evict_folios+0xcc0/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
balance_pgdat+0x498/0x914
kswapd+0x304/0x398
kthread+0x174/0x178
start_kernel_thread+0x10/0x14
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[...]
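For reference, the KCSAN setup used for these runs corresponds roughly to the following config fragment (a sketch; symbol availability and defaults vary by kernel version):

```
# KCSAN data-race detection as used in the runs above (sketch; some
# options may be gated behind other symbols on a given kernel version)
CONFIG_KCSAN=y
CONFIG_KCSAN_EARLY_ENABLE=y
CONFIG_KCSAN_SKIP_WATCH=4000
CONFIG_KCSAN_STRICT=y
```

KCSAN_STRICT=y makes KCSAN follow the Linux-kernel memory model more closely and report races it would otherwise filter out, which is consistent with the increased verbosity observed.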
One time I also got another page allocation failure:
[...]
==================================================================
kworker/u9:1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
CPU: 1 PID: 39 Comm: kworker/u9:1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
Workqueue: events_freezable_pwr_efficient disk_events_workfn (events_freezable_pwr_ef)
Call Trace:
[f100dc50] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[f100dc70] [c0be4ee8] dump_stack+0x20/0x34
[f100dc80] [c029de40] warn_alloc+0x100/0x178
[f100dce0] [c029e234] __alloc_pages+0x37c/0x8dc
[f100dda0] [c029e884] __page_frag_alloc_align+0x74/0x194
[f100ddd0] [c09bafc0] __netdev_alloc_skb+0x108/0x234
[f100de00] [bef1a5a8] setup_rx_descbuffer+0x5c/0x258 [b43legacy]
[f100de40] [bef1c43c] b43legacy_dma_rx+0x3e4/0x488 [b43legacy]
[f100deb0] [bef0b034] b43legacy_interrupt_tasklet+0x7bc/0x7f0 [b43legacy]
[f100df50] [c006f8c8] tasklet_action_common.isra.0+0xb0/0xe8
[f100df80] [c0c1fc8c] __do_softirq+0x1dc/0x218
[f100dff0] [c00091d8] do_softirq_own_stack+0x54/0x74
[f10dd760] [c00091c8] do_softirq_own_stack+0x44/0x74
[f10dd780] [c006f114] __irq_exit_rcu+0x6c/0xbc
[f10dd790] [c006f588] irq_exit+0x10/0x20
[f10dd7a0] [c0008b58] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[f10dd7b0] [c000917c] do_IRQ+0x24/0x2c
[f10dd7d0] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
--- interrupt: 500 at _raw_spin_unlock_irq+0x30/0x48
NIP: c0c1f49c LR: c0c1f490 CTR: 00000000
REGS: f10dd7e0 TRAP: 0500 Not tainted (6.9.0-rc4-PMacG4-dirty)
MSR: 00209032 <EE,ME,IR,DR,RI> CR: 84882802 XER: 00000000
GPR00: c0c1f490 f10dd8a0 c1c28020 c49d6828 00016828 0001682b 00000003 c12399ec
GPR08: 00000000 00009032 0000001d f10dd860 24882802 00000000 00000001 00000000
GPR16: 00000800 00000800 00000000 00000000 00000002 00000004 00000004 00000000
GPR24: c49d6850 00000004 00000000 00000007 00000001 c49d6850 f10ddbb4 c49d6828
NIP [c0c1f49c] _raw_spin_unlock_irq+0x30/0x48
LR [c0c1f490] _raw_spin_unlock_irq+0x24/0x48
--- interrupt: 500
[f10dd8c0] [c0246150] evict_folios+0xc74/0x1204
[f10dd9d0] [c02468f4] try_to_shrink_lruvec+0x214/0x2f0
[f10dda50] [c0246ad4] shrink_one+0x104/0x1e8
[f10dda90] [c0248eb8] shrink_node+0x314/0xc3c
[f10ddb20] [c024a98c] do_try_to_free_pages+0x500/0x7e4
[f10ddba0] [c024b110] try_to_free_pages+0x150/0x18c
[f10ddc20] [c029e318] __alloc_pages+0x460/0x8dc
[f10ddce0] [c06088ac] alloc_pages.constprop.0+0x30/0x50
[f10ddd00] [c0608ad4] blk_rq_map_kern+0x208/0x404
[f10ddd50] [c089c048] scsi_execute_cmd+0x350/0x534
[f10dddc0] [c08b77cc] sr_check_events+0x108/0x4bc
[f10dde40] [c08fb620] cdrom_update_events+0x54/0xb8
[f10dde60] [c08fb6b4] cdrom_check_events+0x30/0x70
[f10dde80] [c08b7c44] sr_block_check_events+0x60/0x90
[f10ddea0] [c0630444] disk_check_events+0x68/0x168
[f10ddee0] [c063056c] disk_events_workfn+0x28/0x40
[f10ddf00] [c008df0c] process_scheduled_works+0x350/0x494
[f10ddf70] [c008ee2c] worker_thread+0x2a4/0x300
[f10ddfc0] [c009b87c] kthread+0x174/0x178
[f10ddff0] [c001c304] start_kernel_thread+0x10/0x14
Mem-Info:
active_anon:292700 inactive_anon:181968 isolated_anon:0
active_file:6404 inactive_file:5560 isolated_file:0
unevictable:0 dirty:11 writeback:0
slab_reclaimable:1183 slab_unreclaimable:6185
mapped:7898 shmem:133 pagetables:675
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:1193 free_pcp:778 free_cma:0
Node 0 active_anon:1170800kB inactive_anon:727872kB active_file:25616kB inactive_file:22240kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:31592kB dirty:44kB writeback:0kB shmem:532kB writeback_tmp:0kB kernel_stack:952kB pagetables:2700kB sec_pagetables:0kB all_unreclaimable? no
DMA free:0kB boost:7564kB min:10928kB low:11768kB high:12608kB reserved_highatomic:0KB active_anon:568836kB inactive_anon:92340kB active_file:12kB inactive_file:1248kB unevictable:0kB writepending:40kB present:786432kB managed:709428kB mlocked:0kB bounce:0kB free_pcp:3112kB local_pcp:1844kB free_cma:0kB
lowmem_reserve[]: 0 0 1280 1280
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
39962 total pagecache pages
27865 pages in swap cache
Free swap = 8240252kB
Total swap = 8388604kB
524288 pages RAM
327680 pages HighMem/MovableOnly
19251 pages reserved
b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[...]
To fix a "refcount_t: decrement hit 0; leaking memory." issue which showed up otherwise, I applied the following patchset on top of v6.9-rc4: https://lore.kernel.org/all/mhng-4caed5c9-bc46-42fe-90d4-9d845376578f@palmer-ri-x1c9a/
The kernel .config is attached. For more details on the KCSAN hits, the dmesg logs of two runs are attached.
Regards,
Erhard
[-- Attachment #2: config_69-rc4_g4+ --]
[-- Type: application/octet-stream, Size: 116574 bytes --]
#
# Automatically generated file; DO NOT EDIT.
# Linux/powerpc 6.9.0-rc4 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc (Gentoo 13.2.1_p20240210 p14) 13.2.1 20240210"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=130201
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=24200
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=24200
CONFIG_LLD_VERSION=0
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND=y
CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_PAHOLE_VERSION=0
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y
#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION="-PMacG4"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_XZ is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_WATCH_QUEUE=y
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
# CONFIG_AUDIT is not set
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_IRQ_DOMAIN_NOMAP=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_ARCH_HAS_TICK_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_TIME_KUNIT_TEST=m
CONFIG_CONTEXT_TRACKING=y
CONFIG_CONTEXT_TRACKING_IDLE=y
#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem
CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
# end of BPF subsystem
CONFIG_PREEMPT_VOLUNTARY_BUILD=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting
CONFIG_CPU_ISOLATION=y
#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_TREE_SRCU=y
CONFIG_NEED_SRCU_NMI_SAFE=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
# end of RCU Subsystem
# CONFIG_IKCONFIG is not set
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=16
CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
# CONFIG_PRINTK_INDEX is not set
#
# Scheduler features
#
# end of Scheduler features
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC10_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_GCC_NO_STRINGOP_OVERFLOW=y
CONFIG_CC_NO_STRINGOP_OVERFLOW=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
# CONFIG_CGROUP_FAVOR_DYNMODS is not set
CONFIG_MEMCG=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_SCHED_MM_CID=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
# CONFIG_CGROUP_BPF is not set
CONFIG_CGROUP_MISC=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_RELAY is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
# CONFIG_RD_XZ is not set
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
# CONFIG_RD_ZSTD is not set
# CONFIG_BOOT_CONFIG is not set
# CONFIG_INITRAMFS_PRESERVE_MTIME is not set
# CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_HAVE_LD_DEAD_CODE_DATA_ELIMINATION=y
# CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_LD_ORPHAN_WARN_LEVEL="warn"
CONFIG_SYSCTL=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_EXPERT=y
CONFIG_MULTIUSER=y
# CONFIG_SGETMASK_SYSCALL is not set
# CONFIG_SYSFS_SYSCALL is not set
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_PC104 is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set
# CONFIG_KALLSYMS_ALL is not set
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_CALLBACKS=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_HAVE_PERF_EVENTS=y
#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# end of Kernel Performance Events And Counters
CONFIG_SYSTEM_DATA_VERIFICATION=y
# CONFIG_PROFILING is not set
#
# Kexec and crash features
#
# CONFIG_KEXEC is not set
# end of Kexec and crash features
# end of General setup
CONFIG_PPC32=y
# CONFIG_PPC64 is not set
#
# Processor support
#
CONFIG_PPC_BOOK3S_32=y
# CONFIG_PPC_85xx is not set
# CONFIG_PPC_8xx is not set
# CONFIG_40x is not set
# CONFIG_44x is not set
# CONFIG_PPC_BOOK3S_603 is not set
CONFIG_PPC_BOOK3S_604=y
# CONFIG_POWERPC_CPU is not set
# CONFIG_E300C2_CPU is not set
# CONFIG_E300C3_CPU is not set
CONFIG_G4_CPU=y
# CONFIG_TOOLCHAIN_DEFAULT_CPU is not set
CONFIG_TARGET_CPU_BOOL=y
CONFIG_TARGET_CPU="G4"
CONFIG_PPC_BOOK3S=y
CONFIG_PPC_FPU_REGS=y
CONFIG_PPC_FPU=y
CONFIG_ALTIVEC=y
CONFIG_PPC_KUEP=y
CONFIG_PPC_KUAP=y
# CONFIG_PPC_KUAP_DEBUG is not set
CONFIG_PPC_HAVE_PMU_SUPPORT=y
# CONFIG_PMU_SYSFS is not set
CONFIG_PPC_PERF_CTRS=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
# end of Processor support
CONFIG_VDSO32=y
CONFIG_CPU_BIG_ENDIAN=y
CONFIG_32BIT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MAX=17
CONFIG_ARCH_MMAP_RND_BITS_MIN=11
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=17
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11
CONFIG_NR_IRQS=512
CONFIG_NMI_IPI=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_PPC=y
CONFIG_EARLY_PRINTK=y
CONFIG_PANIC_TIMEOUT=40
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_TBSYNC=y
CONFIG_AUDIT_ARCH=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_SYS_SUPPORTS_APM_EMULATION=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_HAS_ADD_PAGES=y
# CONFIG_PPC_PCI_OF_BUS_MAP is not set
CONFIG_PPC_PCI_BUS_NUM_DOMAIN_DEPENDENT=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_PGTABLE_LEVELS=2
CONFIG_PPC_MSI_BITMAP=y
#
# Platform support
#
# CONFIG_SCOM_DEBUGFS is not set
# CONFIG_PPC_CHRP is not set
# CONFIG_PPC_MPC512x is not set
# CONFIG_PPC_MPC52xx is not set
CONFIG_PPC_PMAC=y
CONFIG_PPC_PMAC32_PSURGE=y
# CONFIG_PPC_82xx is not set
# CONFIG_PPC_83xx is not set
# CONFIG_PPC_86xx is not set
CONFIG_KVM_GUEST=y
CONFIG_EPAPR_PARAVIRT=y
CONFIG_PPC_HASH_MMU_NATIVE=y
CONFIG_PPC_OF_BOOT_TRAMPOLINE=y
CONFIG_PPC_SMP_MUXED_IPI=y
CONFIG_MPIC=y
CONFIG_MPIC_MSGR=y
CONFIG_PPC_MPC106=y
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set
#
# CPU frequency scaling drivers
#
# CONFIG_CPUFREQ_DT_PLATDEV is not set
CONFIG_CPU_FREQ_PMAC=y
# end of CPU Frequency scaling
#
# CPUIdle driver
#
#
# CPU Idle
#
# CONFIG_CPU_IDLE is not set
# end of CPU Idle
# end of CPUIdle driver
CONFIG_TAU=y
# CONFIG_TAU_INT is not set
# CONFIG_TAU_AVERAGE is not set
# CONFIG_GEN_RTC is not set
# end of Platform support
#
# Kernel options
#
CONFIG_HIGHMEM=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
CONFIG_HZ_300=y
# CONFIG_HZ_1000 is not set
CONFIG_HZ=300
CONFIG_SCHED_HRTICK=y
CONFIG_HOTPLUG_CPU=y
# CONFIG_PPC_QUEUED_SPINLOCKS is not set
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_KEXEC=y
CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY=y
CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y
CONFIG_IRQ_ALL_CPUS=y
CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_ILLEGAL_POINTER_VALUE=0
CONFIG_PPC_4K_PAGES=y
CONFIG_THREAD_SHIFT=13
CONFIG_DATA_SHIFT=22
CONFIG_ARCH_FORCE_MAX_ORDER=10
CONFIG_CMDLINE=""
CONFIG_EXTRA_TARGETS=""
CONFIG_ARCH_WANTS_FREEZER_CONTROL=y
# CONFIG_SUSPEND is not set
# CONFIG_HIBERNATION is not set
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
CONFIG_APM_EMULATION=m
CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
# CONFIG_ENERGY_MODEL is not set
# end of Kernel options
CONFIG_ISA_DMA_API=y
#
# Bus options
#
CONFIG_GENERIC_ISA_DMA=y
CONFIG_PPC_INDIRECT_PCI=y
# CONFIG_FSL_LBC is not set
# end of Bus options
#
# Advanced setup
#
# CONFIG_ADVANCED_OPTIONS is not set
#
# Default settings for advanced configuration options are used
#
CONFIG_LOWMEM_SIZE=0x30000000
CONFIG_PAGE_OFFSET=0xc0000000
CONFIG_KERNEL_START=0xc0000000
CONFIG_PHYSICAL_START=0x00000000
CONFIG_TASK_SIZE=0xb0000000
# end of Advanced setup
# CONFIG_VIRTUALIZATION is not set
CONFIG_HAVE_LIVEPATCH=y
#
# General architecture-dependent options
#
CONFIG_HOTPLUG_SMT=y
CONFIG_SMT_NUM_THREADS_DYNAMIC=y
# CONFIG_KPROBES is not set
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_GENERIC_IDLE_POLL_SETUP=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_32BIT_OFF_T=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_MMU_GATHER_PAGE_SIZE=y
CONFIG_MMU_GATHER_MERGE_VMAS=y
CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM=y
CONFIG_MMU_LAZY_TLB_REFCOUNT=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_ARCH_WEAK_RELEASE_ACQUIRE=y
CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
# CONFIG_STACKPROTECTOR_STRONG is not set
CONFIG_LTO_NONE=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING_USER=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_ARCH_MMAP_RND_BITS=11
CONFIG_HAVE_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_PAGE_SIZE_LESS_THAN_256KB=y
CONFIG_PAGE_SHIFT=12
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT=y
CONFIG_HAVE_OBJTOOL=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_HAVE_ARCH_NVRAM_OPS=y
CONFIG_CLONE_BACKWARDS=y
CONFIG_OLD_SIGSUSPEND=y
CONFIG_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
CONFIG_ARCH_OPTIONAL_KERNEL_RWX=y
CONFIG_ARCH_OPTIONAL_KERNEL_RWX_DEFAULT=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
# CONFIG_STRICT_MODULE_RWX is not set
CONFIG_ARCH_HAS_PHYS_TO_DMA=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_HAVE_STATIC_CALL=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_SPLIT_ARG64=y
#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling
CONFIG_HAVE_GCC_PLUGINS=y
CONFIG_GCC_PLUGINS=y
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
CONFIG_FUNCTION_ALIGNMENT_4B=y
CONFIG_FUNCTION_ALIGNMENT=4
# end of General architecture-dependent options
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
# CONFIG_MODULE_DEBUG is not set
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLOCK_LEGACY_AUTOLOAD=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_CGROUP_PUNT_BIO=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_ICQ=y
# CONFIG_BLK_DEV_BSGLIB is not set
# CONFIG_BLK_DEV_INTEGRITY is not set
# CONFIG_BLK_DEV_WRITE_MOUNTED is not set
# CONFIG_BLK_DEV_ZONED is not set
# CONFIG_BLK_DEV_THROTTLING is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
# CONFIG_BLK_CGROUP_IOPRIO is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_KARMA_PARTITION is not set
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y
CONFIG_BLK_MQ_STACKING=y
#
# IO Schedulers
#
# CONFIG_MQ_IOSCHED_DEADLINE is not set
# CONFIG_MQ_IOSCHED_KYBER is not set
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y
#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
# end of Executable file formats
#
# Memory Management options
#
CONFIG_ZPOOL=y
CONFIG_SWAP=y
CONFIG_ZSWAP=y
CONFIG_ZSWAP_DEFAULT_ON=y
CONFIG_ZSWAP_SHRINKER_DEFAULT_ON=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD=y
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="zstd"
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC=y
CONFIG_ZSWAP_ZPOOL_DEFAULT="zsmalloc"
# CONFIG_ZBUD is not set
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_ZSMALLOC_CHAIN_SIZE=8
#
# Slab allocator options
#
CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set
# CONFIG_SLAB_MERGE_DEFAULT is not set
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_SLAB_FREELIST_HARDENED=y
# CONFIG_SLUB_STATS is not set
# CONFIG_SLUB_CPU_PARTIAL is not set
CONFIG_RANDOM_KMALLOC_CACHES=y
# end of Slab allocator options
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
# CONFIG_COMPAT_BRK is not set
CONFIG_FLATMEM=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_ARCH_KEEP_MEMBLOCK=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_PCP_BATCH_SCALE_MAX=5
CONFIG_BOUNCE=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=16384
# CONFIG_CMA is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
# CONFIG_IDLE_PAGE_TRACKING is not set
CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y
CONFIG_ZONE_DMA=y
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_TEST is not set
# CONFIG_DMAPOOL_TEST is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_KMAP_LOCAL=y
CONFIG_MEMFD_CREATE=y
# CONFIG_ANON_VMA_NAME is not set
CONFIG_USERFAULTFD=y
CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y
# CONFIG_LRU_GEN_STATS is not set
CONFIG_LOCK_MM_AND_FIND_VMA=y
#
# Data Access Monitoring
#
# CONFIG_DAMON is not set
# end of Data Access Monitoring
# end of Memory Management options
CONFIG_NET=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_XGRESS=y
CONFIG_SKB_EXTENSIONS=y
#
# Networking options
#
CONFIG_PACKET=m
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=m
CONFIG_XFRM_USER=m
# CONFIG_XFRM_INTERFACE is not set
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
# CONFIG_XDP_SOCKETS is not set
CONFIG_NET_HANDSHAKE=y
# CONFIG_NET_HANDSHAKE_KUNIT_TEST is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=m
CONFIG_SYN_COOKIES=y
# CONFIG_NET_IPVTI is not set
CONFIG_NET_UDP_TUNNEL=m
# CONFIG_NET_FOU is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
# CONFIG_TCP_CONG_CUBIC is not set
CONFIG_TCP_CONG_WESTWOOD=y
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_NV is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
# CONFIG_TCP_CONG_DCTCP is not set
# CONFIG_TCP_CONG_CDG is not set
# CONFIG_TCP_CONG_BBR is not set
CONFIG_DEFAULT_WESTWOOD=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="westwood"
# CONFIG_TCP_MD5SIG is not set
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
# CONFIG_INET6_ESP_OFFLOAD is not set
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
# CONFIG_IPV6_MIP6 is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
# CONFIG_IPV6_VTI is not set
# CONFIG_IPV6_SIT is not set
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
# CONFIG_IPV6_IOAM6_LWTUNNEL is not set
# CONFIG_NETLABEL is not set
# CONFIG_MPTCP is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
# CONFIG_NETFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
# CONFIG_BRIDGE_MRP is not set
# CONFIG_BRIDGE_CFM is not set
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_6LOWPAN is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y
#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_SKBPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_PLUG is not set
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"
#
# Classification
#
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
# CONFIG_NET_CLS_FLOWER is not set
# CONFIG_NET_CLS_MATCHALL is not set
# CONFIG_NET_EMATCH is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
# CONFIG_VSOCKETS_LOOPBACK is not set
# CONFIG_VIRTIO_VSOCKETS is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_MPLS is not set
# CONFIG_NET_NSH is not set
# CONFIG_HSR is not set
# CONFIG_NET_SWITCHDEV is not set
# CONFIG_NET_L3_MASTER_DEV is not set
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_PCPU_DEV_REFCNT=y
CONFIG_MAX_SKB_FRAGS=17
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_NET_FLOW_LIMIT=y
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# end of Network testing
# end of Networking options
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_HIDP=m
CONFIG_BT_LE=y
CONFIG_BT_LE_L2CAP_ECRED=y
# CONFIG_BT_LEDS is not set
CONFIG_BT_MSFTEXT=y
CONFIG_BT_AOSPEXT=y
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set
CONFIG_BT_FEATURE_DEBUG=y
#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=m
CONFIG_BT_BCM=m
CONFIG_BT_RTL=m
CONFIG_BT_MTK=m
CONFIG_BT_HCIBTUSB=m
CONFIG_BT_HCIBTUSB_AUTOSUSPEND=y
CONFIG_BT_HCIBTUSB_POLL_SYNC=y
CONFIG_BT_HCIBTUSB_BCM=y
CONFIG_BT_HCIBTUSB_MTK=y
CONFIG_BT_HCIBTUSB_RTL=y
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
CONFIG_BT_HCIUART_AG6XX=y
CONFIG_BT_HCIBCM203X=m
# CONFIG_BT_HCIBCM4377 is not set
# CONFIG_BT_HCIBPA10X is not set
CONFIG_BT_HCIBFUSB=m
# CONFIG_BT_HCIDTL1 is not set
# CONFIG_BT_HCIBT3C is not set
# CONFIG_BT_HCIBLUECARD is not set
# CONFIG_BT_HCIVHCI is not set
CONFIG_BT_MRVL=m
CONFIG_BT_ATH3K=m
# CONFIG_BT_VIRTIO is not set
# end of Bluetooth device drivers
# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
# CONFIG_MCTP is not set
CONFIG_WIRELESS=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
# CONFIG_CFG80211_WEXT is not set
CONFIG_CFG80211_KUNIT_TEST=m
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_KUNIT_TEST=m
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
# CONFIG_RFKILL_INPUT is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_FD=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
# CONFIG_PSAMPLE is not set
# CONFIG_NET_IFE is not set
# CONFIG_LWTUNNEL is not set
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SOCK_MSG=y
CONFIG_PAGE_POOL=y
# CONFIG_PAGE_POOL_STATS is not set
CONFIG_FAILOVER=y
CONFIG_ETHTOOL_NETLINK=y
CONFIG_NETDEV_ADDR_LIST_TEST=m
CONFIG_NET_TEST=m
#
# Device Drivers
#
CONFIG_HAVE_PCI=y
CONFIG_FORCE_PCI=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCI_SYSCALL=y
# CONFIG_PCIEPORTBUS is not set
# CONFIG_PCIEASPM is not set
# CONFIG_PCIE_PTM is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_ARCH_FALLBACKS=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_IOV is not set
# CONFIG_PCI_PRI is not set
# CONFIG_PCI_PASID is not set
CONFIG_PCI_DYNAMIC_OF_NODES=y
# CONFIG_PCIE_BUS_TUNE_OFF is not set
CONFIG_PCIE_BUS_DEFAULT=y
# CONFIG_PCIE_BUS_SAFE is not set
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=4
# CONFIG_HOTPLUG_PCI is not set
#
# PCI controller drivers
#
# CONFIG_PCI_FTPCI100 is not set
# CONFIG_PCI_HOST_GENERIC is not set
# CONFIG_PCIE_MICROCHIP_HOST is not set
# CONFIG_PCIE_XILINX is not set
#
# Cadence-based PCIe controllers
#
# CONFIG_PCIE_CADENCE_PLAT_HOST is not set
# end of Cadence-based PCIe controllers
#
# DesignWare-based PCIe controllers
#
# CONFIG_PCI_MESON is not set
# CONFIG_PCIE_DW_PLAT_HOST is not set
# end of DesignWare-based PCIe controllers
#
# Mobiveil-based PCIe controllers
#
# end of Mobiveil-based PCIe controllers
# end of PCI controller drivers
#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint
#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers
# CONFIG_CXL_BUS is not set
CONFIG_PCCARD=m
CONFIG_PCMCIA=m
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y
#
# PC-card bridges
#
CONFIG_YENTA=m
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
# CONFIG_RAPIDIO is not set
#
# Generic Driver Options
#
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DEVTMPFS_SAFE=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_FW_LOADER_USER_HELPER is not set
CONFIG_FW_LOADER_COMPRESS=y
# CONFIG_FW_LOADER_COMPRESS_XZ is not set
CONFIG_FW_LOADER_COMPRESS_ZSTD=y
# CONFIG_FW_UPLOAD is not set
# end of Firmware loader
CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_DM_KUNIT_TEST=m
CONFIG_DRIVER_PE_KUNIT_TEST=m
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_REGMAP=y
CONFIG_REGMAP_KUNIT=m
# CONFIG_REGMAP_BUILD is not set
CONFIG_REGMAP_RAM=m
CONFIG_DMA_SHARED_BUFFER=y
CONFIG_DMA_FENCE_TRACE=y
# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set
# end of Generic Driver Options
#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# CONFIG_MHI_BUS_EP is not set
# end of Bus devices
#
# Cache Drivers
#
# end of Cache Drivers
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
#
# Firmware Drivers
#
#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_FW_CFG_SYSFS=m
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
# CONFIG_GOOGLE_FIRMWARE is not set
#
# Qualcomm firmware drivers
#
# end of Qualcomm firmware drivers
#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers
# CONFIG_GNSS is not set
# CONFIG_MTD is not set
CONFIG_DTC=y
CONFIG_OF=y
# CONFIG_OF_UNITTEST is not set
CONFIG_OF_KUNIT_TEST=m
CONFIG_OF_FLATTREE=y
CONFIG_OF_EARLY_FLATTREE=y
CONFIG_OF_KOBJ=y
CONFIG_OF_DYNAMIC=y
CONFIG_OF_ADDRESS=y
CONFIG_OF_IRQ=y
CONFIG_OF_RESERVED_MEM=y
# CONFIG_OF_OVERLAY is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_MAC_FLOPPY is not set
CONFIG_CDROM=y
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_VIRTIO_BLK=y
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_UBLK is not set
#
# NVME Support
#
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_NVME_FC is not set
# CONFIG_NVME_TCP is not set
# CONFIG_NVME_TARGET is not set
# end of NVME Support
#
# Misc devices
#
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_PHANTOM is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_SRAM is not set
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
# CONFIG_OPEN_DICE is not set
# CONFIG_VCPU_STALL_DETECTOR is not set
# CONFIG_NSM is not set
# CONFIG_C2PORT is not set
#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support
# CONFIG_CB710_CORE is not set
#
# Texas Instruments shared transport line discipline
#
# end of Texas Instruments shared transport line discipline
# CONFIG_SENSORS_LIS3_I2C is not set
# CONFIG_ALTERA_STAPL is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
# CONFIG_MISC_RTSX_USB is not set
CONFIG_PVPANIC=y
CONFIG_PVPANIC_MMIO=m
CONFIG_PVPANIC_PCI=m
# end of Misc devices
#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_SCSI_LIB_KUNIT_TEST=m
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=m
CONFIG_BLK_DEV_BSG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_CONSTANTS=y
# CONFIG_SCSI_LOGGING is not set
CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_PROTO_TEST=m
#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
# CONFIG_SCSI_ISCSI_ATTRS is not set
# CONFIG_SCSI_SAS_ATTRS is not set
# CONFIG_SCSI_SAS_LIBSAS is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# end of SCSI Transports
CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_MPI3MR is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_NSP32 is not set
# CONFIG_SCSI_WD719X is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_MESH is not set
# CONFIG_SCSI_MAC53C94 is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_VIRTIO=y
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# end of SCSI device support
CONFIG_ATA=y
CONFIG_SATA_HOST=y
CONFIG_ATA_VERBOSE_ERROR=y
# CONFIG_ATA_FORCE is not set
# CONFIG_SATA_PMP is not set
#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI is not set
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_AHCI_DWC is not set
# CONFIG_AHCI_CEVA is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y
#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y
#
# SATA SFF controllers with BMDMA
#
# CONFIG_ATA_PIIX is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
CONFIG_SATA_SIL=y
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set
#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MACIO=y
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set
#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PCMCIA is not set
# CONFIG_PATA_OF_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set
#
# Generic fallback / legacy drivers
#
# CONFIG_ATA_GENERIC is not set
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
# CONFIG_MD_BITMAP_FILE is not set
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID10 is not set
CONFIG_MD_RAID456=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING=y
# CONFIG_DM_DEBUG_BLOCK_STACK_TRACING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
# CONFIG_DM_SNAPSHOT is not set
CONFIG_DM_THIN_PROVISIONING=m
# CONFIG_DM_CACHE is not set
# CONFIG_DM_WRITECACHE is not set
# CONFIG_DM_ERA is not set
# CONFIG_DM_CLONE is not set
# CONFIG_DM_MIRROR is not set
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_DELAY is not set
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
# CONFIG_DM_LOG_WRITES is not set
# CONFIG_DM_INTEGRITY is not set
# CONFIG_TARGET_CORE is not set
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_KUNIT_UAPI_TEST=m
CONFIG_FIREWIRE_KUNIT_DEVICE_ATTRIBUTE_TEST=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_ADB=y
# CONFIG_ADB_CUDA is not set
CONFIG_ADB_PMU=y
CONFIG_ADB_PMU_EVENT=y
CONFIG_ADB_PMU_LED=y
# CONFIG_ADB_PMU_LED_DISK is not set
CONFIG_PMAC_APM_EMU=m
CONFIG_PMAC_MEDIABAY=y
# CONFIG_PMAC_BACKLIGHT is not set
CONFIG_INPUT_ADBHID=y
CONFIG_MAC_EMUMOUSEBTN=m
CONFIG_THERM_WINDTUNNEL=m
CONFIG_THERM_ADT746X=m
CONFIG_WINDFARM=m
# CONFIG_PMAC_RACKMETER is not set
CONFIG_SENSORS_AMS=m
CONFIG_SENSORS_AMS_PMU=y
CONFIG_SENSORS_AMS_I2C=y
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
CONFIG_WIREGUARD=m
# CONFIG_WIREGUARD_DEBUG is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=y
# CONFIG_NETCONSOLE_EXTENDED_LOG is not set
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
# CONFIG_TUN_VNET_CROSS_LE is not set
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=y
# CONFIG_NLMON is not set
# CONFIG_NETKIT is not set
CONFIG_SUNGEM_PHY=y
# CONFIG_ARCNET is not set
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
# CONFIG_NET_VENDOR_ALACRITECH is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_ALTERA_TSE is not set
# CONFIG_NET_VENDOR_AMAZON is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_APPLE is not set
# CONFIG_NET_VENDOR_AQUANTIA is not set
# CONFIG_NET_VENDOR_ARC is not set
# CONFIG_NET_VENDOR_ASIX is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_CADENCE is not set
# CONFIG_NET_VENDOR_CAVIUM is not set
# CONFIG_NET_VENDOR_CHELSIO is not set
# CONFIG_NET_VENDOR_CISCO is not set
# CONFIG_NET_VENDOR_CORTINA is not set
# CONFIG_NET_VENDOR_DAVICOM is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
# CONFIG_NET_VENDOR_EMULEX is not set
# CONFIG_NET_VENDOR_ENGLEDER is not set
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_FUJITSU is not set
# CONFIG_NET_VENDOR_FUNGIBLE is not set
# CONFIG_NET_VENDOR_GOOGLE is not set
# CONFIG_NET_VENDOR_HUAWEI is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_LITEX is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
# CONFIG_NET_VENDOR_MICROSOFT is not set
# CONFIG_NET_VENDOR_MYRI is not set
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NI is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NETERION is not set
# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_VENDOR_PACKET_ENGINES is not set
# CONFIG_NET_VENDOR_PENSANDO is not set
# CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
# CONFIG_NET_VENDOR_SAMSUNG is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_NET_VENDOR_SOLARFLARE is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_SOCIONEXT is not set
# CONFIG_NET_VENDOR_STMICRO is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
CONFIG_SUNGEM=y
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
# CONFIG_NET_VENDOR_SYNOPSYS is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
# CONFIG_NET_VENDOR_VERTEXCOM is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WANGXUN is not set
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_NET_VENDOR_XILINX is not set
# CONFIG_NET_VENDOR_XIRCOM is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PHYLIB is not set
# CONFIG_PSE_CONTROLLER is not set
# CONFIG_MDIO_DEVICE is not set
#
# PCS device drivers
#
# end of PCS device drivers
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_USB_NET_DRIVERS is not set
CONFIG_WLAN=y
# CONFIG_WLAN_VENDOR_ADMTEK is not set
# CONFIG_WLAN_VENDOR_ATH is not set
# CONFIG_WLAN_VENDOR_ATMEL is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_PCI_AUTOSELECT=y
CONFIG_B43LEGACY_PCICORE_AUTOSELECT=y
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
CONFIG_B43LEGACY_DEBUG=y
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
# CONFIG_WLAN_VENDOR_INTEL is not set
# CONFIG_WLAN_VENDOR_INTERSIL is not set
# CONFIG_WLAN_VENDOR_MARVELL is not set
# CONFIG_WLAN_VENDOR_MEDIATEK is not set
# CONFIG_WLAN_VENDOR_MICROCHIP is not set
# CONFIG_WLAN_VENDOR_PURELIFI is not set
# CONFIG_WLAN_VENDOR_RALINK is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
# CONFIG_RTL_CARDS is not set
CONFIG_RTL8XXXU=m
# CONFIG_RTL8XXXU_UNTESTED is not set
# CONFIG_RTW88 is not set
# CONFIG_RTW89 is not set
# CONFIG_WLAN_VENDOR_RSI is not set
# CONFIG_WLAN_VENDOR_SILABS is not set
# CONFIG_WLAN_VENDOR_ST is not set
# CONFIG_WLAN_VENDOR_TI is not set
# CONFIG_WLAN_VENDOR_ZYDAS is not set
# CONFIG_WLAN_VENDOR_QUANTENNA is not set
# CONFIG_MAC80211_HWSIM is not set
# CONFIG_VIRT_WIFI is not set
# CONFIG_WAN is not set
#
# Wireless WAN
#
# CONFIG_WWAN is not set
# end of Wireless WAN
# CONFIG_VMXNET3 is not set
# CONFIG_NETDEVSIM is not set
CONFIG_NET_FAILOVER=y
# CONFIG_ISDN is not set
#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set
#
# Userland interfaces
#
# CONFIG_INPUT_MOUSEDEV is not set
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
CONFIG_INPUT_KUNIT_TEST=m
# CONFIG_INPUT_APMPOWER is not set
#
# Input Device Drivers
#
# CONFIG_INPUT_KEYBOARD is not set
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
CONFIG_MOUSE_APPLETOUCH=m
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_ELAN_I2C is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
CONFIG_JOYSTICK_XPAD=m
# CONFIG_JOYSTICK_XPAD_FF is not set
CONFIG_JOYSTICK_XPAD_LEDS=y
# CONFIG_JOYSTICK_PXRC is not set
# CONFIG_JOYSTICK_QWIIC is not set
# CONFIG_JOYSTICK_FSIA6B is not set
# CONFIG_JOYSTICK_SENSEHAT is not set
# CONFIG_JOYSTICK_SEESAW is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_ATMEL_CAPTOUCH is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
CONFIG_INPUT_UINPUT=m
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_DA7280_HAPTICS is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_IQS626A is not set
# CONFIG_INPUT_IQS7222 is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
# CONFIG_RMI4_CORE is not set
#
# Hardware I/O ports
#
# CONFIG_SERIO is not set
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support
#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
# CONFIG_LEGACY_TIOCSTI is not set
CONFIG_LDISC_AUTOLOAD=y
#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_PCILIB=y
CONFIG_SERIAL_8250_PCI=y
# CONFIG_SERIAL_8250_EXAR is not set
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=8
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
# CONFIG_SERIAL_8250_PCI1XXXX is not set
CONFIG_SERIAL_8250_FSL=y
# CONFIG_SERIAL_8250_DW is not set
# CONFIG_SERIAL_8250_RT288X is not set
# CONFIG_SERIAL_8250_PERICOM is not set
CONFIG_SERIAL_OF_PLATFORM=y
#
# Non-8250 serial port support
#
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_PMACZILOG is not set
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SIFIVE is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_XILINX_PS_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
# end of Serial drivers
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_PPC_EPAPR_HV_BYTECHAN is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_N_GSM is not set
# CONFIG_NOZOMI is not set
# CONFIG_NULL_TTY is not set
CONFIG_HVC_DRIVER=y
# CONFIG_HVC_UDBG is not set
# CONFIG_SERIAL_DEV_BUS is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIRTIO=m
# CONFIG_HW_RANDOM_CCTRNG is not set
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
CONFIG_DEVMEM=y
CONFIG_NVRAM=m
CONFIG_DEVPORT=y
# CONFIG_TCG_TPM is not set
# CONFIG_XILLYBUS is not set
# CONFIG_XILLYUSB is not set
# end of Character devices
#
# I2C support
#
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=m
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m
#
# I2C Hardware Bus support
#
#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set
#
# Mac SMBus host controller drivers
#
CONFIG_I2C_POWERMAC=y
#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_MPC is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set
#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_CP2615 is not set
# CONFIG_I2C_PCI1XXXX is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set
#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_VIRTIO is not set
# end of I2C Hardware Bus support
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support
# CONFIG_I3C is not set
# CONFIG_SPI is not set
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
# CONFIG_PPS is not set
#
# PTP clock support
#
# CONFIG_PTP_1588_CLOCK is not set
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# end of PTP clock support
# CONFIG_PINCTRL is not set
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_RESET is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_APM_POWER=m
# CONFIG_IP5XXX_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
CONFIG_BATTERY_PMU=m
# CONFIG_BATTERY_SAMSUNG_SDI is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
# CONFIG_CHARGER_MAX77976 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_BATTERY_GOLDFISH is not set
# CONFIG_BATTERY_RT5033 is not set
# CONFIG_CHARGER_BD99954 is not set
# CONFIG_BATTERY_UG3105 is not set
# CONFIG_FUEL_GAUGE_MM8013 is not set
CONFIG_HWMON=m
CONFIG_HWMON_DEBUG_CHIP=y
#
# Native drivers
#
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM1177 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AQUACOMPUTER_D5NEXT is not set
# CONFIG_SENSORS_AS370 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_ASUS_ROG_RYUJIN is not set
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_CHIPCAP2 is not set
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
CONFIG_SENSORS_DRIVETEMP=m
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_GIGABYTE_WATERFORCE is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HS3001 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_POWERZ is not set
# CONFIG_SENSORS_POWR1220 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2991 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4222 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4260 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LTC4282 is not set
# CONFIG_SENSORS_MAX127 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX31760 is not set
# CONFIG_MAX31827 is not set
# CONFIG_SENSORS_MAX6620 is not set
# CONFIG_SENSORS_MAX6621 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MAX31790 is not set
# CONFIG_SENSORS_MC34VR500 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_NCT6775_I2C is not set
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_NZXT_KRAKEN2 is not set
# CONFIG_SENSORS_NZXT_KRAKEN3 is not set
# CONFIG_SENSORS_NZXT_SMART2 is not set
# CONFIG_SENSORS_OCC_P8_I2C is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_PT5161L is not set
# CONFIG_SENSORS_SBTSI is not set
# CONFIG_SENSORS_SBRMI is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHT4x is not set
# CONFIG_SENSORS_SHTC1 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC2305 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_ADC128D818 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_INA238 is not set
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_TMP464 is not set
# CONFIG_SENSORS_TMP513 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83773G is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_THERMAL is not set
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_B43_PCI_BRIDGE=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
CONFIG_SSB_PCMCIAHOST=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
CONFIG_BCMA_POSSIBLE=y
# CONFIG_BCMA is not set
#
# Multifunction device drivers
#
# CONFIG_MFD_ACT8945A is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_MFD_SMPRO is not set
# CONFIG_MFD_AS3722 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_ATMEL_FLEXCOM is not set
# CONFIG_MFD_ATMEL_HLCDC is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_CS42L43_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_MFD_MAX5970 is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_GATEWORKS_GSC is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_MFD_HI6421_PMIC is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77541 is not set
# CONFIG_MFD_MAX77620 is not set
# CONFIG_MFD_MAX77650 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77714 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6370 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_NTXEC is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_SY7636A is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT4831 is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RT5120 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_RK8XX_I2C is not set
# CONFIG_MFD_RN5T618 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TI_LP87565 is not set
# CONFIG_MFD_TPS65218 is not set
# CONFIG_MFD_TPS65219 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS6594_I2C is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TQMX86 is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_LOCHNAGAR is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_ROHM_BD718XX is not set
# CONFIG_MFD_ROHM_BD71828 is not set
# CONFIG_MFD_ROHM_BD957XMUF is not set
# CONFIG_MFD_STPMIC1 is not set
# CONFIG_MFD_STMFX is not set
# CONFIG_MFD_ATC260X_I2C is not set
# CONFIG_MFD_QCOM_PM8008 is not set
# CONFIG_MFD_RSMU_I2C is not set
# end of Multifunction device drivers
# CONFIG_REGULATOR is not set
# CONFIG_RC_CORE is not set
#
# CEC support
#
# CONFIG_MEDIA_CEC_SUPPORT is not set
# end of CEC support
# CONFIG_MEDIA_SUPPORT is not set
#
# Graphics support
#
CONFIG_APERTURE_HELPERS=y
CONFIG_VIDEO=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_AGP is not set
CONFIG_DRM=y
# CONFIG_DRM_DEBUG_MM is not set
CONFIG_DRM_KUNIT_TEST_HELPERS=m
CONFIG_DRM_KUNIT_TEST=m
CONFIG_DRM_KMS_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
CONFIG_DRM_DEBUG_MODESET_LOCK=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_DISPLAY_HELPER=m
CONFIG_DRM_DISPLAY_DP_HELPER=y
# CONFIG_DRM_DP_AUX_CHARDEV is not set
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_EXEC=m
CONFIG_DRM_BUDDY=m
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=m
CONFIG_DRM_SUBALLOC_HELPER=m
#
# I2C encoder or helper chips
#
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips
#
# ARM devices
#
# end of ARM devices
CONFIG_DRM_RADEON=m
CONFIG_DRM_RADEON_USERPTR=y
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
# CONFIG_DRM_XE is not set
CONFIG_DRM_VGEM=m
# CONFIG_DRM_VKMS is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_QXL is not set
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_VIRTIO_GPU_KMS=y
CONFIG_DRM_PANEL=y
#
# Display Panels
#
# CONFIG_DRM_PANEL_LVDS is not set
# CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set
# CONFIG_DRM_PANEL_SAMSUNG_ATNA33XC20 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6D7AA0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E63M0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set
# CONFIG_DRM_PANEL_SEIKO_43WVF1G is not set
# CONFIG_DRM_PANEL_EDP is not set
# CONFIG_DRM_PANEL_SIMPLE is not set
# end of Display Panels
CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y
#
# Display Interface Bridges
#
# CONFIG_DRM_CHIPONE_ICN6211 is not set
# CONFIG_DRM_CHRONTEL_CH7033 is not set
# CONFIG_DRM_DISPLAY_CONNECTOR is not set
# CONFIG_DRM_ITE_IT6505 is not set
# CONFIG_DRM_LONTIUM_LT8912B is not set
# CONFIG_DRM_LONTIUM_LT9211 is not set
# CONFIG_DRM_LONTIUM_LT9611 is not set
# CONFIG_DRM_LONTIUM_LT9611UXC is not set
# CONFIG_DRM_ITE_IT66121 is not set
# CONFIG_DRM_LVDS_CODEC is not set
# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set
# CONFIG_DRM_NXP_PTN3460 is not set
# CONFIG_DRM_PARADE_PS8622 is not set
# CONFIG_DRM_PARADE_PS8640 is not set
# CONFIG_DRM_SIL_SII8620 is not set
# CONFIG_DRM_SII902X is not set
# CONFIG_DRM_SII9234 is not set
# CONFIG_DRM_SIMPLE_BRIDGE is not set
# CONFIG_DRM_THINE_THC63LVD1024 is not set
# CONFIG_DRM_TOSHIBA_TC358762 is not set
# CONFIG_DRM_TOSHIBA_TC358764 is not set
# CONFIG_DRM_TOSHIBA_TC358767 is not set
# CONFIG_DRM_TOSHIBA_TC358768 is not set
# CONFIG_DRM_TOSHIBA_TC358775 is not set
# CONFIG_DRM_TI_DLPC3433 is not set
# CONFIG_DRM_TI_TFP410 is not set
# CONFIG_DRM_TI_SN65DSI83 is not set
# CONFIG_DRM_TI_SN65DSI86 is not set
# CONFIG_DRM_TI_TPD12S015 is not set
# CONFIG_DRM_ANALOGIX_ANX6345 is not set
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# CONFIG_DRM_ANALOGIX_ANX7625 is not set
# CONFIG_DRM_I2C_ADV7511 is not set
# CONFIG_DRM_CDNS_DSI is not set
# CONFIG_DRM_CDNS_MHDP8546 is not set
# end of Display Interface Bridges
# CONFIG_DRM_ETNAVIV is not set
# CONFIG_DRM_LOGICVC is not set
# CONFIG_DRM_ARCPGU is not set
CONFIG_DRM_BOCHS=m
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_GM12U320 is not set
# CONFIG_DRM_OFDRM is not set
# CONFIG_DRM_SIMPLEDRM is not set
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set
CONFIG_DRM_EXPORT_FOR_TESTS=y
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_LIB_RANDOM=y
#
# Frame buffer Devices
#
CONFIG_FB=y
CONFIG_FB_MACMODES=y
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
CONFIG_FB_OF=y
# CONFIG_FB_CONTROL is not set
# CONFIG_FB_PLATINUM is not set
# CONFIG_FB_VALKYRIE is not set
CONFIG_FB_CT65550=y
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SM712 is not set
CONFIG_FB_CORE=y
CONFIG_FB_NOTIFY=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DEVICE is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYSMEM_FOPS=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_IOMEM_FOPS=y
CONFIG_FB_IOMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
# CONFIG_FB_MODE_HELPERS is not set
# CONFIG_FB_TILEBLITTING is not set
# end of Frame buffer Devices
#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
CONFIG_LCD_PLATFORM=m
CONFIG_BACKLIGHT_CLASS_DEVICE=m
# CONFIG_BACKLIGHT_KTD2801 is not set
# CONFIG_BACKLIGHT_KTZ8866 is not set
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
CONFIG_BACKLIGHT_LED=m
# end of Backlight & LCD device support
CONFIG_HDMI=y
#
# Console display driver support
#
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION is not set
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support
# CONFIG_LOGO is not set
# end of Graphics support
# CONFIG_DRM_ACCEL is not set
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_HWDEP=m
CONFIG_SND_SEQ_DEVICE=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_CORE_TEST=m
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
# CONFIG_SND_OSSEMUL is not set
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=6
# CONFIG_SND_SUPPORT_OLD_API is not set
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_CTL_FAST_LOOKUP is not set
# CONFIG_SND_DEBUG is not set
CONFIG_SND_CTL_INPUT_VALIDATION=y
CONFIG_SND_VMASTER=y
CONFIG_SND_SEQUENCER=m
# CONFIG_SND_SEQ_DUMMY is not set
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_VIRMIDI=m
# CONFIG_SND_SEQ_UMP is not set
CONFIG_SND_DRIVERS=y
# CONFIG_SND_DUMMY is not set
CONFIG_SND_ALOOP=m
# CONFIG_SND_PCMTEST is not set
CONFIG_SND_VIRMIDI=m
# CONFIG_SND_MTPAV is not set
# CONFIG_SND_SERIAL_U16550 is not set
# CONFIG_SND_MPU401 is not set
CONFIG_SND_PCI=y
# CONFIG_SND_AD1889 is not set
# CONFIG_SND_ALS300 is not set
# CONFIG_SND_ALS4000 is not set
# CONFIG_SND_ALI5451 is not set
# CONFIG_SND_ATIIXP is not set
# CONFIG_SND_ATIIXP_MODEM is not set
# CONFIG_SND_AU8810 is not set
# CONFIG_SND_AU8820 is not set
# CONFIG_SND_AU8830 is not set
# CONFIG_SND_AW2 is not set
# CONFIG_SND_AZT3328 is not set
# CONFIG_SND_BT87X is not set
# CONFIG_SND_CA0106 is not set
# CONFIG_SND_CMIPCI is not set
# CONFIG_SND_OXYGEN is not set
# CONFIG_SND_CS4281 is not set
# CONFIG_SND_CS46XX is not set
# CONFIG_SND_CTXFI is not set
# CONFIG_SND_DARLA20 is not set
# CONFIG_SND_GINA20 is not set
# CONFIG_SND_LAYLA20 is not set
# CONFIG_SND_DARLA24 is not set
# CONFIG_SND_GINA24 is not set
# CONFIG_SND_LAYLA24 is not set
# CONFIG_SND_MONA is not set
# CONFIG_SND_MIA is not set
# CONFIG_SND_ECHO3G is not set
# CONFIG_SND_INDIGO is not set
# CONFIG_SND_INDIGOIO is not set
# CONFIG_SND_INDIGODJ is not set
# CONFIG_SND_INDIGOIOX is not set
# CONFIG_SND_INDIGODJX is not set
# CONFIG_SND_EMU10K1 is not set
# CONFIG_SND_EMU10K1X is not set
# CONFIG_SND_ENS1370 is not set
# CONFIG_SND_ENS1371 is not set
# CONFIG_SND_ES1938 is not set
# CONFIG_SND_ES1968 is not set
# CONFIG_SND_FM801 is not set
# CONFIG_SND_HDSP is not set
# CONFIG_SND_HDSPM is not set
# CONFIG_SND_ICE1712 is not set
# CONFIG_SND_ICE1724 is not set
# CONFIG_SND_INTEL8X0 is not set
# CONFIG_SND_INTEL8X0M is not set
# CONFIG_SND_KORG1212 is not set
# CONFIG_SND_LOLA is not set
# CONFIG_SND_LX6464ES is not set
# CONFIG_SND_MAESTRO3 is not set
# CONFIG_SND_MIXART is not set
# CONFIG_SND_NM256 is not set
# CONFIG_SND_PCXHR is not set
# CONFIG_SND_RIPTIDE is not set
# CONFIG_SND_RME32 is not set
# CONFIG_SND_RME96 is not set
# CONFIG_SND_RME9652 is not set
# CONFIG_SND_SE6X is not set
# CONFIG_SND_SONICVIBES is not set
# CONFIG_SND_TRIDENT is not set
# CONFIG_SND_VIA82XX is not set
# CONFIG_SND_VIA82XX_MODEM is not set
# CONFIG_SND_VIRTUOSO is not set
# CONFIG_SND_VX222 is not set
# CONFIG_SND_YMFPCI is not set
#
# HD-Audio
#
CONFIG_SND_HDA=m
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
# CONFIG_SND_HDA_INPUT_BEEP is not set
# CONFIG_SND_HDA_PATCH_LOADER is not set
# CONFIG_SND_HDA_CIRRUS_SCODEC_KUNIT_TEST is not set
# CONFIG_SND_HDA_CODEC_REALTEK is not set
# CONFIG_SND_HDA_CODEC_ANALOG is not set
# CONFIG_SND_HDA_CODEC_SIGMATEL is not set
# CONFIG_SND_HDA_CODEC_VIA is not set
CONFIG_SND_HDA_CODEC_HDMI=m
# CONFIG_SND_HDA_CODEC_CIRRUS is not set
# CONFIG_SND_HDA_CODEC_CS8409 is not set
# CONFIG_SND_HDA_CODEC_CONEXANT is not set
# CONFIG_SND_HDA_CODEC_CA0110 is not set
# CONFIG_SND_HDA_CODEC_CA0132 is not set
# CONFIG_SND_HDA_CODEC_CMEDIA is not set
# CONFIG_SND_HDA_CODEC_SI3054 is not set
# CONFIG_SND_HDA_GENERIC is not set
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
# CONFIG_SND_HDA_INTEL_HDMI_SILENT_STREAM is not set
# CONFIG_SND_HDA_CTL_DEV_ID is not set
# end of HD-Audio
CONFIG_SND_HDA_CORE=m
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_PREALLOC_SIZE=2048
CONFIG_SND_INTEL_DSP_CONFIG=m
# CONFIG_SND_PPC is not set
CONFIG_SND_AOA=m
CONFIG_SND_AOA_FABRIC_LAYOUT=m
CONFIG_SND_AOA_ONYX=m
CONFIG_SND_AOA_TAS=m
CONFIG_SND_AOA_TOONIE=m
CONFIG_SND_AOA_SOUNDBUS=m
CONFIG_SND_AOA_SOUNDBUS_I2S=m
# CONFIG_SND_USB is not set
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
# CONFIG_SND_DICE is not set
# CONFIG_SND_OXFW is not set
CONFIG_SND_ISIGHT=m
# CONFIG_SND_FIREWORKS is not set
# CONFIG_SND_BEBOB is not set
# CONFIG_SND_FIREWIRE_DIGI00X is not set
# CONFIG_SND_FIREWIRE_TASCAM is not set
# CONFIG_SND_FIREWIRE_MOTU is not set
# CONFIG_SND_FIREFACE is not set
# CONFIG_SND_PCMCIA is not set
# CONFIG_SND_SOC is not set
# CONFIG_SND_VIRTIO is not set
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y
#
# Special HID drivers
#
# CONFIG_HID_A4TECH is not set
# CONFIG_HID_ACCUTOUCH is not set
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=y
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_ASUS is not set
# CONFIG_HID_AUREAL is not set
# CONFIG_HID_BELKIN is not set
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
# CONFIG_HID_CHERRY is not set
# CONFIG_HID_CHICONY is not set
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
# CONFIG_HID_PRODIKEYS is not set
# CONFIG_HID_CMEDIA is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
# CONFIG_HID_CYPRESS is not set
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
# CONFIG_HID_EVISION is not set
# CONFIG_HID_EZKEY is not set
# CONFIG_HID_FT260 is not set
# CONFIG_HID_GEMBIRD is not set
# CONFIG_HID_GFRM is not set
# CONFIG_HID_GLORIOUS is not set
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_GOOGLE_STADIA_FF is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
CONFIG_HID_UCLOGIC=m
# CONFIG_HID_WALTOP is not set
# CONFIG_HID_VIEWSONIC is not set
# CONFIG_HID_VRC2 is not set
# CONFIG_HID_XIAOMI is not set
# CONFIG_HID_GYRATION is not set
# CONFIG_HID_ICADE is not set
# CONFIG_HID_ITE is not set
# CONFIG_HID_JABRA is not set
# CONFIG_HID_TWINHAN is not set
# CONFIG_HID_KENSINGTON is not set
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LED is not set
# CONFIG_HID_LENOVO is not set
# CONFIG_HID_LETSKETCH is not set
# CONFIG_HID_LOGITECH is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
# CONFIG_HID_MEGAWORLD_FF is not set
# CONFIG_HID_REDRAGON is not set
CONFIG_HID_MICROSOFT=m
# CONFIG_HID_MONTEREY is not set
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NINTENDO=m
# CONFIG_NINTENDO_FF is not set
# CONFIG_HID_NTI is not set
# CONFIG_HID_NTRIG is not set
# CONFIG_HID_NVIDIA_SHIELD is not set
# CONFIG_HID_ORTEK is not set
# CONFIG_HID_PANTHERLORD is not set
# CONFIG_HID_PENMOUNT is not set
# CONFIG_HID_PETALYNX is not set
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PLANTRONICS is not set
# CONFIG_HID_PXRC is not set
# CONFIG_HID_RAZER is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_RETRODE is not set
CONFIG_HID_ROCCAT=m
# CONFIG_HID_SAITEK is not set
# CONFIG_HID_SAMSUNG is not set
# CONFIG_HID_SEMITEK is not set
# CONFIG_HID_SIGMAMICRO is not set
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEAM is not set
# CONFIG_HID_STEELSERIES is not set
# CONFIG_HID_SUNPLUS is not set
# CONFIG_HID_RMI is not set
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
# CONFIG_HID_TOPSEED is not set
# CONFIG_HID_TOPRE is not set
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
# CONFIG_HID_WACOM is not set
CONFIG_HID_WIIMOTE=m
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set
# CONFIG_HID_ALPS is not set
# CONFIG_HID_MCP2221 is not set
CONFIG_HID_KUNIT_TEST=m
# end of Special HID drivers
#
# HID-BPF support
#
# end of HID-BPF support
#
# USB HID support
#
CONFIG_USB_HID=y
# CONFIG_HID_PID is not set
CONFIG_USB_HIDDEV=y
# end of USB HID support
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_BIG_ENDIAN_DESC=y
CONFIG_USB_OHCI_BIG_ENDIAN_MMIO=y
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
# CONFIG_USB_PCI_AMD is not set
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_DEFAULT_AUTHORIZATION_MODE=1
CONFIG_USB_MON=m
#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_XPS_USB_HCD_XILINX is not set
# CONFIG_USB_EHCI_FSL is not set
CONFIG_USB_EHCI_HCD_PPC_OF=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
# CONFIG_USB_OHCI_HCD_PPC_OF_LE is not set
CONFIG_USB_OHCI_HCD_PPC_OF=y
CONFIG_USB_OHCI_HCD_PCI=m
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_UHCI_HCD is not set
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_SSB is not set
# CONFIG_USB_HCD_TEST_MODE is not set
#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set
#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
CONFIG_USB_UAS=m
#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
#
# USB dual-mode controller drivers
#
# CONFIG_USB_CDNS_SUPPORT is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set
#
# USB port drivers
#
CONFIG_USB_SERIAL=m
# CONFIG_USB_SERIAL_GENERIC is not set
# CONFIG_USB_SERIAL_SIMPLE is not set
# CONFIG_USB_SERIAL_AIRCABLE is not set
# CONFIG_USB_SERIAL_ARK3116 is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
CONFIG_USB_SERIAL_FTDI_SIO=m
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IPAQ is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
# CONFIG_USB_SERIAL_GARMIN is not set
# CONFIG_USB_SERIAL_IPW is not set
# CONFIG_USB_SERIAL_IUU is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KLSI is not set
# CONFIG_USB_SERIAL_KOBIL_SCT is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_METRO is not set
# CONFIG_USB_SERIAL_MOS7720 is not set
# CONFIG_USB_SERIAL_MOS7840 is not set
# CONFIG_USB_SERIAL_MXUPORT is not set
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QCAUX is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_XSENS_MT is not set
# CONFIG_USB_SERIAL_WISHBONE is not set
# CONFIG_USB_SERIAL_SSU100 is not set
# CONFIG_USB_SERIAL_QT2 is not set
# CONFIG_USB_SERIAL_UPD78F0730 is not set
# CONFIG_USB_SERIAL_XR is not set
# CONFIG_USB_SERIAL_DEBUG is not set
#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
CONFIG_USB_APPLEDISPLAY=m
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
CONFIG_USB_ISIGHTFW=m
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
# CONFIG_USB_ONBOARD_HUB is not set
#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers
# CONFIG_USB_GADGET is not set
# CONFIG_TYPEC is not set
# CONFIG_USB_ROLE_SWITCH is not set
# CONFIG_MMC is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
CONFIG_LEDS_BRIGHTNESS_HW_CHANGED=y
#
# LED drivers
#
# CONFIG_LEDS_AN30259A is not set
# CONFIG_LEDS_AW200XX is not set
# CONFIG_LEDS_AW2013 is not set
# CONFIG_LEDS_BCM6328 is not set
# CONFIG_LEDS_BCM6358 is not set
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_LM3692X is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP8860 is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA995X is not set
# CONFIG_LEDS_BD2606MVV is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_IS31FL319X is not set
# CONFIG_LEDS_IS31FL32XX is not set
#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_LM3697 is not set
#
# Flash and Torch LED drivers
#
#
# RGB LED drivers
#
#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
# CONFIG_LEDS_TRIGGER_TIMER is not set
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
CONFIG_LEDS_TRIGGER_DISK=y
# CONFIG_LEDS_TRIGGER_HEARTBEAT is not set
# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
#
# iptables trigger is under Netfilter config (LED target)
#
# CONFIG_LEDS_TRIGGER_TRANSIENT is not set
# CONFIG_LEDS_TRIGGER_CAMERA is not set
CONFIG_LEDS_TRIGGER_PANIC=y
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
# CONFIG_LEDS_TRIGGER_AUDIO is not set
# CONFIG_LEDS_TRIGGER_TTY is not set
#
# Simple LED drivers
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_LIB_KUNIT_TEST=m
CONFIG_RTC_NVMEM=y
#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set
#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_HYM8563 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_NCT3018Y is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12026 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8010 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set
#
# SPI RTC drivers
#
CONFIG_RTC_I2C_AND_SPI=y
#
# SPI and I2C RTC drivers
#
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set
# CONFIG_RTC_DRV_RX6110 is not set
#
# Platform RTC drivers
#
# CONFIG_RTC_DRV_CMOS is not set
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_DS2404 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_ZYNQMP is not set
#
# on-CPU RTC drivers
#
CONFIG_RTC_DRV_GENERIC=y
# CONFIG_RTC_DRV_CADENCE is not set
# CONFIG_RTC_DRV_FTRTC010 is not set
# CONFIG_RTC_DRV_R7301 is not set
#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_GOLDFISH is not set
# CONFIG_DMADEVICES is not set
#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
CONFIG_UDMABUF=y
# CONFIG_DMABUF_MOVE_NOTIFY is not set
CONFIG_DMABUF_DEBUG=y
CONFIG_DMABUF_SELFTESTS=m
CONFIG_DMABUF_HEAPS=y
# CONFIG_DMABUF_SYSFS_STATS is not set
CONFIG_DMABUF_HEAPS_SYSTEM=y
# end of DMABUF options
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_BALLOON is not set
# CONFIG_VIRTIO_INPUT is not set
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST_TASK=y
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_VSOCK is not set
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
#
# Microsoft Hyper-V guest support
#
# end of Microsoft Hyper-V guest support
# CONFIG_GREYBUS is not set
# CONFIG_COMEDI is not set
# CONFIG_STAGING is not set
# CONFIG_GOLDFISH is not set
# CONFIG_COMMON_CLK is not set
# CONFIG_HWSPINLOCK is not set
#
# Clock Source drivers
#
# end of Clock Source drivers
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_SUPPORT=y
#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support
# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMUFD is not set
#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers
#
# Rpmsg drivers
#
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers
# CONFIG_SOUNDWIRE is not set
#
# SOC (System On Chip) specific Drivers
#
#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers
#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers
#
# NXP/Freescale QorIQ SoC drivers
#
# CONFIG_QUICC_ENGINE is not set
# end of NXP/Freescale QorIQ SoC drivers
#
# fujitsu SoC drivers
#
# end of fujitsu SoC drivers
#
# i.MX SoC drivers
#
# end of i.MX SoC drivers
#
# Enable LiteX SoC Builder specific drivers
#
# CONFIG_LITEX_SOC_CONTROLLER is not set
# end of Enable LiteX SoC Builder specific drivers
# CONFIG_WPCM450_SOC is not set
#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers
# CONFIG_SOC_TI is not set
#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers
#
# PM Domains
#
#
# Amlogic PM Domains
#
# end of Amlogic PM Domains
#
# Broadcom PM Domains
#
# end of Broadcom PM Domains
#
# i.MX PM Domains
#
# end of i.MX PM Domains
#
# Qualcomm PM Domains
#
# end of Qualcomm PM Domains
# end of PM Domains
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_PWM is not set
#
# IRQ chip support
#
CONFIG_IRQCHIP=y
# CONFIG_AL_FIC is not set
# CONFIG_XILINX_INTC is not set
# end of IRQ chip support
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_PHY_CAN_TRANSCEIVER is not set
#
# PHY drivers for Broadcom platforms
#
# CONFIG_BCM_KONA_USB2_PHY is not set
# end of PHY drivers for Broadcom platforms
# CONFIG_PHY_CADENCE_DPHY is not set
# CONFIG_PHY_CADENCE_DPHY_RX is not set
# CONFIG_PHY_CADENCE_SALVO is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# end of PHY Subsystem
# CONFIG_POWERCAP is not set
# CONFIG_MCB is not set
#
# Performance monitor support
#
# CONFIG_DWC_PCIE_PMU is not set
# end of Performance monitor support
# CONFIG_RAS is not set
# CONFIG_USB4 is not set
#
# Android
#
# CONFIG_ANDROID_BINDER_IPC is not set
# end of Android
# CONFIG_DAX is not set
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
CONFIG_NVMEM_LAYOUTS=y
#
# Layout Types
#
# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set
# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set
# end of Layout Types
# CONFIG_NVMEM_RMEM is not set
#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support
# CONFIG_FPGA is not set
# CONFIG_FSI is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# CONFIG_PECI is not set
# CONFIG_HTE is not set
# end of Device Drivers
#
# File systems
#
CONFIG_VALIDATE_FS_PARSER=y
CONFIG_FS_IOMAP=y
CONFIG_FS_STACK=y
CONFIG_BUFFER_HEAD=y
CONFIG_LEGACY_DIRECT_IO=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
# CONFIG_EXT4_FS_SECURITY is not set
# CONFIG_EXT4_DEBUG is not set
CONFIG_EXT4_KUNIT_TESTS=m
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
# CONFIG_XFS_SUPPORT_V4 is not set
# CONFIG_XFS_SUPPORT_ASCII_CI is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_ONLINE_SCRUB is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
# CONFIG_F2FS_FS is not set
CONFIG_BCACHEFS_FS=m
# CONFIG_BCACHEFS_QUOTA is not set
# CONFIG_BCACHEFS_ERASURE_CODING is not set
CONFIG_BCACHEFS_POSIX_ACL=y
# CONFIG_BCACHEFS_DEBUG is not set
CONFIG_BCACHEFS_TESTS=y
# CONFIG_BCACHEFS_LOCK_TIME_STATS is not set
# CONFIG_BCACHEFS_NO_LATENCY_ACCT is not set
CONFIG_BCACHEFS_SIX_OPTIMISTIC_SPIN=y
CONFIG_MEAN_AND_VARIANCE_UNIT_TEST=m
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
# CONFIG_EXPORTFS_BLOCK_OPS is not set
CONFIG_FILE_LOCKING=y
# CONFIG_FS_ENCRYPTION is not set
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
# CONFIG_FANOTIFY_ACCESS_PERMISSIONS is not set
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
# CONFIG_CUSE is not set
CONFIG_VIRTIO_FS=m
CONFIG_FUSE_PASSTHROUGH=y
# CONFIG_OVERLAY_FS is not set
#
# Caches
#
CONFIG_NETFS_SUPPORT=y
# CONFIG_NETFS_STATS is not set
# CONFIG_FSCACHE is not set
# end of Caches
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems
#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-15"
CONFIG_FAT_DEFAULT_UTF8=y
CONFIG_FAT_KUNIT_TEST=m
CONFIG_EXFAT_FS=m
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
CONFIG_NTFS3_FS=m
CONFIG_NTFS3_LZX_XPRESS=y
# CONFIG_NTFS3_FS_POSIX_ACL is not set
# end of DOS/FAT/EXFAT/NT Filesystems
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
# CONFIG_PROC_KCORE is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
# CONFIG_PROC_CHILDREN is not set
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_QUOTA is not set
CONFIG_CONFIGFS_FS=m
# end of Pseudo filesystems
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
CONFIG_AFFS_FS=m
# CONFIG_ECRYPT_FS is not set
CONFIG_HFS_FS=m
CONFIG_HFSPLUS_FS=m
CONFIG_BEFS_FS=m
CONFIG_BEFS_DEBUG=y
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_PSTORE is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
# CONFIG_NFS_V2 is not set
# CONFIG_NFS_V3 is not set
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
# CONFIG_NFS_FSCACHE is not set
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
# CONFIG_NFS_V4_2_READ_PLUS is not set
# CONFIG_NFSD is not set
CONFIG_GRACE_PERIOD=m
CONFIG_LOCKD=m
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_BACKCHANNEL=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
CONFIG_SUNRPC_DEBUG=y
# CONFIG_CEPH_FS is not set
CONFIG_CIFS=m
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_CIFS_SWN_UPCALL=y
# CONFIG_SMB_SERVER is not set
CONFIG_SMBFS=m
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_9P_FS=y
CONFIG_9P_FS_POSIX_ACL=y
# CONFIG_9P_FS_SECURITY is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
CONFIG_NLS_CODEPAGE_850=m
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_MAC_ROMAN=m
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
CONFIG_NLS_UCS2_UTILS=m
# CONFIG_DLM is not set
CONFIG_UNICODE=m
# CONFIG_UNICODE_NORMALIZATION_SELFTEST is not set
CONFIG_IO_WQ=y
# end of File systems
#
# Security options
#
CONFIG_KEYS=y
CONFIG_KEYS_REQUEST_CACHE=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEY_DH_OPERATIONS=y
CONFIG_KEY_NOTIFICATIONS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
# CONFIG_SECURITYFS is not set
# CONFIG_SECURITY_NETWORK is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
# CONFIG_SECURITY_LANDLOCK is not set
# CONFIG_INTEGRITY is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf"
#
# Kernel hardening options
#
#
# Memory initialization
#
CONFIG_CC_HAS_AUTO_VAR_INIT_PATTERN=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_BARE=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO=y
# CONFIG_INIT_STACK_NONE is not set
CONFIG_INIT_STACK_ALL_PATTERN=y
# CONFIG_INIT_STACK_ALL_ZERO is not set
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y
# CONFIG_ZERO_CALL_USED_REGS is not set
# end of Memory initialization
#
# Hardening of kernel data structures
#
CONFIG_LIST_HARDENED=y
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Hardening of kernel data structures
CONFIG_RANDSTRUCT_NONE=y
# CONFIG_RANDSTRUCT_FULL is not set
# CONFIG_RANDSTRUCT_PERFORMANCE is not set
# end of Kernel hardening options
# end of Security options
CONFIG_XOR_BLOCKS=y
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y
#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SIG2=y
CONFIG_CRYPTO_SKCIPHER=m
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=m
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=y
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
# CONFIG_CRYPTO_MANAGER_EXTRA_TESTS is not set
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_NULL2=m
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_ENGINE=m
# end of Crypto core or helper
#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=y
# CONFIG_CRYPTO_DH_RFC7919_GROUPS is not set
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECDSA is not set
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# end of Public-key cryptography
#
# Block ciphers
#
CONFIG_CRYPTO_AES=m
# CONFIG_CRYPTO_AES_TI is not set
# CONFIG_CRYPTO_ARIA is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST6 is not set
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SM4_GENERIC is not set
# CONFIG_CRYPTO_TWOFISH is not set
# end of Block ciphers
#
# Length-preserving ciphers and modes
#
CONFIG_CRYPTO_ADIANTUM=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=m
# CONFIG_CRYPTO_CTS is not set
CONFIG_CRYPTO_ECB=m
# CONFIG_CRYPTO_HCTR2 is not set
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
CONFIG_CRYPTO_XTS=m
CONFIG_CRYPTO_NHPOLY1305=m
# end of Length-preserving ciphers and modes
#
# AEAD (authenticated encryption with associated data) ciphers
#
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=m
CONFIG_CRYPTO_GENIV=m
CONFIG_CRYPTO_SEQIV=m
CONFIG_CRYPTO_ECHAINIV=m
CONFIG_CRYPTO_ESSIV=m
# end of AEAD (authenticated encryption with associated data) ciphers
#
# Hashes, digests, and MACs
#
CONFIG_CRYPTO_BLAKE2B=y
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_GHASH=m
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
CONFIG_CRYPTO_POLY1305=m
# CONFIG_CRYPTO_RMD160 is not set
CONFIG_CRYPTO_SHA1=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3_GENERIC is not set
# CONFIG_CRYPTO_STREEBOG is not set
# CONFIG_CRYPTO_VMAC is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_XCBC is not set
CONFIG_CRYPTO_XXHASH=y
# end of Hashes, digests, and MACs
#
# CRCs (cyclic redundancy checks)
#
CONFIG_CRYPTO_CRC32C=y
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRCT10DIF is not set
# CONFIG_CRYPTO_CRC64_ROCKSOFT is not set
# end of CRCs (cyclic redundancy checks)
#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
CONFIG_CRYPTO_LZ4=m
# CONFIG_CRYPTO_LZ4HC is not set
CONFIG_CRYPTO_ZSTD=y
# end of Compression
#
# Random number generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_DRBG_MENU=m
CONFIG_CRYPTO_DRBG_HMAC=y
# CONFIG_CRYPTO_DRBG_HASH is not set
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=m
CONFIG_CRYPTO_JITTERENTROPY=m
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS=64
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE=32
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
CONFIG_CRYPTO_KDF800108_CTR=y
# end of Random number generation
#
# Userspace interface
#
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=m
# CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE is not set
# CONFIG_CRYPTO_STATS is not set
# end of Userspace interface
CONFIG_CRYPTO_HASH_INFO=y
#
# Accelerated Cryptographic Algorithms for CPU (powerpc)
#
CONFIG_CRYPTO_MD5_PPC=m
CONFIG_CRYPTO_SHA1_PPC=m
# end of Accelerated Cryptographic Algorithms for CPU (powerpc)
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_HIFN_795X is not set
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_VIRTIO=m
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_CCREE is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS8_PRIVATE_KEY_PARSER=m
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set
# CONFIG_FIPS_SIGNATURE_SELFTEST is not set
#
# Certificates for signature checking
#
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
# end of Certificates for signature checking
CONFIG_BINARY_PRINTF=y
#
# Library routines
#
CONFIG_RAID6_PQ=y
CONFIG_RAID6_PQ_BENCHMARK=y
CONFIG_LINEAR_RANGES=m
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
# CONFIG_CORDIC is not set
CONFIG_PRIME_NUMBERS=m
#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_UTILS=y
CONFIG_CRYPTO_LIB_AES=m
CONFIG_CRYPTO_LIB_ARC4=m
CONFIG_CRYPTO_LIB_GF128MUL=m
CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
CONFIG_CRYPTO_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=m
CONFIG_CRYPTO_LIB_CURVE25519=m
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=m
CONFIG_CRYPTO_LIB_POLY1305=m
CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m
CONFIG_CRYPTO_LIB_SHA1=y
CONFIG_CRYPTO_LIB_SHA256=y
# end of Crypto library routines
# CONFIG_CRC_CCITT is not set
CONFIG_CRC16=y
# CONFIG_CRC_T10DIF is not set
# CONFIG_CRC64_ROCKSOFT is not set
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC64=m
# CONFIG_CRC4 is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=y
# CONFIG_CRC8 is not set
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_COMPRESS=m
CONFIG_LZ4HC_COMPRESS=m
CONFIG_LZ4_DECOMPRESS=m
CONFIG_ZSTD_COMMON=y
CONFIG_ZSTD_COMPRESS=y
CONFIG_ZSTD_DECOMPRESS=y
# CONFIG_XZ_DEC is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC16=y
CONFIG_REED_SOLOMON_DEC16=y
CONFIG_INTERVAL_TREE=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_CLOSURES=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_DMA_DECLARE_COHERENT=y
CONFIG_ARCH_DMA_DEFAULT_COHERENT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
# CONFIG_FORCE_NR_CPUS is not set
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_GENERIC_ATOMIC64=y
CONFIG_CLZ_TAB=y
# CONFIG_IRQ_POLL is not set
CONFIG_MPILIB=y
CONFIG_DIMLIB=y
CONFIG_LIBFDT=y
CONFIG_OID_REGISTRY=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_FONT_SUN8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_STACKWALK=y
CONFIG_STACKDEPOT=y
CONFIG_STACKDEPOT_ALWAYS_INIT=y
CONFIG_STACKDEPOT_MAX_FRAMES=64
CONFIG_SBITMAP=y
# CONFIG_LWQ_TEST is not set
# end of Library routines
CONFIG_GENERIC_IOREMAP=y
#
# Kernel hacking
#
#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
# CONFIG_STACKTRACE_BUILD_ID is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DYNAMIC_DEBUG_CORE is not set
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_MISC is not set
#
# Compile-time checks and compiler options
#
CONFIG_AS_HAS_NON_CONST_ULEB128=y
CONFIG_DEBUG_INFO_NONE=y
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
# CONFIG_DEBUG_INFO_DWARF4 is not set
# CONFIG_DEBUG_INFO_DWARF5 is not set
CONFIG_FRAME_WARN=1024
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set
# CONFIG_VMLINUX_MAP is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options
#
# Generic Kernel Debugging Instruments
#
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN=y
# CONFIG_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_HAVE_KCSAN_COMPILER=y
CONFIG_KCSAN=y
CONFIG_KCSAN_SELFTEST=y
CONFIG_KCSAN_EARLY_ENABLE=y
CONFIG_KCSAN_NUM_WATCHPOINTS=64
CONFIG_KCSAN_UDELAY_TASK=80
CONFIG_KCSAN_UDELAY_INTERRUPT=20
# CONFIG_KCSAN_DELAY_RANDOMIZE is not set
CONFIG_KCSAN_SKIP_WATCH=4000
# CONFIG_KCSAN_SKIP_WATCH_RANDOMIZE is not set
CONFIG_KCSAN_INTERRUPT_WATCHER=y
CONFIG_KCSAN_REPORT_ONCE_IN_MS=3000
CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=y
CONFIG_KCSAN_STRICT=y
CONFIG_KCSAN_WEAK_MEMORY=y
# end of Generic Kernel Debugging Instruments
#
# Networking Debugging
#
# CONFIG_NET_DEV_REFCNT_TRACKER is not set
# CONFIG_NET_NS_REFCNT_TRACKER is not set
# CONFIG_DEBUG_NET is not set
# end of Networking Debugging
#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
CONFIG_DEBUG_PAGEALLOC=y
# CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT is not set
CONFIG_SLUB_DEBUG=y
CONFIG_SLUB_DEBUG_ON=y
CONFIG_PAGE_OWNER=y
CONFIG_PAGE_POISONING=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_ARCH_HAS_DEBUG_WX=y
CONFIG_DEBUG_WX=y
CONFIG_GENERIC_PTDUMP=y
CONFIG_PTDUMP_CORE=y
# CONFIG_PTDUMP_DEBUGFS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SHRINKER_DEBUG is not set
# CONFIG_DEBUG_STACK_USAGE is not set
CONFIG_SCHED_STACK_END_CHECK=y
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
CONFIG_DEBUG_VM_PGTABLE=y
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_DEBUG_KMAP_LOCAL is not set
# CONFIG_DEBUG_HIGHMEM is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
CONFIG_HAVE_ARCH_KFENCE=y
# CONFIG_KFENCE is not set
# end of Memory Debugging
CONFIG_DEBUG_SHIRQ=y
#
# Debug Oops, Lockups and Hangs
#
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_HAVE_HARDLOCKUP_DETECTOR_BUDDY=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_HARDLOCKUP_DETECTOR_PERF is not set
CONFIG_HARDLOCKUP_DETECTOR_BUDDY=y
# CONFIG_HARDLOCKUP_DETECTOR_ARCH is not set
CONFIG_HARDLOCKUP_DETECTOR_COUNTS_HRTIMER=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=60
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_WQ_WATCHDOG=y
# CONFIG_WQ_CPU_INTENSIVE_REPORT is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs
#
# Scheduler Debugging
#
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHED_INFO=y
# CONFIG_SCHEDSTATS is not set
# end of Scheduler Debugging
# CONFIG_DEBUG_TIMEKEEPING is not set
#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
CONFIG_DEBUG_RWSEMS=y
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)
CONFIG_DEBUG_IRQFLAGS=y
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set
#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
CONFIG_DEBUG_CLOSURES=y
CONFIG_DEBUG_MAPLE_TREE=y
# end of Debug kernel data structures
#
# RCU Debugging
#
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0
CONFIG_RCU_CPU_STALL_CPUTIME=y
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging
# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
# CONFIG_LATENCYTOP is not set
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_SAMPLES is not set
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
CONFIG_IO_STRICT_DEVMEM=y
#
# powerpc Debugging
#
# CONFIG_PPC_DISABLE_WERROR is not set
CONFIG_PPC_WERROR=y
CONFIG_PRINT_STACK_DEPTH=64
# CONFIG_PPC_EMULATED_STATS is not set
# CONFIG_CODE_PATCHING_SELFTEST is not set
# CONFIG_JUMP_LABEL_FEATURE_CHECKS is not set
# CONFIG_FTR_FIXUP_SELFTEST is not set
# CONFIG_MSI_BITMAP_SELFTEST is not set
# CONFIG_XMON is not set
# CONFIG_BDI_SWITCH is not set
CONFIG_BOOTX_TEXT=y
# CONFIG_PPC_EARLY_DEBUG is not set
# end of powerpc Debugging
#
# Kernel Testing and Coverage
#
CONFIG_KUNIT=m
CONFIG_KUNIT_DEBUGFS=y
CONFIG_KUNIT_TEST=m
# CONFIG_KUNIT_EXAMPLE_TEST is not set
# CONFIG_KUNIT_ALL_TESTS is not set
CONFIG_KUNIT_DEFAULT_ENABLED=y
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_TEST_DHRY is not set
# CONFIG_LKDTM is not set
CONFIG_CPUMASK_KUNIT_TEST=m
CONFIG_TEST_LIST_SORT=m
CONFIG_TEST_MIN_HEAP=m
CONFIG_TEST_SORT=m
CONFIG_TEST_DIV64=m
CONFIG_TEST_IOV_ITER=m
CONFIG_BACKTRACE_SELF_TEST=m
# CONFIG_TEST_REF_TRACKER is not set
CONFIG_RBTREE_TEST=m
CONFIG_REED_SOLOMON_TEST=m
CONFIG_INTERVAL_TREE_TEST=m
CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
CONFIG_ASYNC_RAID6_TEST=m
# CONFIG_TEST_HEXDUMP is not set
CONFIG_STRING_KUNIT_TEST=m
CONFIG_STRING_HELPERS_KUNIT_TEST=m
CONFIG_TEST_KSTRTOX=y
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_SCANF is not set
# CONFIG_TEST_BITMAP is not set
CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_MAPLE_TREE=m
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m
CONFIG_TEST_USER_COPY=m
CONFIG_TEST_BPF=m
# CONFIG_TEST_BLACKHOLE_DEV is not set
CONFIG_FIND_BIT_BENCHMARK=m
# CONFIG_TEST_FIRMWARE is not set
CONFIG_TEST_SYSCTL=m
CONFIG_BITFIELD_KUNIT=m
CONFIG_CHECKSUM_KUNIT=m
CONFIG_HASH_KUNIT_TEST=m
CONFIG_RESOURCE_KUNIT_TEST=m
CONFIG_SYSCTL_KUNIT_TEST=m
CONFIG_LIST_KUNIT_TEST=m
CONFIG_HASHTABLE_KUNIT_TEST=m
CONFIG_LINEAR_RANGES_TEST=m
CONFIG_CMDLINE_KUNIT_TEST=m
CONFIG_BITS_TEST=m
CONFIG_SLUB_KUNIT_TEST=m
CONFIG_MEMCPY_KUNIT_TEST=m
CONFIG_IS_SIGNED_TYPE_KUNIT_TEST=m
CONFIG_OVERFLOW_KUNIT_TEST=m
CONFIG_STACKINIT_KUNIT_TEST=m
CONFIG_FORTIFY_KUNIT_TEST=m
CONFIG_STRCAT_KUNIT_TEST=m
CONFIG_STRSCPY_KUNIT_TEST=m
CONFIG_SIPHASH_KUNIT_TEST=m
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
CONFIG_TEST_MEMCAT_P=m
CONFIG_TEST_MEMINIT=m
CONFIG_TEST_FREE_PAGES=m
CONFIG_TEST_OBJPOOL=m
CONFIG_ARCH_USE_MEMTEST=y
# CONFIG_MEMTEST is not set
# end of Kernel Testing and Coverage
#
# Rust hacking
#
# end of Rust hacking
# end of Kernel hacking
[-- Attachment #3: dmesg_69-rc4_g4_04 --]
[-- Type: application/octet-stream, Size: 77070 bytes --]
[ 60.350911] interrupt_async_enter_prepare+0x64/0xc4
[ 60.374183] do_IRQ+0x18/0x2c
[ 60.397365] HardwareInterrupt_virt+0x108/0x10c
[ 60.420718] do_raw_spin_unlock+0x10c/0x130
[ 60.444258] 0x9032
[ 60.467597] kcsan_setup_watchpoint+0x300/0x4cc
[ 60.491224] kernel_wait4+0x17c/0x200
[ 60.514737] sys_wait4+0x84/0xe0
[ 60.538119] system_call_exception+0x15c/0x1c0
[ 60.561604] ret_from_syscall+0x0/0x2c
[ 60.609428] write to 0xc2eff19c of 4 bytes by task 114 on cpu 0:
[ 60.633822] kernel_wait4+0x17c/0x200
[ 60.658312] sys_wait4+0x84/0xe0
[ 60.682758] system_call_exception+0x15c/0x1c0
[ 60.707358] ret_from_syscall+0x0/0x2c
[ 60.756267] Reported by Kernel Concurrency Sanitizer on:
[ 60.780795] CPU: 0 PID: 114 Comm: gendepends.sh Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 60.805881] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 60.831112] ==================================================================
[ 67.142520] ==================================================================
[ 67.168991] BUG: KCSAN: data-race in handle_mm_fault / save_stack
[ 67.221726] read to 0xc2ef9b10 of 2 bytes by interrupt on cpu 0:
[ 67.248713] save_stack+0x3c/0xec
[ 67.275637] __reset_page_owner+0xd8/0x234
[ 67.302694] free_unref_page_prepare+0x124/0x1dc
[ 67.329878] free_unref_page+0x40/0x114
[ 67.356996] pagetable_free+0x48/0x60
[ 67.384066] pte_free_now+0x50/0x74
[ 67.411031] pte_fragment_free+0x198/0x19c
[ 67.437970] pgtable_free+0x34/0x78
[ 67.464778] tlb_remove_table_rcu+0x8c/0x90
[ 67.491565] rcu_core+0x564/0xa88
[ 67.518043] rcu_core_si+0x20/0x3c
[ 67.544219] __do_softirq+0x1dc/0x218
[ 67.570202] do_softirq_own_stack+0x54/0x74
[ 67.595632] do_softirq_own_stack+0x44/0x74
[ 67.620352] __irq_exit_rcu+0x6c/0xbc
[ 67.644834] irq_exit+0x10/0x20
[ 67.669066] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 67.693435] timer_interrupt+0x64/0x178
[ 67.717627] Decrementer_virt+0x108/0x10c
[ 67.741776] 0xc1f1a6a0
[ 67.765735] 0xc1f1a6a0
[ 67.789591] kcsan_setup_watchpoint+0x300/0x4cc
[ 67.813724] handle_mm_fault+0x214/0xed0
[ 67.837916] ___do_page_fault+0x4d8/0x630
[ 67.862248] do_page_fault+0x28/0x40
[ 67.886576] DataAccess_virt+0x124/0x17c
[ 67.935091] write to 0xc2ef9b10 of 2 bytes by task 329 on cpu 0:
[ 67.959710] handle_mm_fault+0x214/0xed0
[ 67.984283] ___do_page_fault+0x4d8/0x630
[ 68.009051] do_page_fault+0x28/0x40
[ 68.033783] DataAccess_virt+0x124/0x17c
[ 68.083292] Reported by Kernel Concurrency Sanitizer on:
[ 68.108461] CPU: 0 PID: 329 Comm: grep Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 68.133782] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 68.158952] ==================================================================
[ 75.578869] ==================================================================
[ 75.604454] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 75.655432] write (marked) to 0xeedc9c11 of 1 bytes by interrupt on cpu 1:
[ 75.681312] rcu_report_qs_rdp+0x15c/0x18c
[ 75.707121] rcu_core+0x1f0/0xa88
[ 75.732883] rcu_core_si+0x20/0x3c
[ 75.758555] __do_softirq+0x1dc/0x218
[ 75.784228] do_softirq_own_stack+0x54/0x74
[ 75.809978] do_softirq_own_stack+0x44/0x74
[ 75.835450] __irq_exit_rcu+0x6c/0xbc
[ 75.860603] irq_exit+0x10/0x20
[ 75.885401] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 75.910261] timer_interrupt+0x64/0x178
[ 75.934741] Decrementer_virt+0x108/0x10c
[ 75.959042] 0x15
[ 75.983236] 0x0
[ 76.006939] kcsan_setup_watchpoint+0x300/0x4cc
[ 76.030722] rcu_all_qs+0x58/0x17c
[ 76.054281] __cond_resched+0x50/0x58
[ 76.077660] down_read+0x20/0x16c
[ 76.100808] walk_component+0xf4/0x150
[ 76.123982] path_lookupat+0xe8/0x21c
[ 76.147079] filename_lookup+0x90/0x100
[ 76.170236] user_path_at_empty+0x58/0x90
[ 76.193421] do_readlinkat+0x74/0x180
[ 76.216588] sys_readlinkat+0x5c/0x88
[ 76.239765] system_call_exception+0x15c/0x1c0
[ 76.263040] ret_from_syscall+0x0/0x2c
[ 76.309124] read to 0xeedc9c11 of 1 bytes by task 528 on cpu 1:
[ 76.332648] rcu_all_qs+0x58/0x17c
[ 76.356255] __cond_resched+0x50/0x58
[ 76.379844] down_read+0x20/0x16c
[ 76.403551] walk_component+0xf4/0x150
[ 76.427278] path_lookupat+0xe8/0x21c
[ 76.451026] filename_lookup+0x90/0x100
[ 76.474683] user_path_at_empty+0x58/0x90
[ 76.498267] do_readlinkat+0x74/0x180
[ 76.521790] sys_readlinkat+0x5c/0x88
[ 76.545297] system_call_exception+0x15c/0x1c0
[ 76.569079] ret_from_syscall+0x0/0x2c
[ 76.616105] Reported by Kernel Concurrency Sanitizer on:
[ 76.639868] CPU: 1 PID: 528 Comm: udevadm Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 76.664100] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 76.688790] ==================================================================
[ 84.242338] ohci-pci 0001:00:12.0: OHCI PCI host controller
[ 84.354205] ohci-pci 0001:00:12.0: new USB bus registered, assigned bus number 3
[ 84.435743] ohci-pci 0001:00:12.0: irq 52, io mem 0x8008c000
[ 84.686185] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 84.727113] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 84.767527] usb usb3: Product: OHCI PCI host controller
[ 84.807744] usb usb3: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 84.849003] usb usb3: SerialNumber: 0001:00:12.0
[ 84.902522] hub 3-0:1.0: USB hub found
[ 84.944146] hub 3-0:1.0: 3 ports detected
[ 85.151114] ohci-pci 0001:00:12.1: OHCI PCI host controller
[ 85.392801] ohci-pci 0001:00:12.1: new USB bus registered, assigned bus number 4
[ 85.512940] ohci-pci 0001:00:12.1: irq 52, io mem 0x8008b000
[ 85.819520] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 85.861383] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 85.902304] usb usb4: Product: OHCI PCI host controller
[ 85.943139] usb usb4: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 85.982851] usb usb4: SerialNumber: 0001:00:12.1
[ 86.066872] hub 4-0:1.0: USB hub found
[ 86.117898] hub 4-0:1.0: 2 ports detected
[ 86.381077] Apple USB OHCI 0001:00:18.0 disabled by firmware
[ 86.707225] Apple USB OHCI 0001:00:19.0 disabled by firmware
[ 86.921002] ohci-pci 0001:00:1b.0: OHCI PCI host controller
[ 86.960853] ohci-pci 0001:00:1b.0: new USB bus registered, assigned bus number 5
[ 87.011362] ohci-pci 0001:00:1b.0: irq 63, io mem 0x80084000
[ 87.266252] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 87.306689] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 87.346175] usb usb5: Product: OHCI PCI host controller
[ 87.388986] usb usb5: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 87.428575] usb usb5: SerialNumber: 0001:00:1b.0
[ 87.503678] b43-pci-bridge 0001:00:16.0: enabling device (0004 -> 0006)
[ 87.616976] ssb: Found chip with id 0x4306, rev 0x02 and package 0x00
[ 87.877391] hub 5-0:1.0: USB hub found
[ 88.188820] b43-pci-bridge 0001:00:16.0: Sonics Silicon Backplane found on PCI device 0001:00:16.0
[ 88.429085] hub 5-0:1.0: 3 ports detected
[ 88.990850] ohci-pci 0001:00:1b.1: OHCI PCI host controller
[ 89.412328] ohci-pci 0001:00:1b.1: new USB bus registered, assigned bus number 6
[ 89.547659] ohci-pci 0001:00:1b.1: irq 63, io mem 0x80083000
[ 90.020865] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 90.065497] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 90.110271] usb usb6: Product: OHCI PCI host controller
[ 90.154401] usb usb6: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 90.200694] usb usb6: SerialNumber: 0001:00:1b.1
[ 90.204953] [drm] radeon kernel modesetting enabled.
[ 90.612186] Console: switching to colour dummy device 80x25
[ 90.649146] hub 6-0:1.0: USB hub found
[ 90.649547] hub 6-0:1.0: 2 ports detected
[ 90.700923] radeon 0000:00:10.0: enabling device (0006 -> 0007)
[ 90.786008] [drm] initializing kernel modesetting (RV350 0x1002:0x4150 0x1002:0x0002 0x00).
[ 90.786633] [drm] Forcing AGP to PCI mode
[ 90.787252] radeon 0000:00:10.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000
[ 91.273734] [drm] Generation 2 PCI interface, using max accessible memory
[ 91.274292] radeon 0000:00:10.0: VRAM: 256M 0x00000000A0000000 - 0x00000000AFFFFFFF (256M used)
[ 91.274688] radeon 0000:00:10.0: GTT: 512M 0x0000000080000000 - 0x000000009FFFFFFF
[ 91.275283] [drm] Detected VRAM RAM=256M, BAR=256M
[ 91.275763] [drm] RAM width 128bits DDR
[ 91.303103] [drm] radeon: 256M of VRAM memory ready
[ 91.303385] [drm] radeon: 512M of GTT memory ready.
[ 91.304588] [drm] GART: num cpu pages 131072, num gpu pages 131072
[ 91.897823] [drm] radeon: 1 quad pipes, 1 Z pipes initialized
[ 91.898352] [drm] PCI GART of 512M enabled (table at 0x0000000003B00000).
[ 91.922492] radeon 0000:00:10.0: WB enabled
[ 91.922938] radeon 0000:00:10.0: fence driver on ring 0 use gpu addr 0x0000000080000000
[ 91.951295] [drm] radeon: irq initialized.
[ 91.951821] [drm] Loading R300 Microcode
[ 92.296417] [drm] radeon: ring at 0x0000000080001000
[ 92.298345] [drm] ring test succeeded in 0 usecs
[ 92.319800] random: crng init done
[ 92.550561] [drm] ib test succeeded in 0 usecs
[ 92.920129] [drm] Radeon Display Connectors
[ 92.920466] [drm] Connector 0:
[ 92.920726] [drm] DVI-I-1
[ 92.920960] [drm] HPD2
[ 92.921186] [drm] DDC: 0x64 0x64 0x64 0x64 0x64 0x64 0x64 0x64
[ 92.921575] [drm] Encoders:
[ 92.921822] [drm] CRT1: INTERNAL_DAC1
[ 92.922129] [drm] DFP2: INTERNAL_DVO1
[ 92.922504] [drm] Connector 1:
[ 92.922739] [drm] DVI-I-2
[ 92.923049] [drm] HPD1
[ 92.923274] [drm] DDC: 0x60 0x60 0x60 0x60 0x60 0x60 0x60 0x60
[ 92.923691] [drm] Encoders:
[ 92.923857] [drm] CRT2: INTERNAL_DAC2
[ 92.924125] [drm] DFP1: INTERNAL_TMDS1
[ 92.970473] [drm] Initialized radeon 2.50.0 20080528 for 0000:00:10.0 on minor 0
[ 92.992946] ==================================================================
[ 92.993307] BUG: KCSAN: data-race in blk_finish_plug / blk_time_get_ns
[ 92.993726] read to 0xc1fb63b0 of 4 bytes by interrupt on cpu 0:
[ 92.993948] blk_time_get_ns+0x24/0xf4
[ 92.994185] __blk_mq_end_request+0x58/0xe8
[ 92.994408] scsi_end_request+0x120/0x2d4
[ 92.994652] scsi_io_completion+0x290/0x6b4
[ 92.994894] scsi_finish_command+0x160/0x1a4
[ 92.995116] scsi_complete+0xf0/0x128
[ 92.995349] blk_complete_reqs+0xb4/0xd8
[ 92.995554] blk_done_softirq+0x68/0xa4
[ 92.995758] __do_softirq+0x1dc/0x218
[ 92.995990] do_softirq_own_stack+0x54/0x74
[ 92.996225] do_softirq_own_stack+0x44/0x74
[ 92.996456] __irq_exit_rcu+0x6c/0xbc
[ 92.996673] irq_exit+0x10/0x20
[ 92.996881] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 92.997135] do_IRQ+0x24/0x2c
[ 92.997343] HardwareInterrupt_virt+0x108/0x10c
[ 92.997572] 0x40
[ 92.997740] 0x40
[ 92.997901] kcsan_setup_watchpoint+0x300/0x4cc
[ 92.998120] blk_finish_plug+0x48/0x6c
[ 92.998323] read_pages+0xf0/0x214
[ 92.998543] page_cache_ra_unbounded+0x120/0x244
[ 92.998787] do_page_cache_ra+0x90/0xb8
[ 92.999012] force_page_cache_ra+0x12c/0x130
[ 92.999247] page_cache_sync_ra+0xc4/0xdc
[ 92.999476] filemap_get_pages+0x1a4/0x708
[ 92.999723] filemap_read+0x204/0x4c0
[ 92.999952] blkdev_read_iter+0x1e8/0x25c
[ 93.000181] vfs_read+0x29c/0x2f4
[ 93.000389] ksys_read+0xb8/0x134
[ 93.000599] sys_read+0x4c/0x74
[ 93.000802] system_call_exception+0x15c/0x1c0
[ 93.001042] ret_from_syscall+0x0/0x2c
[ 93.001387] write to 0xc1fb63b0 of 4 bytes by task 575 on cpu 0:
[ 93.001609] blk_finish_plug+0x48/0x6c
[ 93.001814] read_pages+0xf0/0x214
[ 93.002031] page_cache_ra_unbounded+0x120/0x244
[ 93.002271] do_page_cache_ra+0x90/0xb8
[ 93.002496] force_page_cache_ra+0x12c/0x130
[ 93.002730] page_cache_sync_ra+0xc4/0xdc
[ 93.002959] filemap_get_pages+0x1a4/0x708
[ 93.003197] filemap_read+0x204/0x4c0
[ 93.003428] blkdev_read_iter+0x1e8/0x25c
[ 93.003652] vfs_read+0x29c/0x2f4
[ 93.003858] ksys_read+0xb8/0x134
[ 93.004065] sys_read+0x4c/0x74
[ 93.004268] system_call_exception+0x15c/0x1c0
[ 93.004504] ret_from_syscall+0x0/0x2c
[ 93.004842] Reported by Kernel Concurrency Sanitizer on:
[ 93.005036] CPU: 0 PID: 575 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 93.005309] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 93.005517] ==================================================================
[ 93.873453] [drm] fb mappable at 0xA0040000
[ 93.873817] [drm] vram apper at 0xA0000000
[ 93.874106] [drm] size 8294400
[ 93.874361] [drm] fb depth is 24
[ 93.874538] [drm] pitch is 7680
[ 94.252525] Console: switching to colour frame buffer device 240x67
[ 95.062293] radeon 0000:00:10.0: [drm] fb0: radeondrmfb frame buffer device
[ 97.049715] firewire_ohci 0002:00:0e.0: enabling device (0000 -> 0002)
[ 97.199210] firewire_ohci 0002:00:0e.0: added OHCI v1.10 device as card 0, 8 IR + 8 IT contexts, quirks 0x0
[ 97.412736] gem 0002:00:0f.0 enP2p0s15: renamed from eth0 (while UP)
[ 97.613568] ADM1030 fan controller [@2c]
[ 97.685542] DS1775 digital thermometer [@49]
[ 97.687865] Temp: 58.8 C
[ 97.687914] Hyst: 70.0 C
[ 97.689321] OS: 75.0 C
[ 97.741434] firewire_core 0002:00:0e.0: created device fw0: GUID 000a95fffe9c763a, S800
[ 99.215587] ==================================================================
[ 99.217409] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 99.219434] write (marked) to 0xeedacc11 of 1 bytes by interrupt on cpu 0:
[ 99.221074] rcu_report_qs_rdp+0x15c/0x18c
[ 99.222137] rcu_core+0x1f0/0xa88
[ 99.223034] rcu_core_si+0x20/0x3c
[ 99.223948] __do_softirq+0x1dc/0x218
[ 99.224944] do_softirq_own_stack+0x54/0x74
[ 99.226047] do_softirq_own_stack+0x44/0x74
[ 99.227145] __irq_exit_rcu+0x6c/0xbc
[ 99.228124] irq_exit+0x10/0x20
[ 99.228992] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 99.230356] timer_interrupt+0x64/0x178
[ 99.231364] Decrementer_virt+0x108/0x10c
[ 99.232415] 0x1
[ 99.232987] 0x5c
[ 99.233571] kcsan_setup_watchpoint+0x300/0x4cc
[ 99.234725] rcu_all_qs+0x58/0x17c
[ 99.235645] __cond_resched+0x50/0x58
[ 99.236623] kmem_cache_alloc+0x48/0x228
[ 99.237670] anon_vma_fork+0xbc/0x1e8
[ 99.238635] copy_process+0x1f14/0x3324
[ 99.239672] kernel_clone+0x78/0x2d0
[ 99.240641] sys_clone+0xe0/0x110
[ 99.241556] system_call_exception+0x15c/0x1c0
[ 99.242710] ret_from_syscall+0x0/0x2c
[ 99.356241] read to 0xeedacc11 of 1 bytes by task 719 on cpu 0:
[ 99.413875] rcu_all_qs+0x58/0x17c
[ 99.471688] __cond_resched+0x50/0x58
[ 99.529622] kmem_cache_alloc+0x48/0x228
[ 99.587637] anon_vma_fork+0xbc/0x1e8
[ 99.645528] copy_process+0x1f14/0x3324
[ 99.703716] kernel_clone+0x78/0x2d0
[ 99.761923] sys_clone+0xe0/0x110
[ 99.819992] system_call_exception+0x15c/0x1c0
[ 99.878269] ret_from_syscall+0x0/0x2c
[ 99.993841] Reported by Kernel Concurrency Sanitizer on:
[ 100.051585] CPU: 0 PID: 719 Comm: openrc-run.sh Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 100.110064] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 100.168370] ==================================================================
[ 101.851821] EXT4-fs (sda5): re-mounted fa07e66f-b4f9-404f-85d8-487d3c097aec r/w. Quota mode: disabled.
[ 102.483920] EXT4-fs (sda5): re-mounted fa07e66f-b4f9-404f-85d8-487d3c097aec r/w. Quota mode: disabled.
[ 104.866209] snd-aoa-fabric-layout: Using direct GPIOs
[ 105.217508] snd-aoa-fabric-layout: can use this codec
[ 105.470497] snd-aoa-codec-tas: tas found, addr 0x35 on /pci@f2000000/mac-io@17/i2c@18000/deq@6a
[ 105.907575] CPU-temp: 58.9 C
[ 105.907650] , Case: 35.5 C
[ 106.016350] , Fan: 5 (tuned -6)
[ 106.679581] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 107.172258] ==================================================================
[ 107.235050] BUG: KCSAN: data-race in _copy_to_user / interrupt_async_enter_prepare
[ 107.360040] read to 0xc3499f5c of 4 bytes by task 547 on cpu 1:
[ 107.423383] interrupt_async_enter_prepare+0x64/0xc4
[ 107.487499] do_IRQ+0x18/0x2c
[ 107.551661] HardwareInterrupt_virt+0x108/0x10c
[ 107.616591] 0xbc4640
[ 107.680385] 0xd
[ 107.742790] kcsan_setup_watchpoint+0x300/0x4cc
[ 107.805424] _copy_to_user+0x9c/0xdc
[ 107.867387] cp_statx+0x348/0x384
[ 107.928284] do_statx+0xc8/0xfc
[ 107.988247] sys_statx+0x8c/0xc8
[ 108.047635] system_call_exception+0x15c/0x1c0
[ 108.106929] ret_from_syscall+0x0/0x2c
[ 108.223641] write to 0xc3499f5c of 4 bytes by task 547 on cpu 1:
[ 108.283215] _copy_to_user+0x9c/0xdc
[ 108.342989] cp_statx+0x348/0x384
[ 108.402639] do_statx+0xc8/0xfc
[ 108.462438] sys_statx+0x8c/0xc8
[ 108.522074] system_call_exception+0x15c/0x1c0
[ 108.582153] ret_from_syscall+0x0/0x2c
[ 108.700558] Reported by Kernel Concurrency Sanitizer on:
[ 108.760385] CPU: 1 PID: 547 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 108.821586] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 108.883577] ==================================================================
[ 108.925512] Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 109.199155] Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
[ 109.276375] Adding 8388604k swap on /dev/sdb6. Priority:-2 extents:1 across:8388604k
[ 109.314175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 109.544449] cfg80211: failed to load regulatory.db
[ 110.594360] b43legacy-phy0: Broadcom 4306 WLAN found (core revision 4)
[ 110.742139] b43legacy-phy0 debug: Found PHY: Analog 1, Type 2, Revision 1
[ 110.742258] b43legacy-phy0 debug: Found Radio: Manuf 0x17F, Version 0x2050, Revision 2
[ 110.775448] b43legacy-phy0 debug: Radio initialized
[ 110.778851] Broadcom 43xx-legacy driver loaded [ Features: PLID ]
[ 110.900422] b43legacy-phy0: Loading firmware b43legacy/ucode4.fw
[ 111.029503] b43legacy-phy0: Loading firmware b43legacy/pcm4.fw
[ 111.153092] b43legacy-phy0: Loading firmware b43legacy/b0g0initvals2.fw
[ 111.287784] ieee80211 phy0: Selected rate control algorithm 'minstrel_ht'
[ 111.647673] EXT4-fs (sdc5): mounting ext2 file system using the ext4 subsystem
[ 111.800289] EXT4-fs (sdc5): mounted filesystem e4e8af9e-0f0d-44f9-b983-71bf61d782de r/w without journal. Quota mode: disabled.
[ 111.927130] ext2 filesystem being mounted at /boot supports timestamps until 2038-01-19 (0x7fffffff)
[ 112.067788] BTRFS: device label tmp devid 1 transid 2859 /dev/sda6 (8:6) scanned by mount (899)
[ 112.207634] BTRFS info (device sda6): first mount of filesystem 65162d91-887e-4e48-a356-fbf7093eefb5
[ 112.340711] BTRFS info (device sda6): using xxhash64 (xxhash64-generic) checksum algorithm
[ 112.473698] BTRFS info (device sda6): using free-space-tree
[ 134.785416] b43legacy-phy0: Loading firmware version 0x127, patch level 14 (2005-04-18 02:36:27)
[ 134.872724] b43legacy-phy0 debug: Chip initialized
[ 134.918765] b43legacy-phy0 debug: 30-bit DMA initialized
[ 134.930672] b43legacy-phy0 debug: Wireless interface started
[ 134.930824] b43legacy-phy0 debug: Adding Interface type 2
[ 135.340440] NET: Registered PF_PACKET protocol family
[ 142.262239] ==================================================================
[ 142.262373] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 142.262522] write (marked) to 0xeedacc11 of 1 bytes by interrupt on cpu 0:
[ 142.262599] rcu_report_qs_rdp+0x15c/0x18c
[ 142.262688] rcu_core+0x1f0/0xa88
[ 142.262775] rcu_core_si+0x20/0x3c
[ 142.262862] __do_softirq+0x1dc/0x218
[ 142.262974] do_softirq_own_stack+0x54/0x74
[ 142.263084] do_softirq_own_stack+0x44/0x74
[ 142.263190] __irq_exit_rcu+0x6c/0xbc
[ 142.263287] irq_exit+0x10/0x20
[ 142.263380] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 142.263478] timer_interrupt+0x64/0x178
[ 142.263564] Decrementer_virt+0x108/0x10c
[ 142.263659] 0xf393dd80
[ 142.263737] 0xc1b7f120
[ 142.263808] kcsan_setup_watchpoint+0x300/0x4cc
[ 142.263898] rcu_all_qs+0x58/0x17c
[ 142.263989] __cond_resched+0x50/0x58
[ 142.264078] dput+0x28/0x90
[ 142.264174] path_put+0x2c/0x54
[ 142.264271] terminate_walk+0x80/0x110
[ 142.264371] path_lookupat+0x120/0x21c
[ 142.264481] filename_lookup+0x90/0x100
[ 142.264594] vfs_statx+0x8c/0x25c
[ 142.264674] do_statx+0xb4/0xfc
[ 142.264754] sys_statx+0x8c/0xc8
[ 142.264836] system_call_exception+0x15c/0x1c0
[ 142.264945] ret_from_syscall+0x0/0x2c
[ 142.265079] read to 0xeedacc11 of 1 bytes by task 1278 on cpu 0:
[ 142.265153] rcu_all_qs+0x58/0x17c
[ 142.265245] __cond_resched+0x50/0x58
[ 142.265333] dput+0x28/0x90
[ 142.265426] path_put+0x2c/0x54
[ 142.265520] terminate_walk+0x80/0x110
[ 142.265620] path_lookupat+0x120/0x21c
[ 142.265729] filename_lookup+0x90/0x100
[ 142.265841] vfs_statx+0x8c/0x25c
[ 142.265921] do_statx+0xb4/0xfc
[ 142.266001] sys_statx+0x8c/0xc8
[ 142.266082] system_call_exception+0x15c/0x1c0
[ 142.266189] ret_from_syscall+0x0/0x2c
[ 142.266315] Reported by Kernel Concurrency Sanitizer on:
[ 142.266370] CPU: 0 PID: 1278 Comm: openrc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 142.266464] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 142.266525] ==================================================================
[ 146.864470] CPU-temp: 59.2 C
[ 146.864533] , Case: 35.6 C
[ 146.864575] , Fan: 6 (tuned +1)
[ 155.274777] ==================================================================
[ 155.274912] BUG: KCSAN: data-race in do_sys_poll / interrupt_async_enter_prepare
[ 155.275072] read to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 155.275146] interrupt_async_enter_prepare+0x64/0xc4
[ 155.275243] timer_interrupt+0x1c/0x178
[ 155.275329] Decrementer_virt+0x108/0x10c
[ 155.275425] do_raw_spin_unlock+0x10c/0x130
[ 155.275526] 0x9032
[ 155.275599] kcsan_setup_watchpoint+0x300/0x4cc
[ 155.275689] do_sys_poll+0x500/0x614
[ 155.275778] sys_poll+0xac/0x160
[ 155.275866] system_call_exception+0x15c/0x1c0
[ 155.275975] ret_from_syscall+0x0/0x2c
[ 155.276106] write to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 155.276180] do_sys_poll+0x500/0x614
[ 155.276269] sys_poll+0xac/0x160
[ 155.276357] system_call_exception+0x15c/0x1c0
[ 155.276464] ret_from_syscall+0x0/0x2c
[ 155.276590] Reported by Kernel Concurrency Sanitizer on:
[ 155.276644] CPU: 0 PID: 1568 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 155.276739] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 155.276799] ==================================================================
[ 212.002338] CPU-temp: 59.6 C
[ 212.002409] , Case: 35.7 C
[ 212.002474] , Fan: 7 (tuned +1)
[ 252.536412] ==================================================================
[ 252.536552] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 252.536727] read to 0xeeda9094 of 1 bytes by interrupt on cpu 1:
[ 252.536803] tmigr_next_groupevt+0x60/0xd8
[ 252.536906] tmigr_handle_remote_up+0x94/0x394
[ 252.537011] __walk_groups+0x74/0xc8
[ 252.537107] tmigr_handle_remote+0x13c/0x198
[ 252.537211] run_timer_softirq+0x94/0x98
[ 252.537320] __do_softirq+0x1dc/0x218
[ 252.537433] do_softirq_own_stack+0x54/0x74
[ 252.537543] do_softirq_own_stack+0x44/0x74
[ 252.537650] __irq_exit_rcu+0x6c/0xbc
[ 252.537747] irq_exit+0x10/0x20
[ 252.537839] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 252.537937] timer_interrupt+0x64/0x178
[ 252.538025] Decrementer_virt+0x108/0x10c
[ 252.538120] _raw_spin_unlock_irqrestore+0x28/0x58
[ 252.538232] free_to_partial_list+0x100/0x3c8
[ 252.538342] kfree+0x15c/0x1bc
[ 252.538439] skb_kfree_head+0x68/0x6c
[ 252.538548] skb_free_head+0xbc/0xc0
[ 252.538628] skb_release_data+0x1c4/0x1d4
[ 252.538714] skb_release_all+0x50/0x70
[ 252.538796] __kfree_skb+0x2c/0x4c
[ 252.538875] kfree_skb_reason+0x34/0x4c
[ 252.538958] kfree_skb+0x28/0x40
[ 252.539039] unix_stream_read_generic+0x9ac/0xae0
[ 252.539138] unix_stream_recvmsg+0x118/0x11c
[ 252.539234] sock_recvmsg_nosec+0x5c/0x88
[ 252.539329] ____sys_recvmsg+0xc4/0x270
[ 252.539427] ___sys_recvmsg+0x90/0xd4
[ 252.539532] __sys_recvmsg+0xb0/0xf8
[ 252.539637] sys_recvmsg+0x50/0x78
[ 252.539740] system_call_exception+0x15c/0x1c0
[ 252.539850] ret_from_syscall+0x0/0x2c
[ 252.539980] write to 0xeeda9094 of 1 bytes by task 0 on cpu 0:
[ 252.540053] tmigr_cpu_activate+0xe8/0x12c
[ 252.540156] timer_clear_idle+0x60/0x80
[ 252.540262] tick_nohz_restart_sched_tick+0x3c/0x170
[ 252.540365] tick_nohz_idle_exit+0xe0/0x158
[ 252.540465] do_idle+0x54/0x11c
[ 252.540547] cpu_startup_entry+0x30/0x34
[ 252.540634] kernel_init+0x0/0x1a4
[ 252.540732] console_on_rootfs+0x0/0xc8
[ 252.540814] 0x3610
[ 252.540926] Reported by Kernel Concurrency Sanitizer on:
[ 252.540981] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 252.541076] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 252.541137] ==================================================================
[ 269.361258] ==================================================================
[ 269.424130] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 269.551580] read to 0xc34987dc of 4 bytes by task 1577 on cpu 0:
[ 269.616042] interrupt_async_enter_prepare+0x64/0xc4
[ 269.680588] do_IRQ+0x18/0x2c
[ 269.745159] HardwareInterrupt_virt+0x108/0x10c
[ 269.810375] ___sys_recvmsg+0xa8/0xd4
[ 269.875466] 0x1
[ 269.939950] kcsan_setup_watchpoint+0x300/0x4cc
[ 270.005262] copy_iovec_from_user+0xb0/0x10c
[ 270.070322] __import_iovec+0xfc/0x22c
[ 270.134934] import_iovec+0x50/0x84
[ 270.199533] copy_msghdr_from_user+0xa0/0xd4
[ 270.264728] ___sys_recvmsg+0x6c/0xd4
[ 270.330041] __sys_recvmsg+0xb0/0xf8
[ 270.395115] sys_recvmsg+0x50/0x78
[ 270.459977] system_call_exception+0x15c/0x1c0
[ 270.525143] ret_from_syscall+0x0/0x2c
[ 270.653525] write to 0xc34987dc of 4 bytes by task 1577 on cpu 0:
[ 270.717547] copy_iovec_from_user+0xb0/0x10c
[ 270.780806] __import_iovec+0xfc/0x22c
[ 270.843348] import_iovec+0x50/0x84
[ 270.905420] copy_msghdr_from_user+0xa0/0xd4
[ 270.966956] ___sys_recvmsg+0x6c/0xd4
[ 271.027596] __sys_recvmsg+0xb0/0xf8
[ 271.087124] sys_recvmsg+0x50/0x78
[ 271.145899] system_call_exception+0x15c/0x1c0
[ 271.204429] ret_from_syscall+0x0/0x2c
[ 271.320364] Reported by Kernel Concurrency Sanitizer on:
[ 271.379532] CPU: 0 PID: 1577 Comm: urxvt Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 271.439191] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 271.498416] ==================================================================
[ 276.865543] CPU-temp: 59.9 C
[ 276.865623] , Case: 35.8 C
[ 276.968161] , Fan: 8 (tuned +1)
[ 279.054669] ==================================================================
[ 279.111269] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 279.223825] read to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 279.280806] interrupt_async_enter_prepare+0x64/0xc4
[ 279.337796] do_IRQ+0x18/0x2c
[ 279.394353] HardwareInterrupt_virt+0x108/0x10c
[ 279.451258] 0x1
[ 279.507766] 0x1000
[ 279.563800] kcsan_setup_watchpoint+0x300/0x4cc
[ 279.620285] copy_iovec_from_user+0xb0/0x10c
[ 279.676778] __import_iovec+0xfc/0x22c
[ 279.733472] import_iovec+0x50/0x84
[ 279.789929] copy_msghdr_from_user+0xa0/0xd4
[ 279.846778] ___sys_recvmsg+0x6c/0xd4
[ 279.903213] __sys_recvmsg+0xb0/0xf8
[ 279.959331] sys_recvmsg+0x50/0x78
[ 280.015040] system_call_exception+0x15c/0x1c0
[ 280.071038] ret_from_syscall+0x0/0x2c
[ 280.183559] write to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 280.241201] copy_iovec_from_user+0xb0/0x10c
[ 280.298804] __import_iovec+0xfc/0x22c
[ 280.356543] import_iovec+0x50/0x84
[ 280.414376] copy_msghdr_from_user+0xa0/0xd4
[ 280.472566] ___sys_recvmsg+0x6c/0xd4
[ 280.531236] __sys_recvmsg+0xb0/0xf8
[ 280.589458] sys_recvmsg+0x50/0x78
[ 280.647220] system_call_exception+0x15c/0x1c0
[ 280.704265] ret_from_syscall+0x0/0x2c
[ 280.815096] Reported by Kernel Concurrency Sanitizer on:
[ 280.870689] CPU: 0 PID: 1568 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 280.927061] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 280.983547] ==================================================================
[ 333.820031] CPU-temp: 60.1 C
[ 333.820104] , Case: 36.0 C
[ 333.922934] , Fan: 9 (tuned +1)
[ 386.720306] ==================================================================
[ 386.780763] BUG: KCSAN: data-race in __run_timer_base / next_expiry_recalc
[ 386.900308] write to 0xeedc4918 of 4 bytes by interrupt on cpu 1:
[ 386.961089] next_expiry_recalc+0xbc/0x15c
[ 387.022044] __run_timer_base+0x278/0x38c
[ 387.083095] run_timer_base+0x5c/0x7c
[ 387.144161] run_timer_softirq+0x34/0x98
[ 387.205064] __do_softirq+0x1dc/0x218
[ 387.265807] do_softirq_own_stack+0x54/0x74
[ 387.326741] do_softirq_own_stack+0x44/0x74
[ 387.386848] __irq_exit_rcu+0x6c/0xbc
[ 387.446427] irq_exit+0x10/0x20
[ 387.505765] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 387.565965] timer_interrupt+0x64/0x178
[ 387.625952] Decrementer_virt+0x108/0x10c
[ 387.685840] default_idle_call+0x38/0x48
[ 387.745740] do_idle+0xfc/0x11c
[ 387.805480] cpu_startup_entry+0x30/0x34
[ 387.865333] start_secondary+0x504/0x854
[ 387.925068] 0x3338
[ 388.042760] read to 0xeedc4918 of 4 bytes by interrupt on cpu 0:
[ 388.101842] __run_timer_base+0x4c/0x38c
[ 388.160468] timer_expire_remote+0x48/0x68
[ 388.218450] tmigr_handle_remote_up+0x1f4/0x394
[ 388.275754] __walk_groups+0x74/0xc8
[ 388.333193] tmigr_handle_remote+0x13c/0x198
[ 388.391077] run_timer_softirq+0x94/0x98
[ 388.448233] __do_softirq+0x1dc/0x218
[ 388.504471] do_softirq_own_stack+0x54/0x74
[ 388.560085] do_softirq_own_stack+0x44/0x74
[ 388.614865] __irq_exit_rcu+0x6c/0xbc
[ 388.669169] irq_exit+0x10/0x20
[ 388.723070] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 388.777663] timer_interrupt+0x64/0x178
[ 388.832063] Decrementer_virt+0x108/0x10c
[ 388.886823] default_idle_call+0x38/0x48
[ 388.941375] do_idle+0xfc/0x11c
[ 388.995612] cpu_startup_entry+0x30/0x34
[ 389.049972] kernel_init+0x0/0x1a4
[ 389.104285] console_on_rootfs+0x0/0xc8
[ 389.158566] 0x3610
[ 389.265473] Reported by Kernel Concurrency Sanitizer on:
[ 389.319778] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 389.375176] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 389.430835] ==================================================================
[ 452.659321] pagealloc: memory corruption
[ 452.756403] fffdfff0: 00 00 00 00 ....
[ 452.854833] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 452.953923] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 453.053902] Call Trace:
[ 453.150878] [f1919c00] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 453.251275] [f1919c20] [c0be4ee8] dump_stack+0x20/0x34
[ 453.350119] [f1919c30] [c02c47c0] __kernel_unpoison_pages+0x198/0x1a8
[ 453.451915] [f1919c80] [c029b62c] post_alloc_hook+0x8c/0xf0
[ 453.553600] [f1919cb0] [c029b6b4] prep_new_page+0x24/0x5c
[ 453.654442] [f1919cd0] [c029c9dc] get_page_from_freelist+0x564/0x660
[ 453.755561] [f1919d60] [c029dfcc] __alloc_pages+0x114/0x8dc
[ 453.856815] [f1919e20] [c02764f0] folio_prealloc.isra.0+0x44/0xec
[ 453.959273] [f1919e40] [c027be28] handle_mm_fault+0x488/0xed0
[ 454.057617] [f1919ed0] [c00340f4] ___do_page_fault+0x4d8/0x630
[ 454.154895] [f1919f10] [c003446c] do_page_fault+0x28/0x40
[ 454.251719] [f1919f30] [c000433c] DataAccess_virt+0x124/0x17c
[ 454.349211] --- interrupt: 300 at 0x413008
[ 454.445748] NIP: 00413008 LR: 00412fe8 CTR: 00000000
[ 454.542365] REGS: f1919f40 TRAP: 0300 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 454.638976] MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
[ 454.733294] DAR: 8d7de010 DSISR: 42000000
GPR00: 00412fe8 afa78860 a7dc6700 6b871010 3c500000 20884462 00000003 003301e4
GPR08: 21f6e000 21f6d000 00000000 408258ea 20882462 0042ff68 00000000 40882462
GPR16: ffffffff 00000000 00000002 00000000 00000001 00000000 00430018 00000001
GPR24: ffffffff ffffffff 3c500000 0000005a 6b871010 00000000 00437cd0 00001000
[ 455.228075] NIP [00413008] 0x413008
[ 455.327281] LR [00412fe8] 0x412fe8
[ 455.422923] --- interrupt: 300
[ 455.523201] page: refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x31069
[ 455.624640] flags: 0x80000000(zone=2)
[ 455.725989] page_type: 0xffffffff()
[ 455.826265] raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
[ 455.931213] raw: 00000000
[ 456.032785] page dumped because: pagealloc: corrupted page details
[ 456.137755] page_owner info is not present (never set?)
[ 471.812481] ==================================================================
[ 471.875913] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 472.002063] read (marked) to 0xefbfb770 of 4 bytes by task 39 on cpu 0:
[ 472.066742] lru_gen_look_around+0x320/0x634
[ 472.130601] folio_referenced_one+0x32c/0x404
[ 472.194198] rmap_walk_anon+0x1c4/0x24c
[ 472.257718] rmap_walk+0x70/0x7c
[ 472.320908] folio_referenced+0x194/0x1ec
[ 472.384159] shrink_folio_list+0x6a8/0xd28
[ 472.447385] evict_folios+0xcc0/0x1204
[ 472.510527] try_to_shrink_lruvec+0x214/0x2f0
[ 472.573863] shrink_one+0x104/0x1e8
[ 472.637032] shrink_node+0x314/0xc3c
[ 472.700496] balance_pgdat+0x498/0x914
[ 472.763930] kswapd+0x304/0x398
[ 472.827248] kthread+0x174/0x178
[ 472.890132] start_kernel_thread+0x10/0x14
[ 473.015917] write to 0xefbfb770 of 4 bytes by task 1594 on cpu 1:
[ 473.080139] list_add+0x58/0x94
[ 473.143681] evict_folios+0xb04/0x1204
[ 473.207333] try_to_shrink_lruvec+0x214/0x2f0
[ 473.271180] shrink_one+0x104/0x1e8
[ 473.334921] shrink_node+0x314/0xc3c
[ 473.398514] do_try_to_free_pages+0x500/0x7e4
[ 473.462735] try_to_free_pages+0x150/0x18c
[ 473.526742] __alloc_pages+0x460/0x8dc
[ 473.590118] folio_prealloc.isra.0+0x44/0xec
[ 473.652888] handle_mm_fault+0x488/0xed0
[ 473.714904] ___do_page_fault+0x4d8/0x630
[ 473.776247] do_page_fault+0x28/0x40
[ 473.837398] DataAccess_virt+0x124/0x17c
[ 473.957872] Reported by Kernel Concurrency Sanitizer on:
[ 474.018336] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 474.079266] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 474.140486] ==================================================================
[ 476.045778] ==================================================================
[ 476.107390] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 476.230084] read (marked) to 0xef9ba594 of 4 bytes by task 1593 on cpu 0:
[ 476.292384] lru_gen_look_around+0x320/0x634
[ 476.354216] folio_referenced_one+0x32c/0x404
[ 476.416032] rmap_walk_anon+0x1c4/0x24c
[ 476.477599] rmap_walk+0x70/0x7c
[ 476.538677] folio_referenced+0x194/0x1ec
[ 476.599863] shrink_folio_list+0x6a8/0xd28
[ 476.660728] evict_folios+0xcc0/0x1204
[ 476.721348] try_to_shrink_lruvec+0x214/0x2f0
[ 476.781560] shrink_one+0x104/0x1e8
[ 476.841011] shrink_node+0x314/0xc3c
[ 476.899794] do_try_to_free_pages+0x500/0x7e4
[ 476.958094] try_to_free_pages+0x150/0x18c
[ 477.015971] __alloc_pages+0x460/0x8dc
[ 477.073511] folio_prealloc.isra.0+0x44/0xec
[ 477.131177] handle_mm_fault+0x488/0xed0
[ 477.187936] ___do_page_fault+0x4d8/0x630
[ 477.244819] do_page_fault+0x28/0x40
[ 477.301705] DataAccess_virt+0x124/0x17c
[ 477.413345] write to 0xef9ba594 of 4 bytes by task 39 on cpu 1:
[ 477.469994] list_add+0x58/0x94
[ 477.525372] evict_folios+0xb04/0x1204
[ 477.580264] try_to_shrink_lruvec+0x214/0x2f0
[ 477.634933] shrink_one+0x104/0x1e8
[ 477.689145] shrink_node+0x314/0xc3c
[ 477.742465] balance_pgdat+0x498/0x914
[ 477.795104] kswapd+0x304/0x398
[ 477.847128] kthread+0x174/0x178
[ 477.898527] start_kernel_thread+0x10/0x14
[ 478.000334] Reported by Kernel Concurrency Sanitizer on:
[ 478.052065] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 478.105114] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 478.158491] ==================================================================
[ 484.836016] ==================================================================
[ 484.890251] BUG: KCSAN: data-race in __mod_memcg_lruvec_state / mem_cgroup_css_rstat_flush
[ 484.999385] read (marked) to 0xeedd91ac of 4 bytes by task 1593 on cpu 0:
[ 485.055331] mem_cgroup_css_rstat_flush+0xcc/0x518
[ 485.111764] cgroup_rstat_flush_locked+0x528/0x538
[ 485.168325] cgroup_rstat_flush+0x38/0x5c
[ 485.224702] do_flush_stats+0x78/0x9c
[ 485.281044] mem_cgroup_flush_stats+0x7c/0x80
[ 485.337605] zswap_shrinker_count+0xb8/0x150
[ 485.393845] do_shrink_slab+0x7c/0x540
[ 485.449674] shrink_slab+0x1f0/0x384
[ 485.505456] shrink_one+0x140/0x1e8
[ 485.560938] shrink_node+0x314/0xc3c
[ 485.616173] do_try_to_free_pages+0x500/0x7e4
[ 485.671835] try_to_free_pages+0x150/0x18c
[ 485.727443] __alloc_pages+0x460/0x8dc
[ 485.782944] folio_prealloc.isra.0+0x44/0xec
[ 485.838574] handle_mm_fault+0x488/0xed0
[ 485.894091] ___do_page_fault+0x4d8/0x630
[ 485.949620] do_page_fault+0x28/0x40
[ 486.005049] DataAccess_virt+0x124/0x17c
[ 486.115237] write to 0xeedd91ac of 4 bytes by task 39 on cpu 1:
[ 486.171210] __mod_memcg_lruvec_state+0x8c/0x154
[ 486.227322] __mod_lruvec_state+0x58/0x78
[ 486.282611] lru_gen_update_size+0x130/0x240
[ 486.337329] lru_gen_del_folio+0x104/0x140
[ 486.391280] evict_folios+0xaf8/0x1204
[ 486.445636] try_to_shrink_lruvec+0x214/0x2f0
[ 486.499529] shrink_one+0x104/0x1e8
[ 486.552893] shrink_node+0x314/0xc3c
[ 486.605603] balance_pgdat+0x498/0x914
[ 486.657986] kswapd+0x304/0x398
[ 486.709948] kthread+0x174/0x178
[ 486.761693] start_kernel_thread+0x10/0x14
[ 486.865145] Reported by Kernel Concurrency Sanitizer on:
[ 486.917476] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 486.970887] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 487.024556] ==================================================================
[ 488.445808] ==================================================================
[ 488.500314] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 488.608881] read (marked) to 0xef383a00 of 4 bytes by task 1594 on cpu 0:
[ 488.664363] lru_gen_look_around+0x320/0x634
[ 488.720003] folio_referenced_one+0x32c/0x404
[ 488.775696] rmap_walk_anon+0x1c4/0x24c
[ 488.831310] rmap_walk+0x70/0x7c
[ 488.886546] folio_referenced+0x194/0x1ec
[ 488.941958] shrink_folio_list+0x6a8/0xd28
[ 488.997442] evict_folios+0xcc0/0x1204
[ 489.052550] try_to_shrink_lruvec+0x214/0x2f0
[ 489.107616] shrink_one+0x104/0x1e8
[ 489.162617] shrink_node+0x314/0xc3c
[ 489.217347] do_try_to_free_pages+0x500/0x7e4
[ 489.272219] try_to_free_pages+0x150/0x18c
[ 489.327292] __alloc_pages+0x460/0x8dc
[ 489.382392] folio_prealloc.isra.0+0x44/0xec
[ 489.437664] handle_mm_fault+0x488/0xed0
[ 489.493033] ___do_page_fault+0x4d8/0x630
[ 489.548450] do_page_fault+0x28/0x40
[ 489.603743] DataAccess_virt+0x124/0x17c
[ 489.712459] write to 0xef383a00 of 4 bytes by task 39 on cpu 1:
[ 489.766735] list_del+0x2c/0x5c
[ 489.820297] lru_gen_del_folio+0x110/0x140
[ 489.874513] evict_folios+0xaf8/0x1204
[ 489.927811] try_to_shrink_lruvec+0x214/0x2f0
[ 489.980494] shrink_one+0x104/0x1e8
[ 490.032600] shrink_node+0x314/0xc3c
[ 490.084017] balance_pgdat+0x498/0x914
[ 490.135319] kswapd+0x304/0x398
[ 490.186592] kthread+0x174/0x178
[ 490.237688] start_kernel_thread+0x10/0x14
[ 490.339293] Reported by Kernel Concurrency Sanitizer on:
[ 490.390696] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 490.443194] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 490.496203] ==================================================================
[ 504.870324] ==================================================================
[ 504.926179] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 505.035013] read to 0xc121b328 of 8 bytes by task 39 on cpu 0:
[ 505.089891] zswap_store+0x118/0xa18
[ 505.145917] swap_writepage+0x4c/0xe8
[ 505.200945] pageout+0x1dc/0x304
[ 505.256018] shrink_folio_list+0xa70/0xd28
[ 505.311460] evict_folios+0xcc0/0x1204
[ 505.366557] try_to_shrink_lruvec+0x214/0x2f0
[ 505.422439] shrink_one+0x104/0x1e8
[ 505.476800] shrink_node+0x314/0xc3c
[ 505.530919] balance_pgdat+0x498/0x914
[ 505.585030] kswapd+0x304/0x398
[ 505.639149] kthread+0x174/0x178
[ 505.692932] start_kernel_thread+0x10/0x14
[ 505.800244] write to 0xc121b328 of 8 bytes by task 1593 on cpu 1:
[ 505.854808] zswap_update_total_size+0x58/0xe8
[ 505.910040] zswap_entry_free+0xdc/0x1c0
[ 505.964971] zswap_load+0x190/0x19c
[ 506.019793] swap_read_folio+0xbc/0x450
[ 506.074754] swap_cluster_readahead+0x2f8/0x338
[ 506.129791] swapin_readahead+0x430/0x438
[ 506.184612] do_swap_page+0x1e0/0x9bc
[ 506.238597] handle_mm_fault+0xecc/0xed0
[ 506.291968] ___do_page_fault+0x4d8/0x630
[ 506.344759] do_page_fault+0x28/0x40
[ 506.398273] DataAccess_virt+0x124/0x17c
[ 506.503169] Reported by Kernel Concurrency Sanitizer on:
[ 506.555788] CPU: 1 PID: 1593 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 506.609554] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 506.662427] ==================================================================
[ 510.124486] ==================================================================
[ 510.180131] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 510.291131] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 510.347527] hrtimer_active+0xb0/0x100
[ 510.403984] task_tick_fair+0xc8/0xcc
[ 510.460204] scheduler_tick+0x6c/0xcc
[ 510.516434] update_process_times+0xc8/0x120
[ 510.572773] tick_nohz_handler+0x1ac/0x270
[ 510.629081] __hrtimer_run_queues+0x170/0x1d8
[ 510.685810] hrtimer_interrupt+0x168/0x350
[ 510.742347] timer_interrupt+0x108/0x178
[ 510.798808] Decrementer_virt+0x108/0x10c
[ 510.855184] memcg_rstat_updated+0x154/0x15c
[ 510.911753] __mod_memcg_lruvec_state+0x118/0x154
[ 510.968523] __mod_lruvec_state+0x58/0x78
[ 511.025058] __lruvec_stat_mod_folio+0x88/0x8c
[ 511.081447] folio_remove_rmap_ptes+0xc8/0x150
[ 511.137516] unmap_page_range+0x6f8/0x8bc
[ 511.193560] unmap_vmas+0x11c/0x174
[ 511.249316] unmap_region+0x134/0x1dc
[ 511.304910] do_vmi_align_munmap+0x3ac/0x4ac
[ 511.360666] do_vmi_munmap+0x114/0x11c
[ 511.416401] __vm_munmap+0xcc/0x124
[ 511.472115] sys_munmap+0x40/0x64
[ 511.528049] system_call_exception+0x15c/0x1c0
[ 511.584233] ret_from_syscall+0x0/0x2c
[ 511.695258] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 511.751441] __hrtimer_run_queues+0x1cc/0x1d8
[ 511.807288] hrtimer_interrupt+0x168/0x350
[ 511.862980] timer_interrupt+0x108/0x178
[ 511.917466] Decrementer_virt+0x108/0x10c
[ 511.972362] find_stack+0x198/0x1dc
[ 512.026447] do_raw_spin_lock+0xbc/0x11c
[ 512.080033] _raw_spin_lock+0x24/0x3c
[ 512.133252] __pte_offset_map_lock+0x58/0xb8
[ 512.186376] page_vma_mapped_walk+0x1e0/0x468
[ 512.239590] remove_migration_pte+0xf4/0x334
[ 512.292790] rmap_walk_anon+0x1c4/0x24c
[ 512.345898] rmap_walk+0x70/0x7c
[ 512.398564] remove_migration_ptes+0x98/0x9c
[ 512.451480] migrate_pages_batch+0x8ec/0xb38
[ 512.504414] migrate_pages+0x290/0x77c
[ 512.557249] compact_zone+0xb48/0xf04
[ 512.609972] compact_node+0xe8/0x158
[ 512.662532] kcompactd+0x2c0/0x2d8
[ 512.715068] kthread+0x174/0x178
[ 512.767460] start_kernel_thread+0x10/0x14
[ 512.871299] Reported by Kernel Concurrency Sanitizer on:
[ 512.923314] CPU: 0 PID: 31 Comm: kcompactd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 512.976594] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 513.030308] ==================================================================
[ 528.568529] ==================================================================
[ 528.623563] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 528.733089] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 528.788901] hrtimer_active+0xb0/0x100
[ 528.844762] task_tick_fair+0xc8/0xcc
[ 528.900519] scheduler_tick+0x6c/0xcc
[ 528.956040] update_process_times+0xc8/0x120
[ 529.011842] tick_nohz_handler+0x1ac/0x270
[ 529.068353] __hrtimer_run_queues+0x170/0x1d8
[ 529.123288] hrtimer_interrupt+0x168/0x350
[ 529.177586] timer_interrupt+0x108/0x178
[ 529.231317] Decrementer_virt+0x108/0x10c
[ 529.285354] memcg_rstat_updated+0x2c/0x15c
[ 529.338748] __mod_memcg_lruvec_state+0x30/0x154
[ 529.391722] __mod_lruvec_state+0x58/0x78
[ 529.444551] __lruvec_stat_mod_folio+0x88/0x8c
[ 529.498429] folio_remove_rmap_ptes+0xc8/0x150
[ 529.551038] unmap_page_range+0x6f8/0x8bc
[ 529.603804] unmap_vmas+0x11c/0x174
[ 529.656712] unmap_region+0x134/0x1dc
[ 529.709663] do_vmi_align_munmap+0x3ac/0x4ac
[ 529.762012] do_vmi_munmap+0x114/0x11c
[ 529.814038] __vm_munmap+0xcc/0x124
[ 529.866185] sys_munmap+0x40/0x64
[ 529.918142] system_call_exception+0x15c/0x1c0
[ 529.970373] ret_from_syscall+0x0/0x2c
[ 530.073406] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 530.125836] __hrtimer_run_queues+0x1cc/0x1d8
[ 530.178436] hrtimer_interrupt+0x168/0x350
[ 530.230954] timer_interrupt+0x108/0x178
[ 530.283567] Decrementer_virt+0x108/0x10c
[ 530.336311] 0xc4a28800
[ 530.388668] cgroup_rstat_updated+0x50/0x150
[ 530.441621] memcg_rstat_updated+0x7c/0x15c
[ 530.494654] __mod_memcg_lruvec_state+0x118/0x154
[ 530.547963] __mod_lruvec_state+0x58/0x78
[ 530.601108] __lruvec_stat_mod_folio+0x88/0x8c
[ 530.654289] folio_remove_rmap_ptes+0xc8/0x150
[ 530.707564] unmap_page_range+0x6f8/0x8bc
[ 530.760503] unmap_vmas+0x11c/0x174
[ 530.812737] unmap_region+0x134/0x1dc
[ 530.864783] do_vmi_align_munmap+0x3ac/0x4ac
[ 530.916971] do_vmi_munmap+0x114/0x11c
[ 530.969005] __vm_munmap+0xcc/0x124
[ 531.020979] sys_munmap+0x40/0x64
[ 531.072850] system_call_exception+0x15c/0x1c0
[ 531.125022] ret_from_syscall+0x0/0x2c
[ 531.228289] Reported by Kernel Concurrency Sanitizer on:
[ 531.280569] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 531.334009] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 531.388022] ==================================================================
[ 563.307241] ==================================================================
[ 563.362164] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 563.472308] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 563.528696] hrtimer_active+0xb0/0x100
[ 563.585352] task_tick_fair+0xc8/0xcc
[ 563.642002] scheduler_tick+0x6c/0xcc
[ 563.698393] update_process_times+0xc8/0x120
[ 563.754995] tick_nohz_handler+0x1ac/0x270
[ 563.811358] __hrtimer_run_queues+0x170/0x1d8
[ 563.867091] hrtimer_interrupt+0x168/0x350
[ 563.922175] timer_interrupt+0x108/0x178
[ 563.976509] Decrementer_virt+0x108/0x10c
[ 564.031245] percpu_counter_add_batch+0x1dc/0x1fc
[ 564.085623] percpu_counter_add+0x44/0x68
[ 564.139133] handle_mm_fault+0x86c/0xed0
[ 564.192221] ___do_page_fault+0x4d8/0x630
[ 564.245005] do_page_fault+0x28/0x40
[ 564.297817] DataAccess_virt+0x124/0x17c
[ 564.403062] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 564.456530] __hrtimer_run_queues+0x1cc/0x1d8
[ 564.510280] hrtimer_interrupt+0x168/0x350
[ 564.563961] timer_interrupt+0x108/0x178
[ 564.617565] Decrementer_virt+0x108/0x10c
[ 564.671173] 0x595
[ 564.724345] memchr_inv+0x100/0x188
[ 564.777722] __kernel_unpoison_pages+0xe0/0x1a8
[ 564.831361] post_alloc_hook+0x8c/0xf0
[ 564.884944] prep_new_page+0x24/0x5c
[ 564.938342] get_page_from_freelist+0x564/0x660
[ 564.991991] __alloc_pages+0x114/0x8dc
[ 565.045672] folio_prealloc.isra.0+0x44/0xec
[ 565.099752] handle_mm_fault+0x488/0xed0
[ 565.153686] ___do_page_fault+0x4d8/0x630
[ 565.207797] do_page_fault+0x28/0x40
[ 565.261822] DataAccess_virt+0x124/0x17c
[ 565.369310] Reported by Kernel Concurrency Sanitizer on:
[ 565.423579] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 565.479243] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 565.534848] ==================================================================
[ 566.720422] ==================================================================
[ 566.776545] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 566.888607] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 566.945636] hrtimer_active+0xb0/0x100
[ 567.002396] task_tick_fair+0xc8/0xcc
[ 567.058903] scheduler_tick+0x6c/0xcc
[ 567.115129] update_process_times+0xc8/0x120
[ 567.171522] tick_nohz_handler+0x1ac/0x270
[ 567.227935] __hrtimer_run_queues+0x170/0x1d8
[ 567.284401] hrtimer_interrupt+0x168/0x350
[ 567.340786] timer_interrupt+0x108/0x178
[ 567.397215] Decrementer_virt+0x108/0x10c
[ 567.453799] kcsan_setup_watchpoint+0x300/0x4cc
[ 567.510581] stack_trace_save+0x40/0xa4
[ 567.567366] save_stack+0xa4/0xec
[ 567.624009] __set_page_owner+0x38/0x2dc
[ 567.680879] prep_new_page+0x24/0x5c
[ 567.737592] get_page_from_freelist+0x564/0x660
[ 567.794672] __alloc_pages+0x114/0x8dc
[ 567.851607] folio_prealloc.isra.0+0x44/0xec
[ 567.908433] handle_mm_fault+0x488/0xed0
[ 567.964553] ___do_page_fault+0x4d8/0x630
[ 568.020061] do_page_fault+0x28/0x40
[ 568.074778] DataAccess_virt+0x124/0x17c
[ 568.184134] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 568.239095] __hrtimer_run_queues+0x1cc/0x1d8
[ 568.293623] hrtimer_interrupt+0x168/0x350
[ 568.347815] timer_interrupt+0x108/0x178
[ 568.402063] Decrementer_virt+0x108/0x10c
[ 568.456590] memchr_inv+0x100/0x188
[ 568.511078] __kernel_unpoison_pages+0xe0/0x1a8
[ 568.565651] post_alloc_hook+0x8c/0xf0
[ 568.620041] prep_new_page+0x24/0x5c
[ 568.674241] get_page_from_freelist+0x564/0x660
[ 568.728680] __alloc_pages+0x114/0x8dc
[ 568.783144] folio_prealloc.isra.0+0x44/0xec
[ 568.837644] handle_mm_fault+0x488/0xed0
[ 568.892186] ___do_page_fault+0x4d8/0x630
[ 568.946782] do_page_fault+0x28/0x40
[ 569.001443] DataAccess_virt+0x124/0x17c
[ 569.110268] Reported by Kernel Concurrency Sanitizer on:
[ 569.165538] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 569.221571] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 569.277546] ==================================================================
[ 573.083473] ==================================================================
[ 573.140478] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 573.253599] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 573.311185] hrtimer_active+0xb0/0x100
[ 573.368855] task_tick_fair+0xc8/0xcc
[ 573.426338] scheduler_tick+0x6c/0xcc
[ 573.483586] update_process_times+0xc8/0x120
[ 573.540944] tick_nohz_handler+0x1ac/0x270
[ 573.598207] __hrtimer_run_queues+0x170/0x1d8
[ 573.655508] hrtimer_interrupt+0x168/0x350
[ 573.712905] timer_interrupt+0x108/0x178
[ 573.770161] Decrementer_virt+0x108/0x10c
[ 573.827391] __mod_node_page_state+0xf0/0x120
[ 573.884763] __mod_lruvec_state+0x2c/0x78
[ 573.942017] __lruvec_stat_mod_folio+0x88/0x8c
[ 573.999248] folio_remove_rmap_ptes+0xc8/0x150
[ 574.055832] unmap_page_range+0x6f8/0x8bc
[ 574.111688] unmap_vmas+0x11c/0x174
[ 574.166627] unmap_region+0x134/0x1dc
[ 574.221884] do_vmi_align_munmap+0x3ac/0x4ac
[ 574.276683] do_vmi_munmap+0x114/0x11c
[ 574.330669] __vm_munmap+0xcc/0x124
[ 574.384227] sys_munmap+0x40/0x64
[ 574.437248] system_call_exception+0x15c/0x1c0
[ 574.490657] ret_from_syscall+0x0/0x2c
[ 574.596853] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 574.650843] __hrtimer_run_queues+0x1cc/0x1d8
[ 574.705065] hrtimer_interrupt+0x168/0x350
[ 574.759258] timer_interrupt+0x108/0x178
[ 574.813360] Decrementer_virt+0x108/0x10c
[ 574.867513] 0xc1f18020
[ 574.921225] __mod_node_page_state+0x7c/0x120
[ 574.975368] __mod_lruvec_state+0x3c/0x78
[ 575.029458] __lruvec_stat_mod_folio+0x88/0x8c
[ 575.083714] folio_add_new_anon_rmap+0x130/0x19c
[ 575.138111] handle_mm_fault+0x87c/0xed0
[ 575.192365] ___do_page_fault+0x4d8/0x630
[ 575.246563] do_page_fault+0x28/0x40
[ 575.300625] DataAccess_virt+0x124/0x17c
[ 575.407905] Reported by Kernel Concurrency Sanitizer on:
[ 575.462192] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 575.517670] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 575.573511] ==================================================================
[ 579.993169] ==================================================================
[ 580.049442] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 580.161663] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 580.218764] hrtimer_active+0xb0/0x100
[ 580.275622] task_tick_fair+0xc8/0xcc
[ 580.332267] scheduler_tick+0x6c/0xcc
[ 580.388652] update_process_times+0xc8/0x120
[ 580.445227] tick_nohz_handler+0x1ac/0x270
[ 580.502867] __hrtimer_run_queues+0x170/0x1d8
[ 580.559642] hrtimer_interrupt+0x168/0x350
[ 580.616166] timer_interrupt+0x108/0x178
[ 580.672611] Decrementer_virt+0x108/0x10c
[ 580.730396] 0xffffffff
[ 580.786775] page_mapcount+0x2c/0xa8
[ 580.843024] unmap_page_range+0x700/0x8bc
[ 580.899830] unmap_vmas+0x11c/0x174
[ 580.956114] unmap_region+0x134/0x1dc
[ 581.011260] do_vmi_align_munmap+0x3ac/0x4ac
[ 581.065927] do_vmi_munmap+0x114/0x11c
[ 581.119728] __vm_munmap+0xcc/0x124
[ 581.173851] sys_munmap+0x40/0x64
[ 581.227159] system_call_exception+0x15c/0x1c0
[ 581.280190] ret_from_syscall+0x0/0x2c
[ 581.384626] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 581.438036] __hrtimer_run_queues+0x1cc/0x1d8
[ 581.491426] hrtimer_interrupt+0x168/0x350
[ 581.544824] timer_interrupt+0x108/0x178
[ 581.598099] Decrementer_virt+0x108/0x10c
[ 581.651273] flush_dcache_icache_folio+0x94/0x1a0
[ 581.704651] set_ptes+0xcc/0x144
[ 581.757983] handle_mm_fault+0x634/0xed0
[ 581.811404] ___do_page_fault+0x4d8/0x630
[ 581.864837] do_page_fault+0x28/0x40
[ 581.918179] DataAccess_virt+0x124/0x17c
[ 582.024420] Reported by Kernel Concurrency Sanitizer on:
[ 582.078308] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 582.133644] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 582.189451] ==================================================================
[ 641.910995] ==================================================================
[ 641.966187] BUG: KCSAN: data-race in interrupt_async_enter_prepare / set_fd_set
[ 642.076270] read to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 642.132235] interrupt_async_enter_prepare+0x64/0xc4
[ 642.188074] timer_interrupt+0x1c/0x178
[ 642.243862] Decrementer_virt+0x108/0x10c
[ 642.299563] 0xfefefefe
[ 642.354267] 0x0
[ 642.408407] kcsan_setup_watchpoint+0x300/0x4cc
[ 642.463244] set_fd_set+0xa4/0xec
[ 642.517966] core_sys_select+0x1ec/0x240
[ 642.572793] sys_pselect6_time32+0x190/0x1b4
[ 642.627633] system_call_exception+0x15c/0x1c0
[ 642.682584] ret_from_syscall+0x0/0x2c
[ 642.791857] write to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 642.847530] set_fd_set+0xa4/0xec
[ 642.902848] core_sys_select+0x1ec/0x240
[ 642.958519] sys_pselect6_time32+0x190/0x1b4
[ 643.014008] system_call_exception+0x15c/0x1c0
[ 643.069680] ret_from_syscall+0x0/0x2c
[ 643.179351] Reported by Kernel Concurrency Sanitizer on:
[ 643.234027] CPU: 0 PID: 1525 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 643.289155] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 643.345096] ==================================================================
[ 789.051163] ==================================================================
[ 789.106819] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 789.217527] write to 0xeedd91a0 of 4 bytes by task 40 on cpu 0:
[ 789.273728] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 789.330051] cgroup_rstat_flush_locked+0x528/0x538
[ 789.386476] cgroup_rstat_flush+0x38/0x5c
[ 789.442576] do_flush_stats+0x78/0x9c
[ 789.498516] flush_memcg_stats_dwork+0x34/0x70
[ 789.554606] process_scheduled_works+0x350/0x494
[ 789.610721] worker_thread+0x2a4/0x300
[ 789.666832] kthread+0x174/0x178
[ 789.722710] start_kernel_thread+0x10/0x14
[ 789.834825] write to 0xeedd91a0 of 4 bytes by task 1594 on cpu 1:
[ 789.892152] memcg_rstat_updated+0xd8/0x15c
[ 789.949397] __mod_memcg_lruvec_state+0x118/0x154
[ 790.006733] __mod_lruvec_state+0x58/0x78
[ 790.064148] __lruvec_stat_mod_folio+0x88/0x8c
[ 790.121707] folio_add_new_anon_rmap+0x130/0x19c
[ 790.179460] handle_mm_fault+0x87c/0xed0
[ 790.237134] ___do_page_fault+0x4d8/0x630
[ 790.294833] do_page_fault+0x28/0x40
[ 790.352533] DataAccess_virt+0x124/0x17c
[ 790.466485] value changed: 0x00000032 -> 0x00000000
[ 790.580686] Reported by Kernel Concurrency Sanitizer on:
[ 790.638575] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 790.697513] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 790.756623] ==================================================================
[ 801.198092] ==================================================================
[ 801.258682] BUG: KCSAN: data-race in memcg_rstat_updated / memcg_rstat_updated
[ 801.378522] read to 0xeedd91a0 of 4 bytes by interrupt on cpu 1:
[ 801.439371] memcg_rstat_updated+0xcc/0x15c
[ 801.499726] __mod_memcg_state+0xf4/0xf8
[ 801.559395] mod_memcg_state+0x3c/0x74
[ 801.618309] mem_cgroup_charge_skmem+0x54/0xf0
[ 801.676767] __sk_mem_raise_allocated+0xa0/0x418
[ 801.735810] __sk_mem_schedule+0x60/0xb8
[ 801.794018] sk_rmem_schedule+0x90/0xb4
[ 801.851523] tcp_try_rmem_schedule+0x3e8/0x59c
[ 801.908923] tcp_data_queue+0x234/0x1138
[ 801.965807] tcp_rcv_established+0x5c0/0x6f0
[ 802.022610] tcp_v4_do_rcv+0x138/0x3b0
[ 802.079313] tcp_v4_rcv+0xc0c/0xe20
[ 802.135981] ip_protocol_deliver_rcu+0xa4/0x2a4
[ 802.193162] ip_local_deliver+0x1d8/0x1dc
[ 802.250162] ip_sublist_rcv_finish+0x94/0xa4
[ 802.307089] ip_list_rcv_finish.constprop.0+0x6c/0x1c4
[ 802.364412] ip_list_rcv+0x80/0x1a0
[ 802.421375] __netif_receive_skb_list_ptype+0x68/0x118
[ 802.478877] __netif_receive_skb_list_core+0x80/0x158
[ 802.536042] netif_receive_skb_list_internal+0x1f0/0x3e4
[ 802.593554] gro_normal_list+0x60/0x8c
[ 802.650642] napi_complete_done+0x108/0x284
[ 802.707472] gem_poll+0x1400/0x1638
[ 802.764247] __napi_poll.constprop.0+0x64/0x228
[ 802.821469] net_rx_action+0x3bc/0x5ac
[ 802.878388] __do_softirq+0x1dc/0x218
[ 802.935163] do_softirq_own_stack+0x54/0x74
[ 802.992141] do_softirq_own_stack+0x44/0x74
[ 803.048409] __irq_exit_rcu+0x6c/0xbc
[ 803.103980] irq_exit+0x10/0x20
[ 803.158845] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 803.213749] do_IRQ+0x24/0x2c
[ 803.268820] HardwareInterrupt_virt+0x108/0x10c
[ 803.323524] get_page_from_freelist+0x564/0x660
[ 803.377514] 0xc4a28800
[ 803.430781] kcsan_setup_watchpoint+0x300/0x4cc
[ 803.484260] memcg_rstat_updated+0xd8/0x15c
[ 803.537584] __mod_memcg_lruvec_state+0x118/0x154
[ 803.591269] __mod_lruvec_state+0x58/0x78
[ 803.644970] __lruvec_stat_mod_folio+0x88/0x8c
[ 803.698607] folio_add_new_anon_rmap+0x130/0x19c
[ 803.752290] handle_mm_fault+0x87c/0xed0
[ 803.805839] ___do_page_fault+0x4d8/0x630
[ 803.859528] do_page_fault+0x28/0x40
[ 803.913090] DataAccess_virt+0x124/0x17c
[ 804.019591] write to 0xeedd91a0 of 4 bytes by task 1594 on cpu 1:
[ 804.073476] memcg_rstat_updated+0xd8/0x15c
[ 804.127161] __mod_memcg_lruvec_state+0x118/0x154
[ 804.180876] __mod_lruvec_state+0x58/0x78
[ 804.234425] __lruvec_stat_mod_folio+0x88/0x8c
[ 804.288016] folio_add_new_anon_rmap+0x130/0x19c
[ 804.341587] handle_mm_fault+0x87c/0xed0
[ 804.395136] ___do_page_fault+0x4d8/0x630
[ 804.448881] do_page_fault+0x28/0x40
[ 804.502451] DataAccess_virt+0x124/0x17c
[ 804.609130] value changed: 0x00000012 -> 0x00000013
[ 804.715953] Reported by Kernel Concurrency Sanitizer on:
[ 804.769360] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 804.823212] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 804.876762] ==================================================================
[ 842.725847] ==================================================================
[ 842.780124] BUG: KCSAN: data-race in filldir64 / interrupt_async_enter_prepare
[ 842.887232] read to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 0:
[ 842.941211] interrupt_async_enter_prepare+0x64/0xc4
[ 842.995309] timer_interrupt+0x1c/0x178
[ 843.049347] Decrementer_virt+0x108/0x10c
[ 843.103290] 0xeee9b9f8
[ 843.156782] page_address+0x60/0x134
[ 843.210476] kcsan_setup_watchpoint+0x300/0x4cc
[ 843.264485] filldir64+0x10c/0x2d4
[ 843.318271] dir_emit_dots+0x168/0x1a4
[ 843.372123] proc_task_readdir+0x6c/0x340
[ 843.426051] iterate_dir+0xe4/0x248
[ 843.479886] sys_getdents64+0xb0/0x1fc
[ 843.533912] system_call_exception+0x15c/0x1c0
[ 843.588011] ret_from_syscall+0x0/0x2c
[ 843.695515] write to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 0:
[ 843.750187] filldir64+0x10c/0x2d4
[ 843.804568] dir_emit_dots+0x168/0x1a4
[ 843.858790] proc_task_readdir+0x6c/0x340
[ 843.913275] iterate_dir+0xe4/0x248
[ 843.967382] sys_getdents64+0xb0/0x1fc
[ 844.021271] system_call_exception+0x15c/0x1c0
[ 844.075329] ret_from_syscall+0x0/0x2c
[ 844.182846] Reported by Kernel Concurrency Sanitizer on:
[ 844.237183] CPU: 0 PID: 1608 Comm: htop Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 844.292805] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 844.348677] ==================================================================
[ 857.632000] ==================================================================
[ 857.689040] BUG: KCSAN: data-race in ____sys_recvmsg / interrupt_async_enter_prepare
[ 857.803287] read to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 857.860911] interrupt_async_enter_prepare+0x64/0xc4
[ 857.918431] timer_interrupt+0x1c/0x178
[ 857.975859] Decrementer_virt+0x108/0x10c
[ 858.033192] 0xf33c1b3c
[ 858.090110] 0x4000
[ 858.146531] kcsan_setup_watchpoint+0x300/0x4cc
[ 858.203514] ____sys_recvmsg+0x1a0/0x270
[ 858.260435] ___sys_recvmsg+0x90/0xd4
[ 858.317191] __sys_recvmsg+0xb0/0xf8
[ 858.373786] sys_recvmsg+0x50/0x78
[ 858.430107] system_call_exception+0x15c/0x1c0
[ 858.486693] ret_from_syscall+0x0/0x2c
[ 858.599379] write to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 858.656889] ____sys_recvmsg+0x1a0/0x270
[ 858.713762] ___sys_recvmsg+0x90/0xd4
[ 858.770135] __sys_recvmsg+0xb0/0xf8
[ 858.826333] sys_recvmsg+0x50/0x78
[ 858.882338] system_call_exception+0x15c/0x1c0
[ 858.938542] ret_from_syscall+0x0/0x2c
[ 859.050306] Reported by Kernel Concurrency Sanitizer on:
[ 859.107157] CPU: 0 PID: 1525 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 859.164937] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 859.223053] ==================================================================
[ 899.064182] ==================================================================
[ 899.125213] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 899.246007] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 899.306586] hrtimer_active+0xb0/0x100
[ 899.366160] task_tick_fair+0xc8/0xcc
[ 899.424917] scheduler_tick+0x6c/0xcc
[ 899.483903] update_process_times+0xc8/0x120
[ 899.542400] tick_nohz_handler+0x1ac/0x270
[ 899.600361] __hrtimer_run_queues+0x170/0x1d8
[ 899.658073] hrtimer_interrupt+0x168/0x350
[ 899.715431] timer_interrupt+0x108/0x178
[ 899.772539] Decrementer_virt+0x108/0x10c
[ 899.829644] 0x6e02
[ 899.886308] HUF_compress1X_usingCTable_internal.isra.0+0xfe8/0x11c0
[ 899.944629] HUF_compress4X_usingCTable_internal.isra.0+0x1ac/0x1d0
[ 900.002386] HUF_compressCTable_internal.isra.0+0xbc/0xc0
[ 900.060166] HUF_compress_internal.isra.0+0x17c/0x45c
[ 900.117911] HUF_compress4X_repeat+0x80/0xbc
[ 900.175716] ZSTD_compressLiterals+0x230/0x350
[ 900.233376] ZSTD_entropyCompressSeqStore.constprop.0+0x130/0x3c4
[ 900.291780] ZSTD_compressBlock_internal+0x150/0x240
[ 900.350171] ZSTD_compressContinue_internal+0xab4/0xb88
[ 900.408568] ZSTD_compressEnd+0x50/0x1e4
[ 900.466700] ZSTD_compressStream2+0x360/0x8b8
[ 900.524437] ZSTD_compressStream2_simpleArgs+0x7c/0xd8
[ 900.581862] ZSTD_compress2+0xbc/0x13c
[ 900.639007] zstd_compress_cctx+0x68/0x9c
[ 900.696102] __zstd_compress+0x70/0xc4
[ 900.753102] zstd_scompress+0x44/0x74
[ 900.810045] scomp_acomp_comp_decomp+0x328/0x4e4
[ 900.867222] scomp_acomp_compress+0x28/0x48
[ 900.924057] zswap_store+0x834/0xa18
[ 900.980844] swap_writepage+0x4c/0xe8
[ 901.037488] pageout+0x1dc/0x304
[ 901.093196] shrink_folio_list+0xa70/0xd28
[ 901.148454] evict_folios+0xcc0/0x1204
[ 901.202977] try_to_shrink_lruvec+0x214/0x2f0
[ 901.258168] shrink_one+0x104/0x1e8
[ 901.312462] shrink_node+0x314/0xc3c
[ 901.365852] do_try_to_free_pages+0x500/0x7e4
[ 901.419109] try_to_free_pages+0x150/0x18c
[ 901.471981] __alloc_pages+0x460/0x8dc
[ 901.524637] folio_prealloc.isra.0+0x44/0xec
[ 901.577526] handle_mm_fault+0x488/0xed0
[ 901.630288] ___do_page_fault+0x4d8/0x630
[ 901.683476] do_page_fault+0x28/0x40
[ 901.736432] DataAccess_virt+0x124/0x17c
[ 901.842006] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 901.896048] __hrtimer_run_queues+0x1cc/0x1d8
[ 901.950088] hrtimer_interrupt+0x168/0x350
[ 902.004081] timer_interrupt+0x108/0x178
[ 902.057964] Decrementer_virt+0x108/0x10c
[ 902.111847] 0xd
[ 902.164887] ZSTD_compressBlock_doubleFast+0x1358/0x2854
[ 902.218615] ZSTD_buildSeqStore+0x3b8/0x3bc
[ 902.272298] ZSTD_compressBlock_internal+0x44/0x240
[ 902.326319] ZSTD_compressContinue_internal+0xab4/0xb88
[ 902.380552] ZSTD_compressEnd+0x50/0x1e4
[ 902.434501] ZSTD_compressStream2+0x360/0x8b8
[ 902.488294] ZSTD_compressStream2_simpleArgs+0x7c/0xd8
[ 902.542191] ZSTD_compress2+0xbc/0x13c
[ 902.595500] zstd_compress_cctx+0x68/0x9c
[ 902.648223] __zstd_compress+0x70/0xc4
[ 902.700112] zstd_scompress+0x44/0x74
[ 902.751241] scomp_acomp_comp_decomp+0x328/0x4e4
[ 902.803142] scomp_acomp_compress+0x28/0x48
[ 902.854101] zswap_store+0x834/0xa18
[ 902.904406] swap_writepage+0x4c/0xe8
[ 902.954293] pageout+0x1dc/0x304
[ 903.003615] shrink_folio_list+0xa70/0xd28
[ 903.053351] evict_folios+0xcc0/0x1204
[ 903.103206] try_to_shrink_lruvec+0x214/0x2f0
[ 903.153455] shrink_one+0x104/0x1e8
[ 903.203317] shrink_node+0x314/0xc3c
[ 903.252906] balance_pgdat+0x498/0x914
[ 903.302390] kswapd+0x304/0x398
[ 903.351652] kthread+0x174/0x178
[ 903.400956] start_kernel_thread+0x10/0x14
[ 903.498731] Reported by Kernel Concurrency Sanitizer on:
[ 903.548555] CPU: 0 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 903.599232] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 903.650208] ==================================================================
[ 906.388161] ==================================================================
[ 906.438415] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 906.537584] read (marked) to 0xef8a86b8 of 4 bytes by task 1337 on cpu 0:
[ 906.588237] lru_gen_look_around+0x320/0x634
[ 906.639064] folio_referenced_one+0x32c/0x404
[ 906.690180] rmap_walk_anon+0x1c4/0x24c
[ 906.741310] rmap_walk+0x70/0x7c
[ 906.792053] folio_referenced+0x194/0x1ec
[ 906.843086] shrink_folio_list+0x6a8/0xd28
[ 906.894189] evict_folios+0xcc0/0x1204
[ 906.945320] try_to_shrink_lruvec+0x214/0x2f0
[ 906.996523] shrink_one+0x104/0x1e8
[ 907.047743] shrink_node+0x314/0xc3c
[ 907.098786] do_try_to_free_pages+0x500/0x7e4
[ 907.150110] try_to_free_pages+0x150/0x18c
[ 907.201486] __alloc_pages+0x460/0x8dc
[ 907.252798] folio_alloc.constprop.0+0x30/0x50
[ 907.304295] __filemap_get_folio+0x164/0x1e4
[ 907.355984] ext4_da_write_begin+0x158/0x24c
[ 907.407354] generic_perform_write+0x114/0x2f0
[ 907.459021] ext4_buffered_write_iter+0x94/0x194
[ 907.510768] ext4_file_write_iter+0x1e0/0x828
[ 907.562389] do_iter_readv_writev+0x1a4/0x23c
[ 907.613926] vfs_writev+0x124/0x2a0
[ 907.665300] do_writev+0xc8/0x1bc
[ 907.716518] sys_writev+0x50/0x78
[ 907.767598] system_call_exception+0x15c/0x1c0
[ 907.818951] ret_from_syscall+0x0/0x2c
[ 907.920788] write to 0xef8a86b8 of 4 bytes by task 1611 on cpu 1:
[ 907.972293] list_del+0x2c/0x5c
[ 908.023363] lru_gen_del_folio+0x110/0x140
[ 908.074604] evict_folios+0xaf8/0x1204
[ 908.125907] try_to_shrink_lruvec+0x214/0x2f0
[ 908.177343] shrink_one+0x104/0x1e8
[ 908.228612] shrink_node+0x314/0xc3c
[ 908.279487] do_try_to_free_pages+0x500/0x7e4
[ 908.330410] try_to_free_pages+0x150/0x18c
[ 908.381248] __alloc_pages+0x460/0x8dc
[ 908.432012] folio_prealloc.isra.0+0x44/0xec
[ 908.482927] handle_mm_fault+0x488/0xed0
[ 908.533908] ___do_page_fault+0x4d8/0x630
[ 908.585056] do_page_fault+0x28/0x40
[ 908.636089] DataAccess_virt+0x124/0x17c
[ 908.737702] Reported by Kernel Concurrency Sanitizer on:
[ 908.789208] CPU: 1 PID: 1611 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 908.841703] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 908.894834] ==================================================================
[ 917.245693] ==================================================================
[ 917.299728] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 917.408432] write to 0xeedd91a0 of 4 bytes by task 2 on cpu 0:
[ 917.463602] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 917.518709] cgroup_rstat_flush_locked+0x528/0x538
[ 917.573889] cgroup_rstat_flush+0x38/0x5c
[ 917.628921] do_flush_stats+0x78/0x9c
[ 917.684000] mem_cgroup_flush_stats+0x7c/0x80
[ 917.739357] zswap_shrinker_count+0xb8/0x150
[ 917.794928] do_shrink_slab+0x7c/0x540
[ 917.850431] shrink_slab+0x1f0/0x384
[ 917.905863] shrink_one+0x140/0x1e8
[ 917.960830] shrink_node+0x314/0xc3c
[ 918.014963] do_try_to_free_pages+0x500/0x7e4
[ 918.068723] try_to_free_pages+0x150/0x18c
[ 918.121805] __alloc_pages+0x460/0x8dc
[ 918.175295] __alloc_pages_bulk+0x140/0x340
[ 918.228022] __vmalloc_node_range+0x310/0x530
[ 918.280599] copy_process+0x608/0x3324
[ 918.332468] kernel_clone+0x78/0x2d0
[ 918.383718] kernel_thread+0xbc/0xe8
[ 918.434646] kthreadd+0x200/0x284
[ 918.485366] start_kernel_thread+0x10/0x14
[ 918.587160] read to 0xeedd91a0 of 4 bytes by task 39 on cpu 1:
[ 918.639042] memcg_rstat_updated+0xcc/0x15c
[ 918.690798] __mod_memcg_lruvec_state+0x118/0x154
[ 918.742670] __mod_lruvec_state+0x58/0x78
[ 918.794343] lru_gen_update_size+0x130/0x240
[ 918.846290] lru_gen_add_folio+0x198/0x288
[ 918.898076] move_folios_to_lru+0x29c/0x350
[ 918.949848] evict_folios+0xd20/0x1204
[ 919.001524] try_to_shrink_lruvec+0x214/0x2f0
[ 919.053494] shrink_one+0x104/0x1e8
[ 919.105116] shrink_node+0x314/0xc3c
[ 919.156616] balance_pgdat+0x498/0x914
[ 919.207970] kswapd+0x304/0x398
[ 919.259058] kthread+0x174/0x178
[ 919.309981] start_kernel_thread+0x10/0x14
[ 919.411884] Reported by Kernel Concurrency Sanitizer on:
[ 919.463717] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 919.516723] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 919.570035] ==================================================================
[ 927.578462] Key type dns_resolver registered
[ 928.915260] Key type cifs.idmap registered
[ 929.094635] CIFS: Attempting to mount //192.168.2.3/yea_home
[ 933.757206] ==================================================================
[ 933.814618] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 933.929568] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 933.988103] hrtimer_active+0xb0/0x100
[ 934.046727] task_tick_fair+0xc8/0xcc
[ 934.104691] scheduler_tick+0x6c/0xcc
[ 934.162283] update_process_times+0xc8/0x120
[ 934.220063] tick_nohz_handler+0x1ac/0x270
[ 934.277793] __hrtimer_run_queues+0x170/0x1d8
[ 934.335613] hrtimer_interrupt+0x168/0x350
[ 934.393444] timer_interrupt+0x108/0x178
[ 934.451240] Decrementer_virt+0x108/0x10c
[ 934.509027] 0xc11d8420
[ 934.566483] 0x29f00
[ 934.623270] kcsan_setup_watchpoint+0x300/0x4cc
[ 934.680057] page_ext_get+0x98/0xc0
[ 934.736043] __reset_page_owner+0x3c/0x234
[ 934.791487] free_unref_page_prepare+0x124/0x1dc
[ 934.847571] free_unref_folios+0xcc/0x208
[ 934.902681] folios_put_refs+0x1c8/0x1cc
[ 934.956979] free_pages_and_swap_cache+0x1c8/0x1d0
[ 935.011280] tlb_flush_mmu+0x200/0x288
[ 935.065230] unmap_page_range+0x4f8/0x8bc
[ 935.118995] unmap_vmas+0x11c/0x174
[ 935.172707] exit_mmap+0x170/0x2e0
[ 935.226475] __mmput+0x4c/0x188
[ 935.279858] mmput+0x74/0x94
[ 935.332902] do_exit+0x55c/0xd08
[ 935.385817] do_group_exit+0x58/0xfc
[ 935.438665] get_signal+0x73c/0x8c0
[ 935.491638] do_notify_resume+0x94/0x47c
[ 935.544891] interrupt_exit_user_prepare_main+0xa8/0xac
[ 935.598584] interrupt_exit_user_prepare+0x54/0x74
[ 935.651886] interrupt_return+0x14/0x190
[ 935.757849] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 935.812083] __hrtimer_run_queues+0x1cc/0x1d8
[ 935.866163] hrtimer_interrupt+0x168/0x350
[ 935.920317] timer_interrupt+0x108/0x178
[ 935.974671] Decrementer_virt+0x108/0x10c
[ 936.029242] mmput+0x74/0x94
[ 936.083487] __reset_page_owner+0x20c/0x234
[ 936.138013] free_unref_page_prepare+0x124/0x1dc
[ 936.192475] free_unref_folios+0xcc/0x208
[ 936.246380] folios_put_refs+0x1c8/0x1cc
[ 936.300183] free_pages_and_swap_cache+0x1c8/0x1d0
[ 936.354241] tlb_flush_mmu+0x200/0x288
[ 936.408213] unmap_page_range+0x4f8/0x8bc
[ 936.462314] unmap_vmas+0x11c/0x174
[ 936.516131] exit_mmap+0x170/0x2e0
[ 936.569830] __mmput+0x4c/0x188
[ 936.623246] mmput+0x74/0x94
[ 936.676396] do_exit+0x55c/0xd08
[ 936.729625] do_group_exit+0x58/0xfc
[ 936.782887] get_signal+0x73c/0x8c0
[ 936.836245] do_notify_resume+0x94/0x47c
[ 936.889731] interrupt_exit_user_prepare_main+0xa8/0xac
[ 936.943717] interrupt_exit_user_prepare+0x54/0x74
[ 936.997344] interrupt_return+0x14/0x190
[ 937.102654] Reported by Kernel Concurrency Sanitizer on:
[ 937.155242] CPU: 0 PID: 1611 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 937.208309] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 937.261046] ==================================================================
[ 952.256115] ==================================================================
[ 952.307600] BUG: KCSAN: data-race in _copy_to_user / interrupt_async_enter_prepare
[ 952.408873] read to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 1:
[ 952.459599] interrupt_async_enter_prepare+0x64/0xc4
[ 952.510398] timer_interrupt+0x1c/0x178
[ 952.560756] Decrementer_virt+0x108/0x10c
[ 952.611137] 0xf37c9c18
[ 952.661389] 0x0
[ 952.711105] kcsan_setup_watchpoint+0x300/0x4cc
[ 952.761473] _copy_to_user+0x58/0xdc
[ 952.811719] cp_statx+0x348/0x384
[ 952.861700] do_statx+0xc8/0xfc
[ 952.911329] sys_statx+0x8c/0xc8
[ 952.960860] system_call_exception+0x15c/0x1c0
[ 953.010711] ret_from_syscall+0x0/0x2c
[ 953.110024] write to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 1:
[ 953.160452] _copy_to_user+0x58/0xdc
[ 953.210974] cp_statx+0x348/0x384
[ 953.261269] do_statx+0xc8/0xfc
[ 953.311306] sys_statx+0x8c/0xc8
[ 953.361267] system_call_exception+0x15c/0x1c0
[ 953.411405] ret_from_syscall+0x0/0x2c
[ 953.510221] Reported by Kernel Concurrency Sanitizer on:
[ 953.560401] CPU: 1 PID: 1608 Comm: htop Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 953.611794] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 953.663592] ==================================================================
[-- Attachment #4: dmesg_69-rc4_g4_02 --]
[-- Type: application/octet-stream, Size: 76408 bytes --]
[ 114.850479] kernfs_refresh_inode+0x40/0x1c0
[ 114.911781] kernfs_iop_getattr+0x84/0xd0
[ 114.971637] vfs_getattr_nosec+0x138/0x18c
[ 115.030664] vfs_getattr+0x88/0x90
[ 115.088781] vfs_statx+0xa8/0x25c
[ 115.146327] do_statx+0xb4/0xfc
[ 115.203307] sys_statx+0x8c/0xc8
[ 115.259711] system_call_exception+0x15c/0x1c0
[ 115.316465] ret_from_syscall+0x0/0x2c
[ 115.429725] write to 0xc1887ce8 of 2 bytes by task 590 on cpu 1:
[ 115.487354] kernfs_refresh_inode+0x40/0x1c0
[ 115.545724] kernfs_iop_permission+0x74/0xbc
[ 115.604075] inode_permission+0x84/0x20c
[ 115.662475] link_path_walk+0x114/0x4c0
[ 115.720560] path_lookupat+0x78/0x21c
[ 115.778366] path_openat+0x1d8/0xe98
[ 115.836052] do_filp_open+0x88/0xec
[ 115.893683] do_sys_openat2+0x9c/0xf8
[ 115.951309] do_sys_open+0x48/0x74
[ 116.008532] sys_openat+0x5c/0x88
[ 116.065613] system_call_exception+0x15c/0x1c0
[ 116.123132] ret_from_syscall+0x0/0x2c
[ 116.237575] Reported by Kernel Concurrency Sanitizer on:
[ 116.295758] CPU: 1 PID: 590 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 116.355514] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 116.415730] ==================================================================
[ 117.050295] Adding 8388604k swap on /dev/sdb6. Priority:-2 extents:1 across:8388604k
[ 118.414158] EXT4-fs (sdc5): mounting ext2 file system using the ext4 subsystem
[ 118.550248] EXT4-fs (sdc5): mounted filesystem e4e8af9e-0f0d-44f9-b983-71bf61d782de r/w without journal. Quota mode: disabled.
[ 118.671048] ext2 filesystem being mounted at /boot supports timestamps until 2038-01-19 (0x7fffffff)
[ 118.800234] BTRFS: device label tmp devid 1 transid 2856 /dev/sda6 (8:6) scanned by mount (916)
[ 118.932560] BTRFS info (device sda6): first mount of filesystem 65162d91-887e-4e48-a356-fbf7093eefb5
[ 119.056738] BTRFS info (device sda6): using xxhash64 (xxhash64-generic) checksum algorithm
[ 119.180037] BTRFS info (device sda6): using free-space-tree
[ 122.613242] ==================================================================
[ 122.613372] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 122.613531] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 122.613588] hrtimer_active+0xb0/0x100
[ 122.613683] task_tick_fair+0xc8/0xcc
[ 122.613766] scheduler_tick+0x6c/0xcc
[ 122.613831] update_process_times+0xc8/0x120
[ 122.613920] tick_nohz_handler+0x1ac/0x270
[ 122.614000] __hrtimer_run_queues+0x170/0x1d8
[ 122.614094] hrtimer_interrupt+0x168/0x350
[ 122.614188] timer_interrupt+0x108/0x178
[ 122.614256] Decrementer_virt+0x108/0x10c
[ 122.614332] 0x84004482
[ 122.614385] rcu_all_qs+0x58/0x17c
[ 122.614459] __cond_resched+0x50/0x58
[ 122.614530] console_conditional_schedule+0x38/0x50
[ 122.614622] fbcon_redraw+0x1a4/0x24c
[ 122.614688] fbcon_scroll+0xe0/0x1dc
[ 122.614754] con_scroll+0x19c/0x1dc
[ 122.614820] lf+0x64/0xfc
[ 122.614878] do_con_write+0x9e0/0x263c
[ 122.614950] con_write+0x34/0x64
[ 122.615017] do_output_char+0x1cc/0x2f4
[ 122.615103] n_tty_write+0x4c8/0x574
[ 122.615188] file_tty_write.isra.0+0x284/0x300
[ 122.615270] tty_write+0x34/0x58
[ 122.615344] redirected_tty_write+0xdc/0xe4
[ 122.615426] vfs_write+0x2b8/0x318
[ 122.615500] ksys_write+0xb8/0x134
[ 122.615572] sys_write+0x4c/0x74
[ 122.615643] system_call_exception+0x15c/0x1c0
[ 122.615732] ret_from_syscall+0x0/0x2c
[ 122.615817] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 122.615869] __hrtimer_run_queues+0x12c/0x1d8
[ 122.615963] hrtimer_interrupt+0x168/0x350
[ 122.616057] timer_interrupt+0x108/0x178
[ 122.616123] Decrementer_virt+0x108/0x10c
[ 122.616197] memchr_inv+0x100/0x188
[ 122.616281] __kernel_unpoison_pages+0xe0/0x1a8
[ 122.616354] post_alloc_hook+0x8c/0xf0
[ 122.616446] prep_new_page+0x24/0x5c
[ 122.616533] get_page_from_freelist+0x564/0x660
[ 122.616629] __alloc_pages+0x114/0x8dc
[ 122.616722] folio_prealloc.isra.0+0x9c/0xec
[ 122.616825] do_wp_page+0x5cc/0xb98
[ 122.616889] handle_mm_fault+0xd88/0xed0
[ 122.616956] ___do_page_fault+0x4d8/0x630
[ 122.617051] do_page_fault+0x28/0x40
[ 122.617145] DataAccess_virt+0x124/0x17c
[ 122.617242] Reported by Kernel Concurrency Sanitizer on:
[ 122.617276] CPU: 0 PID: 563 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 122.617354] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 122.617395] ==================================================================
[ 129.152749] CPU-temp: 59.3 C
[ 129.152824] , Case: 35.6 C
[ 129.252654] , Fan: 6 (tuned +1)
[ 145.249842] ==================================================================
[ 145.249975] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 145.250148] read to 0xc29df19c of 4 bytes by task 1355 on cpu 0:
[ 145.250221] interrupt_async_enter_prepare+0x64/0xc4
[ 145.250314] timer_interrupt+0x1c/0x178
[ 145.250399] Decrementer_virt+0x108/0x10c
[ 145.250495] ___slab_alloc+0x31c/0x5dc
[ 145.250602] 0xf3841c88
[ 145.250679] kcsan_setup_watchpoint+0x300/0x4cc
[ 145.250768] copy_iovec_from_user+0x44/0x10c
[ 145.250873] iovec_from_user+0xd0/0xdc
[ 145.250980] __import_iovec+0x118/0x22c
[ 145.251087] import_iovec+0x50/0x84
[ 145.251191] vfs_writev+0xac/0x2a0
[ 145.251283] do_writev+0xc8/0x1bc
[ 145.251371] sys_writev+0x50/0x78
[ 145.251463] system_call_exception+0x15c/0x1c0
[ 145.251571] ret_from_syscall+0x0/0x2c
[ 145.251700] write to 0xc29df19c of 4 bytes by task 1355 on cpu 0:
[ 145.251772] copy_iovec_from_user+0x44/0x10c
[ 145.251878] iovec_from_user+0xd0/0xdc
[ 145.251983] __import_iovec+0x118/0x22c
[ 145.252090] import_iovec+0x50/0x84
[ 145.252194] vfs_writev+0xac/0x2a0
[ 145.252283] do_writev+0xc8/0x1bc
[ 145.252371] sys_writev+0x50/0x78
[ 145.252461] system_call_exception+0x15c/0x1c0
[ 145.252567] ret_from_syscall+0x0/0x2c
[ 145.252691] Reported by Kernel Concurrency Sanitizer on:
[ 145.252745] CPU: 0 PID: 1355 Comm: syslogd Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 145.252839] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 145.252899] ==================================================================
[ 147.179793] b43legacy-phy0: Loading firmware version 0x127, patch level 14 (2005-04-18 02:36:27)
[ 147.267106] b43legacy-phy0 debug: Chip initialized
[ 147.312848] b43legacy-phy0 debug: 30-bit DMA initialized
[ 147.324745] b43legacy-phy0 debug: Wireless interface started
[ 147.336810] b43legacy-phy0 debug: Adding Interface type 2
[ 147.360298] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.360401] b43legacy-phy0 debug: RX: Packet dropped
[ 147.407501] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.407603] b43legacy-phy0 debug: RX: Packet dropped
[ 147.413213] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.413303] b43legacy-phy0 debug: RX: Packet dropped
[ 147.418268] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.418363] b43legacy-phy0 debug: RX: Packet dropped
[ 147.427312] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.427414] b43legacy-phy0 debug: RX: Packet dropped
[ 147.445950] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.446049] b43legacy-phy0 debug: RX: Packet dropped
[ 147.481984] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.482104] b43legacy-phy0 debug: RX: Packet dropped
[ 147.486390] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.486487] b43legacy-phy0 debug: RX: Packet dropped
[ 147.488969] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.489087] b43legacy-phy0 debug: RX: Packet dropped
[ 147.534423] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.534517] b43legacy-phy0 debug: RX: Packet dropped
[ 147.538166] b43legacy-phy0 debug: RX: Packet dropped
[ 147.545897] b43legacy-phy0 debug: RX: Packet dropped
[ 147.625904] b43legacy-phy0 debug: RX: Packet dropped
[ 147.631379] b43legacy-phy0 debug: RX: Packet dropped
[ 147.684197] b43legacy-phy0 debug: RX: Packet dropped
[ 147.709147] b43legacy-phy0 debug: RX: Packet dropped
[ 147.735089] b43legacy-phy0 debug: RX: Packet dropped
[ 147.748795] b43legacy-phy0 debug: RX: Packet dropped
[ 148.203300] NET: Registered PF_PACKET protocol family
[ 156.352809] ==================================================================
[ 156.352954] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 156.353130] read to 0xc32dc29c of 4 bytes by task 1486 on cpu 1:
[ 156.353204] interrupt_async_enter_prepare+0x64/0xc4
[ 156.353300] timer_interrupt+0x1c/0x178
[ 156.353386] Decrementer_virt+0x108/0x10c
[ 156.353483] 0x1841d4a2
[ 156.353558] 0x6d8169f5
[ 156.353625] kcsan_setup_watchpoint+0x300/0x4cc
[ 156.353715] raw_copy_to_user+0x74/0xb4
[ 156.353819] _copy_to_iter+0x120/0x694
[ 156.353925] get_random_bytes_user+0x128/0x1a0
[ 156.354016] sys_getrandom+0x108/0x110
[ 156.354103] system_call_exception+0x15c/0x1c0
[ 156.354213] ret_from_syscall+0x0/0x2c
[ 156.354343] write to 0xc32dc29c of 4 bytes by task 1486 on cpu 1:
[ 156.354416] raw_copy_to_user+0x74/0xb4
[ 156.354520] _copy_to_iter+0x120/0x694
[ 156.354626] get_random_bytes_user+0x128/0x1a0
[ 156.354715] sys_getrandom+0x108/0x110
[ 156.354802] system_call_exception+0x15c/0x1c0
[ 156.354908] ret_from_syscall+0x0/0x2c
[ 156.355034] Reported by Kernel Concurrency Sanitizer on:
[ 156.355088] CPU: 1 PID: 1486 Comm: sshd Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 156.355182] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 156.355242] ==================================================================
[ 161.546024] ==================================================================
[ 161.546124] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 161.546228] write (marked) to 0xeedc9c11 of 1 bytes by interrupt on cpu 1:
[ 161.546284] rcu_report_qs_rdp+0x15c/0x18c
[ 161.546350] rcu_core+0x1f0/0xa88
[ 161.546415] rcu_core_si+0x20/0x3c
[ 161.546480] __do_softirq+0x1dc/0x218
[ 161.546570] do_softirq_own_stack+0x54/0x74
[ 161.546657] do_softirq_own_stack+0x44/0x74
[ 161.546741] __irq_exit_rcu+0x6c/0xbc
[ 161.546817] irq_exit+0x10/0x20
[ 161.546887] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 161.546963] timer_interrupt+0x64/0x178
[ 161.547026] Decrementer_virt+0x108/0x10c
[ 161.547098] 0x0
[ 161.547144] 0xffffffff
[ 161.547188] kcsan_setup_watchpoint+0x300/0x4cc
[ 161.547255] rcu_all_qs+0x58/0x17c
[ 161.547324] __cond_resched+0x50/0x58
[ 161.547391] console_conditional_schedule+0x38/0x50
[ 161.547477] fbcon_redraw+0x1a4/0x24c
[ 161.547543] fbcon_scroll+0xe0/0x1dc
[ 161.547607] con_scroll+0x19c/0x1dc
[ 161.547671] lf+0x64/0xfc
[ 161.547727] do_con_write+0x9e0/0x263c
[ 161.547797] con_write+0x34/0x64
[ 161.547862] do_output_char+0x1cc/0x2f4
[ 161.547948] n_tty_write+0x4c8/0x574
[ 161.548030] file_tty_write.isra.0+0x284/0x300
[ 161.548110] tty_write+0x34/0x58
[ 161.548182] redirected_tty_write+0xdc/0xe4
[ 161.548261] vfs_write+0x2b8/0x318
[ 161.548333] ksys_write+0xb8/0x134
[ 161.548403] sys_write+0x4c/0x74
[ 161.548471] system_call_exception+0x15c/0x1c0
[ 161.548559] ret_from_syscall+0x0/0x2c
[ 161.548646] read to 0xeedc9c11 of 1 bytes by task 1558 on cpu 1:
[ 161.548697] rcu_all_qs+0x58/0x17c
[ 161.548767] __cond_resched+0x50/0x58
[ 161.548832] console_conditional_schedule+0x38/0x50
[ 161.548919] fbcon_redraw+0x1a4/0x24c
[ 161.548982] fbcon_scroll+0xe0/0x1dc
[ 161.549046] con_scroll+0x19c/0x1dc
[ 161.549108] lf+0x64/0xfc
[ 161.549164] do_con_write+0x9e0/0x263c
[ 161.549233] con_write+0x34/0x64
[ 161.549299] do_output_char+0x1cc/0x2f4
[ 161.549378] n_tty_write+0x4c8/0x574
[ 161.549460] file_tty_write.isra.0+0x284/0x300
[ 161.549539] tty_write+0x34/0x58
[ 161.549611] redirected_tty_write+0xdc/0xe4
[ 161.549689] vfs_write+0x2b8/0x318
[ 161.549759] ksys_write+0xb8/0x134
[ 161.549829] sys_write+0x4c/0x74
[ 161.549898] system_call_exception+0x15c/0x1c0
[ 161.549982] ret_from_syscall+0x0/0x2c
[ 161.550064] Reported by Kernel Concurrency Sanitizer on:
[ 161.550097] CPU: 1 PID: 1558 Comm: ebegin Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 161.550169] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 161.550208] ==================================================================
[ 178.005079] CPU-temp: 59.6 C
[ 178.005153] , Case: 35.7 C
[ 178.005217] , Fan: 7 (tuned +1)
[ 237.396120] ==================================================================
[ 237.396262] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 237.396447] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 237.396524] tmigr_cpu_activate+0xe8/0x12c
[ 237.396632] timer_clear_idle+0x60/0x80
[ 237.396746] tick_nohz_restart_sched_tick+0x3c/0x170
[ 237.396852] tick_nohz_idle_exit+0xe0/0x158
[ 237.396955] do_idle+0x54/0x11c
[ 237.397042] cpu_startup_entry+0x30/0x34
[ 237.397131] start_secondary+0x504/0x854
[ 237.397231] 0x3338
[ 237.397347] read to 0xeedc6094 of 1 bytes by interrupt on cpu 0:
[ 237.397423] tmigr_next_groupevt+0x60/0xd8
[ 237.397528] tmigr_handle_remote_up+0x94/0x394
[ 237.397636] __walk_groups+0x74/0xc8
[ 237.397735] tmigr_handle_remote+0x13c/0x198
[ 237.397843] run_timer_softirq+0x94/0x98
[ 237.397952] __do_softirq+0x1dc/0x218
[ 237.398068] do_softirq_own_stack+0x54/0x74
[ 237.398182] do_softirq_own_stack+0x44/0x74
[ 237.398292] __irq_exit_rcu+0x6c/0xbc
[ 237.398392] irq_exit+0x10/0x20
[ 237.398488] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 237.398590] timer_interrupt+0x64/0x178
[ 237.398679] Decrementer_virt+0x108/0x10c
[ 237.398778] default_idle_call+0x38/0x48
[ 237.398871] do_idle+0xfc/0x11c
[ 237.398955] cpu_startup_entry+0x30/0x34
[ 237.399044] kernel_init+0x0/0x1a4
[ 237.399146] console_on_rootfs+0x0/0xc8
[ 237.399231] 0x3610
[ 237.399343] value changed: 0x00 -> 0x01
[ 237.399449] Reported by Kernel Concurrency Sanitizer on:
[ 237.399505] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 237.399603] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 237.399665] ==================================================================
[ 243.045849] CPU-temp: 59.9 C
[ 243.045914] , Case: 35.8 C
[ 243.046057] , Fan: 8 (tuned +1)
[ 249.349141] ==================================================================
[ 249.349270] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 249.349443] read to 0xeeda9094 of 1 bytes by interrupt on cpu 1:
[ 249.349518] tmigr_next_groupevt+0x60/0xd8
[ 249.349621] tmigr_handle_remote_up+0x94/0x394
[ 249.349724] __walk_groups+0x74/0xc8
[ 249.349819] tmigr_handle_remote+0x13c/0x198
[ 249.349922] run_timer_softirq+0x94/0x98
[ 249.350030] __do_softirq+0x1dc/0x218
[ 249.350140] do_softirq_own_stack+0x54/0x74
[ 249.350248] do_softirq_own_stack+0x44/0x74
[ 249.350354] __irq_exit_rcu+0x6c/0xbc
[ 249.350451] irq_exit+0x10/0x20
[ 249.350543] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 249.350639] timer_interrupt+0x64/0x178
[ 249.350724] Decrementer_virt+0x108/0x10c
[ 249.350818] default_idle_call+0x38/0x48
[ 249.350907] do_idle+0xfc/0x11c
[ 249.350987] cpu_startup_entry+0x30/0x34
[ 249.351072] start_secondary+0x504/0x854
[ 249.351167] 0x3338
[ 249.351280] write to 0xeeda9094 of 1 bytes by task 0 on cpu 0:
[ 249.351352] tmigr_cpu_activate+0xe8/0x12c
[ 249.351454] timer_clear_idle+0x60/0x80
[ 249.351560] tick_nohz_restart_sched_tick+0x3c/0x170
[ 249.351661] tick_nohz_idle_exit+0xe0/0x158
[ 249.351759] do_idle+0x54/0x11c
[ 249.351839] cpu_startup_entry+0x30/0x34
[ 249.351925] kernel_init+0x0/0x1a4
[ 249.352022] console_on_rootfs+0x0/0xc8
[ 249.352103] 0x3610
[ 249.352210] Reported by Kernel Concurrency Sanitizer on:
[ 249.352263] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 249.352356] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 249.352416] ==================================================================
[ 275.591448] CPU-temp: 60.1 C
[ 275.591517] , Case: 36.0 C
[ 275.591661] , Fan: 9 (tuned +1)
[ 278.327717] net_ratelimit: 8 callbacks suppressed
[ 278.327781] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 278.327899] b43legacy-phy0 debug: RX: Packet dropped
[ 373.933764] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 373.933867] b43legacy-phy0 debug: RX: Packet dropped
[ 720.759460] ==================================================================
[ 720.759601] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 720.759781] read to 0xeedc6094 of 1 bytes by task 0 on cpu 0:
[ 720.759855] tmigr_next_groupevt+0x60/0xd8
[ 720.759965] tmigr_update_events+0x29c/0x328
[ 720.760069] tmigr_inactive_up+0x180/0x288
[ 720.760171] __walk_groups+0x74/0xc8
[ 720.760269] tmigr_cpu_deactivate+0x110/0x178
[ 720.760375] __get_next_timer_interrupt+0x32c/0x34c
[ 720.760489] timer_base_try_to_set_idle+0x50/0x94
[ 720.760601] tick_nohz_idle_stop_tick+0x150/0x4fc
[ 720.760704] do_idle+0xf8/0x11c
[ 720.760787] cpu_startup_entry+0x30/0x34
[ 720.760875] kernel_init+0x0/0x1a4
[ 720.760976] console_on_rootfs+0x0/0xc8
[ 720.761059] 0x3610
[ 720.761178] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 720.761252] tmigr_cpu_activate+0xe8/0x12c
[ 720.761357] timer_clear_idle+0x60/0x80
[ 720.761463] tick_nohz_restart_sched_tick+0x3c/0x170
[ 720.761565] tick_nohz_idle_exit+0xe0/0x158
[ 720.761667] do_idle+0x54/0x11c
[ 720.761747] cpu_startup_entry+0x30/0x34
[ 720.761835] start_secondary+0x504/0x854
[ 720.761932] 0x3338
[ 720.762041] Reported by Kernel Concurrency Sanitizer on:
[ 720.762097] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 720.762193] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 720.762255] ==================================================================
[ 751.213814] ==================================================================
[ 751.266545] BUG: KCSAN: data-race in interrupt_async_enter_prepare / set_fd_set
[ 751.372865] read to 0xc29db6dc of 4 bytes by task 1541 on cpu 0:
[ 751.427255] interrupt_async_enter_prepare+0x64/0xc4
[ 751.481946] do_IRQ+0x18/0x2c
[ 751.536487] HardwareInterrupt_virt+0x108/0x10c
[ 751.591584] 0xfefefefe
[ 751.646400] 0x0
[ 751.700756] kcsan_setup_watchpoint+0x300/0x4cc
[ 751.755834] set_fd_set+0x60/0xec
[ 751.810703] core_sys_select+0x1ec/0x240
[ 751.865731] sys_pselect6_time32+0x190/0x1b4
[ 751.920851] system_call_exception+0x15c/0x1c0
[ 751.976313] ret_from_syscall+0x0/0x2c
[ 752.086926] write to 0xc29db6dc of 4 bytes by task 1541 on cpu 0:
[ 752.143313] set_fd_set+0x60/0xec
[ 752.199552] core_sys_select+0x1ec/0x240
[ 752.255574] sys_pselect6_time32+0x190/0x1b4
[ 752.311346] system_call_exception+0x15c/0x1c0
[ 752.367176] ret_from_syscall+0x0/0x2c
[ 752.478262] Reported by Kernel Concurrency Sanitizer on:
[ 752.534822] CPU: 0 PID: 1541 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 752.592536] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 752.650552] ==================================================================
[ 771.386274] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 771.476892] b43legacy-phy0 debug: RX: Packet dropped
[ 772.110509] ==================================================================
[ 772.170664] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 772.291413] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 772.352754] tmigr_cpu_activate+0xe8/0x12c
[ 772.413919] timer_clear_idle+0x60/0x80
[ 772.475037] tick_nohz_restart_sched_tick+0x3c/0x170
[ 772.536604] tick_nohz_idle_exit+0xe0/0x158
[ 772.598085] do_idle+0x54/0x11c
[ 772.659168] cpu_startup_entry+0x30/0x34
[ 772.719700] start_secondary+0x504/0x854
[ 772.779445] 0x3338
[ 772.895403] read to 0xeedc6094 of 1 bytes by interrupt on cpu 0:
[ 772.954414] tmigr_next_groupevt+0x60/0xd8
[ 773.013453] tmigr_handle_remote_up+0x94/0x394
[ 773.072167] __walk_groups+0x74/0xc8
[ 773.130690] tmigr_handle_remote+0x13c/0x198
[ 773.189549] run_timer_softirq+0x94/0x98
[ 773.248284] __do_softirq+0x1dc/0x218
[ 773.306765] do_softirq_own_stack+0x54/0x74
[ 773.365384] do_softirq_own_stack+0x44/0x74
[ 773.423759] __irq_exit_rcu+0x6c/0xbc
[ 773.481931] irq_exit+0x10/0x20
[ 773.540045] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 773.598635] timer_interrupt+0x64/0x178
[ 773.656878] Decrementer_virt+0x108/0x10c
[ 773.714842] default_idle_call+0x38/0x48
[ 773.772963] do_idle+0xfc/0x11c
[ 773.831032] cpu_startup_entry+0x30/0x34
[ 773.889479] kernel_init+0x0/0x1a4
[ 773.947933] console_on_rootfs+0x0/0xc8
[ 774.006554] 0x3610
[ 774.123373] Reported by Kernel Concurrency Sanitizer on:
[ 774.182980] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 774.244373] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 774.305784] ==================================================================
[ 908.288449] ==================================================================
[ 908.349201] BUG: KCSAN: data-race in __run_timer_base / next_expiry_recalc
[ 908.467956] read to 0xeedc4918 of 4 bytes by interrupt on cpu 0:
[ 908.527641] __run_timer_base+0x4c/0x38c
[ 908.586652] timer_expire_remote+0x48/0x68
[ 908.645495] tmigr_handle_remote_up+0x1f4/0x394
[ 908.704257] __walk_groups+0x74/0xc8
[ 908.762829] tmigr_handle_remote+0x13c/0x198
[ 908.821961] run_timer_softirq+0x94/0x98
[ 908.880952] __do_softirq+0x1dc/0x218
[ 908.939760] do_softirq_own_stack+0x54/0x74
[ 908.998778] do_softirq_own_stack+0x44/0x74
[ 909.057271] __irq_exit_rcu+0x6c/0xbc
[ 909.115657] irq_exit+0x10/0x20
[ 909.173786] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 909.232717] timer_interrupt+0x64/0x178
[ 909.291195] Decrementer_virt+0x108/0x10c
[ 909.349294] default_idle_call+0x38/0x48
[ 909.407348] do_idle+0xfc/0x11c
[ 909.465156] cpu_startup_entry+0x30/0x34
[ 909.523064] kernel_init+0x0/0x1a4
[ 909.580804] console_on_rootfs+0x0/0xc8
[ 909.638593] 0x3610
[ 909.751912] write to 0xeedc4918 of 4 bytes by interrupt on cpu 1:
[ 909.808835] next_expiry_recalc+0xbc/0x15c
[ 909.864998] __run_timer_base+0x278/0x38c
[ 909.920308] run_timer_base+0x5c/0x7c
[ 909.974831] run_timer_softirq+0x34/0x98
[ 910.028542] __do_softirq+0x1dc/0x218
[ 910.081628] do_softirq_own_stack+0x54/0x74
[ 910.134578] do_softirq_own_stack+0x44/0x74
[ 910.186699] __irq_exit_rcu+0x6c/0xbc
[ 910.238904] irq_exit+0x10/0x20
[ 910.290634] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 910.343100] timer_interrupt+0x64/0x178
[ 910.395429] Decrementer_virt+0x108/0x10c
[ 910.447741] default_idle_call+0x38/0x48
[ 910.500014] do_idle+0xfc/0x11c
[ 910.552097] cpu_startup_entry+0x30/0x34
[ 910.604699] start_secondary+0x504/0x854
[ 910.656958] 0x3338
[ 910.759460] Reported by Kernel Concurrency Sanitizer on:
[ 910.811642] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 910.864781] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 910.918205] ==================================================================
[ 948.875808] ==================================================================
[ 948.928873] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 949.036459] read to 0xc29d939c of 4 bytes by task 1584 on cpu 0:
[ 949.091302] interrupt_async_enter_prepare+0x64/0xc4
[ 949.145797] timer_interrupt+0x1c/0x178
[ 949.199947] Decrementer_virt+0x108/0x10c
[ 949.254144] 0x8
[ 949.307879] 0xc51a8020
[ 949.361476] kcsan_setup_watchpoint+0x300/0x4cc
[ 949.415617] raw_copy_to_user+0x74/0xb4
[ 949.469747] _copy_to_iter+0x120/0x694
[ 949.523836] simple_copy_to_iter+0x78/0x80
[ 949.578000] __skb_datagram_iter+0x88/0x334
[ 949.632420] skb_copy_datagram_iter+0x4c/0x78
[ 949.686676] unix_stream_read_actor+0x58/0x8c
[ 949.740203] unix_stream_read_generic+0x808/0xae0
[ 949.792946] unix_stream_recvmsg+0x118/0x11c
[ 949.844851] sock_recvmsg_nosec+0x5c/0x88
[ 949.897131] ____sys_recvmsg+0xc4/0x270
[ 949.948720] ___sys_recvmsg+0x90/0xd4
[ 949.999685] __sys_recvmsg+0xb0/0xf8
[ 950.050220] sys_recvmsg+0x50/0x78
[ 950.100272] system_call_exception+0x15c/0x1c0
[ 950.150591] ret_from_syscall+0x0/0x2c
[ 950.250668] write to 0xc29d939c of 4 bytes by task 1584 on cpu 0:
[ 950.301716] raw_copy_to_user+0x74/0xb4
[ 950.352436] _copy_to_iter+0x120/0x694
[ 950.403091] simple_copy_to_iter+0x78/0x80
[ 950.453773] __skb_datagram_iter+0x88/0x334
[ 950.504795] skb_copy_datagram_iter+0x4c/0x78
[ 950.556085] unix_stream_read_actor+0x58/0x8c
[ 950.607130] unix_stream_read_generic+0x808/0xae0
[ 950.657834] unix_stream_recvmsg+0x118/0x11c
[ 950.708078] sock_recvmsg_nosec+0x5c/0x88
[ 950.758405] ____sys_recvmsg+0xc4/0x270
[ 950.808713] ___sys_recvmsg+0x90/0xd4
[ 950.858949] __sys_recvmsg+0xb0/0xf8
[ 950.909091] sys_recvmsg+0x50/0x78
[ 950.959103] system_call_exception+0x15c/0x1c0
[ 951.009386] ret_from_syscall+0x0/0x2c
[ 951.109902] Reported by Kernel Concurrency Sanitizer on:
[ 951.160864] CPU: 0 PID: 1584 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 951.212548] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 951.264588] ==================================================================
[ 1037.010310] ==================================================================
[ 1037.063153] BUG: KCSAN: data-race in blk_finish_plug / blk_time_get_ns
[ 1037.168081] read to 0xc15b1d30 of 4 bytes by interrupt on cpu 1:
[ 1037.221981] blk_time_get_ns+0x24/0xf4
[ 1037.275976] __blk_mq_end_request+0x58/0xe8
[ 1037.330011] scsi_end_request+0x120/0x2d4
[ 1037.383796] scsi_io_completion+0x290/0x6b4
[ 1037.439234] scsi_finish_command+0x160/0x1a4
[ 1037.494753] scsi_complete+0xf0/0x128
[ 1037.549618] blk_complete_reqs+0xb4/0xd8
[ 1037.603095] blk_done_softirq+0x68/0xa4
[ 1037.656486] __do_softirq+0x1dc/0x218
[ 1037.709877] do_softirq_own_stack+0x54/0x74
[ 1037.763446] do_softirq_own_stack+0x44/0x74
[ 1037.816890] __irq_exit_rcu+0x6c/0xbc
[ 1037.870073] irq_exit+0x10/0x20
[ 1037.922396] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 1037.974802] do_IRQ+0x24/0x2c
[ 1038.026293] HardwareInterrupt_virt+0x108/0x10c
[ 1038.078675] 0x1dffff0
[ 1038.129889] 0x1dffff0
[ 1038.179967] kcsan_setup_watchpoint+0x300/0x4cc
[ 1038.230224] blk_finish_plug+0x48/0x6c
[ 1038.280185] read_pages+0xf0/0x214
[ 1038.329697] page_cache_ra_unbounded+0x120/0x244
[ 1038.379653] do_page_cache_ra+0x90/0xb8
[ 1038.429513] force_page_cache_ra+0x12c/0x130
[ 1038.479826] page_cache_sync_ra+0xc4/0xdc
[ 1038.529986] filemap_get_pages+0x1a4/0x708
[ 1038.580050] filemap_read+0x204/0x4c0
[ 1038.629911] blkdev_read_iter+0x1e8/0x25c
[ 1038.679901] vfs_read+0x29c/0x2f4
[ 1038.729784] ksys_read+0xb8/0x134
[ 1038.779468] sys_read+0x4c/0x74
[ 1038.828948] system_call_exception+0x15c/0x1c0
[ 1038.878919] ret_from_syscall+0x0/0x2c
[ 1038.978089] write to 0xc15b1d30 of 4 bytes by task 1615 on cpu 1:
[ 1039.028773] blk_finish_plug+0x48/0x6c
[ 1039.079459] read_pages+0xf0/0x214
[ 1039.130155] page_cache_ra_unbounded+0x120/0x244
[ 1039.181231] do_page_cache_ra+0x90/0xb8
[ 1039.232200] force_page_cache_ra+0x12c/0x130
[ 1039.283238] page_cache_sync_ra+0xc4/0xdc
[ 1039.334278] filemap_get_pages+0x1a4/0x708
[ 1039.384945] filemap_read+0x204/0x4c0
[ 1039.435002] blkdev_read_iter+0x1e8/0x25c
[ 1039.485191] vfs_read+0x29c/0x2f4
[ 1039.535226] ksys_read+0xb8/0x134
[ 1039.585232] sys_read+0x4c/0x74
[ 1039.634967] system_call_exception+0x15c/0x1c0
[ 1039.685109] ret_from_syscall+0x0/0x2c
[ 1039.785036] Reported by Kernel Concurrency Sanitizer on:
[ 1039.835612] CPU: 1 PID: 1615 Comm: blkid Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1039.887246] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1039.939286] ==================================================================
[ 1051.674902] ==================================================================
[ 1051.728499] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 1051.836119] read to 0xc29db6dc of 4 bytes by task 1541 on cpu 1:
[ 1051.890846] interrupt_async_enter_prepare+0x64/0xc4
[ 1051.945445] timer_interrupt+0x1c/0x178
[ 1051.999296] Decrementer_virt+0x108/0x10c
[ 1052.052489] 0x8
[ 1052.104560] 0xc51a79c0
[ 1052.156840] kcsan_setup_watchpoint+0x300/0x4cc
[ 1052.209000] raw_copy_to_user+0x74/0xb4
[ 1052.260652] _copy_to_iter+0x120/0x694
[ 1052.311927] simple_copy_to_iter+0x78/0x80
[ 1052.362945] __skb_datagram_iter+0x214/0x334
[ 1052.413927] skb_copy_datagram_iter+0x4c/0x78
[ 1052.464757] unix_stream_read_actor+0x58/0x8c
[ 1052.515586] unix_stream_read_generic+0x808/0xae0
[ 1052.566377] unix_stream_recvmsg+0x118/0x11c
[ 1052.617046] sock_recvmsg_nosec+0x5c/0x88
[ 1052.667661] ____sys_recvmsg+0xc4/0x270
[ 1052.718310] ___sys_recvmsg+0x90/0xd4
[ 1052.768927] __sys_recvmsg+0xb0/0xf8
[ 1052.819350] sys_recvmsg+0x50/0x78
[ 1052.870273] system_call_exception+0x15c/0x1c0
[ 1052.921322] ret_from_syscall+0x0/0x2c
[ 1053.022476] write to 0xc29db6dc of 4 bytes by task 1541 on cpu 1:
[ 1053.073773] raw_copy_to_user+0x74/0xb4
[ 1053.124738] _copy_to_iter+0x120/0x694
[ 1053.175625] simple_copy_to_iter+0x78/0x80
[ 1053.226967] __skb_datagram_iter+0x214/0x334
[ 1053.278171] skb_copy_datagram_iter+0x4c/0x78
[ 1053.330087] unix_stream_read_actor+0x58/0x8c
[ 1053.381320] unix_stream_read_generic+0x808/0xae0
[ 1053.432375] unix_stream_recvmsg+0x118/0x11c
[ 1053.483113] sock_recvmsg_nosec+0x5c/0x88
[ 1053.533812] ____sys_recvmsg+0xc4/0x270
[ 1053.584454] ___sys_recvmsg+0x90/0xd4
[ 1053.635043] __sys_recvmsg+0xb0/0xf8
[ 1053.685732] sys_recvmsg+0x50/0x78
[ 1053.736246] system_call_exception+0x15c/0x1c0
[ 1053.787073] ret_from_syscall+0x0/0x2c
[ 1053.888526] Reported by Kernel Concurrency Sanitizer on:
[ 1053.940064] CPU: 1 PID: 1541 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1053.992784] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1054.045899] ==================================================================
[ 1075.301806] ==================================================================
[ 1075.356564] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 1075.466084] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 1075.521666] hrtimer_active+0xb0/0x100
[ 1075.576934] task_tick_fair+0xc8/0xcc
[ 1075.631997] scheduler_tick+0x6c/0xcc
[ 1075.686924] update_process_times+0xc8/0x120
[ 1075.742171] tick_nohz_handler+0x1ac/0x270
[ 1075.797428] __hrtimer_run_queues+0x170/0x1d8
[ 1075.852820] hrtimer_interrupt+0x168/0x350
[ 1075.908457] timer_interrupt+0x108/0x178
[ 1075.964201] Decrementer_virt+0x108/0x10c
[ 1076.019855] percpu_ref_tryget_many.constprop.0+0xf8/0x11c
[ 1076.076096] css_tryget+0x38/0x60
[ 1076.132179] get_mem_cgroup_from_mm+0x138/0x144
[ 1076.188426] __mem_cgroup_charge+0x2c/0x88
[ 1076.244053] folio_prealloc.isra.0+0x84/0xec
[ 1076.299063] handle_mm_fault+0x488/0xed0
[ 1076.353307] ___do_page_fault+0x4d8/0x630
[ 1076.408033] do_page_fault+0x28/0x40
[ 1076.461833] DataAccess_virt+0x124/0x17c
[ 1076.567260] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 1076.620584] __hrtimer_run_queues+0x1cc/0x1d8
[ 1076.673635] hrtimer_interrupt+0x168/0x350
[ 1076.726768] timer_interrupt+0x108/0x178
[ 1076.779810] Decrementer_virt+0x108/0x10c
[ 1076.833162] 0x595
[ 1076.885990] __kernel_unpoison_pages+0xe0/0x1a8
[ 1076.939390] post_alloc_hook+0x8c/0xf0
[ 1076.992752] prep_new_page+0x24/0x5c
[ 1077.045983] get_page_from_freelist+0x564/0x660
[ 1077.099651] __alloc_pages+0x114/0x8dc
[ 1077.153211] folio_prealloc.isra.0+0x44/0xec
[ 1077.206973] handle_mm_fault+0x488/0xed0
[ 1077.260843] ___do_page_fault+0x4d8/0x630
[ 1077.314829] do_page_fault+0x28/0x40
[ 1077.368660] DataAccess_virt+0x124/0x17c
[ 1077.476086] Reported by Kernel Concurrency Sanitizer on:
[ 1077.530829] CPU: 0 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1077.586833] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1077.643130] ==================================================================
[ 1082.516165] pagealloc: memory corruption
[ 1082.613096] fffdfff0: 00 00 00 00 ....
[ 1082.710010] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1082.807840] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1082.905938] Call Trace:
[ 1083.002796] [f2cf5c00] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 1083.103663] [f2cf5c20] [c0be4ee8] dump_stack+0x20/0x34
[ 1083.203141] [f2cf5c30] [c02c47c0] __kernel_unpoison_pages+0x198/0x1a8
[ 1083.304417] [f2cf5c80] [c029b62c] post_alloc_hook+0x8c/0xf0
[ 1083.406281] [f2cf5cb0] [c029b6b4] prep_new_page+0x24/0x5c
[ 1083.508295] [f2cf5cd0] [c029c9dc] get_page_from_freelist+0x564/0x660
[ 1083.610055] [f2cf5d60] [c029dfcc] __alloc_pages+0x114/0x8dc
[ 1083.712330] [f2cf5e20] [c02764f0] folio_prealloc.isra.0+0x44/0xec
[ 1083.817046] [f2cf5e40] [c027be28] handle_mm_fault+0x488/0xed0
[ 1083.919976] [f2cf5ed0] [c00340f4] ___do_page_fault+0x4d8/0x630
[ 1084.024052] [f2cf5f10] [c003446c] do_page_fault+0x28/0x40
[ 1084.126551] [f2cf5f30] [c000433c] DataAccess_virt+0x124/0x17c
[ 1084.229750] --- interrupt: 300 at 0xb13008
[ 1084.332833] NIP: 00b13008 LR: 00b12fe8 CTR: 00000000
[ 1084.436540] REGS: f2cf5f40 TRAP: 0300 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 1084.538670] MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
[ 1084.643896] DAR: 8fa70010 DSISR: 42000000
GPR00: 00b12fe8 afd69f00 a7fed700 6ba98010 3c500000 20884462 00000003 00a301e4
GPR08: 23fd9000 23fd8000 00000000 4088429a 20882462 00b2ff68 00000000 40882462
GPR16: ffffffff 00000000 00000002 00000000 00000002 00000000 00b30018 00000001
GPR24: ffffffff ffffffff 3c500000 0000005a 6ba98010 00000000 00b37cd0 00001000
[ 1085.165724] NIP [00b13008] 0xb13008
[ 1085.267098] LR [00b12fe8] 0xb12fe8
[ 1085.368411] --- interrupt: 300
[ 1085.470618] page: refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x31069
[ 1085.577511] flags: 0x80000000(zone=2)
[ 1085.682232] page_type: 0xffffffff()
[ 1085.788198] raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
[ 1085.894169] raw: 00000000
[ 1085.998995] page dumped because: pagealloc: corrupted page details
[ 1086.105882] page_owner info is not present (never set?)
[ 1103.172608] ==================================================================
[ 1103.237300] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1103.365582] read (marked) to 0xefa6fa40 of 4 bytes by task 1619 on cpu 0:
[ 1103.430899] lru_gen_look_around+0x320/0x634
[ 1103.495970] folio_referenced_one+0x32c/0x404
[ 1103.561131] rmap_walk_anon+0x1c4/0x24c
[ 1103.626212] rmap_walk+0x70/0x7c
[ 1103.690974] folio_referenced+0x194/0x1ec
[ 1103.755894] shrink_folio_list+0x6a8/0xd28
[ 1103.820531] evict_folios+0xcc0/0x1204
[ 1103.884712] try_to_shrink_lruvec+0x214/0x2f0
[ 1103.949008] shrink_one+0x104/0x1e8
[ 1104.013172] shrink_node+0x314/0xc3c
[ 1104.077234] do_try_to_free_pages+0x500/0x7e4
[ 1104.141517] try_to_free_pages+0x150/0x18c
[ 1104.205712] __alloc_pages+0x460/0x8dc
[ 1104.269801] folio_prealloc.isra.0+0x44/0xec
[ 1104.334098] handle_mm_fault+0x488/0xed0
[ 1104.398190] ___do_page_fault+0x4d8/0x630
[ 1104.462229] do_page_fault+0x28/0x40
[ 1104.526125] DataAccess_virt+0x124/0x17c
[ 1104.653866] write to 0xefa6fa40 of 4 bytes by task 40 on cpu 1:
[ 1104.718744] list_add+0x58/0x94
[ 1104.783166] evict_folios+0xb04/0x1204
[ 1104.847662] try_to_shrink_lruvec+0x214/0x2f0
[ 1104.912124] shrink_one+0x104/0x1e8
[ 1104.975841] shrink_node+0x314/0xc3c
[ 1105.038693] balance_pgdat+0x498/0x914
[ 1105.100896] kswapd+0x304/0x398
[ 1105.162235] kthread+0x174/0x178
[ 1105.223310] start_kernel_thread+0x10/0x14
[ 1105.343563] Reported by Kernel Concurrency Sanitizer on:
[ 1105.403874] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1105.464743] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1105.526020] ==================================================================
[ 1107.514623] ==================================================================
[ 1107.576537] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1107.699840] read (marked) to 0xef8320ec of 4 bytes by task 40 on cpu 1:
[ 1107.762376] lru_gen_look_around+0x320/0x634
[ 1107.824312] folio_referenced_one+0x32c/0x404
[ 1107.886238] rmap_walk_anon+0x1c4/0x24c
[ 1107.947942] rmap_walk+0x70/0x7c
[ 1108.009135] folio_referenced+0x194/0x1ec
[ 1108.070477] shrink_folio_list+0x6a8/0xd28
[ 1108.131506] evict_folios+0xcc0/0x1204
[ 1108.192277] try_to_shrink_lruvec+0x214/0x2f0
[ 1108.252645] shrink_one+0x104/0x1e8
[ 1108.312276] shrink_node+0x314/0xc3c
[ 1108.371237] balance_pgdat+0x498/0x914
[ 1108.429451] kswapd+0x304/0x398
[ 1108.487098] kthread+0x174/0x178
[ 1108.544273] start_kernel_thread+0x10/0x14
[ 1108.658034] write to 0xef8320ec of 4 bytes by task 1619 on cpu 0:
[ 1108.715833] list_add+0x58/0x94
[ 1108.773051] evict_folios+0xb04/0x1204
[ 1108.829735] try_to_shrink_lruvec+0x214/0x2f0
[ 1108.886174] shrink_one+0x104/0x1e8
[ 1108.942365] shrink_node+0x314/0xc3c
[ 1108.997602] do_try_to_free_pages+0x500/0x7e4
[ 1109.052504] try_to_free_pages+0x150/0x18c
[ 1109.107028] __alloc_pages+0x460/0x8dc
[ 1109.161106] folio_prealloc.isra.0+0x44/0xec
[ 1109.214621] handle_mm_fault+0x488/0xed0
[ 1109.267410] ___do_page_fault+0x4d8/0x630
[ 1109.319824] do_page_fault+0x28/0x40
[ 1109.371670] DataAccess_virt+0x124/0x17c
[ 1109.474176] Reported by Kernel Concurrency Sanitizer on:
[ 1109.526294] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1109.579602] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1109.633233] ==================================================================
[ 1112.175937] ==================================================================
[ 1112.230216] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1112.338269] read (marked) to 0xef0fa554 of 4 bytes by task 1620 on cpu 1:
[ 1112.393682] lru_gen_look_around+0x320/0x634
[ 1112.448808] folio_referenced_one+0x32c/0x404
[ 1112.503987] rmap_walk_anon+0x1c4/0x24c
[ 1112.559086] rmap_walk+0x70/0x7c
[ 1112.613757] folio_referenced+0x194/0x1ec
[ 1112.668584] shrink_folio_list+0x6a8/0xd28
[ 1112.723455] evict_folios+0xcc0/0x1204
[ 1112.778287] try_to_shrink_lruvec+0x214/0x2f0
[ 1112.833316] shrink_one+0x104/0x1e8
[ 1112.888249] shrink_node+0x314/0xc3c
[ 1112.942681] do_try_to_free_pages+0x500/0x7e4
[ 1112.997037] try_to_free_pages+0x150/0x18c
[ 1113.051448] __alloc_pages+0x460/0x8dc
[ 1113.105779] folio_prealloc.isra.0+0x44/0xec
[ 1113.160200] handle_mm_fault+0x488/0xed0
[ 1113.214729] ___do_page_fault+0x4d8/0x630
[ 1113.269341] do_page_fault+0x28/0x40
[ 1113.323895] DataAccess_virt+0x124/0x17c
[ 1113.433274] write to 0xef0fa554 of 4 bytes by task 40 on cpu 0:
[ 1113.488967] list_add+0x58/0x94
[ 1113.543902] evict_folios+0xb04/0x1204
[ 1113.598280] try_to_shrink_lruvec+0x214/0x2f0
[ 1113.652213] shrink_one+0x104/0x1e8
[ 1113.705362] shrink_node+0x314/0xc3c
[ 1113.758812] balance_pgdat+0x498/0x914
[ 1113.811578] kswapd+0x304/0x398
[ 1113.863739] kthread+0x174/0x178
[ 1113.915313] start_kernel_thread+0x10/0x14
[ 1114.017462] Reported by Kernel Concurrency Sanitizer on:
[ 1114.069359] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1114.122557] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1114.176028] ==================================================================
[ 1114.925709] ==================================================================
[ 1114.980036] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 1115.089080] write to 0xeedbbd40 of 4 bytes by task 1620 on cpu 1:
[ 1115.144741] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 1115.200501] cgroup_rstat_flush_locked+0x528/0x538
[ 1115.256431] cgroup_rstat_flush+0x38/0x5c
[ 1115.312176] do_flush_stats+0x78/0x9c
[ 1115.367879] mem_cgroup_flush_stats+0x7c/0x80
[ 1115.423757] zswap_shrinker_count+0xb8/0x150
[ 1115.479357] do_shrink_slab+0x7c/0x540
[ 1115.534529] shrink_slab+0x1f0/0x384
[ 1115.589688] shrink_one+0x140/0x1e8
[ 1115.644520] shrink_node+0x314/0xc3c
[ 1115.699123] do_try_to_free_pages+0x500/0x7e4
[ 1115.754139] try_to_free_pages+0x150/0x18c
[ 1115.809094] __alloc_pages+0x460/0x8dc
[ 1115.863928] folio_prealloc.isra.0+0x44/0xec
[ 1115.918893] handle_mm_fault+0x488/0xed0
[ 1115.973762] ___do_page_fault+0x4d8/0x630
[ 1116.028624] do_page_fault+0x28/0x40
[ 1116.083430] DataAccess_virt+0x124/0x17c
[ 1116.192920] write to 0xeedbbd40 of 4 bytes by task 40 on cpu 0:
[ 1116.248673] memcg_rstat_updated+0xd8/0x15c
[ 1116.304041] __mod_memcg_lruvec_state+0x118/0x154
[ 1116.358966] __mod_lruvec_state+0x58/0x78
[ 1116.413060] lru_gen_update_size+0x130/0x240
[ 1116.466608] lru_gen_add_folio+0x198/0x288
[ 1116.520444] move_folios_to_lru+0x29c/0x350
[ 1116.573667] evict_folios+0xd20/0x1204
[ 1116.626394] try_to_shrink_lruvec+0x214/0x2f0
[ 1116.678850] shrink_one+0x104/0x1e8
[ 1116.730711] shrink_node+0x314/0xc3c
[ 1116.782307] balance_pgdat+0x498/0x914
[ 1116.833820] kswapd+0x304/0x398
[ 1116.885406] kthread+0x174/0x178
[ 1116.936809] start_kernel_thread+0x10/0x14
[ 1117.039674] value changed: 0x00000018 -> 0x00000000
[ 1117.142997] Reported by Kernel Concurrency Sanitizer on:
[ 1117.195578] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1117.249142] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1117.302991] ==================================================================
[ 1118.378999] ==================================================================
[ 1118.433585] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 1118.542375] read (marked) to 0xef2e6d64 of 4 bytes by task 1620 on cpu 1:
[ 1118.598040] lru_gen_look_around+0x320/0x634
[ 1118.653916] folio_referenced_one+0x32c/0x404
[ 1118.709922] rmap_walk_anon+0x1c4/0x24c
[ 1118.765527] rmap_walk+0x70/0x7c
[ 1118.820441] folio_referenced+0x194/0x1ec
[ 1118.875594] shrink_folio_list+0x6a8/0xd28
[ 1118.930737] evict_folios+0xcc0/0x1204
[ 1118.985757] try_to_shrink_lruvec+0x214/0x2f0
[ 1119.041134] shrink_one+0x104/0x1e8
[ 1119.096511] shrink_node+0x314/0xc3c
[ 1119.151747] do_try_to_free_pages+0x500/0x7e4
[ 1119.207404] try_to_free_pages+0x150/0x18c
[ 1119.263057] __alloc_pages+0x460/0x8dc
[ 1119.318628] folio_prealloc.isra.0+0x44/0xec
[ 1119.374089] handle_mm_fault+0x488/0xed0
[ 1119.428844] ___do_page_fault+0x4d8/0x630
[ 1119.482993] do_page_fault+0x28/0x40
[ 1119.536380] DataAccess_virt+0x124/0x17c
[ 1119.642844] write to 0xef2e6d64 of 4 bytes by task 40 on cpu 0:
[ 1119.695760] list_del+0x2c/0x5c
[ 1119.748250] lru_gen_del_folio+0x110/0x140
[ 1119.800516] evict_folios+0xaf8/0x1204
[ 1119.852574] try_to_shrink_lruvec+0x214/0x2f0
[ 1119.904997] shrink_one+0x104/0x1e8
[ 1119.957279] shrink_node+0x314/0xc3c
[ 1120.009316] balance_pgdat+0x498/0x914
[ 1120.061307] kswapd+0x304/0x398
[ 1120.113069] kthread+0x174/0x178
[ 1120.164720] start_kernel_thread+0x10/0x14
[ 1120.268265] Reported by Kernel Concurrency Sanitizer on:
[ 1120.320735] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1120.374216] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1120.428137] ==================================================================
[ 1122.332197] ==================================================================
[ 1122.387140] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1122.496688] read (marked) to 0xef4c94b8 of 4 bytes by task 40 on cpu 0:
[ 1122.552654] lru_gen_look_around+0x320/0x634
[ 1122.608217] folio_referenced_one+0x32c/0x404
[ 1122.663598] rmap_walk_anon+0x1c4/0x24c
[ 1122.718522] rmap_walk+0x70/0x7c
[ 1122.772986] folio_referenced+0x194/0x1ec
[ 1122.827581] shrink_folio_list+0x6a8/0xd28
[ 1122.882182] evict_folios+0xcc0/0x1204
[ 1122.936818] try_to_shrink_lruvec+0x214/0x2f0
[ 1122.991642] shrink_one+0x104/0x1e8
[ 1123.046317] shrink_node+0x314/0xc3c
[ 1123.100786] balance_pgdat+0x498/0x914
[ 1123.155167] kswapd+0x304/0x398
[ 1123.209542] kthread+0x174/0x178
[ 1123.263856] start_kernel_thread+0x10/0x14
[ 1123.372926] write to 0xef4c94b8 of 4 bytes by task 1620 on cpu 1:
[ 1123.428774] list_add+0x58/0x94
[ 1123.483944] evict_folios+0xb04/0x1204
[ 1123.539181] try_to_shrink_lruvec+0x214/0x2f0
[ 1123.594297] shrink_one+0x104/0x1e8
[ 1123.649039] shrink_node+0x314/0xc3c
[ 1123.702982] do_try_to_free_pages+0x500/0x7e4
[ 1123.756502] try_to_free_pages+0x150/0x18c
[ 1123.809341] __alloc_pages+0x460/0x8dc
[ 1123.862617] folio_prealloc.isra.0+0x44/0xec
[ 1123.915388] handle_mm_fault+0x488/0xed0
[ 1123.967668] ___do_page_fault+0x4d8/0x630
[ 1124.019509] do_page_fault+0x28/0x40
[ 1124.070795] DataAccess_virt+0x124/0x17c
[ 1124.173021] Reported by Kernel Concurrency Sanitizer on:
[ 1124.225247] CPU: 1 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1124.278439] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1124.332099] ==================================================================
[ 1127.208932] ==================================================================
[ 1127.263097] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 1127.371973] write to 0xeedd8d40 of 4 bytes by task 1619 on cpu 0:
[ 1127.427413] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 1127.482791] cgroup_rstat_flush_locked+0x528/0x538
[ 1127.538283] cgroup_rstat_flush+0x38/0x5c
[ 1127.593429] do_flush_stats+0x78/0x9c
[ 1127.648480] mem_cgroup_flush_stats+0x7c/0x80
[ 1127.703760] zswap_shrinker_count+0xb8/0x150
[ 1127.759088] do_shrink_slab+0x7c/0x540
[ 1127.814363] shrink_slab+0x1f0/0x384
[ 1127.869577] shrink_one+0x140/0x1e8
[ 1127.924251] shrink_node+0x314/0xc3c
[ 1127.978437] do_try_to_free_pages+0x500/0x7e4
[ 1128.032843] try_to_free_pages+0x150/0x18c
[ 1128.087271] __alloc_pages+0x460/0x8dc
[ 1128.141597] folio_prealloc.isra.0+0x44/0xec
[ 1128.195997] handle_mm_fault+0x488/0xed0
[ 1128.250490] ___do_page_fault+0x4d8/0x630
[ 1128.305050] do_page_fault+0x28/0x40
[ 1128.359559] DataAccess_virt+0x124/0x17c
[ 1128.468744] write to 0xeedd8d40 of 4 bytes by task 40 on cpu 1:
[ 1128.524270] memcg_rstat_updated+0xd8/0x15c
[ 1128.579455] __mod_memcg_lruvec_state+0x118/0x154
[ 1128.634197] __mod_lruvec_state+0x58/0x78
[ 1128.688182] lru_gen_update_size+0x130/0x240
[ 1128.741579] lru_gen_add_folio+0x198/0x288
[ 1128.795328] move_folios_to_lru+0x29c/0x350
[ 1128.848471] evict_folios+0xd20/0x1204
[ 1128.901122] try_to_shrink_lruvec+0x214/0x2f0
[ 1128.953550] shrink_one+0x104/0x1e8
[ 1129.005393] shrink_node+0x314/0xc3c
[ 1129.057004] balance_pgdat+0x498/0x914
[ 1129.108555] kswapd+0x304/0x398
[ 1129.160143] kthread+0x174/0x178
[ 1129.211721] start_kernel_thread+0x10/0x14
[ 1129.314534] value changed: 0x0000000d -> 0x00000000
[ 1129.417903] Reported by Kernel Concurrency Sanitizer on:
[ 1129.470489] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1129.524180] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1129.578250] ==================================================================
[ 1132.350890] kworker/u9:1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
[ 1132.439055] CPU: 1 PID: 39 Comm: kworker/u9:1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1132.530157] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1132.620439] Workqueue: events_freezable_pwr_efficient disk_events_workfn (events_freezable_pwr_ef)
[ 1132.712862] Call Trace:
[ 1132.805472] [f100dc50] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 1132.902185] [f100dc70] [c0be4ee8] dump_stack+0x20/0x34
[ 1132.997462] [f100dc80] [c029de40] warn_alloc+0x100/0x178
[ 1133.091658] [f100dce0] [c029e234] __alloc_pages+0x37c/0x8dc
[ 1133.187093] [f100dda0] [c029e884] __page_frag_alloc_align+0x74/0x194
[ 1133.280854] [f100ddd0] [c09bafc0] __netdev_alloc_skb+0x108/0x234
[ 1133.375951] [f100de00] [bef1a5a8] setup_rx_descbuffer+0x5c/0x258 [b43legacy]
[ 1133.471342] [f100de40] [bef1c43c] b43legacy_dma_rx+0x3e4/0x488 [b43legacy]
[ 1133.566247] [f100deb0] [bef0b034] b43legacy_interrupt_tasklet+0x7bc/0x7f0 [b43legacy]
[ 1133.661223] [f100df50] [c006f8c8] tasklet_action_common.isra.0+0xb0/0xe8
[ 1133.756602] [f100df80] [c0c1fc8c] __do_softirq+0x1dc/0x218
[ 1133.853423] [f100dff0] [c00091d8] do_softirq_own_stack+0x54/0x74
[ 1133.950509] [f10dd760] [c00091c8] do_softirq_own_stack+0x44/0x74
[ 1134.045886] [f10dd780] [c006f114] __irq_exit_rcu+0x6c/0xbc
[ 1134.141538] [f10dd790] [c006f588] irq_exit+0x10/0x20
[ 1134.235241] [f10dd7a0] [c0008b58] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 1134.328250] [f10dd7b0] [c000917c] do_IRQ+0x24/0x2c
[ 1134.421852] [f10dd7d0] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
[ 1134.518090] --- interrupt: 500 at _raw_spin_unlock_irq+0x30/0x48
[ 1134.611842] NIP: c0c1f49c LR: c0c1f490 CTR: 00000000
[ 1134.705301] REGS: f10dd7e0 TRAP: 0500 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 1134.800041] MSR: 00209032 <EE,ME,IR,DR,RI> CR: 84882802 XER: 00000000
[ 1134.895506]
GPR00: c0c1f490 f10dd8a0 c1c28020 c49d6828 00016828 0001682b 00000003 c12399ec
GPR08: 00000000 00009032 0000001d f10dd860 24882802 00000000 00000001 00000000
GPR16: 00000800 00000800 00000000 00000000 00000002 00000004 00000004 00000000
GPR24: c49d6850 00000004 00000000 00000007 00000001 c49d6850 f10ddbb4 c49d6828
[ 1135.378017] NIP [c0c1f49c] _raw_spin_unlock_irq+0x30/0x48
[ 1135.473742] LR [c0c1f490] _raw_spin_unlock_irq+0x24/0x48
[ 1135.570964] --- interrupt: 500
[ 1135.667558] [f10dd8c0] [c0246150] evict_folios+0xc74/0x1204
[ 1135.766055] [f10dd9d0] [c02468f4] try_to_shrink_lruvec+0x214/0x2f0
[ 1135.865435] [f10dda50] [c0246ad4] shrink_one+0x104/0x1e8
[ 1135.964504] [f10dda90] [c0248eb8] shrink_node+0x314/0xc3c
[ 1136.063967] [f10ddb20] [c024a98c] do_try_to_free_pages+0x500/0x7e4
[ 1136.164791] [f10ddba0] [c024b110] try_to_free_pages+0x150/0x18c
[ 1136.265414] [f10ddc20] [c029e318] __alloc_pages+0x460/0x8dc
[ 1136.364886] [f10ddce0] [c06088ac] alloc_pages.constprop.0+0x30/0x50
[ 1136.465171] [f10ddd00] [c0608ad4] blk_rq_map_kern+0x208/0x404
[ 1136.564679] [f10ddd50] [c089c048] scsi_execute_cmd+0x350/0x534
[ 1136.663635] [f10dddc0] [c08b77cc] sr_check_events+0x108/0x4bc
[ 1136.764635] [f10dde40] [c08fb620] cdrom_update_events+0x54/0xb8
[ 1136.865074] [f10dde60] [c08fb6b4] cdrom_check_events+0x30/0x70
[ 1136.965069] [f10dde80] [c08b7c44] sr_block_check_events+0x60/0x90
[ 1137.064917] [f10ddea0] [c0630444] disk_check_events+0x68/0x168
[ 1137.165414] [f10ddee0] [c063056c] disk_events_workfn+0x28/0x40
[ 1137.267952] [f10ddf00] [c008df0c] process_scheduled_works+0x350/0x494
[ 1137.368522] [f10ddf70] [c008ee2c] worker_thread+0x2a4/0x300
[ 1137.469521] [f10ddfc0] [c009b87c] kthread+0x174/0x178
[ 1137.569313] [f10ddff0] [c001c304] start_kernel_thread+0x10/0x14
[ 1137.670144] Mem-Info:
[ 1137.769084] active_anon:292700 inactive_anon:181968 isolated_anon:0
active_file:6404 inactive_file:5560 isolated_file:0
unevictable:0 dirty:11 writeback:0
slab_reclaimable:1183 slab_unreclaimable:6185
mapped:7898 shmem:133 pagetables:675
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:1193 free_pcp:778 free_cma:0
[ 1138.591873] Node 0 active_anon:1170800kB inactive_anon:727872kB active_file:25616kB inactive_file:22240kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:31592kB dirty:44kB writeback:0kB shmem:532kB writeback_tmp:0kB kernel_stack:952kB pagetables:2700kB sec_pagetables:0kB all_unreclaimable? no
[ 1138.817095] DMA free:0kB boost:7564kB min:10928kB low:11768kB high:12608kB reserved_highatomic:0KB active_anon:568836kB inactive_anon:92340kB active_file:12kB inactive_file:1248kB unevictable:0kB writepending:40kB present:786432kB managed:709428kB mlocked:0kB bounce:0kB free_pcp:3112kB local_pcp:1844kB free_cma:0kB
[ 1139.054054] lowmem_reserve[]: 0 0 1280 1280
[ 1139.168685] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
[ 1139.288155] 39962 total pagecache pages
[ 1139.403030] 27865 pages in swap cache
[ 1139.518121] Free swap = 8240252kB
[ 1139.632092] Total swap = 8388604kB
[ 1139.745755] 524288 pages RAM
[ 1139.860425] 327680 pages HighMem/MovableOnly
[ 1139.972892] 19251 pages reserved
[ 1140.086052] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1145.532381] ==================================================================
[ 1145.608894] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1145.760471] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1145.836461] zswap_update_total_size+0x58/0xe8
[ 1145.912507] zswap_store+0x5a8/0xa18
[ 1145.989718] swap_writepage+0x4c/0xe8
[ 1146.065657] pageout+0x1dc/0x304
[ 1146.141299] shrink_folio_list+0xa70/0xd28
[ 1146.217154] evict_folios+0xcc0/0x1204
[ 1146.292889] try_to_shrink_lruvec+0x214/0x2f0
[ 1146.369041] shrink_one+0x104/0x1e8
[ 1146.446060] shrink_node+0x314/0xc3c
[ 1146.520298] balance_pgdat+0x498/0x914
[ 1146.594835] kswapd+0x304/0x398
[ 1146.667816] kthread+0x174/0x178
[ 1146.740277] start_kernel_thread+0x10/0x14
[ 1146.883255] read to 0xc121b328 of 8 bytes by task 1620 on cpu 0:
[ 1146.954655] zswap_store+0x118/0xa18
[ 1147.026298] swap_writepage+0x4c/0xe8
[ 1147.098668] pageout+0x1dc/0x304
[ 1147.169358] shrink_folio_list+0xa70/0xd28
[ 1147.240046] evict_folios+0xcc0/0x1204
[ 1147.310128] try_to_shrink_lruvec+0x214/0x2f0
[ 1147.380323] shrink_one+0x104/0x1e8
[ 1147.449989] shrink_node+0x314/0xc3c
[ 1147.519311] do_try_to_free_pages+0x500/0x7e4
[ 1147.588985] try_to_free_pages+0x150/0x18c
[ 1147.658439] __alloc_pages+0x460/0x8dc
[ 1147.727688] folio_prealloc.isra.0+0x44/0xec
[ 1147.796963] handle_mm_fault+0x488/0xed0
[ 1147.866127] ___do_page_fault+0x4d8/0x630
[ 1147.935298] do_page_fault+0x28/0x40
[ 1148.003939] DataAccess_virt+0x124/0x17c
[ 1148.140405] Reported by Kernel Concurrency Sanitizer on:
[ 1148.209378] CPU: 0 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1148.279898] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1148.350632] ==================================================================
[ 1153.340372] ==================================================================
[ 1153.412514] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1153.554905] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1153.626481] zswap_update_total_size+0x58/0xe8
[ 1153.697496] zswap_store+0x5a8/0xa18
[ 1153.768192] swap_writepage+0x4c/0xe8
[ 1153.839021] pageout+0x1dc/0x304
[ 1153.910909] shrink_folio_list+0xa70/0xd28
[ 1153.980463] evict_folios+0xcc0/0x1204
[ 1154.050937] try_to_shrink_lruvec+0x214/0x2f0
[ 1154.120486] shrink_one+0x104/0x1e8
[ 1154.191056] shrink_node+0x314/0xc3c
[ 1154.260876] balance_pgdat+0x498/0x914
[ 1154.327067] kswapd+0x304/0x398
[ 1154.389843] kthread+0x174/0x178
[ 1154.448891] start_kernel_thread+0x10/0x14
[ 1154.558693] read to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1154.613044] zswap_store+0x118/0xa18
[ 1154.666450] swap_writepage+0x4c/0xe8
[ 1154.719823] pageout+0x1dc/0x304
[ 1154.773083] shrink_folio_list+0xa70/0xd28
[ 1154.826726] evict_folios+0xcc0/0x1204
[ 1154.880407] try_to_shrink_lruvec+0x214/0x2f0
[ 1154.934376] shrink_one+0x104/0x1e8
[ 1154.988131] shrink_node+0x314/0xc3c
[ 1155.041052] do_try_to_free_pages+0x500/0x7e4
[ 1155.093526] try_to_free_pages+0x150/0x18c
[ 1155.145467] __alloc_pages+0x460/0x8dc
[ 1155.197157] folio_prealloc.isra.0+0x44/0xec
[ 1155.248720] handle_mm_fault+0x488/0xed0
[ 1155.300028] ___do_page_fault+0x4d8/0x630
[ 1155.351434] do_page_fault+0x28/0x40
[ 1155.402778] DataAccess_virt+0x124/0x17c
[ 1155.504632] Reported by Kernel Concurrency Sanitizer on:
[ 1155.556251] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1155.608663] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1155.661629] ==================================================================
[ 1159.860944] ==================================================================
[ 1159.914891] BUG: KCSAN: data-race in __mod_memcg_lruvec_state / mem_cgroup_css_rstat_flush
[ 1160.023991] read (marked) to 0xeedd8f80 of 4 bytes by task 1619 on cpu 0:
[ 1160.079774] mem_cgroup_css_rstat_flush+0x394/0x518
[ 1160.135661] cgroup_rstat_flush_locked+0x528/0x538
[ 1160.191359] cgroup_rstat_flush+0x38/0x5c
[ 1160.246745] do_flush_stats+0x78/0x9c
[ 1160.302181] mem_cgroup_flush_stats+0x7c/0x80
[ 1160.357857] zswap_shrinker_count+0xb8/0x150
[ 1160.413527] do_shrink_slab+0x7c/0x540
[ 1160.469078] shrink_slab+0x1f0/0x384
[ 1160.524481] shrink_one+0x140/0x1e8
[ 1160.579854] shrink_node+0x314/0xc3c
[ 1160.634981] do_try_to_free_pages+0x500/0x7e4
[ 1160.690290] try_to_free_pages+0x150/0x18c
[ 1160.745600] __alloc_pages+0x460/0x8dc
[ 1160.800804] __read_swap_cache_async+0xd0/0x24c
[ 1160.856176] swap_cluster_readahead+0x2cc/0x338
[ 1160.911816] swapin_readahead+0x430/0x438
[ 1160.967167] do_swap_page+0x1e0/0x9bc
[ 1161.022385] handle_mm_fault+0xecc/0xed0
[ 1161.077696] ___do_page_fault+0x4d8/0x630
[ 1161.132806] do_page_fault+0x28/0x40
[ 1161.187151] DataAccess_virt+0x124/0x17c
[ 1161.293119] write to 0xeedd8f80 of 4 bytes by task 40 on cpu 1:
[ 1161.347088] __mod_memcg_lruvec_state+0xdc/0x154
[ 1161.400803] __mod_lruvec_state+0x58/0x78
[ 1161.453851] lru_gen_update_size+0x130/0x240
[ 1161.506703] lru_gen_del_folio+0x104/0x140
[ 1161.559074] evict_folios+0xaf8/0x1204
[ 1161.611409] try_to_shrink_lruvec+0x214/0x2f0
[ 1161.664014] shrink_one+0x104/0x1e8
[ 1161.716690] shrink_node+0x314/0xc3c
[ 1161.769028] balance_pgdat+0x498/0x914
[ 1161.821319] kswapd+0x304/0x398
[ 1161.873340] kthread+0x174/0x178
[ 1161.925118] start_kernel_thread+0x10/0x14
[ 1162.028727] Reported by Kernel Concurrency Sanitizer on:
[ 1162.081278] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1162.135074] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1162.189178] ==================================================================
[ 1167.537551] ==================================================================
[ 1167.592244] BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
[ 1167.702971] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1167.758691] zswap_update_total_size+0x58/0xe8
[ 1167.815688] zswap_entry_free+0xdc/0x1c0
[ 1167.872100] zswap_load+0x190/0x19c
[ 1167.927754] swap_read_folio+0xbc/0x450
[ 1167.984430] swap_cluster_readahead+0x2f8/0x338
[ 1168.040390] swapin_readahead+0x430/0x438
[ 1168.097280] do_swap_page+0x1e0/0x9bc
[ 1168.153152] handle_mm_fault+0xecc/0xed0
[ 1168.210362] ___do_page_fault+0x4d8/0x630
[ 1168.266601] do_page_fault+0x28/0x40
[ 1168.322623] DataAccess_virt+0x124/0x17c
[ 1168.434517] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1168.491480] zswap_update_total_size+0x58/0xe8
[ 1168.547866] zswap_store+0x5a8/0xa18
[ 1168.604934] swap_writepage+0x4c/0xe8
[ 1168.660335] pageout+0x1dc/0x304
[ 1168.714767] shrink_folio_list+0xa70/0xd28
[ 1168.768845] evict_folios+0xcc0/0x1204
[ 1168.823468] try_to_shrink_lruvec+0x214/0x2f0
[ 1168.878212] shrink_one+0x104/0x1e8
[ 1168.931092] shrink_node+0x314/0xc3c
[ 1168.984636] balance_pgdat+0x498/0x914
[ 1169.036606] kswapd+0x304/0x398
[ 1169.087855] kthread+0x174/0x178
[ 1169.139562] start_kernel_thread+0x10/0x14
[ 1169.242777] Reported by Kernel Concurrency Sanitizer on:
[ 1169.294617] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1169.348458] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1169.401904] ==================================================================
[ 1183.009768] ==================================================================
[ 1183.064956] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1183.174114] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1183.229430] zswap_store+0x118/0xa18
[ 1183.284521] swap_writepage+0x4c/0xe8
[ 1183.339893] pageout+0x1dc/0x304
[ 1183.395281] shrink_folio_list+0xa70/0xd28
[ 1183.450670] evict_folios+0xcc0/0x1204
[ 1183.506068] try_to_shrink_lruvec+0x214/0x2f0
[ 1183.562182] shrink_one+0x104/0x1e8
[ 1183.617580] shrink_node+0x314/0xc3c
[ 1183.673440] balance_pgdat+0x498/0x914
[ 1183.730115] kswapd+0x304/0x398
[ 1183.784757] kthread+0x174/0x178
[ 1183.839371] start_kernel_thread+0x10/0x14
[ 1183.947992] write to 0xc121b328 of 8 bytes by task 1619 on cpu 1:
[ 1184.002593] zswap_update_total_size+0x58/0xe8
[ 1184.058037] zswap_entry_free+0xdc/0x1c0
[ 1184.113370] zswap_load+0x190/0x19c
[ 1184.167695] swap_read_folio+0xbc/0x450
[ 1184.223285] swap_cluster_readahead+0x2f8/0x338
[ 1184.278473] swapin_readahead+0x430/0x438
[ 1184.333386] do_swap_page+0x1e0/0x9bc
[ 1184.388168] handle_mm_fault+0xecc/0xed0
[ 1184.443913] ___do_page_fault+0x4d8/0x630
[ 1184.499751] do_page_fault+0x28/0x40
[ 1184.554853] DataAccess_virt+0x124/0x17c
[ 1184.663890] Reported by Kernel Concurrency Sanitizer on:
[ 1184.717341] CPU: 1 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1184.772860] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1184.827366] ==================================================================
[ 1190.455160] ==================================================================
[ 1190.509181] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1190.616279] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1190.671318] zswap_update_total_size+0x58/0xe8
[ 1190.726030] zswap_entry_free+0xdc/0x1c0
[ 1190.781260] zswap_load+0x190/0x19c
[ 1190.835946] swap_read_folio+0xbc/0x450
[ 1190.890448] swap_cluster_readahead+0x2f8/0x338
[ 1190.945200] swapin_readahead+0x430/0x438
[ 1191.000452] do_swap_page+0x1e0/0x9bc
[ 1191.055327] handle_mm_fault+0xecc/0xed0
[ 1191.110193] ___do_page_fault+0x4d8/0x630
[ 1191.166183] do_page_fault+0x28/0x40
[ 1191.220277] DataAccess_virt+0x124/0x17c
[ 1191.328296] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1191.383248] zswap_store+0x118/0xa18
[ 1191.439465] swap_writepage+0x4c/0xe8
[ 1191.493796] pageout+0x1dc/0x304
[ 1191.548296] shrink_folio_list+0xa70/0xd28
[ 1191.603645] evict_folios+0xcc0/0x1204
[ 1191.658098] try_to_shrink_lruvec+0x214/0x2f0
[ 1191.712976] shrink_one+0x104/0x1e8
[ 1191.768774] shrink_node+0x314/0xc3c
[ 1191.823924] balance_pgdat+0x498/0x914
[ 1191.878609] kswapd+0x304/0x398
[ 1191.933283] kthread+0x174/0x178
[ 1191.988300] start_kernel_thread+0x10/0x14
[ 1192.097058] Reported by Kernel Concurrency Sanitizer on:
[ 1192.150417] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1192.203938] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1192.258910] ==================================================================
[ 1203.342040] ==================================================================
[ 1203.396067] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1203.503547] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1203.557855] zswap_store+0x118/0xa18
[ 1203.612576] swap_writepage+0x4c/0xe8
[ 1203.666931] pageout+0x1dc/0x304
[ 1203.721970] shrink_folio_list+0xa70/0xd28
[ 1203.776637] evict_folios+0xcc0/0x1204
[ 1203.831039] try_to_shrink_lruvec+0x214/0x2f0
[ 1203.886009] shrink_one+0x104/0x1e8
[ 1203.940864] shrink_node+0x314/0xc3c
[ 1203.996775] balance_pgdat+0x498/0x914
[ 1204.053002] kswapd+0x304/0x398
[ 1204.107500] kthread+0x174/0x178
[ 1204.162461] start_kernel_thread+0x10/0x14
[ 1204.269324] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1204.323962] zswap_update_total_size+0x58/0xe8
[ 1204.378630] zswap_entry_free+0xdc/0x1c0
[ 1204.433175] zswap_load+0x190/0x19c
[ 1204.488474] swap_read_folio+0xbc/0x450
[ 1204.542800] swap_cluster_readahead+0x2f8/0x338
[ 1204.597291] swapin_readahead+0x430/0x438
[ 1204.651656] do_swap_page+0x1e0/0x9bc
[ 1204.706654] handle_mm_fault+0xecc/0xed0
[ 1204.760974] ___do_page_fault+0x4d8/0x630
[ 1204.815926] do_page_fault+0x28/0x40
[ 1204.870354] DataAccess_virt+0x124/0x17c
[ 1204.979137] Reported by Kernel Concurrency Sanitizer on:
[ 1205.032170] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1205.085728] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1205.140017] ==================================================================
[ 1206.640937] ==================================================================
[ 1206.694993] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1206.801946] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1206.856508] zswap_update_total_size+0x58/0xe8
[ 1206.911132] zswap_entry_free+0xdc/0x1c0
[ 1206.965843] zswap_load+0x190/0x19c
[ 1207.020101] swap_read_folio+0xbc/0x450
[ 1207.075221] swap_cluster_readahead+0x2f8/0x338
[ 1207.130431] swapin_readahead+0x430/0x438
[ 1207.184750] do_swap_page+0x1e0/0x9bc
[ 1207.239188] handle_mm_fault+0xecc/0xed0
[ 1207.294227] ___do_page_fault+0x4d8/0x630
[ 1207.349077] do_page_fault+0x28/0x40
[ 1207.404162] DataAccess_virt+0x124/0x17c
[ 1207.512153] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1207.566528] zswap_store+0x118/0xa18
[ 1207.620922] swap_writepage+0x4c/0xe8
[ 1207.675291] pageout+0x1dc/0x304
[ 1207.729477] shrink_folio_list+0xa70/0xd28
[ 1207.785130] evict_folios+0xcc0/0x1204
[ 1207.841011] try_to_shrink_lruvec+0x214/0x2f0
[ 1207.895916] shrink_one+0x104/0x1e8
[ 1207.950438] shrink_node+0x314/0xc3c
[ 1208.005265] balance_pgdat+0x498/0x914
[ 1208.060116] kswapd+0x304/0x398
[ 1208.115036] kthread+0x174/0x178
[ 1208.169594] start_kernel_thread+0x10/0x14
[ 1208.277724] Reported by Kernel Concurrency Sanitizer on:
[ 1208.331348] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1208.384839] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1208.439529] ==================================================================
[ 1213.640903] ==================================================================
[ 1213.695703] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1213.804484] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1213.860459] zswap_store+0x118/0xa18
[ 1213.915658] swap_writepage+0x4c/0xe8
[ 1213.970521] pageout+0x1dc/0x304
[ 1214.025573] shrink_folio_list+0xa70/0xd28
[ 1214.079835] evict_folios+0xcc0/0x1204
[ 1214.134082] try_to_shrink_lruvec+0x214/0x2f0
[ 1214.189919] shrink_one+0x104/0x1e8
[ 1214.246323] shrink_node+0x314/0xc3c
[ 1214.302606] balance_pgdat+0x498/0x914
[ 1214.359039] kswapd+0x304/0x398
[ 1214.415259] kthread+0x174/0x178
[ 1214.471274] start_kernel_thread+0x10/0x14
[ 1214.581789] write to 0xc121b328 of 8 bytes by task 1619 on cpu 1:
[ 1214.637849] zswap_update_total_size+0x58/0xe8
[ 1214.694311] zswap_entry_free+0xdc/0x1c0
[ 1214.750697] zswap_load+0x190/0x19c
[ 1214.806815] swap_read_folio+0xbc/0x450
[ 1214.862958] swap_cluster_readahead+0x2f8/0x338
[ 1214.919292] swapin_readahead+0x430/0x438
[ 1214.975554] do_swap_page+0x1e0/0x9bc
[ 1215.031737] handle_mm_fault+0xecc/0xed0
[ 1215.088003] ___do_page_fault+0x4d8/0x630
[ 1215.144352] do_page_fault+0x28/0x40
[ 1215.200613] DataAccess_virt+0x124/0x17c
[ 1215.311446] Reported by Kernel Concurrency Sanitizer on:
[ 1215.366431] CPU: 1 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1215.421814] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1215.478075] ==================================================================
[ 1218.273217] ==================================================================
[ 1218.328009] BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
[ 1218.435905] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1218.490496] zswap_update_total_size+0x58/0xe8
[ 1218.545503] zswap_store+0x5a8/0xa18
[ 1218.601334] swap_writepage+0x4c/0xe8
[ 1218.656924] pageout+0x1dc/0x304
[ 1218.711641] shrink_folio_list+0xa70/0xd28
[ 1218.768359] evict_folios+0xcc0/0x1204
[ 1218.823335] try_to_shrink_lruvec+0x214/0x2f0
[ 1218.878309] shrink_one+0x104/0x1e8
[ 1218.933755] shrink_node+0x314/0xc3c
[ 1218.989790] do_try_to_free_pages+0x500/0x7e4
[ 1219.045988] try_to_free_pages+0x150/0x18c
[ 1219.100646] __alloc_pages+0x460/0x8dc
[ 1219.155704] __read_swap_cache_async+0xd0/0x24c
[ 1219.210859] swap_cluster_readahead+0x2cc/0x338
[ 1219.266254] swapin_readahead+0x430/0x438
[ 1219.321160] do_swap_page+0x1e0/0x9bc
[ 1219.375680] handle_mm_fault+0xecc/0xed0
[ 1219.431293] ___do_page_fault+0x4d8/0x630
[ 1219.486916] do_page_fault+0x28/0x40
[ 1219.541880] DataAccess_virt+0x124/0x17c
[ 1219.651735] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1219.707148] zswap_update_total_size+0x58/0xe8
[ 1219.763713] zswap_store+0x5a8/0xa18
[ 1219.820142] swap_writepage+0x4c/0xe8
[ 1219.875386] pageout+0x1dc/0x304
[ 1219.931246] shrink_folio_list+0xa70/0xd28
[ 1219.986528] evict_folios+0xcc0/0x1204
[ 1220.040133] try_to_shrink_lruvec+0x214/0x2f0
[ 1220.094196] shrink_one+0x104/0x1e8
[ 1220.147543] shrink_node+0x314/0xc3c
[ 1220.200613] balance_pgdat+0x498/0x914
[ 1220.253663] kswapd+0x304/0x398
[ 1220.305693] kthread+0x174/0x178
[ 1220.357259] start_kernel_thread+0x10/0x14
[ 1220.460634] Reported by Kernel Concurrency Sanitizer on:
[ 1220.512814] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1220.565806] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1220.619024] ==================================================================
[ 1220.909835] ==================================================================
[ 1220.964030] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1221.072982] write to 0xc121b328 of 8 bytes by task 1620 on cpu 1:
[ 1221.128360] zswap_update_total_size+0x58/0xe8
[ 1221.184098] zswap_entry_free+0xdc/0x1c0
[ 1221.239507] zswap_load+0x190/0x19c
[ 1221.295278] swap_read_folio+0xbc/0x450
[ 1221.349882] swap_cluster_readahead+0x2f8/0x338
[ 1221.404828] swapin_readahead+0x430/0x438
[ 1221.459969] do_swap_page+0x1e0/0x9bc
[ 1221.514717] handle_mm_fault+0xecc/0xed0
[ 1221.569478] ___do_page_fault+0x4d8/0x630
[ 1221.624290] do_page_fault+0x28/0x40
[ 1221.679550] DataAccess_virt+0x124/0x17c
[ 1221.788426] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1221.843562] zswap_store+0x118/0xa18
[ 1221.898855] swap_writepage+0x4c/0xe8
[ 1221.953838] pageout+0x1dc/0x304
[ 1222.008062] shrink_folio_list+0xa70/0xd28
[ 1222.062928] evict_folios+0xcc0/0x1204
[ 1222.116088] try_to_shrink_lruvec+0x214/0x2f0
[ 1222.169817] shrink_one+0x104/0x1e8
[ 1222.222571] shrink_node+0x314/0xc3c
[ 1222.274443] balance_pgdat+0x498/0x914
[ 1222.326101] kswapd+0x304/0x398
[ 1222.378276] kthread+0x174/0x178
[ 1222.429440] start_kernel_thread+0x10/0x14
[ 1222.531455] Reported by Kernel Concurrency Sanitizer on:
[ 1222.582721] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1222.635180] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1222.688017] ==================================================================
* Re: BUG: Bad page map in process init pte:c0ab684c pmd:01182000 (on a PowerMac G4 DP)
@ 2024-04-17 0:56 1% ` Erhard Furtner
From: Erhard Furtner @ 2024-04-17 0:56 UTC (permalink / raw)
To: Christophe Leroy; +Cc: Nicholas Piggin, linux-mm, linuxppc-dev, Rohan McLure
[-- Attachment #1: Type: text/plain, Size: 13742 bytes --]
On Thu, 29 Feb 2024 17:11:28 +0000
Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
> > Revisited the issue on kernel v6.8-rc6 and I can still reproduce it.
> >
> > Short summary as my last post was over a year ago:
> > (x) I get this memory corruption only when CONFIG_VMAP_STACK=y and CONFIG_SMP=y is enabled.
> > (x) I don't get this memory corruption when only one of the above is enabled. ^^
> > (x) memtester says the 2 GiB RAM in my G4 DP are fine.
> > (x) I don't get this issue on my G5 11,2 or Talos II.
> > (x) "stress -m 2 --vm-bytes 965M" provokes the issue in < 10 secs. (https://salsa.debian.org/debian/stress)
> >
> > For the test I used CONFIG_KASAN_INLINE=y for v6.8-rc6 and debug_pagealloc=on, page_owner=on and got this dmesg:
> >
> > [...]
> > pagealloc: memory corruption
> > f5fcfff0: 00 00 00 00 ....
> > CPU: 1 PID: 1788 Comm: stress Tainted: G B 6.8.0-rc6-PMacG4 #15
> > Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
> > Call Trace:
> > [f3bfbac0] [c162a8e8] dump_stack_lvl+0x60/0x94 (unreliable)
> > [f3bfbae0] [c04edf9c] __kernel_unpoison_pages+0x1e0/0x1f0
> > [f3bfbb30] [c04a8aa0] post_alloc_hook+0xe0/0x174
> > [f3bfbb60] [c04a8b58] prep_new_page+0x24/0xbc
> > [f3bfbb80] [c04abcc4] get_page_from_freelist+0xcd0/0xf10
> > [f3bfbc50] [c04aecd8] __alloc_pages+0x204/0xe2c
> > [f3bfbda0] [c04b07a8] __folio_alloc+0x18/0x88
> > [f3bfbdc0] [c0461a10] vma_alloc_zeroed_movable_folio.isra.0+0x2c/0x6c
> > [f3bfbde0] [c046bb90] handle_mm_fault+0x91c/0x19ac
> > [f3bfbec0] [c0047b8c] ___do_page_fault+0x93c/0xc14
> > [f3bfbf10] [c0048278] do_page_fault+0x28/0x60
> > [f3bfbf30] [c000433c] DataAccess_virt+0x124/0x17c
> > --- interrupt: 300 at 0xbe30d8
> > NIP: 00be30d8 LR: 00be30b4 CTR: 00000000
> > REGS: f3bfbf40 TRAP: 0300 Tainted: G B (6.8.0-rc6-PMacG4)
> > MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
> > DAR: 88c7a010 DSISR: 42000000
> > GPR00: 00be30b4 af8397d0 a78436c0 6b2ee010 3c500000 20224462 fe77f7e1 00b00264
> > GPR08: 1d98d000 1d98c000 00000000 40ae256a 20882262 00bffff4 00000000 00000000
> > GPR16: 00000000 00000002 00000000 0000005a 40802262 80002262 40002262 00c000a4
> > GPR24: ffffffff ffffffff 3c500000 00000000 00000000 6b2ee010 00c07d64 00001000
> > NIP [00be30d8] 0xbe30d8
> > LR [00be30b4] 0xbe30b4
> > --- interrupt: 300
> > page:ef4bd92c refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x310b3
> > flags: 0x80000000(zone=2)
> > page_type: 0xffffffff()
> > raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
> > raw: 00000000
> > page dumped because: pagealloc: corrupted page details
> > page_owner info is not present (never set?)
> > swapper/1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
> > CPU: 1 PID: 0 Comm: swapper/1 Tainted: G B 6.8.0-rc6-PMacG4 #15
> > Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
> > Call Trace:
> > [f101b9d0] [c162a8e8] dump_stack_lvl+0x60/0x94 (unreliable)
> > [f101b9f0] [c04ae948] warn_alloc+0x154/0x2e0
> > [f101bab0] [c04af030] __alloc_pages+0x55c/0xe2c
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)
> > cache: skbuff_head_cache, object size: 176, buffer size: 288, default order: 0, min order: 0
> > node 0: slabs: 509, objs: 7126, free: 0
> > [...]
> >
> > New findings:
> > (x) The page corruption only shows up the 1st time I run "stress -m 2 --vm-bytes 965M". When I quit and restart stress no additional page corruption shows up.
> > (x) The page corruption shows up shortly after I run "stress -m 2 --vm-bytes 965M" but no additional page corruption shows up afterwards, even if left running for 30min.
> >
> >
> > For additional testing I thought it would be a good idea to try "modprobe test_vmalloc" but this remained inconclusive. Sometimes a 'BUG: Unable to handle kernel data access on read at 0xe0000000' like this shows up but not always:
> >
>
> Interesting.
>
> I guess 0xe0000000 is where linear RAM starts to be mapped with pages ?
> Can you confirm with a dump of
> /sys/kernel/debug/powerpc/block_address_translation ?
>
> Do we have a problem of race with hash table ?
>
> Would KCSAN help with that ?
Revisited the issue on kernel v6.9-rc4 and I can still reproduce it. Did some runs now with KCSAN_EARLY_ENABLE=y (+ KCSAN_SKIP_WATCH=4000 + KCSAN_STRICT=y) which made KCSAN a lot more verbose.
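For reference, the KCSAN options described above correspond to roughly this config fragment (option values as stated in the mail; the full .config is attached below):

```
CONFIG_KCSAN=y
CONFIG_KCSAN_EARLY_ENABLE=y
CONFIG_KCSAN_SKIP_WATCH=4000
CONFIG_KCSAN_STRICT=y
```

Lowering KCSAN_SKIP_WATCH makes the watchpoint sampling denser, and KCSAN_STRICT disables the default exclusions, which is why these runs report considerably more races than a default KCSAN build.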
On v6.9-rc4 I have not seen the "SLUB: Unable to allocate memory on node -1, gfp=0x820(GFP_ATOMIC)" I reported some time ago and no other KASAN hits at boot or afterwards so I disabled KASAN. The general memory corruption issue remains however.
When running "stress -m 2 --vm-bytes 965M" I get some "BUG: KCSAN: data-race in list_add / lru_gen_look_around" and "BUG: KCSAN: data-race in zswap_store / zswap_update_total_size" reports which I don't get otherwise:
[...]
BUG: KCSAN: data-race in list_add / lru_gen_look_around
read (marked) to 0xefa6fa40 of 4 bytes by task 1619 on cpu 0:
lru_gen_look_around+0x320/0x634
folio_referenced_one+0x32c/0x404
rmap_walk_anon+0x1c4/0x24c
rmap_walk+0x70/0x7c
folio_referenced+0x194/0x1ec
shrink_folio_list+0x6a8/0xd28
evict_folios+0xcc0/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
do_try_to_free_pages+0x500/0x7e4
try_to_free_pages+0x150/0x18c
__alloc_pages+0x460/0x8dc
folio_prealloc.isra.0+0x44/0xec
handle_mm_fault+0x488/0xed0
___do_page_fault+0x4d8/0x630
do_page_fault+0x28/0x40
DataAccess_virt+0x124/0x17c
write to 0xefa6fa40 of 4 bytes by task 40 on cpu 1:
list_add+0x58/0x94
evict_folios+0xb04/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
balance_pgdat+0x498/0x914
kswapd+0x304/0x398
kthread+0x174/0x178
start_kernel_thread+0x10/0x14
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[...]
BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
zswap_update_total_size+0x58/0xe8
zswap_entry_free+0xdc/0x1c0
zswap_load+0x190/0x19c
swap_read_folio+0xbc/0x450
swap_cluster_readahead+0x2f8/0x338
swapin_readahead+0x430/0x438
do_swap_page+0x1e0/0x9bc
handle_mm_fault+0xecc/0xed0
___do_page_fault+0x4d8/0x630
do_page_fault+0x28/0x40
DataAccess_virt+0x124/0x17c
write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
zswap_update_total_size+0x58/0xe8
zswap_store+0x5a8/0xa18
swap_writepage+0x4c/0xe8
pageout+0x1dc/0x304
shrink_folio_list+0xa70/0xd28
evict_folios+0xcc0/0x1204
try_to_shrink_lruvec+0x214/0x2f0
shrink_one+0x104/0x1e8
shrink_node+0x314/0xc3c
balance_pgdat+0x498/0x914
kswapd+0x304/0x398
kthread+0x174/0x178
start_kernel_thread+0x10/0x14
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[...]
One time I also got another page allocation failure:
[...]
==================================================================
kworker/u9:1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
CPU: 1 PID: 39 Comm: kworker/u9:1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
Workqueue: events_freezable_pwr_efficient disk_events_workfn (events_freezable_pwr_ef)
Call Trace:
[f100dc50] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[f100dc70] [c0be4ee8] dump_stack+0x20/0x34
[f100dc80] [c029de40] warn_alloc+0x100/0x178
[f100dce0] [c029e234] __alloc_pages+0x37c/0x8dc
[f100dda0] [c029e884] __page_frag_alloc_align+0x74/0x194
[f100ddd0] [c09bafc0] __netdev_alloc_skb+0x108/0x234
[f100de00] [bef1a5a8] setup_rx_descbuffer+0x5c/0x258 [b43legacy]
[f100de40] [bef1c43c] b43legacy_dma_rx+0x3e4/0x488 [b43legacy]
[f100deb0] [bef0b034] b43legacy_interrupt_tasklet+0x7bc/0x7f0 [b43legacy]
[f100df50] [c006f8c8] tasklet_action_common.isra.0+0xb0/0xe8
[f100df80] [c0c1fc8c] __do_softirq+0x1dc/0x218
[f100dff0] [c00091d8] do_softirq_own_stack+0x54/0x74
[f10dd760] [c00091c8] do_softirq_own_stack+0x44/0x74
[f10dd780] [c006f114] __irq_exit_rcu+0x6c/0xbc
[f10dd790] [c006f588] irq_exit+0x10/0x20
[f10dd7a0] [c0008b58] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[f10dd7b0] [c000917c] do_IRQ+0x24/0x2c
[f10dd7d0] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
--- interrupt: 500 at _raw_spin_unlock_irq+0x30/0x48
NIP: c0c1f49c LR: c0c1f490 CTR: 00000000
REGS: f10dd7e0 TRAP: 0500 Not tainted (6.9.0-rc4-PMacG4-dirty)
MSR: 00209032 <EE,ME,IR,DR,RI> CR: 84882802 XER: 00000000
GPR00: c0c1f490 f10dd8a0 c1c28020 c49d6828 00016828 0001682b 00000003 c12399ec
GPR08: 00000000 00009032 0000001d f10dd860 24882802 00000000 00000001 00000000
GPR16: 00000800 00000800 00000000 00000000 00000002 00000004 00000004 00000000
GPR24: c49d6850 00000004 00000000 00000007 00000001 c49d6850 f10ddbb4 c49d6828
NIP [c0c1f49c] _raw_spin_unlock_irq+0x30/0x48
LR [c0c1f490] _raw_spin_unlock_irq+0x24/0x48
--- interrupt: 500
[f10dd8c0] [c0246150] evict_folios+0xc74/0x1204
[f10dd9d0] [c02468f4] try_to_shrink_lruvec+0x214/0x2f0
[f10dda50] [c0246ad4] shrink_one+0x104/0x1e8
[f10dda90] [c0248eb8] shrink_node+0x314/0xc3c
[f10ddb20] [c024a98c] do_try_to_free_pages+0x500/0x7e4
[f10ddba0] [c024b110] try_to_free_pages+0x150/0x18c
[f10ddc20] [c029e318] __alloc_pages+0x460/0x8dc
[f10ddce0] [c06088ac] alloc_pages.constprop.0+0x30/0x50
[f10ddd00] [c0608ad4] blk_rq_map_kern+0x208/0x404
[f10ddd50] [c089c048] scsi_execute_cmd+0x350/0x534
[f10dddc0] [c08b77cc] sr_check_events+0x108/0x4bc
[f10dde40] [c08fb620] cdrom_update_events+0x54/0xb8
[f10dde60] [c08fb6b4] cdrom_check_events+0x30/0x70
[f10dde80] [c08b7c44] sr_block_check_events+0x60/0x90
[f10ddea0] [c0630444] disk_check_events+0x68/0x168
[f10ddee0] [c063056c] disk_events_workfn+0x28/0x40
[f10ddf00] [c008df0c] process_scheduled_works+0x350/0x494
[f10ddf70] [c008ee2c] worker_thread+0x2a4/0x300
[f10ddfc0] [c009b87c] kthread+0x174/0x178
[f10ddff0] [c001c304] start_kernel_thread+0x10/0x14
Mem-Info:
active_anon:292700 inactive_anon:181968 isolated_anon:0
active_file:6404 inactive_file:5560 isolated_file:0
unevictable:0 dirty:11 writeback:0
slab_reclaimable:1183 slab_unreclaimable:6185
mapped:7898 shmem:133 pagetables:675
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:1193 free_pcp:778 free_cma:0
Node 0 active_anon:1170800kB inactive_anon:727872kB active_file:25616kB inactive_file:22240kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:31592kB dirty:44kB writeback:0kB shmem:532kB writeback_tmp:0kB kernel_stack:952kB pagetables:2700kB sec_pagetables:0kB all_unreclaimable? no
DMA free:0kB boost:7564kB min:10928kB low:11768kB high:12608kB reserved_highatomic:0KB active_anon:568836kB inactive_anon:92340kB active_file:12kB inactive_file:1248kB unevictable:0kB writepending:40kB present:786432kB managed:709428kB mlocked:0kB bounce:0kB free_pcp:3112kB local_pcp:1844kB free_cma:0kB
lowmem_reserve[]: 0 0 1280 1280
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
39962 total pagecache pages
27865 pages in swap cache
Free swap = 8240252kB
Total swap = 8388604kB
524288 pages RAM
327680 pages HighMem/MovableOnly
19251 pages reserved
b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[...]
To fix a "refcount_t: decrement hit 0; leaking memory." issue which otherwise showed up, I applied the following patchset on top of v6.9-rc4: https://lore.kernel.org/all/mhng-4caed5c9-bc46-42fe-90d4-9d845376578f@palmer-ri-x1c9a/
Kernel .config attached. For more details on the KCSAN hits, the dmesg of 2 runs is attached.
Regards,
Erhard
[-- Attachment #2: config_69-rc4_g4+ --]
[-- Type: application/octet-stream, Size: 116574 bytes --]
#
# Automatically generated file; DO NOT EDIT.
# Linux/powerpc 6.9.0-rc4 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc (Gentoo 13.2.1_p20240210 p14) 13.2.1 20240210"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=130201
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=24200
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=24200
CONFIG_LLD_VERSION=0
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND=y
CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_PAHOLE_VERSION=0
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y
#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION="-PMacG4"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_XZ is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_WATCH_QUEUE=y
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
# CONFIG_AUDIT is not set
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_IRQ_DOMAIN_NOMAP=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_ARCH_HAS_TICK_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_TIME_KUNIT_TEST=m
CONFIG_CONTEXT_TRACKING=y
CONFIG_CONTEXT_TRACKING_IDLE=y
#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem
CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
# end of BPF subsystem
CONFIG_PREEMPT_VOLUNTARY_BUILD=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting
CONFIG_CPU_ISOLATION=y
#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_TREE_SRCU=y
CONFIG_NEED_SRCU_NMI_SAFE=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
# end of RCU Subsystem
# CONFIG_IKCONFIG is not set
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=16
CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
# CONFIG_PRINTK_INDEX is not set
#
# Scheduler features
#
# end of Scheduler features
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC10_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_GCC_NO_STRINGOP_OVERFLOW=y
CONFIG_CC_NO_STRINGOP_OVERFLOW=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
# CONFIG_CGROUP_FAVOR_DYNMODS is not set
CONFIG_MEMCG=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_SCHED_MM_CID=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
# CONFIG_CGROUP_BPF is not set
CONFIG_CGROUP_MISC=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_RELAY is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
# CONFIG_RD_XZ is not set
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
# CONFIG_RD_ZSTD is not set
# CONFIG_BOOT_CONFIG is not set
# CONFIG_INITRAMFS_PRESERVE_MTIME is not set
# CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_HAVE_LD_DEAD_CODE_DATA_ELIMINATION=y
# CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_LD_ORPHAN_WARN_LEVEL="warn"
CONFIG_SYSCTL=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_EXPERT=y
CONFIG_MULTIUSER=y
# CONFIG_SGETMASK_SYSCALL is not set
# CONFIG_SYSFS_SYSCALL is not set
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_PC104 is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set
# CONFIG_KALLSYMS_ALL is not set
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_CALLBACKS=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_HAVE_PERF_EVENTS=y
#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# end of Kernel Performance Events And Counters
CONFIG_SYSTEM_DATA_VERIFICATION=y
# CONFIG_PROFILING is not set
#
# Kexec and crash features
#
# CONFIG_KEXEC is not set
# end of Kexec and crash features
# end of General setup
CONFIG_PPC32=y
# CONFIG_PPC64 is not set
#
# Processor support
#
CONFIG_PPC_BOOK3S_32=y
# CONFIG_PPC_85xx is not set
# CONFIG_PPC_8xx is not set
# CONFIG_40x is not set
# CONFIG_44x is not set
# CONFIG_PPC_BOOK3S_603 is not set
CONFIG_PPC_BOOK3S_604=y
# CONFIG_POWERPC_CPU is not set
# CONFIG_E300C2_CPU is not set
# CONFIG_E300C3_CPU is not set
CONFIG_G4_CPU=y
# CONFIG_TOOLCHAIN_DEFAULT_CPU is not set
CONFIG_TARGET_CPU_BOOL=y
CONFIG_TARGET_CPU="G4"
CONFIG_PPC_BOOK3S=y
CONFIG_PPC_FPU_REGS=y
CONFIG_PPC_FPU=y
CONFIG_ALTIVEC=y
CONFIG_PPC_KUEP=y
CONFIG_PPC_KUAP=y
# CONFIG_PPC_KUAP_DEBUG is not set
CONFIG_PPC_HAVE_PMU_SUPPORT=y
# CONFIG_PMU_SYSFS is not set
CONFIG_PPC_PERF_CTRS=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
# end of Processor support
CONFIG_VDSO32=y
CONFIG_CPU_BIG_ENDIAN=y
CONFIG_32BIT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MAX=17
CONFIG_ARCH_MMAP_RND_BITS_MIN=11
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=17
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11
CONFIG_NR_IRQS=512
CONFIG_NMI_IPI=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_PPC=y
CONFIG_EARLY_PRINTK=y
CONFIG_PANIC_TIMEOUT=40
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_TBSYNC=y
CONFIG_AUDIT_ARCH=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_SYS_SUPPORTS_APM_EMULATION=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_HAS_ADD_PAGES=y
# CONFIG_PPC_PCI_OF_BUS_MAP is not set
CONFIG_PPC_PCI_BUS_NUM_DOMAIN_DEPENDENT=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_PGTABLE_LEVELS=2
CONFIG_PPC_MSI_BITMAP=y
#
# Platform support
#
# CONFIG_SCOM_DEBUGFS is not set
# CONFIG_PPC_CHRP is not set
# CONFIG_PPC_MPC512x is not set
# CONFIG_PPC_MPC52xx is not set
CONFIG_PPC_PMAC=y
CONFIG_PPC_PMAC32_PSURGE=y
# CONFIG_PPC_82xx is not set
# CONFIG_PPC_83xx is not set
# CONFIG_PPC_86xx is not set
CONFIG_KVM_GUEST=y
CONFIG_EPAPR_PARAVIRT=y
CONFIG_PPC_HASH_MMU_NATIVE=y
CONFIG_PPC_OF_BOOT_TRAMPOLINE=y
CONFIG_PPC_SMP_MUXED_IPI=y
CONFIG_MPIC=y
CONFIG_MPIC_MSGR=y
CONFIG_PPC_MPC106=y
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set
#
# CPU frequency scaling drivers
#
# CONFIG_CPUFREQ_DT_PLATDEV is not set
CONFIG_CPU_FREQ_PMAC=y
# end of CPU Frequency scaling
#
# CPUIdle driver
#
#
# CPU Idle
#
# CONFIG_CPU_IDLE is not set
# end of CPU Idle
# end of CPUIdle driver
CONFIG_TAU=y
# CONFIG_TAU_INT is not set
# CONFIG_TAU_AVERAGE is not set
# CONFIG_GEN_RTC is not set
# end of Platform support
#
# Kernel options
#
CONFIG_HIGHMEM=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
CONFIG_HZ_300=y
# CONFIG_HZ_1000 is not set
CONFIG_HZ=300
CONFIG_SCHED_HRTICK=y
CONFIG_HOTPLUG_CPU=y
# CONFIG_PPC_QUEUED_SPINLOCKS is not set
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_KEXEC=y
CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY=y
CONFIG_ARCH_SUPPORTS_CRASH_DUMP=y
CONFIG_IRQ_ALL_CPUS=y
CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_ILLEGAL_POINTER_VALUE=0
CONFIG_PPC_4K_PAGES=y
CONFIG_THREAD_SHIFT=13
CONFIG_DATA_SHIFT=22
CONFIG_ARCH_FORCE_MAX_ORDER=10
CONFIG_CMDLINE=""
CONFIG_EXTRA_TARGETS=""
CONFIG_ARCH_WANTS_FREEZER_CONTROL=y
# CONFIG_SUSPEND is not set
# CONFIG_HIBERNATION is not set
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
CONFIG_APM_EMULATION=m
CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
# CONFIG_ENERGY_MODEL is not set
# end of Kernel options
CONFIG_ISA_DMA_API=y
#
# Bus options
#
CONFIG_GENERIC_ISA_DMA=y
CONFIG_PPC_INDIRECT_PCI=y
# CONFIG_FSL_LBC is not set
# end of Bus options
#
# Advanced setup
#
# CONFIG_ADVANCED_OPTIONS is not set
#
# Default settings for advanced configuration options are used
#
CONFIG_LOWMEM_SIZE=0x30000000
CONFIG_PAGE_OFFSET=0xc0000000
CONFIG_KERNEL_START=0xc0000000
CONFIG_PHYSICAL_START=0x00000000
CONFIG_TASK_SIZE=0xb0000000
# end of Advanced setup
# CONFIG_VIRTUALIZATION is not set
CONFIG_HAVE_LIVEPATCH=y
#
# General architecture-dependent options
#
CONFIG_HOTPLUG_SMT=y
CONFIG_SMT_NUM_THREADS_DYNAMIC=y
# CONFIG_KPROBES is not set
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_GENERIC_IDLE_POLL_SETUP=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_32BIT_OFF_T=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_MMU_GATHER_PAGE_SIZE=y
CONFIG_MMU_GATHER_MERGE_VMAS=y
CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM=y
CONFIG_MMU_LAZY_TLB_REFCOUNT=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_ARCH_WEAK_RELEASE_ACQUIRE=y
CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
# CONFIG_STACKPROTECTOR_STRONG is not set
CONFIG_LTO_NONE=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING_USER=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_ARCH_MMAP_RND_BITS=11
CONFIG_HAVE_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_4KB=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_PAGE_SIZE_LESS_THAN_256KB=y
CONFIG_PAGE_SHIFT=12
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT=y
CONFIG_HAVE_OBJTOOL=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_HAVE_ARCH_NVRAM_OPS=y
CONFIG_CLONE_BACKWARDS=y
CONFIG_OLD_SIGSUSPEND=y
CONFIG_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET=y
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
CONFIG_ARCH_OPTIONAL_KERNEL_RWX=y
CONFIG_ARCH_OPTIONAL_KERNEL_RWX_DEFAULT=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
# CONFIG_STRICT_MODULE_RWX is not set
CONFIG_ARCH_HAS_PHYS_TO_DMA=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_HAVE_STATIC_CALL=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_SPLIT_ARG64=y
#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling
CONFIG_HAVE_GCC_PLUGINS=y
CONFIG_GCC_PLUGINS=y
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
CONFIG_FUNCTION_ALIGNMENT_4B=y
CONFIG_FUNCTION_ALIGNMENT=4
# end of General architecture-dependent options
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
# CONFIG_MODULE_DEBUG is not set
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODULE_UNLOAD_TAINT_TRACKING is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLOCK_LEGACY_AUTOLOAD=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_CGROUP_PUNT_BIO=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_ICQ=y
# CONFIG_BLK_DEV_BSGLIB is not set
# CONFIG_BLK_DEV_INTEGRITY is not set
# CONFIG_BLK_DEV_WRITE_MOUNTED is not set
# CONFIG_BLK_DEV_ZONED is not set
# CONFIG_BLK_DEV_THROTTLING is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
# CONFIG_BLK_CGROUP_IOPRIO is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_KARMA_PARTITION is not set
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y
CONFIG_BLK_MQ_STACKING=y
#
# IO Schedulers
#
# CONFIG_MQ_IOSCHED_DEADLINE is not set
# CONFIG_MQ_IOSCHED_KYBER is not set
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y
#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
# end of Executable file formats
#
# Memory Management options
#
CONFIG_ZPOOL=y
CONFIG_SWAP=y
CONFIG_ZSWAP=y
CONFIG_ZSWAP_DEFAULT_ON=y
CONFIG_ZSWAP_SHRINKER_DEFAULT_ON=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD=y
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="zstd"
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC=y
CONFIG_ZSWAP_ZPOOL_DEFAULT="zsmalloc"
# CONFIG_ZBUD is not set
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_ZSMALLOC_CHAIN_SIZE=8
#
# Slab allocator options
#
CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set
# CONFIG_SLAB_MERGE_DEFAULT is not set
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_SLAB_FREELIST_HARDENED=y
# CONFIG_SLUB_STATS is not set
# CONFIG_SLUB_CPU_PARTIAL is not set
CONFIG_RANDOM_KMALLOC_CACHES=y
# end of Slab allocator options
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
# CONFIG_COMPAT_BRK is not set
CONFIG_FLATMEM=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_ARCH_KEEP_MEMBLOCK=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_COMPACT_UNEVICTABLE_DEFAULT=1
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_PCP_BATCH_SCALE_MAX=5
CONFIG_BOUNCE=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=16384
# CONFIG_CMA is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
# CONFIG_IDLE_PAGE_TRACKING is not set
CONFIG_ARCH_HAS_CURRENT_STACK_POINTER=y
CONFIG_ZONE_DMA=y
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_TEST is not set
# CONFIG_DMAPOOL_TEST is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_KMAP_LOCAL=y
CONFIG_MEMFD_CREATE=y
# CONFIG_ANON_VMA_NAME is not set
CONFIG_USERFAULTFD=y
CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y
# CONFIG_LRU_GEN_STATS is not set
CONFIG_LOCK_MM_AND_FIND_VMA=y
#
# Data Access Monitoring
#
# CONFIG_DAMON is not set
# end of Data Access Monitoring
# end of Memory Management options
CONFIG_NET=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_XGRESS=y
CONFIG_SKB_EXTENSIONS=y
#
# Networking options
#
CONFIG_PACKET=m
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=m
CONFIG_XFRM_USER=m
# CONFIG_XFRM_INTERFACE is not set
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
# CONFIG_XDP_SOCKETS is not set
CONFIG_NET_HANDSHAKE=y
# CONFIG_NET_HANDSHAKE_KUNIT_TEST is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=m
CONFIG_SYN_COOKIES=y
# CONFIG_NET_IPVTI is not set
CONFIG_NET_UDP_TUNNEL=m
# CONFIG_NET_FOU is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
# CONFIG_TCP_CONG_CUBIC is not set
CONFIG_TCP_CONG_WESTWOOD=y
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_NV is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
# CONFIG_TCP_CONG_DCTCP is not set
# CONFIG_TCP_CONG_CDG is not set
# CONFIG_TCP_CONG_BBR is not set
CONFIG_DEFAULT_WESTWOOD=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="westwood"
# CONFIG_TCP_MD5SIG is not set
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
# CONFIG_INET6_ESP_OFFLOAD is not set
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
# CONFIG_IPV6_MIP6 is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
# CONFIG_IPV6_VTI is not set
# CONFIG_IPV6_SIT is not set
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
# CONFIG_IPV6_IOAM6_LWTUNNEL is not set
# CONFIG_NETLABEL is not set
# CONFIG_MPTCP is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
# CONFIG_NETFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
# CONFIG_BRIDGE_MRP is not set
# CONFIG_BRIDGE_CFM is not set
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_6LOWPAN is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y
#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_SKBPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_PLUG is not set
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"
#
# Classification
#
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
# CONFIG_NET_CLS_FLOWER is not set
# CONFIG_NET_CLS_MATCHALL is not set
# CONFIG_NET_EMATCH is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
# CONFIG_VSOCKETS_LOOPBACK is not set
# CONFIG_VIRTIO_VSOCKETS is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_MPLS is not set
# CONFIG_NET_NSH is not set
# CONFIG_HSR is not set
# CONFIG_NET_SWITCHDEV is not set
# CONFIG_NET_L3_MASTER_DEV is not set
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_PCPU_DEV_REFCNT=y
CONFIG_MAX_SKB_FRAGS=17
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_NET_FLOW_LIMIT=y
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# end of Network testing
# end of Networking options
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_HIDP=m
CONFIG_BT_LE=y
CONFIG_BT_LE_L2CAP_ECRED=y
# CONFIG_BT_LEDS is not set
CONFIG_BT_MSFTEXT=y
CONFIG_BT_AOSPEXT=y
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set
CONFIG_BT_FEATURE_DEBUG=y
#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=m
CONFIG_BT_BCM=m
CONFIG_BT_RTL=m
CONFIG_BT_MTK=m
CONFIG_BT_HCIBTUSB=m
CONFIG_BT_HCIBTUSB_AUTOSUSPEND=y
CONFIG_BT_HCIBTUSB_POLL_SYNC=y
CONFIG_BT_HCIBTUSB_BCM=y
CONFIG_BT_HCIBTUSB_MTK=y
CONFIG_BT_HCIBTUSB_RTL=y
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
CONFIG_BT_HCIUART_AG6XX=y
CONFIG_BT_HCIBCM203X=m
# CONFIG_BT_HCIBCM4377 is not set
# CONFIG_BT_HCIBPA10X is not set
CONFIG_BT_HCIBFUSB=m
# CONFIG_BT_HCIDTL1 is not set
# CONFIG_BT_HCIBT3C is not set
# CONFIG_BT_HCIBLUECARD is not set
# CONFIG_BT_HCIVHCI is not set
CONFIG_BT_MRVL=m
CONFIG_BT_ATH3K=m
# CONFIG_BT_VIRTIO is not set
# end of Bluetooth device drivers
# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
# CONFIG_MCTP is not set
CONFIG_WIRELESS=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
# CONFIG_CFG80211_WEXT is not set
CONFIG_CFG80211_KUNIT_TEST=m
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_KUNIT_TEST=m
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
# CONFIG_RFKILL_INPUT is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_FD=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
# CONFIG_PSAMPLE is not set
# CONFIG_NET_IFE is not set
# CONFIG_LWTUNNEL is not set
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SOCK_MSG=y
CONFIG_PAGE_POOL=y
# CONFIG_PAGE_POOL_STATS is not set
CONFIG_FAILOVER=y
CONFIG_ETHTOOL_NETLINK=y
CONFIG_NETDEV_ADDR_LIST_TEST=m
CONFIG_NET_TEST=m
#
# Device Drivers
#
CONFIG_HAVE_PCI=y
CONFIG_FORCE_PCI=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCI_SYSCALL=y
# CONFIG_PCIEPORTBUS is not set
# CONFIG_PCIEASPM is not set
# CONFIG_PCIE_PTM is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_ARCH_FALLBACKS=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_IOV is not set
# CONFIG_PCI_PRI is not set
# CONFIG_PCI_PASID is not set
CONFIG_PCI_DYNAMIC_OF_NODES=y
# CONFIG_PCIE_BUS_TUNE_OFF is not set
CONFIG_PCIE_BUS_DEFAULT=y
# CONFIG_PCIE_BUS_SAFE is not set
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=4
# CONFIG_HOTPLUG_PCI is not set
#
# PCI controller drivers
#
# CONFIG_PCI_FTPCI100 is not set
# CONFIG_PCI_HOST_GENERIC is not set
# CONFIG_PCIE_MICROCHIP_HOST is not set
# CONFIG_PCIE_XILINX is not set
#
# Cadence-based PCIe controllers
#
# CONFIG_PCIE_CADENCE_PLAT_HOST is not set
# end of Cadence-based PCIe controllers
#
# DesignWare-based PCIe controllers
#
# CONFIG_PCI_MESON is not set
# CONFIG_PCIE_DW_PLAT_HOST is not set
# end of DesignWare-based PCIe controllers
#
# Mobiveil-based PCIe controllers
#
# end of Mobiveil-based PCIe controllers
# end of PCI controller drivers
#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint
#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers
# CONFIG_CXL_BUS is not set
CONFIG_PCCARD=m
CONFIG_PCMCIA=m
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y
#
# PC-card bridges
#
CONFIG_YENTA=m
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
# CONFIG_RAPIDIO is not set
#
# Generic Driver Options
#
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DEVTMPFS_SAFE=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_FW_LOADER_USER_HELPER is not set
CONFIG_FW_LOADER_COMPRESS=y
# CONFIG_FW_LOADER_COMPRESS_XZ is not set
CONFIG_FW_LOADER_COMPRESS_ZSTD=y
# CONFIG_FW_UPLOAD is not set
# end of Firmware loader
CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_DM_KUNIT_TEST=m
CONFIG_DRIVER_PE_KUNIT_TEST=m
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_REGMAP=y
CONFIG_REGMAP_KUNIT=m
# CONFIG_REGMAP_BUILD is not set
CONFIG_REGMAP_RAM=m
CONFIG_DMA_SHARED_BUFFER=y
CONFIG_DMA_FENCE_TRACE=y
# CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT is not set
# end of Generic Driver Options
#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# CONFIG_MHI_BUS_EP is not set
# end of Bus devices
#
# Cache Drivers
#
# end of Cache Drivers
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
#
# Firmware Drivers
#
#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_FW_CFG_SYSFS=m
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
# CONFIG_GOOGLE_FIRMWARE is not set
#
# Qualcomm firmware drivers
#
# end of Qualcomm firmware drivers
#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers
# CONFIG_GNSS is not set
# CONFIG_MTD is not set
CONFIG_DTC=y
CONFIG_OF=y
# CONFIG_OF_UNITTEST is not set
CONFIG_OF_KUNIT_TEST=m
CONFIG_OF_FLATTREE=y
CONFIG_OF_EARLY_FLATTREE=y
CONFIG_OF_KOBJ=y
CONFIG_OF_DYNAMIC=y
CONFIG_OF_ADDRESS=y
CONFIG_OF_IRQ=y
CONFIG_OF_RESERVED_MEM=y
# CONFIG_OF_OVERLAY is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_MAC_FLOPPY is not set
CONFIG_CDROM=y
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_VIRTIO_BLK=y
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_UBLK is not set
#
# NVME Support
#
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_NVME_FC is not set
# CONFIG_NVME_TCP is not set
# CONFIG_NVME_TARGET is not set
# end of NVME Support
#
# Misc devices
#
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_PHANTOM is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_SRAM is not set
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
# CONFIG_OPEN_DICE is not set
# CONFIG_VCPU_STALL_DETECTOR is not set
# CONFIG_NSM is not set
# CONFIG_C2PORT is not set
#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support
# CONFIG_CB710_CORE is not set
#
# Texas Instruments shared transport line discipline
#
# end of Texas Instruments shared transport line discipline
# CONFIG_SENSORS_LIS3_I2C is not set
# CONFIG_ALTERA_STAPL is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
# CONFIG_MISC_RTSX_USB is not set
CONFIG_PVPANIC=y
CONFIG_PVPANIC_MMIO=m
CONFIG_PVPANIC_PCI=m
# end of Misc devices
#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_SCSI_LIB_KUNIT_TEST=m
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=m
CONFIG_BLK_DEV_BSG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_CONSTANTS=y
# CONFIG_SCSI_LOGGING is not set
CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_PROTO_TEST=m
#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
# CONFIG_SCSI_ISCSI_ATTRS is not set
# CONFIG_SCSI_SAS_ATTRS is not set
# CONFIG_SCSI_SAS_LIBSAS is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# end of SCSI Transports
CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_MPI3MR is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_NSP32 is not set
# CONFIG_SCSI_WD719X is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_MESH is not set
# CONFIG_SCSI_MAC53C94 is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_VIRTIO=y
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# end of SCSI device support
CONFIG_ATA=y
CONFIG_SATA_HOST=y
CONFIG_ATA_VERBOSE_ERROR=y
# CONFIG_ATA_FORCE is not set
# CONFIG_SATA_PMP is not set
#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI is not set
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_AHCI_DWC is not set
# CONFIG_AHCI_CEVA is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y
#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y
#
# SATA SFF controllers with BMDMA
#
# CONFIG_ATA_PIIX is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
CONFIG_SATA_SIL=y
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set
#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MACIO=y
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set
#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PCMCIA is not set
# CONFIG_PATA_OF_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set
#
# Generic fallback / legacy drivers
#
# CONFIG_ATA_GENERIC is not set
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
# CONFIG_MD_BITMAP_FILE is not set
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID10 is not set
CONFIG_MD_RAID456=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING=y
# CONFIG_DM_DEBUG_BLOCK_STACK_TRACING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
# CONFIG_DM_SNAPSHOT is not set
CONFIG_DM_THIN_PROVISIONING=m
# CONFIG_DM_CACHE is not set
# CONFIG_DM_WRITECACHE is not set
# CONFIG_DM_ERA is not set
# CONFIG_DM_CLONE is not set
# CONFIG_DM_MIRROR is not set
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_DELAY is not set
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
# CONFIG_DM_LOG_WRITES is not set
# CONFIG_DM_INTEGRITY is not set
# CONFIG_TARGET_CORE is not set
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_KUNIT_UAPI_TEST=m
CONFIG_FIREWIRE_KUNIT_DEVICE_ATTRIBUTE_TEST=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_ADB=y
# CONFIG_ADB_CUDA is not set
CONFIG_ADB_PMU=y
CONFIG_ADB_PMU_EVENT=y
CONFIG_ADB_PMU_LED=y
# CONFIG_ADB_PMU_LED_DISK is not set
CONFIG_PMAC_APM_EMU=m
CONFIG_PMAC_MEDIABAY=y
# CONFIG_PMAC_BACKLIGHT is not set
CONFIG_INPUT_ADBHID=y
CONFIG_MAC_EMUMOUSEBTN=m
CONFIG_THERM_WINDTUNNEL=m
CONFIG_THERM_ADT746X=m
CONFIG_WINDFARM=m
# CONFIG_PMAC_RACKMETER is not set
CONFIG_SENSORS_AMS=m
CONFIG_SENSORS_AMS_PMU=y
CONFIG_SENSORS_AMS_I2C=y
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
CONFIG_WIREGUARD=m
# CONFIG_WIREGUARD_DEBUG is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=y
# CONFIG_NETCONSOLE_EXTENDED_LOG is not set
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
# CONFIG_TUN_VNET_CROSS_LE is not set
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=y
# CONFIG_NLMON is not set
# CONFIG_NETKIT is not set
CONFIG_SUNGEM_PHY=y
# CONFIG_ARCNET is not set
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
# CONFIG_NET_VENDOR_ALACRITECH is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_ALTERA_TSE is not set
# CONFIG_NET_VENDOR_AMAZON is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_APPLE is not set
# CONFIG_NET_VENDOR_AQUANTIA is not set
# CONFIG_NET_VENDOR_ARC is not set
# CONFIG_NET_VENDOR_ASIX is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_CADENCE is not set
# CONFIG_NET_VENDOR_CAVIUM is not set
# CONFIG_NET_VENDOR_CHELSIO is not set
# CONFIG_NET_VENDOR_CISCO is not set
# CONFIG_NET_VENDOR_CORTINA is not set
# CONFIG_NET_VENDOR_DAVICOM is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
# CONFIG_NET_VENDOR_EMULEX is not set
# CONFIG_NET_VENDOR_ENGLEDER is not set
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_FUJITSU is not set
# CONFIG_NET_VENDOR_FUNGIBLE is not set
# CONFIG_NET_VENDOR_GOOGLE is not set
# CONFIG_NET_VENDOR_HUAWEI is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_LITEX is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set
# CONFIG_NET_VENDOR_MICROSOFT is not set
# CONFIG_NET_VENDOR_MYRI is not set
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NI is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NETERION is not set
# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_VENDOR_PACKET_ENGINES is not set
# CONFIG_NET_VENDOR_PENSANDO is not set
# CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
# CONFIG_NET_VENDOR_SAMSUNG is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_NET_VENDOR_SOLARFLARE is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_SOCIONEXT is not set
# CONFIG_NET_VENDOR_STMICRO is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
CONFIG_SUNGEM=y
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
# CONFIG_NET_VENDOR_SYNOPSYS is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
# CONFIG_NET_VENDOR_VERTEXCOM is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WANGXUN is not set
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_NET_VENDOR_XILINX is not set
# CONFIG_NET_VENDOR_XIRCOM is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PHYLIB is not set
# CONFIG_PSE_CONTROLLER is not set
# CONFIG_MDIO_DEVICE is not set
#
# PCS device drivers
#
# end of PCS device drivers
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_USB_NET_DRIVERS is not set
CONFIG_WLAN=y
# CONFIG_WLAN_VENDOR_ADMTEK is not set
# CONFIG_WLAN_VENDOR_ATH is not set
# CONFIG_WLAN_VENDOR_ATMEL is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_PCI_AUTOSELECT=y
CONFIG_B43LEGACY_PCICORE_AUTOSELECT=y
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
CONFIG_B43LEGACY_DEBUG=y
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
# CONFIG_WLAN_VENDOR_INTEL is not set
# CONFIG_WLAN_VENDOR_INTERSIL is not set
# CONFIG_WLAN_VENDOR_MARVELL is not set
# CONFIG_WLAN_VENDOR_MEDIATEK is not set
# CONFIG_WLAN_VENDOR_MICROCHIP is not set
# CONFIG_WLAN_VENDOR_PURELIFI is not set
# CONFIG_WLAN_VENDOR_RALINK is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
# CONFIG_RTL_CARDS is not set
CONFIG_RTL8XXXU=m
# CONFIG_RTL8XXXU_UNTESTED is not set
# CONFIG_RTW88 is not set
# CONFIG_RTW89 is not set
# CONFIG_WLAN_VENDOR_RSI is not set
# CONFIG_WLAN_VENDOR_SILABS is not set
# CONFIG_WLAN_VENDOR_ST is not set
# CONFIG_WLAN_VENDOR_TI is not set
# CONFIG_WLAN_VENDOR_ZYDAS is not set
# CONFIG_WLAN_VENDOR_QUANTENNA is not set
# CONFIG_MAC80211_HWSIM is not set
# CONFIG_VIRT_WIFI is not set
# CONFIG_WAN is not set
#
# Wireless WAN
#
# CONFIG_WWAN is not set
# end of Wireless WAN
# CONFIG_VMXNET3 is not set
# CONFIG_NETDEVSIM is not set
CONFIG_NET_FAILOVER=y
# CONFIG_ISDN is not set
#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set
#
# Userland interfaces
#
# CONFIG_INPUT_MOUSEDEV is not set
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
CONFIG_INPUT_KUNIT_TEST=m
# CONFIG_INPUT_APMPOWER is not set
#
# Input Device Drivers
#
# CONFIG_INPUT_KEYBOARD is not set
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
CONFIG_MOUSE_APPLETOUCH=m
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_ELAN_I2C is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
CONFIG_JOYSTICK_XPAD=m
# CONFIG_JOYSTICK_XPAD_FF is not set
CONFIG_JOYSTICK_XPAD_LEDS=y
# CONFIG_JOYSTICK_PXRC is not set
# CONFIG_JOYSTICK_QWIIC is not set
# CONFIG_JOYSTICK_FSIA6B is not set
# CONFIG_JOYSTICK_SENSEHAT is not set
# CONFIG_JOYSTICK_SEESAW is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_ATMEL_CAPTOUCH is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
CONFIG_INPUT_UINPUT=m
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_DA7280_HAPTICS is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_IQS626A is not set
# CONFIG_INPUT_IQS7222 is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
# CONFIG_RMI4_CORE is not set
#
# Hardware I/O ports
#
# CONFIG_SERIO is not set
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support
#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
# CONFIG_LEGACY_TIOCSTI is not set
CONFIG_LDISC_AUTOLOAD=y
#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_PCILIB=y
CONFIG_SERIAL_8250_PCI=y
# CONFIG_SERIAL_8250_EXAR is not set
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=8
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
# CONFIG_SERIAL_8250_PCI1XXXX is not set
CONFIG_SERIAL_8250_FSL=y
# CONFIG_SERIAL_8250_DW is not set
# CONFIG_SERIAL_8250_RT288X is not set
# CONFIG_SERIAL_8250_PERICOM is not set
CONFIG_SERIAL_OF_PLATFORM=y
#
# Non-8250 serial port support
#
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_PMACZILOG is not set
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SIFIVE is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_XILINX_PS_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
# end of Serial drivers
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_PPC_EPAPR_HV_BYTECHAN is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_N_GSM is not set
# CONFIG_NOZOMI is not set
# CONFIG_NULL_TTY is not set
CONFIG_HVC_DRIVER=y
# CONFIG_HVC_UDBG is not set
# CONFIG_SERIAL_DEV_BUS is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIRTIO=m
# CONFIG_HW_RANDOM_CCTRNG is not set
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
CONFIG_DEVMEM=y
CONFIG_NVRAM=m
CONFIG_DEVPORT=y
# CONFIG_TCG_TPM is not set
# CONFIG_XILLYBUS is not set
# CONFIG_XILLYUSB is not set
# end of Character devices
#
# I2C support
#
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=m
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m
#
# I2C Hardware Bus support
#
#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set
#
# Mac SMBus host controller drivers
#
CONFIG_I2C_POWERMAC=y
#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_MPC is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set
#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_CP2615 is not set
# CONFIG_I2C_PCI1XXXX is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set
#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_VIRTIO is not set
# end of I2C Hardware Bus support
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support
# CONFIG_I3C is not set
# CONFIG_SPI is not set
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
# CONFIG_PPS is not set
#
# PTP clock support
#
# CONFIG_PTP_1588_CLOCK is not set
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# end of PTP clock support
# CONFIG_PINCTRL is not set
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_RESET is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_APM_POWER=m
# CONFIG_IP5XXX_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
CONFIG_BATTERY_PMU=m
# CONFIG_BATTERY_SAMSUNG_SDI is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
# CONFIG_CHARGER_MAX77976 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_BATTERY_GOLDFISH is not set
# CONFIG_BATTERY_RT5033 is not set
# CONFIG_CHARGER_BD99954 is not set
# CONFIG_BATTERY_UG3105 is not set
# CONFIG_FUEL_GAUGE_MM8013 is not set
CONFIG_HWMON=m
CONFIG_HWMON_DEBUG_CHIP=y
#
# Native drivers
#
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM1177 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AQUACOMPUTER_D5NEXT is not set
# CONFIG_SENSORS_AS370 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_ASUS_ROG_RYUJIN is not set
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_CHIPCAP2 is not set
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
CONFIG_SENSORS_DRIVETEMP=m
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_GIGABYTE_WATERFORCE is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HS3001 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_POWERZ is not set
# CONFIG_SENSORS_POWR1220 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2991 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4222 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4260 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LTC4282 is not set
# CONFIG_SENSORS_MAX127 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX31760 is not set
# CONFIG_MAX31827 is not set
# CONFIG_SENSORS_MAX6620 is not set
# CONFIG_SENSORS_MAX6621 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MAX31790 is not set
# CONFIG_SENSORS_MC34VR500 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_NCT6775_I2C is not set
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_NZXT_KRAKEN2 is not set
# CONFIG_SENSORS_NZXT_KRAKEN3 is not set
# CONFIG_SENSORS_NZXT_SMART2 is not set
# CONFIG_SENSORS_OCC_P8_I2C is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_PT5161L is not set
# CONFIG_SENSORS_SBTSI is not set
# CONFIG_SENSORS_SBRMI is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHT4x is not set
# CONFIG_SENSORS_SHTC1 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC2305 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_ADC128D818 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_INA238 is not set
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_TMP464 is not set
# CONFIG_SENSORS_TMP513 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83773G is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_THERMAL is not set
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_B43_PCI_BRIDGE=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
CONFIG_SSB_PCMCIAHOST=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
CONFIG_BCMA_POSSIBLE=y
# CONFIG_BCMA is not set
#
# Multifunction device drivers
#
# CONFIG_MFD_ACT8945A is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_MFD_SMPRO is not set
# CONFIG_MFD_AS3722 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_ATMEL_FLEXCOM is not set
# CONFIG_MFD_ATMEL_HLCDC is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_CS42L43_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_MFD_MAX5970 is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_GATEWORKS_GSC is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_MFD_HI6421_PMIC is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77541 is not set
# CONFIG_MFD_MAX77620 is not set
# CONFIG_MFD_MAX77650 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77714 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6370 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_NTXEC is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_SY7636A is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT4831 is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RT5120 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_RK8XX_I2C is not set
# CONFIG_MFD_RN5T618 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TI_LP87565 is not set
# CONFIG_MFD_TPS65218 is not set
# CONFIG_MFD_TPS65219 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS6594_I2C is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TQMX86 is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_LOCHNAGAR is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_ROHM_BD718XX is not set
# CONFIG_MFD_ROHM_BD71828 is not set
# CONFIG_MFD_ROHM_BD957XMUF is not set
# CONFIG_MFD_STPMIC1 is not set
# CONFIG_MFD_STMFX is not set
# CONFIG_MFD_ATC260X_I2C is not set
# CONFIG_MFD_QCOM_PM8008 is not set
# CONFIG_MFD_RSMU_I2C is not set
# end of Multifunction device drivers
# CONFIG_REGULATOR is not set
# CONFIG_RC_CORE is not set
#
# CEC support
#
# CONFIG_MEDIA_CEC_SUPPORT is not set
# end of CEC support
# CONFIG_MEDIA_SUPPORT is not set
#
# Graphics support
#
CONFIG_APERTURE_HELPERS=y
CONFIG_VIDEO=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_AGP is not set
CONFIG_DRM=y
# CONFIG_DRM_DEBUG_MM is not set
CONFIG_DRM_KUNIT_TEST_HELPERS=m
CONFIG_DRM_KUNIT_TEST=m
CONFIG_DRM_KMS_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
CONFIG_DRM_DEBUG_MODESET_LOCK=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_DISPLAY_HELPER=m
CONFIG_DRM_DISPLAY_DP_HELPER=y
# CONFIG_DRM_DP_AUX_CHARDEV is not set
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_EXEC=m
CONFIG_DRM_BUDDY=m
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=m
CONFIG_DRM_SUBALLOC_HELPER=m
#
# I2C encoder or helper chips
#
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips
#
# ARM devices
#
# end of ARM devices
CONFIG_DRM_RADEON=m
CONFIG_DRM_RADEON_USERPTR=y
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
# CONFIG_DRM_XE is not set
CONFIG_DRM_VGEM=m
# CONFIG_DRM_VKMS is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_QXL is not set
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_VIRTIO_GPU_KMS=y
CONFIG_DRM_PANEL=y
#
# Display Panels
#
# CONFIG_DRM_PANEL_LVDS is not set
# CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set
# CONFIG_DRM_PANEL_SAMSUNG_ATNA33XC20 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6D7AA0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E63M0 is not set
# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set
# CONFIG_DRM_PANEL_SEIKO_43WVF1G is not set
# CONFIG_DRM_PANEL_EDP is not set
# CONFIG_DRM_PANEL_SIMPLE is not set
# end of Display Panels
CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y
#
# Display Interface Bridges
#
# CONFIG_DRM_CHIPONE_ICN6211 is not set
# CONFIG_DRM_CHRONTEL_CH7033 is not set
# CONFIG_DRM_DISPLAY_CONNECTOR is not set
# CONFIG_DRM_ITE_IT6505 is not set
# CONFIG_DRM_LONTIUM_LT8912B is not set
# CONFIG_DRM_LONTIUM_LT9211 is not set
# CONFIG_DRM_LONTIUM_LT9611 is not set
# CONFIG_DRM_LONTIUM_LT9611UXC is not set
# CONFIG_DRM_ITE_IT66121 is not set
# CONFIG_DRM_LVDS_CODEC is not set
# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set
# CONFIG_DRM_NXP_PTN3460 is not set
# CONFIG_DRM_PARADE_PS8622 is not set
# CONFIG_DRM_PARADE_PS8640 is not set
# CONFIG_DRM_SIL_SII8620 is not set
# CONFIG_DRM_SII902X is not set
# CONFIG_DRM_SII9234 is not set
# CONFIG_DRM_SIMPLE_BRIDGE is not set
# CONFIG_DRM_THINE_THC63LVD1024 is not set
# CONFIG_DRM_TOSHIBA_TC358762 is not set
# CONFIG_DRM_TOSHIBA_TC358764 is not set
# CONFIG_DRM_TOSHIBA_TC358767 is not set
# CONFIG_DRM_TOSHIBA_TC358768 is not set
# CONFIG_DRM_TOSHIBA_TC358775 is not set
# CONFIG_DRM_TI_DLPC3433 is not set
# CONFIG_DRM_TI_TFP410 is not set
# CONFIG_DRM_TI_SN65DSI83 is not set
# CONFIG_DRM_TI_SN65DSI86 is not set
# CONFIG_DRM_TI_TPD12S015 is not set
# CONFIG_DRM_ANALOGIX_ANX6345 is not set
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# CONFIG_DRM_ANALOGIX_ANX7625 is not set
# CONFIG_DRM_I2C_ADV7511 is not set
# CONFIG_DRM_CDNS_DSI is not set
# CONFIG_DRM_CDNS_MHDP8546 is not set
# end of Display Interface Bridges
# CONFIG_DRM_ETNAVIV is not set
# CONFIG_DRM_LOGICVC is not set
# CONFIG_DRM_ARCPGU is not set
CONFIG_DRM_BOCHS=m
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_GM12U320 is not set
# CONFIG_DRM_OFDRM is not set
# CONFIG_DRM_SIMPLEDRM is not set
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set
CONFIG_DRM_EXPORT_FOR_TESTS=y
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_LIB_RANDOM=y
#
# Frame buffer Devices
#
CONFIG_FB=y
CONFIG_FB_MACMODES=y
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
CONFIG_FB_OF=y
# CONFIG_FB_CONTROL is not set
# CONFIG_FB_PLATINUM is not set
# CONFIG_FB_VALKYRIE is not set
CONFIG_FB_CT65550=y
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SM712 is not set
CONFIG_FB_CORE=y
CONFIG_FB_NOTIFY=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DEVICE is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYSMEM_FOPS=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_IOMEM_FOPS=y
CONFIG_FB_IOMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
# CONFIG_FB_MODE_HELPERS is not set
# CONFIG_FB_TILEBLITTING is not set
# end of Frame buffer Devices
#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
CONFIG_LCD_PLATFORM=m
CONFIG_BACKLIGHT_CLASS_DEVICE=m
# CONFIG_BACKLIGHT_KTD2801 is not set
# CONFIG_BACKLIGHT_KTZ8866 is not set
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
CONFIG_BACKLIGHT_LED=m
# end of Backlight & LCD device support
CONFIG_HDMI=y
#
# Console display driver support
#
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION is not set
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support
# CONFIG_LOGO is not set
# end of Graphics support
# CONFIG_DRM_ACCEL is not set
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_HWDEP=m
CONFIG_SND_SEQ_DEVICE=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_CORE_TEST=m
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
# CONFIG_SND_OSSEMUL is not set
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=6
# CONFIG_SND_SUPPORT_OLD_API is not set
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_CTL_FAST_LOOKUP is not set
# CONFIG_SND_DEBUG is not set
CONFIG_SND_CTL_INPUT_VALIDATION=y
CONFIG_SND_VMASTER=y
CONFIG_SND_SEQUENCER=m
# CONFIG_SND_SEQ_DUMMY is not set
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_VIRMIDI=m
# CONFIG_SND_SEQ_UMP is not set
CONFIG_SND_DRIVERS=y
# CONFIG_SND_DUMMY is not set
CONFIG_SND_ALOOP=m
# CONFIG_SND_PCMTEST is not set
CONFIG_SND_VIRMIDI=m
# CONFIG_SND_MTPAV is not set
# CONFIG_SND_SERIAL_U16550 is not set
# CONFIG_SND_MPU401 is not set
CONFIG_SND_PCI=y
# CONFIG_SND_AD1889 is not set
# CONFIG_SND_ALS300 is not set
# CONFIG_SND_ALS4000 is not set
# CONFIG_SND_ALI5451 is not set
# CONFIG_SND_ATIIXP is not set
# CONFIG_SND_ATIIXP_MODEM is not set
# CONFIG_SND_AU8810 is not set
# CONFIG_SND_AU8820 is not set
# CONFIG_SND_AU8830 is not set
# CONFIG_SND_AW2 is not set
# CONFIG_SND_AZT3328 is not set
# CONFIG_SND_BT87X is not set
# CONFIG_SND_CA0106 is not set
# CONFIG_SND_CMIPCI is not set
# CONFIG_SND_OXYGEN is not set
# CONFIG_SND_CS4281 is not set
# CONFIG_SND_CS46XX is not set
# CONFIG_SND_CTXFI is not set
# CONFIG_SND_DARLA20 is not set
# CONFIG_SND_GINA20 is not set
# CONFIG_SND_LAYLA20 is not set
# CONFIG_SND_DARLA24 is not set
# CONFIG_SND_GINA24 is not set
# CONFIG_SND_LAYLA24 is not set
# CONFIG_SND_MONA is not set
# CONFIG_SND_MIA is not set
# CONFIG_SND_ECHO3G is not set
# CONFIG_SND_INDIGO is not set
# CONFIG_SND_INDIGOIO is not set
# CONFIG_SND_INDIGODJ is not set
# CONFIG_SND_INDIGOIOX is not set
# CONFIG_SND_INDIGODJX is not set
# CONFIG_SND_EMU10K1 is not set
# CONFIG_SND_EMU10K1X is not set
# CONFIG_SND_ENS1370 is not set
# CONFIG_SND_ENS1371 is not set
# CONFIG_SND_ES1938 is not set
# CONFIG_SND_ES1968 is not set
# CONFIG_SND_FM801 is not set
# CONFIG_SND_HDSP is not set
# CONFIG_SND_HDSPM is not set
# CONFIG_SND_ICE1712 is not set
# CONFIG_SND_ICE1724 is not set
# CONFIG_SND_INTEL8X0 is not set
# CONFIG_SND_INTEL8X0M is not set
# CONFIG_SND_KORG1212 is not set
# CONFIG_SND_LOLA is not set
# CONFIG_SND_LX6464ES is not set
# CONFIG_SND_MAESTRO3 is not set
# CONFIG_SND_MIXART is not set
# CONFIG_SND_NM256 is not set
# CONFIG_SND_PCXHR is not set
# CONFIG_SND_RIPTIDE is not set
# CONFIG_SND_RME32 is not set
# CONFIG_SND_RME96 is not set
# CONFIG_SND_RME9652 is not set
# CONFIG_SND_SE6X is not set
# CONFIG_SND_SONICVIBES is not set
# CONFIG_SND_TRIDENT is not set
# CONFIG_SND_VIA82XX is not set
# CONFIG_SND_VIA82XX_MODEM is not set
# CONFIG_SND_VIRTUOSO is not set
# CONFIG_SND_VX222 is not set
# CONFIG_SND_YMFPCI is not set
#
# HD-Audio
#
CONFIG_SND_HDA=m
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
# CONFIG_SND_HDA_INPUT_BEEP is not set
# CONFIG_SND_HDA_PATCH_LOADER is not set
# CONFIG_SND_HDA_CIRRUS_SCODEC_KUNIT_TEST is not set
# CONFIG_SND_HDA_CODEC_REALTEK is not set
# CONFIG_SND_HDA_CODEC_ANALOG is not set
# CONFIG_SND_HDA_CODEC_SIGMATEL is not set
# CONFIG_SND_HDA_CODEC_VIA is not set
CONFIG_SND_HDA_CODEC_HDMI=m
# CONFIG_SND_HDA_CODEC_CIRRUS is not set
# CONFIG_SND_HDA_CODEC_CS8409 is not set
# CONFIG_SND_HDA_CODEC_CONEXANT is not set
# CONFIG_SND_HDA_CODEC_CA0110 is not set
# CONFIG_SND_HDA_CODEC_CA0132 is not set
# CONFIG_SND_HDA_CODEC_CMEDIA is not set
# CONFIG_SND_HDA_CODEC_SI3054 is not set
# CONFIG_SND_HDA_GENERIC is not set
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
# CONFIG_SND_HDA_INTEL_HDMI_SILENT_STREAM is not set
# CONFIG_SND_HDA_CTL_DEV_ID is not set
# end of HD-Audio
CONFIG_SND_HDA_CORE=m
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_PREALLOC_SIZE=2048
CONFIG_SND_INTEL_DSP_CONFIG=m
# CONFIG_SND_PPC is not set
CONFIG_SND_AOA=m
CONFIG_SND_AOA_FABRIC_LAYOUT=m
CONFIG_SND_AOA_ONYX=m
CONFIG_SND_AOA_TAS=m
CONFIG_SND_AOA_TOONIE=m
CONFIG_SND_AOA_SOUNDBUS=m
CONFIG_SND_AOA_SOUNDBUS_I2S=m
# CONFIG_SND_USB is not set
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
# CONFIG_SND_DICE is not set
# CONFIG_SND_OXFW is not set
CONFIG_SND_ISIGHT=m
# CONFIG_SND_FIREWORKS is not set
# CONFIG_SND_BEBOB is not set
# CONFIG_SND_FIREWIRE_DIGI00X is not set
# CONFIG_SND_FIREWIRE_TASCAM is not set
# CONFIG_SND_FIREWIRE_MOTU is not set
# CONFIG_SND_FIREFACE is not set
# CONFIG_SND_PCMCIA is not set
# CONFIG_SND_SOC is not set
# CONFIG_SND_VIRTIO is not set
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y
#
# Special HID drivers
#
# CONFIG_HID_A4TECH is not set
# CONFIG_HID_ACCUTOUCH is not set
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=y
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_ASUS is not set
# CONFIG_HID_AUREAL is not set
# CONFIG_HID_BELKIN is not set
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
# CONFIG_HID_CHERRY is not set
# CONFIG_HID_CHICONY is not set
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
# CONFIG_HID_PRODIKEYS is not set
# CONFIG_HID_CMEDIA is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
# CONFIG_HID_CYPRESS is not set
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
# CONFIG_HID_EVISION is not set
# CONFIG_HID_EZKEY is not set
# CONFIG_HID_FT260 is not set
# CONFIG_HID_GEMBIRD is not set
# CONFIG_HID_GFRM is not set
# CONFIG_HID_GLORIOUS is not set
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_GOOGLE_STADIA_FF is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
CONFIG_HID_UCLOGIC=m
# CONFIG_HID_WALTOP is not set
# CONFIG_HID_VIEWSONIC is not set
# CONFIG_HID_VRC2 is not set
# CONFIG_HID_XIAOMI is not set
# CONFIG_HID_GYRATION is not set
# CONFIG_HID_ICADE is not set
# CONFIG_HID_ITE is not set
# CONFIG_HID_JABRA is not set
# CONFIG_HID_TWINHAN is not set
# CONFIG_HID_KENSINGTON is not set
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LED is not set
# CONFIG_HID_LENOVO is not set
# CONFIG_HID_LETSKETCH is not set
# CONFIG_HID_LOGITECH is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
# CONFIG_HID_MEGAWORLD_FF is not set
# CONFIG_HID_REDRAGON is not set
CONFIG_HID_MICROSOFT=m
# CONFIG_HID_MONTEREY is not set
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NINTENDO=m
# CONFIG_NINTENDO_FF is not set
# CONFIG_HID_NTI is not set
# CONFIG_HID_NTRIG is not set
# CONFIG_HID_NVIDIA_SHIELD is not set
# CONFIG_HID_ORTEK is not set
# CONFIG_HID_PANTHERLORD is not set
# CONFIG_HID_PENMOUNT is not set
# CONFIG_HID_PETALYNX is not set
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PLANTRONICS is not set
# CONFIG_HID_PXRC is not set
# CONFIG_HID_RAZER is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_RETRODE is not set
CONFIG_HID_ROCCAT=m
# CONFIG_HID_SAITEK is not set
# CONFIG_HID_SAMSUNG is not set
# CONFIG_HID_SEMITEK is not set
# CONFIG_HID_SIGMAMICRO is not set
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEAM is not set
# CONFIG_HID_STEELSERIES is not set
# CONFIG_HID_SUNPLUS is not set
# CONFIG_HID_RMI is not set
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
# CONFIG_HID_TOPSEED is not set
# CONFIG_HID_TOPRE is not set
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
# CONFIG_HID_WACOM is not set
CONFIG_HID_WIIMOTE=m
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set
# CONFIG_HID_ALPS is not set
# CONFIG_HID_MCP2221 is not set
CONFIG_HID_KUNIT_TEST=m
# end of Special HID drivers
#
# HID-BPF support
#
# end of HID-BPF support
#
# USB HID support
#
CONFIG_USB_HID=y
# CONFIG_HID_PID is not set
CONFIG_USB_HIDDEV=y
# end of USB HID support
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_BIG_ENDIAN_DESC=y
CONFIG_USB_OHCI_BIG_ENDIAN_MMIO=y
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
# CONFIG_USB_PCI_AMD is not set
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_DEFAULT_AUTHORIZATION_MODE=1
CONFIG_USB_MON=m
#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_XPS_USB_HCD_XILINX is not set
# CONFIG_USB_EHCI_FSL is not set
CONFIG_USB_EHCI_HCD_PPC_OF=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
# CONFIG_USB_OHCI_HCD_PPC_OF_LE is not set
CONFIG_USB_OHCI_HCD_PPC_OF=y
CONFIG_USB_OHCI_HCD_PCI=m
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_UHCI_HCD is not set
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_SSB is not set
# CONFIG_USB_HCD_TEST_MODE is not set
#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set
#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
CONFIG_USB_UAS=m
#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
#
# USB dual-mode controller drivers
#
# CONFIG_USB_CDNS_SUPPORT is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set
#
# USB port drivers
#
CONFIG_USB_SERIAL=m
# CONFIG_USB_SERIAL_GENERIC is not set
# CONFIG_USB_SERIAL_SIMPLE is not set
# CONFIG_USB_SERIAL_AIRCABLE is not set
# CONFIG_USB_SERIAL_ARK3116 is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
CONFIG_USB_SERIAL_FTDI_SIO=m
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IPAQ is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
# CONFIG_USB_SERIAL_GARMIN is not set
# CONFIG_USB_SERIAL_IPW is not set
# CONFIG_USB_SERIAL_IUU is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KLSI is not set
# CONFIG_USB_SERIAL_KOBIL_SCT is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_METRO is not set
# CONFIG_USB_SERIAL_MOS7720 is not set
# CONFIG_USB_SERIAL_MOS7840 is not set
# CONFIG_USB_SERIAL_MXUPORT is not set
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QCAUX is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_XSENS_MT is not set
# CONFIG_USB_SERIAL_WISHBONE is not set
# CONFIG_USB_SERIAL_SSU100 is not set
# CONFIG_USB_SERIAL_QT2 is not set
# CONFIG_USB_SERIAL_UPD78F0730 is not set
# CONFIG_USB_SERIAL_XR is not set
# CONFIG_USB_SERIAL_DEBUG is not set
#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
CONFIG_USB_APPLEDISPLAY=m
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
CONFIG_USB_ISIGHTFW=m
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
# CONFIG_USB_ONBOARD_HUB is not set
#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers
# CONFIG_USB_GADGET is not set
# CONFIG_TYPEC is not set
# CONFIG_USB_ROLE_SWITCH is not set
# CONFIG_MMC is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
CONFIG_LEDS_BRIGHTNESS_HW_CHANGED=y
#
# LED drivers
#
# CONFIG_LEDS_AN30259A is not set
# CONFIG_LEDS_AW200XX is not set
# CONFIG_LEDS_AW2013 is not set
# CONFIG_LEDS_BCM6328 is not set
# CONFIG_LEDS_BCM6358 is not set
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_LM3692X is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP8860 is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA995X is not set
# CONFIG_LEDS_BD2606MVV is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_IS31FL319X is not set
# CONFIG_LEDS_IS31FL32XX is not set
#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_LM3697 is not set
#
# Flash and Torch LED drivers
#
#
# RGB LED drivers
#
#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
# CONFIG_LEDS_TRIGGER_TIMER is not set
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
CONFIG_LEDS_TRIGGER_DISK=y
# CONFIG_LEDS_TRIGGER_HEARTBEAT is not set
# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
#
# iptables trigger is under Netfilter config (LED target)
#
# CONFIG_LEDS_TRIGGER_TRANSIENT is not set
# CONFIG_LEDS_TRIGGER_CAMERA is not set
CONFIG_LEDS_TRIGGER_PANIC=y
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
# CONFIG_LEDS_TRIGGER_AUDIO is not set
# CONFIG_LEDS_TRIGGER_TTY is not set
#
# Simple LED drivers
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_LIB_KUNIT_TEST=m
CONFIG_RTC_NVMEM=y
#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set
#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_HYM8563 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_NCT3018Y is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12026 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8010 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set
#
# SPI RTC drivers
#
CONFIG_RTC_I2C_AND_SPI=y
#
# SPI and I2C RTC drivers
#
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set
# CONFIG_RTC_DRV_RX6110 is not set
#
# Platform RTC drivers
#
# CONFIG_RTC_DRV_CMOS is not set
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_DS2404 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_ZYNQMP is not set
#
# on-CPU RTC drivers
#
CONFIG_RTC_DRV_GENERIC=y
# CONFIG_RTC_DRV_CADENCE is not set
# CONFIG_RTC_DRV_FTRTC010 is not set
# CONFIG_RTC_DRV_R7301 is not set
#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_GOLDFISH is not set
# CONFIG_DMADEVICES is not set
#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
CONFIG_UDMABUF=y
# CONFIG_DMABUF_MOVE_NOTIFY is not set
CONFIG_DMABUF_DEBUG=y
CONFIG_DMABUF_SELFTESTS=m
CONFIG_DMABUF_HEAPS=y
# CONFIG_DMABUF_SYSFS_STATS is not set
CONFIG_DMABUF_HEAPS_SYSTEM=y
# end of DMABUF options
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_BALLOON is not set
# CONFIG_VIRTIO_INPUT is not set
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST_TASK=y
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_VSOCK is not set
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
#
# Microsoft Hyper-V guest support
#
# end of Microsoft Hyper-V guest support
# CONFIG_GREYBUS is not set
# CONFIG_COMEDI is not set
# CONFIG_STAGING is not set
# CONFIG_GOLDFISH is not set
# CONFIG_COMMON_CLK is not set
# CONFIG_HWSPINLOCK is not set
#
# Clock Source drivers
#
# end of Clock Source drivers
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_SUPPORT=y
#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support
# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMUFD is not set
#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers
#
# Rpmsg drivers
#
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers
# CONFIG_SOUNDWIRE is not set
#
# SOC (System On Chip) specific Drivers
#
#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers
#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers
#
# NXP/Freescale QorIQ SoC drivers
#
# CONFIG_QUICC_ENGINE is not set
# end of NXP/Freescale QorIQ SoC drivers
#
# fujitsu SoC drivers
#
# end of fujitsu SoC drivers
#
# i.MX SoC drivers
#
# end of i.MX SoC drivers
#
# Enable LiteX SoC Builder specific drivers
#
# CONFIG_LITEX_SOC_CONTROLLER is not set
# end of Enable LiteX SoC Builder specific drivers
# CONFIG_WPCM450_SOC is not set
#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers
# CONFIG_SOC_TI is not set
#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers
#
# PM Domains
#
#
# Amlogic PM Domains
#
# end of Amlogic PM Domains
#
# Broadcom PM Domains
#
# end of Broadcom PM Domains
#
# i.MX PM Domains
#
# end of i.MX PM Domains
#
# Qualcomm PM Domains
#
# end of Qualcomm PM Domains
# end of PM Domains
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_PWM is not set
#
# IRQ chip support
#
CONFIG_IRQCHIP=y
# CONFIG_AL_FIC is not set
# CONFIG_XILINX_INTC is not set
# end of IRQ chip support
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_PHY_CAN_TRANSCEIVER is not set
#
# PHY drivers for Broadcom platforms
#
# CONFIG_BCM_KONA_USB2_PHY is not set
# end of PHY drivers for Broadcom platforms
# CONFIG_PHY_CADENCE_DPHY is not set
# CONFIG_PHY_CADENCE_DPHY_RX is not set
# CONFIG_PHY_CADENCE_SALVO is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# end of PHY Subsystem
# CONFIG_POWERCAP is not set
# CONFIG_MCB is not set
#
# Performance monitor support
#
# CONFIG_DWC_PCIE_PMU is not set
# end of Performance monitor support
# CONFIG_RAS is not set
# CONFIG_USB4 is not set
#
# Android
#
# CONFIG_ANDROID_BINDER_IPC is not set
# end of Android
# CONFIG_DAX is not set
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
CONFIG_NVMEM_LAYOUTS=y
#
# Layout Types
#
# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set
# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set
# end of Layout Types
# CONFIG_NVMEM_RMEM is not set
#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support
# CONFIG_FPGA is not set
# CONFIG_FSI is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# CONFIG_PECI is not set
# CONFIG_HTE is not set
# end of Device Drivers
#
# File systems
#
CONFIG_VALIDATE_FS_PARSER=y
CONFIG_FS_IOMAP=y
CONFIG_FS_STACK=y
CONFIG_BUFFER_HEAD=y
CONFIG_LEGACY_DIRECT_IO=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
# CONFIG_EXT4_FS_SECURITY is not set
# CONFIG_EXT4_DEBUG is not set
CONFIG_EXT4_KUNIT_TESTS=m
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
# CONFIG_XFS_SUPPORT_V4 is not set
# CONFIG_XFS_SUPPORT_ASCII_CI is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_ONLINE_SCRUB is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
# CONFIG_F2FS_FS is not set
CONFIG_BCACHEFS_FS=m
# CONFIG_BCACHEFS_QUOTA is not set
# CONFIG_BCACHEFS_ERASURE_CODING is not set
CONFIG_BCACHEFS_POSIX_ACL=y
# CONFIG_BCACHEFS_DEBUG is not set
CONFIG_BCACHEFS_TESTS=y
# CONFIG_BCACHEFS_LOCK_TIME_STATS is not set
# CONFIG_BCACHEFS_NO_LATENCY_ACCT is not set
CONFIG_BCACHEFS_SIX_OPTIMISTIC_SPIN=y
CONFIG_MEAN_AND_VARIANCE_UNIT_TEST=m
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
# CONFIG_EXPORTFS_BLOCK_OPS is not set
CONFIG_FILE_LOCKING=y
# CONFIG_FS_ENCRYPTION is not set
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
# CONFIG_FANOTIFY_ACCESS_PERMISSIONS is not set
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
# CONFIG_CUSE is not set
CONFIG_VIRTIO_FS=m
CONFIG_FUSE_PASSTHROUGH=y
# CONFIG_OVERLAY_FS is not set
#
# Caches
#
CONFIG_NETFS_SUPPORT=y
# CONFIG_NETFS_STATS is not set
# CONFIG_FSCACHE is not set
# end of Caches
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems
#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-15"
CONFIG_FAT_DEFAULT_UTF8=y
CONFIG_FAT_KUNIT_TEST=m
CONFIG_EXFAT_FS=m
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
CONFIG_NTFS3_FS=m
CONFIG_NTFS3_LZX_XPRESS=y
# CONFIG_NTFS3_FS_POSIX_ACL is not set
# end of DOS/FAT/EXFAT/NT Filesystems
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
# CONFIG_PROC_KCORE is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
# CONFIG_PROC_CHILDREN is not set
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_QUOTA is not set
CONFIG_CONFIGFS_FS=m
# end of Pseudo filesystems
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
CONFIG_AFFS_FS=m
# CONFIG_ECRYPT_FS is not set
CONFIG_HFS_FS=m
CONFIG_HFSPLUS_FS=m
CONFIG_BEFS_FS=m
CONFIG_BEFS_DEBUG=y
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_PSTORE is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
# CONFIG_NFS_V2 is not set
# CONFIG_NFS_V3 is not set
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
# CONFIG_NFS_FSCACHE is not set
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
# CONFIG_NFS_V4_2_READ_PLUS is not set
# CONFIG_NFSD is not set
CONFIG_GRACE_PERIOD=m
CONFIG_LOCKD=m
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_BACKCHANNEL=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
CONFIG_SUNRPC_DEBUG=y
# CONFIG_CEPH_FS is not set
CONFIG_CIFS=m
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_CIFS_SWN_UPCALL=y
# CONFIG_SMB_SERVER is not set
CONFIG_SMBFS=m
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_9P_FS=y
CONFIG_9P_FS_POSIX_ACL=y
# CONFIG_9P_FS_SECURITY is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
CONFIG_NLS_CODEPAGE_850=m
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_MAC_ROMAN=m
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
CONFIG_NLS_UCS2_UTILS=m
# CONFIG_DLM is not set
CONFIG_UNICODE=m
# CONFIG_UNICODE_NORMALIZATION_SELFTEST is not set
CONFIG_IO_WQ=y
# end of File systems
#
# Security options
#
CONFIG_KEYS=y
CONFIG_KEYS_REQUEST_CACHE=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEY_DH_OPERATIONS=y
CONFIG_KEY_NOTIFICATIONS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
# CONFIG_SECURITYFS is not set
# CONFIG_SECURITY_NETWORK is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
# CONFIG_SECURITY_LANDLOCK is not set
# CONFIG_INTEGRITY is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,bpf"
#
# Kernel hardening options
#
#
# Memory initialization
#
CONFIG_CC_HAS_AUTO_VAR_INIT_PATTERN=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_BARE=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO=y
# CONFIG_INIT_STACK_NONE is not set
CONFIG_INIT_STACK_ALL_PATTERN=y
# CONFIG_INIT_STACK_ALL_ZERO is not set
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
CONFIG_CC_HAS_ZERO_CALL_USED_REGS=y
# CONFIG_ZERO_CALL_USED_REGS is not set
# end of Memory initialization
#
# Hardening of kernel data structures
#
CONFIG_LIST_HARDENED=y
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Hardening of kernel data structures
CONFIG_RANDSTRUCT_NONE=y
# CONFIG_RANDSTRUCT_FULL is not set
# CONFIG_RANDSTRUCT_PERFORMANCE is not set
# end of Kernel hardening options
# end of Security options
CONFIG_XOR_BLOCKS=y
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y
#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SIG2=y
CONFIG_CRYPTO_SKCIPHER=m
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=m
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=y
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
# CONFIG_CRYPTO_MANAGER_EXTRA_TESTS is not set
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_NULL2=m
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_ENGINE=m
# end of Crypto core or helper
#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=y
# CONFIG_CRYPTO_DH_RFC7919_GROUPS is not set
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECDSA is not set
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# end of Public-key cryptography
#
# Block ciphers
#
CONFIG_CRYPTO_AES=m
# CONFIG_CRYPTO_AES_TI is not set
# CONFIG_CRYPTO_ARIA is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST6 is not set
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SM4_GENERIC is not set
# CONFIG_CRYPTO_TWOFISH is not set
# end of Block ciphers
#
# Length-preserving ciphers and modes
#
CONFIG_CRYPTO_ADIANTUM=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=m
# CONFIG_CRYPTO_CTS is not set
CONFIG_CRYPTO_ECB=m
# CONFIG_CRYPTO_HCTR2 is not set
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
CONFIG_CRYPTO_XTS=m
CONFIG_CRYPTO_NHPOLY1305=m
# end of Length-preserving ciphers and modes
#
# AEAD (authenticated encryption with associated data) ciphers
#
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=m
CONFIG_CRYPTO_GENIV=m
CONFIG_CRYPTO_SEQIV=m
CONFIG_CRYPTO_ECHAINIV=m
CONFIG_CRYPTO_ESSIV=m
# end of AEAD (authenticated encryption with associated data) ciphers
#
# Hashes, digests, and MACs
#
CONFIG_CRYPTO_BLAKE2B=y
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_GHASH=m
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
CONFIG_CRYPTO_POLY1305=m
# CONFIG_CRYPTO_RMD160 is not set
CONFIG_CRYPTO_SHA1=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3_GENERIC is not set
# CONFIG_CRYPTO_STREEBOG is not set
# CONFIG_CRYPTO_VMAC is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_XCBC is not set
CONFIG_CRYPTO_XXHASH=y
# end of Hashes, digests, and MACs
#
# CRCs (cyclic redundancy checks)
#
CONFIG_CRYPTO_CRC32C=y
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRCT10DIF is not set
# CONFIG_CRYPTO_CRC64_ROCKSOFT is not set
# end of CRCs (cyclic redundancy checks)
#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
CONFIG_CRYPTO_LZ4=m
# CONFIG_CRYPTO_LZ4HC is not set
CONFIG_CRYPTO_ZSTD=y
# end of Compression
#
# Random number generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_DRBG_MENU=m
CONFIG_CRYPTO_DRBG_HMAC=y
# CONFIG_CRYPTO_DRBG_HASH is not set
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=m
CONFIG_CRYPTO_JITTERENTROPY=m
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS=64
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE=32
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
CONFIG_CRYPTO_KDF800108_CTR=y
# end of Random number generation
#
# Userspace interface
#
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=m
# CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE is not set
# CONFIG_CRYPTO_STATS is not set
# end of Userspace interface
CONFIG_CRYPTO_HASH_INFO=y
#
# Accelerated Cryptographic Algorithms for CPU (powerpc)
#
CONFIG_CRYPTO_MD5_PPC=m
CONFIG_CRYPTO_SHA1_PPC=m
# end of Accelerated Cryptographic Algorithms for CPU (powerpc)
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_HIFN_795X is not set
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_VIRTIO=m
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_CCREE is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS8_PRIVATE_KEY_PARSER=m
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set
# CONFIG_FIPS_SIGNATURE_SELFTEST is not set
#
# Certificates for signature checking
#
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
# end of Certificates for signature checking
CONFIG_BINARY_PRINTF=y
#
# Library routines
#
CONFIG_RAID6_PQ=y
CONFIG_RAID6_PQ_BENCHMARK=y
CONFIG_LINEAR_RANGES=m
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
# CONFIG_CORDIC is not set
CONFIG_PRIME_NUMBERS=m
#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_UTILS=y
CONFIG_CRYPTO_LIB_AES=m
CONFIG_CRYPTO_LIB_ARC4=m
CONFIG_CRYPTO_LIB_GF128MUL=m
CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC=y
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
CONFIG_CRYPTO_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CURVE25519_GENERIC=m
CONFIG_CRYPTO_LIB_CURVE25519=m
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=m
CONFIG_CRYPTO_LIB_POLY1305=m
CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m
CONFIG_CRYPTO_LIB_SHA1=y
CONFIG_CRYPTO_LIB_SHA256=y
# end of Crypto library routines
# CONFIG_CRC_CCITT is not set
CONFIG_CRC16=y
# CONFIG_CRC_T10DIF is not set
# CONFIG_CRC64_ROCKSOFT is not set
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC64=m
# CONFIG_CRC4 is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=y
# CONFIG_CRC8 is not set
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_COMPRESS=m
CONFIG_LZ4HC_COMPRESS=m
CONFIG_LZ4_DECOMPRESS=m
CONFIG_ZSTD_COMMON=y
CONFIG_ZSTD_COMPRESS=y
CONFIG_ZSTD_DECOMPRESS=y
# CONFIG_XZ_DEC is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC16=y
CONFIG_REED_SOLOMON_DEC16=y
CONFIG_INTERVAL_TREE=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_CLOSURES=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_DMA_DECLARE_COHERENT=y
CONFIG_ARCH_DMA_DEFAULT_COHERENT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
# CONFIG_FORCE_NR_CPUS is not set
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_GENERIC_ATOMIC64=y
CONFIG_CLZ_TAB=y
# CONFIG_IRQ_POLL is not set
CONFIG_MPILIB=y
CONFIG_DIMLIB=y
CONFIG_LIBFDT=y
CONFIG_OID_REGISTRY=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_FONT_SUN8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_STACKWALK=y
CONFIG_STACKDEPOT=y
CONFIG_STACKDEPOT_ALWAYS_INIT=y
CONFIG_STACKDEPOT_MAX_FRAMES=64
CONFIG_SBITMAP=y
# CONFIG_LWQ_TEST is not set
# end of Library routines
CONFIG_GENERIC_IOREMAP=y
#
# Kernel hacking
#
#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
# CONFIG_STACKTRACE_BUILD_ID is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DYNAMIC_DEBUG_CORE is not set
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_MISC is not set
#
# Compile-time checks and compiler options
#
CONFIG_AS_HAS_NON_CONST_ULEB128=y
CONFIG_DEBUG_INFO_NONE=y
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
# CONFIG_DEBUG_INFO_DWARF4 is not set
# CONFIG_DEBUG_INFO_DWARF5 is not set
CONFIG_FRAME_WARN=1024
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is not set
# CONFIG_VMLINUX_MAP is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options
#
# Generic Kernel Debugging Instruments
#
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN=y
# CONFIG_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_HAVE_KCSAN_COMPILER=y
CONFIG_KCSAN=y
CONFIG_KCSAN_SELFTEST=y
CONFIG_KCSAN_EARLY_ENABLE=y
CONFIG_KCSAN_NUM_WATCHPOINTS=64
CONFIG_KCSAN_UDELAY_TASK=80
CONFIG_KCSAN_UDELAY_INTERRUPT=20
# CONFIG_KCSAN_DELAY_RANDOMIZE is not set
CONFIG_KCSAN_SKIP_WATCH=4000
# CONFIG_KCSAN_SKIP_WATCH_RANDOMIZE is not set
CONFIG_KCSAN_INTERRUPT_WATCHER=y
CONFIG_KCSAN_REPORT_ONCE_IN_MS=3000
CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=y
CONFIG_KCSAN_STRICT=y
CONFIG_KCSAN_WEAK_MEMORY=y
# end of Generic Kernel Debugging Instruments
#
# Networking Debugging
#
# CONFIG_NET_DEV_REFCNT_TRACKER is not set
# CONFIG_NET_NS_REFCNT_TRACKER is not set
# CONFIG_DEBUG_NET is not set
# end of Networking Debugging
#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
CONFIG_DEBUG_PAGEALLOC=y
# CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT is not set
CONFIG_SLUB_DEBUG=y
CONFIG_SLUB_DEBUG_ON=y
CONFIG_PAGE_OWNER=y
CONFIG_PAGE_POISONING=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_ARCH_HAS_DEBUG_WX=y
CONFIG_DEBUG_WX=y
CONFIG_GENERIC_PTDUMP=y
CONFIG_PTDUMP_CORE=y
# CONFIG_PTDUMP_DEBUGFS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SHRINKER_DEBUG is not set
# CONFIG_DEBUG_STACK_USAGE is not set
CONFIG_SCHED_STACK_END_CHECK=y
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
CONFIG_DEBUG_VM_PGTABLE=y
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_DEBUG_KMAP_LOCAL is not set
# CONFIG_DEBUG_HIGHMEM is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
CONFIG_HAVE_ARCH_KFENCE=y
# CONFIG_KFENCE is not set
# end of Memory Debugging
CONFIG_DEBUG_SHIRQ=y
#
# Debug Oops, Lockups and Hangs
#
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_HAVE_HARDLOCKUP_DETECTOR_BUDDY=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_HARDLOCKUP_DETECTOR_PERF is not set
CONFIG_HARDLOCKUP_DETECTOR_BUDDY=y
# CONFIG_HARDLOCKUP_DETECTOR_ARCH is not set
CONFIG_HARDLOCKUP_DETECTOR_COUNTS_HRTIMER=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=60
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_WQ_WATCHDOG=y
# CONFIG_WQ_CPU_INTENSIVE_REPORT is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs
#
# Scheduler Debugging
#
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHED_INFO=y
# CONFIG_SCHEDSTATS is not set
# end of Scheduler Debugging
# CONFIG_DEBUG_TIMEKEEPING is not set
#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
CONFIG_DEBUG_RWSEMS=y
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)
CONFIG_DEBUG_IRQFLAGS=y
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set
#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
CONFIG_DEBUG_CLOSURES=y
CONFIG_DEBUG_MAPLE_TREE=y
# end of Debug kernel data structures
#
# RCU Debugging
#
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_RCU_EXP_CPU_STALL_TIMEOUT=0
CONFIG_RCU_CPU_STALL_CPUTIME=y
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging
# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
# CONFIG_LATENCYTOP is not set
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_SAMPLES is not set
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
CONFIG_IO_STRICT_DEVMEM=y
#
# powerpc Debugging
#
# CONFIG_PPC_DISABLE_WERROR is not set
CONFIG_PPC_WERROR=y
CONFIG_PRINT_STACK_DEPTH=64
# CONFIG_PPC_EMULATED_STATS is not set
# CONFIG_CODE_PATCHING_SELFTEST is not set
# CONFIG_JUMP_LABEL_FEATURE_CHECKS is not set
# CONFIG_FTR_FIXUP_SELFTEST is not set
# CONFIG_MSI_BITMAP_SELFTEST is not set
# CONFIG_XMON is not set
# CONFIG_BDI_SWITCH is not set
CONFIG_BOOTX_TEXT=y
# CONFIG_PPC_EARLY_DEBUG is not set
# end of powerpc Debugging
#
# Kernel Testing and Coverage
#
CONFIG_KUNIT=m
CONFIG_KUNIT_DEBUGFS=y
CONFIG_KUNIT_TEST=m
# CONFIG_KUNIT_EXAMPLE_TEST is not set
# CONFIG_KUNIT_ALL_TESTS is not set
CONFIG_KUNIT_DEFAULT_ENABLED=y
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_TEST_DHRY is not set
# CONFIG_LKDTM is not set
CONFIG_CPUMASK_KUNIT_TEST=m
CONFIG_TEST_LIST_SORT=m
CONFIG_TEST_MIN_HEAP=m
CONFIG_TEST_SORT=m
CONFIG_TEST_DIV64=m
CONFIG_TEST_IOV_ITER=m
CONFIG_BACKTRACE_SELF_TEST=m
# CONFIG_TEST_REF_TRACKER is not set
CONFIG_RBTREE_TEST=m
CONFIG_REED_SOLOMON_TEST=m
CONFIG_INTERVAL_TREE_TEST=m
CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
CONFIG_ASYNC_RAID6_TEST=m
# CONFIG_TEST_HEXDUMP is not set
CONFIG_STRING_KUNIT_TEST=m
CONFIG_STRING_HELPERS_KUNIT_TEST=m
CONFIG_TEST_KSTRTOX=y
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_SCANF is not set
# CONFIG_TEST_BITMAP is not set
CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_MAPLE_TREE=m
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m
CONFIG_TEST_USER_COPY=m
CONFIG_TEST_BPF=m
# CONFIG_TEST_BLACKHOLE_DEV is not set
CONFIG_FIND_BIT_BENCHMARK=m
# CONFIG_TEST_FIRMWARE is not set
CONFIG_TEST_SYSCTL=m
CONFIG_BITFIELD_KUNIT=m
CONFIG_CHECKSUM_KUNIT=m
CONFIG_HASH_KUNIT_TEST=m
CONFIG_RESOURCE_KUNIT_TEST=m
CONFIG_SYSCTL_KUNIT_TEST=m
CONFIG_LIST_KUNIT_TEST=m
CONFIG_HASHTABLE_KUNIT_TEST=m
CONFIG_LINEAR_RANGES_TEST=m
CONFIG_CMDLINE_KUNIT_TEST=m
CONFIG_BITS_TEST=m
CONFIG_SLUB_KUNIT_TEST=m
CONFIG_MEMCPY_KUNIT_TEST=m
CONFIG_IS_SIGNED_TYPE_KUNIT_TEST=m
CONFIG_OVERFLOW_KUNIT_TEST=m
CONFIG_STACKINIT_KUNIT_TEST=m
CONFIG_FORTIFY_KUNIT_TEST=m
CONFIG_STRCAT_KUNIT_TEST=m
CONFIG_STRSCPY_KUNIT_TEST=m
CONFIG_SIPHASH_KUNIT_TEST=m
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
CONFIG_TEST_MEMCAT_P=m
CONFIG_TEST_MEMINIT=m
CONFIG_TEST_FREE_PAGES=m
CONFIG_TEST_OBJPOOL=m
CONFIG_ARCH_USE_MEMTEST=y
# CONFIG_MEMTEST is not set
# end of Kernel Testing and Coverage
#
# Rust hacking
#
# end of Rust hacking
# end of Kernel hacking
[-- Attachment #3: dmesg_69-rc4_g4_04 --]
[-- Type: application/octet-stream, Size: 77070 bytes --]
[ 60.350911] interrupt_async_enter_prepare+0x64/0xc4
[ 60.374183] do_IRQ+0x18/0x2c
[ 60.397365] HardwareInterrupt_virt+0x108/0x10c
[ 60.420718] do_raw_spin_unlock+0x10c/0x130
[ 60.444258] 0x9032
[ 60.467597] kcsan_setup_watchpoint+0x300/0x4cc
[ 60.491224] kernel_wait4+0x17c/0x200
[ 60.514737] sys_wait4+0x84/0xe0
[ 60.538119] system_call_exception+0x15c/0x1c0
[ 60.561604] ret_from_syscall+0x0/0x2c
[ 60.609428] write to 0xc2eff19c of 4 bytes by task 114 on cpu 0:
[ 60.633822] kernel_wait4+0x17c/0x200
[ 60.658312] sys_wait4+0x84/0xe0
[ 60.682758] system_call_exception+0x15c/0x1c0
[ 60.707358] ret_from_syscall+0x0/0x2c
[ 60.756267] Reported by Kernel Concurrency Sanitizer on:
[ 60.780795] CPU: 0 PID: 114 Comm: gendepends.sh Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 60.805881] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 60.831112] ==================================================================
[ 67.142520] ==================================================================
[ 67.168991] BUG: KCSAN: data-race in handle_mm_fault / save_stack
[ 67.221726] read to 0xc2ef9b10 of 2 bytes by interrupt on cpu 0:
[ 67.248713] save_stack+0x3c/0xec
[ 67.275637] __reset_page_owner+0xd8/0x234
[ 67.302694] free_unref_page_prepare+0x124/0x1dc
[ 67.329878] free_unref_page+0x40/0x114
[ 67.356996] pagetable_free+0x48/0x60
[ 67.384066] pte_free_now+0x50/0x74
[ 67.411031] pte_fragment_free+0x198/0x19c
[ 67.437970] pgtable_free+0x34/0x78
[ 67.464778] tlb_remove_table_rcu+0x8c/0x90
[ 67.491565] rcu_core+0x564/0xa88
[ 67.518043] rcu_core_si+0x20/0x3c
[ 67.544219] __do_softirq+0x1dc/0x218
[ 67.570202] do_softirq_own_stack+0x54/0x74
[ 67.595632] do_softirq_own_stack+0x44/0x74
[ 67.620352] __irq_exit_rcu+0x6c/0xbc
[ 67.644834] irq_exit+0x10/0x20
[ 67.669066] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 67.693435] timer_interrupt+0x64/0x178
[ 67.717627] Decrementer_virt+0x108/0x10c
[ 67.741776] 0xc1f1a6a0
[ 67.765735] 0xc1f1a6a0
[ 67.789591] kcsan_setup_watchpoint+0x300/0x4cc
[ 67.813724] handle_mm_fault+0x214/0xed0
[ 67.837916] ___do_page_fault+0x4d8/0x630
[ 67.862248] do_page_fault+0x28/0x40
[ 67.886576] DataAccess_virt+0x124/0x17c
[ 67.935091] write to 0xc2ef9b10 of 2 bytes by task 329 on cpu 0:
[ 67.959710] handle_mm_fault+0x214/0xed0
[ 67.984283] ___do_page_fault+0x4d8/0x630
[ 68.009051] do_page_fault+0x28/0x40
[ 68.033783] DataAccess_virt+0x124/0x17c
[ 68.083292] Reported by Kernel Concurrency Sanitizer on:
[ 68.108461] CPU: 0 PID: 329 Comm: grep Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 68.133782] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 68.158952] ==================================================================
[ 75.578869] ==================================================================
[ 75.604454] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 75.655432] write (marked) to 0xeedc9c11 of 1 bytes by interrupt on cpu 1:
[ 75.681312] rcu_report_qs_rdp+0x15c/0x18c
[ 75.707121] rcu_core+0x1f0/0xa88
[ 75.732883] rcu_core_si+0x20/0x3c
[ 75.758555] __do_softirq+0x1dc/0x218
[ 75.784228] do_softirq_own_stack+0x54/0x74
[ 75.809978] do_softirq_own_stack+0x44/0x74
[ 75.835450] __irq_exit_rcu+0x6c/0xbc
[ 75.860603] irq_exit+0x10/0x20
[ 75.885401] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 75.910261] timer_interrupt+0x64/0x178
[ 75.934741] Decrementer_virt+0x108/0x10c
[ 75.959042] 0x15
[ 75.983236] 0x0
[ 76.006939] kcsan_setup_watchpoint+0x300/0x4cc
[ 76.030722] rcu_all_qs+0x58/0x17c
[ 76.054281] __cond_resched+0x50/0x58
[ 76.077660] down_read+0x20/0x16c
[ 76.100808] walk_component+0xf4/0x150
[ 76.123982] path_lookupat+0xe8/0x21c
[ 76.147079] filename_lookup+0x90/0x100
[ 76.170236] user_path_at_empty+0x58/0x90
[ 76.193421] do_readlinkat+0x74/0x180
[ 76.216588] sys_readlinkat+0x5c/0x88
[ 76.239765] system_call_exception+0x15c/0x1c0
[ 76.263040] ret_from_syscall+0x0/0x2c
[ 76.309124] read to 0xeedc9c11 of 1 bytes by task 528 on cpu 1:
[ 76.332648] rcu_all_qs+0x58/0x17c
[ 76.356255] __cond_resched+0x50/0x58
[ 76.379844] down_read+0x20/0x16c
[ 76.403551] walk_component+0xf4/0x150
[ 76.427278] path_lookupat+0xe8/0x21c
[ 76.451026] filename_lookup+0x90/0x100
[ 76.474683] user_path_at_empty+0x58/0x90
[ 76.498267] do_readlinkat+0x74/0x180
[ 76.521790] sys_readlinkat+0x5c/0x88
[ 76.545297] system_call_exception+0x15c/0x1c0
[ 76.569079] ret_from_syscall+0x0/0x2c
[ 76.616105] Reported by Kernel Concurrency Sanitizer on:
[ 76.639868] CPU: 1 PID: 528 Comm: udevadm Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 76.664100] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 76.688790] ==================================================================
[ 84.242338] ohci-pci 0001:00:12.0: OHCI PCI host controller
[ 84.354205] ohci-pci 0001:00:12.0: new USB bus registered, assigned bus number 3
[ 84.435743] ohci-pci 0001:00:12.0: irq 52, io mem 0x8008c000
[ 84.686185] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 84.727113] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 84.767527] usb usb3: Product: OHCI PCI host controller
[ 84.807744] usb usb3: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 84.849003] usb usb3: SerialNumber: 0001:00:12.0
[ 84.902522] hub 3-0:1.0: USB hub found
[ 84.944146] hub 3-0:1.0: 3 ports detected
[ 85.151114] ohci-pci 0001:00:12.1: OHCI PCI host controller
[ 85.392801] ohci-pci 0001:00:12.1: new USB bus registered, assigned bus number 4
[ 85.512940] ohci-pci 0001:00:12.1: irq 52, io mem 0x8008b000
[ 85.819520] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 85.861383] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 85.902304] usb usb4: Product: OHCI PCI host controller
[ 85.943139] usb usb4: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 85.982851] usb usb4: SerialNumber: 0001:00:12.1
[ 86.066872] hub 4-0:1.0: USB hub found
[ 86.117898] hub 4-0:1.0: 2 ports detected
[ 86.381077] Apple USB OHCI 0001:00:18.0 disabled by firmware
[ 86.707225] Apple USB OHCI 0001:00:19.0 disabled by firmware
[ 86.921002] ohci-pci 0001:00:1b.0: OHCI PCI host controller
[ 86.960853] ohci-pci 0001:00:1b.0: new USB bus registered, assigned bus number 5
[ 87.011362] ohci-pci 0001:00:1b.0: irq 63, io mem 0x80084000
[ 87.266252] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 87.306689] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 87.346175] usb usb5: Product: OHCI PCI host controller
[ 87.388986] usb usb5: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 87.428575] usb usb5: SerialNumber: 0001:00:1b.0
[ 87.503678] b43-pci-bridge 0001:00:16.0: enabling device (0004 -> 0006)
[ 87.616976] ssb: Found chip with id 0x4306, rev 0x02 and package 0x00
[ 87.877391] hub 5-0:1.0: USB hub found
[ 88.188820] b43-pci-bridge 0001:00:16.0: Sonics Silicon Backplane found on PCI device 0001:00:16.0
[ 88.429085] hub 5-0:1.0: 3 ports detected
[ 88.990850] ohci-pci 0001:00:1b.1: OHCI PCI host controller
[ 89.412328] ohci-pci 0001:00:1b.1: new USB bus registered, assigned bus number 6
[ 89.547659] ohci-pci 0001:00:1b.1: irq 63, io mem 0x80083000
[ 90.020865] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.09
[ 90.065497] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 90.110271] usb usb6: Product: OHCI PCI host controller
[ 90.154401] usb usb6: Manufacturer: Linux 6.9.0-rc4-PMacG4-dirty ohci_hcd
[ 90.200694] usb usb6: SerialNumber: 0001:00:1b.1
[ 90.204953] [drm] radeon kernel modesetting enabled.
[ 90.612186] Console: switching to colour dummy device 80x25
[ 90.649146] hub 6-0:1.0: USB hub found
[ 90.649547] hub 6-0:1.0: 2 ports detected
[ 90.700923] radeon 0000:00:10.0: enabling device (0006 -> 0007)
[ 90.786008] [drm] initializing kernel modesetting (RV350 0x1002:0x4150 0x1002:0x0002 0x00).
[ 90.786633] [drm] Forcing AGP to PCI mode
[ 90.787252] radeon 0000:00:10.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000
[ 91.273734] [drm] Generation 2 PCI interface, using max accessible memory
[ 91.274292] radeon 0000:00:10.0: VRAM: 256M 0x00000000A0000000 - 0x00000000AFFFFFFF (256M used)
[ 91.274688] radeon 0000:00:10.0: GTT: 512M 0x0000000080000000 - 0x000000009FFFFFFF
[ 91.275283] [drm] Detected VRAM RAM=256M, BAR=256M
[ 91.275763] [drm] RAM width 128bits DDR
[ 91.303103] [drm] radeon: 256M of VRAM memory ready
[ 91.303385] [drm] radeon: 512M of GTT memory ready.
[ 91.304588] [drm] GART: num cpu pages 131072, num gpu pages 131072
[ 91.897823] [drm] radeon: 1 quad pipes, 1 Z pipes initialized
[ 91.898352] [drm] PCI GART of 512M enabled (table at 0x0000000003B00000).
[ 91.922492] radeon 0000:00:10.0: WB enabled
[ 91.922938] radeon 0000:00:10.0: fence driver on ring 0 use gpu addr 0x0000000080000000
[ 91.951295] [drm] radeon: irq initialized.
[ 91.951821] [drm] Loading R300 Microcode
[ 92.296417] [drm] radeon: ring at 0x0000000080001000
[ 92.298345] [drm] ring test succeeded in 0 usecs
[ 92.319800] random: crng init done
[ 92.550561] [drm] ib test succeeded in 0 usecs
[ 92.920129] [drm] Radeon Display Connectors
[ 92.920466] [drm] Connector 0:
[ 92.920726] [drm] DVI-I-1
[ 92.920960] [drm] HPD2
[ 92.921186] [drm] DDC: 0x64 0x64 0x64 0x64 0x64 0x64 0x64 0x64
[ 92.921575] [drm] Encoders:
[ 92.921822] [drm] CRT1: INTERNAL_DAC1
[ 92.922129] [drm] DFP2: INTERNAL_DVO1
[ 92.922504] [drm] Connector 1:
[ 92.922739] [drm] DVI-I-2
[ 92.923049] [drm] HPD1
[ 92.923274] [drm] DDC: 0x60 0x60 0x60 0x60 0x60 0x60 0x60 0x60
[ 92.923691] [drm] Encoders:
[ 92.923857] [drm] CRT2: INTERNAL_DAC2
[ 92.924125] [drm] DFP1: INTERNAL_TMDS1
[ 92.970473] [drm] Initialized radeon 2.50.0 20080528 for 0000:00:10.0 on minor 0
[ 92.992946] ==================================================================
[ 92.993307] BUG: KCSAN: data-race in blk_finish_plug / blk_time_get_ns
[ 92.993726] read to 0xc1fb63b0 of 4 bytes by interrupt on cpu 0:
[ 92.993948] blk_time_get_ns+0x24/0xf4
[ 92.994185] __blk_mq_end_request+0x58/0xe8
[ 92.994408] scsi_end_request+0x120/0x2d4
[ 92.994652] scsi_io_completion+0x290/0x6b4
[ 92.994894] scsi_finish_command+0x160/0x1a4
[ 92.995116] scsi_complete+0xf0/0x128
[ 92.995349] blk_complete_reqs+0xb4/0xd8
[ 92.995554] blk_done_softirq+0x68/0xa4
[ 92.995758] __do_softirq+0x1dc/0x218
[ 92.995990] do_softirq_own_stack+0x54/0x74
[ 92.996225] do_softirq_own_stack+0x44/0x74
[ 92.996456] __irq_exit_rcu+0x6c/0xbc
[ 92.996673] irq_exit+0x10/0x20
[ 92.996881] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 92.997135] do_IRQ+0x24/0x2c
[ 92.997343] HardwareInterrupt_virt+0x108/0x10c
[ 92.997572] 0x40
[ 92.997740] 0x40
[ 92.997901] kcsan_setup_watchpoint+0x300/0x4cc
[ 92.998120] blk_finish_plug+0x48/0x6c
[ 92.998323] read_pages+0xf0/0x214
[ 92.998543] page_cache_ra_unbounded+0x120/0x244
[ 92.998787] do_page_cache_ra+0x90/0xb8
[ 92.999012] force_page_cache_ra+0x12c/0x130
[ 92.999247] page_cache_sync_ra+0xc4/0xdc
[ 92.999476] filemap_get_pages+0x1a4/0x708
[ 92.999723] filemap_read+0x204/0x4c0
[ 92.999952] blkdev_read_iter+0x1e8/0x25c
[ 93.000181] vfs_read+0x29c/0x2f4
[ 93.000389] ksys_read+0xb8/0x134
[ 93.000599] sys_read+0x4c/0x74
[ 93.000802] system_call_exception+0x15c/0x1c0
[ 93.001042] ret_from_syscall+0x0/0x2c
[ 93.001387] write to 0xc1fb63b0 of 4 bytes by task 575 on cpu 0:
[ 93.001609] blk_finish_plug+0x48/0x6c
[ 93.001814] read_pages+0xf0/0x214
[ 93.002031] page_cache_ra_unbounded+0x120/0x244
[ 93.002271] do_page_cache_ra+0x90/0xb8
[ 93.002496] force_page_cache_ra+0x12c/0x130
[ 93.002730] page_cache_sync_ra+0xc4/0xdc
[ 93.002959] filemap_get_pages+0x1a4/0x708
[ 93.003197] filemap_read+0x204/0x4c0
[ 93.003428] blkdev_read_iter+0x1e8/0x25c
[ 93.003652] vfs_read+0x29c/0x2f4
[ 93.003858] ksys_read+0xb8/0x134
[ 93.004065] sys_read+0x4c/0x74
[ 93.004268] system_call_exception+0x15c/0x1c0
[ 93.004504] ret_from_syscall+0x0/0x2c
[ 93.004842] Reported by Kernel Concurrency Sanitizer on:
[ 93.005036] CPU: 0 PID: 575 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 93.005309] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 93.005517] ==================================================================
[ 93.873453] [drm] fb mappable at 0xA0040000
[ 93.873817] [drm] vram apper at 0xA0000000
[ 93.874106] [drm] size 8294400
[ 93.874361] [drm] fb depth is 24
[ 93.874538] [drm] pitch is 7680
[ 94.252525] Console: switching to colour frame buffer device 240x67
[ 95.062293] radeon 0000:00:10.0: [drm] fb0: radeondrmfb frame buffer device
[ 97.049715] firewire_ohci 0002:00:0e.0: enabling device (0000 -> 0002)
[ 97.199210] firewire_ohci 0002:00:0e.0: added OHCI v1.10 device as card 0, 8 IR + 8 IT contexts, quirks 0x0
[ 97.412736] gem 0002:00:0f.0 enP2p0s15: renamed from eth0 (while UP)
[ 97.613568] ADM1030 fan controller [@2c]
[ 97.685542] DS1775 digital thermometer [@49]
[ 97.687865] Temp: 58.8 C
[ 97.687914] Hyst: 70.0 C
[ 97.689321] OS: 75.0 C
[ 97.741434] firewire_core 0002:00:0e.0: created device fw0: GUID 000a95fffe9c763a, S800
[ 99.215587] ==================================================================
[ 99.217409] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 99.219434] write (marked) to 0xeedacc11 of 1 bytes by interrupt on cpu 0:
[ 99.221074] rcu_report_qs_rdp+0x15c/0x18c
[ 99.222137] rcu_core+0x1f0/0xa88
[ 99.223034] rcu_core_si+0x20/0x3c
[ 99.223948] __do_softirq+0x1dc/0x218
[ 99.224944] do_softirq_own_stack+0x54/0x74
[ 99.226047] do_softirq_own_stack+0x44/0x74
[ 99.227145] __irq_exit_rcu+0x6c/0xbc
[ 99.228124] irq_exit+0x10/0x20
[ 99.228992] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 99.230356] timer_interrupt+0x64/0x178
[ 99.231364] Decrementer_virt+0x108/0x10c
[ 99.232415] 0x1
[ 99.232987] 0x5c
[ 99.233571] kcsan_setup_watchpoint+0x300/0x4cc
[ 99.234725] rcu_all_qs+0x58/0x17c
[ 99.235645] __cond_resched+0x50/0x58
[ 99.236623] kmem_cache_alloc+0x48/0x228
[ 99.237670] anon_vma_fork+0xbc/0x1e8
[ 99.238635] copy_process+0x1f14/0x3324
[ 99.239672] kernel_clone+0x78/0x2d0
[ 99.240641] sys_clone+0xe0/0x110
[ 99.241556] system_call_exception+0x15c/0x1c0
[ 99.242710] ret_from_syscall+0x0/0x2c
[ 99.356241] read to 0xeedacc11 of 1 bytes by task 719 on cpu 0:
[ 99.413875] rcu_all_qs+0x58/0x17c
[ 99.471688] __cond_resched+0x50/0x58
[ 99.529622] kmem_cache_alloc+0x48/0x228
[ 99.587637] anon_vma_fork+0xbc/0x1e8
[ 99.645528] copy_process+0x1f14/0x3324
[ 99.703716] kernel_clone+0x78/0x2d0
[ 99.761923] sys_clone+0xe0/0x110
[ 99.819992] system_call_exception+0x15c/0x1c0
[ 99.878269] ret_from_syscall+0x0/0x2c
[ 99.993841] Reported by Kernel Concurrency Sanitizer on:
[ 100.051585] CPU: 0 PID: 719 Comm: openrc-run.sh Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 100.110064] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 100.168370] ==================================================================
[ 101.851821] EXT4-fs (sda5): re-mounted fa07e66f-b4f9-404f-85d8-487d3c097aec r/w. Quota mode: disabled.
[ 102.483920] EXT4-fs (sda5): re-mounted fa07e66f-b4f9-404f-85d8-487d3c097aec r/w. Quota mode: disabled.
[ 104.866209] snd-aoa-fabric-layout: Using direct GPIOs
[ 105.217508] snd-aoa-fabric-layout: can use this codec
[ 105.470497] snd-aoa-codec-tas: tas found, addr 0x35 on /pci@f2000000/mac-io@17/i2c@18000/deq@6a
[ 105.907575] CPU-temp: 58.9 C
[ 105.907650] , Case: 35.5 C
[ 106.016350] , Fan: 5 (tuned -6)
[ 106.679581] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[ 107.172258] ==================================================================
[ 107.235050] BUG: KCSAN: data-race in _copy_to_user / interrupt_async_enter_prepare
[ 107.360040] read to 0xc3499f5c of 4 bytes by task 547 on cpu 1:
[ 107.423383] interrupt_async_enter_prepare+0x64/0xc4
[ 107.487499] do_IRQ+0x18/0x2c
[ 107.551661] HardwareInterrupt_virt+0x108/0x10c
[ 107.616591] 0xbc4640
[ 107.680385] 0xd
[ 107.742790] kcsan_setup_watchpoint+0x300/0x4cc
[ 107.805424] _copy_to_user+0x9c/0xdc
[ 107.867387] cp_statx+0x348/0x384
[ 107.928284] do_statx+0xc8/0xfc
[ 107.988247] sys_statx+0x8c/0xc8
[ 108.047635] system_call_exception+0x15c/0x1c0
[ 108.106929] ret_from_syscall+0x0/0x2c
[ 108.223641] write to 0xc3499f5c of 4 bytes by task 547 on cpu 1:
[ 108.283215] _copy_to_user+0x9c/0xdc
[ 108.342989] cp_statx+0x348/0x384
[ 108.402639] do_statx+0xc8/0xfc
[ 108.462438] sys_statx+0x8c/0xc8
[ 108.522074] system_call_exception+0x15c/0x1c0
[ 108.582153] ret_from_syscall+0x0/0x2c
[ 108.700558] Reported by Kernel Concurrency Sanitizer on:
[ 108.760385] CPU: 1 PID: 547 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 108.821586] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 108.883577] ==================================================================
[ 108.925512] Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[ 109.199155] Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
[ 109.276375] Adding 8388604k swap on /dev/sdb6. Priority:-2 extents:1 across:8388604k
[ 109.314175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 109.544449] cfg80211: failed to load regulatory.db
[ 110.594360] b43legacy-phy0: Broadcom 4306 WLAN found (core revision 4)
[ 110.742139] b43legacy-phy0 debug: Found PHY: Analog 1, Type 2, Revision 1
[ 110.742258] b43legacy-phy0 debug: Found Radio: Manuf 0x17F, Version 0x2050, Revision 2
[ 110.775448] b43legacy-phy0 debug: Radio initialized
[ 110.778851] Broadcom 43xx-legacy driver loaded [ Features: PLID ]
[ 110.900422] b43legacy-phy0: Loading firmware b43legacy/ucode4.fw
[ 111.029503] b43legacy-phy0: Loading firmware b43legacy/pcm4.fw
[ 111.153092] b43legacy-phy0: Loading firmware b43legacy/b0g0initvals2.fw
[ 111.287784] ieee80211 phy0: Selected rate control algorithm 'minstrel_ht'
[ 111.647673] EXT4-fs (sdc5): mounting ext2 file system using the ext4 subsystem
[ 111.800289] EXT4-fs (sdc5): mounted filesystem e4e8af9e-0f0d-44f9-b983-71bf61d782de r/w without journal. Quota mode: disabled.
[ 111.927130] ext2 filesystem being mounted at /boot supports timestamps until 2038-01-19 (0x7fffffff)
[ 112.067788] BTRFS: device label tmp devid 1 transid 2859 /dev/sda6 (8:6) scanned by mount (899)
[ 112.207634] BTRFS info (device sda6): first mount of filesystem 65162d91-887e-4e48-a356-fbf7093eefb5
[ 112.340711] BTRFS info (device sda6): using xxhash64 (xxhash64-generic) checksum algorithm
[ 112.473698] BTRFS info (device sda6): using free-space-tree
[ 134.785416] b43legacy-phy0: Loading firmware version 0x127, patch level 14 (2005-04-18 02:36:27)
[ 134.872724] b43legacy-phy0 debug: Chip initialized
[ 134.918765] b43legacy-phy0 debug: 30-bit DMA initialized
[ 134.930672] b43legacy-phy0 debug: Wireless interface started
[ 134.930824] b43legacy-phy0 debug: Adding Interface type 2
[ 135.340440] NET: Registered PF_PACKET protocol family
[ 142.262239] ==================================================================
[ 142.262373] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 142.262522] write (marked) to 0xeedacc11 of 1 bytes by interrupt on cpu 0:
[ 142.262599] rcu_report_qs_rdp+0x15c/0x18c
[ 142.262688] rcu_core+0x1f0/0xa88
[ 142.262775] rcu_core_si+0x20/0x3c
[ 142.262862] __do_softirq+0x1dc/0x218
[ 142.262974] do_softirq_own_stack+0x54/0x74
[ 142.263084] do_softirq_own_stack+0x44/0x74
[ 142.263190] __irq_exit_rcu+0x6c/0xbc
[ 142.263287] irq_exit+0x10/0x20
[ 142.263380] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 142.263478] timer_interrupt+0x64/0x178
[ 142.263564] Decrementer_virt+0x108/0x10c
[ 142.263659] 0xf393dd80
[ 142.263737] 0xc1b7f120
[ 142.263808] kcsan_setup_watchpoint+0x300/0x4cc
[ 142.263898] rcu_all_qs+0x58/0x17c
[ 142.263989] __cond_resched+0x50/0x58
[ 142.264078] dput+0x28/0x90
[ 142.264174] path_put+0x2c/0x54
[ 142.264271] terminate_walk+0x80/0x110
[ 142.264371] path_lookupat+0x120/0x21c
[ 142.264481] filename_lookup+0x90/0x100
[ 142.264594] vfs_statx+0x8c/0x25c
[ 142.264674] do_statx+0xb4/0xfc
[ 142.264754] sys_statx+0x8c/0xc8
[ 142.264836] system_call_exception+0x15c/0x1c0
[ 142.264945] ret_from_syscall+0x0/0x2c
[ 142.265079] read to 0xeedacc11 of 1 bytes by task 1278 on cpu 0:
[ 142.265153] rcu_all_qs+0x58/0x17c
[ 142.265245] __cond_resched+0x50/0x58
[ 142.265333] dput+0x28/0x90
[ 142.265426] path_put+0x2c/0x54
[ 142.265520] terminate_walk+0x80/0x110
[ 142.265620] path_lookupat+0x120/0x21c
[ 142.265729] filename_lookup+0x90/0x100
[ 142.265841] vfs_statx+0x8c/0x25c
[ 142.265921] do_statx+0xb4/0xfc
[ 142.266001] sys_statx+0x8c/0xc8
[ 142.266082] system_call_exception+0x15c/0x1c0
[ 142.266189] ret_from_syscall+0x0/0x2c
[ 142.266315] Reported by Kernel Concurrency Sanitizer on:
[ 142.266370] CPU: 0 PID: 1278 Comm: openrc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 142.266464] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 142.266525] ==================================================================
[ 146.864470] CPU-temp: 59.2 C
[ 146.864533] , Case: 35.6 C
[ 146.864575] , Fan: 6 (tuned +1)
[ 155.274777] ==================================================================
[ 155.274912] BUG: KCSAN: data-race in do_sys_poll / interrupt_async_enter_prepare
[ 155.275072] read to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 155.275146] interrupt_async_enter_prepare+0x64/0xc4
[ 155.275243] timer_interrupt+0x1c/0x178
[ 155.275329] Decrementer_virt+0x108/0x10c
[ 155.275425] do_raw_spin_unlock+0x10c/0x130
[ 155.275526] 0x9032
[ 155.275599] kcsan_setup_watchpoint+0x300/0x4cc
[ 155.275689] do_sys_poll+0x500/0x614
[ 155.275778] sys_poll+0xac/0x160
[ 155.275866] system_call_exception+0x15c/0x1c0
[ 155.275975] ret_from_syscall+0x0/0x2c
[ 155.276106] write to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 155.276180] do_sys_poll+0x500/0x614
[ 155.276269] sys_poll+0xac/0x160
[ 155.276357] system_call_exception+0x15c/0x1c0
[ 155.276464] ret_from_syscall+0x0/0x2c
[ 155.276590] Reported by Kernel Concurrency Sanitizer on:
[ 155.276644] CPU: 0 PID: 1568 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 155.276739] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 155.276799] ==================================================================
[ 212.002338] CPU-temp: 59.6 C
[ 212.002409] , Case: 35.7 C
[ 212.002474] , Fan: 7 (tuned +1)
[ 252.536412] ==================================================================
[ 252.536552] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 252.536727] read to 0xeeda9094 of 1 bytes by interrupt on cpu 1:
[ 252.536803] tmigr_next_groupevt+0x60/0xd8
[ 252.536906] tmigr_handle_remote_up+0x94/0x394
[ 252.537011] __walk_groups+0x74/0xc8
[ 252.537107] tmigr_handle_remote+0x13c/0x198
[ 252.537211] run_timer_softirq+0x94/0x98
[ 252.537320] __do_softirq+0x1dc/0x218
[ 252.537433] do_softirq_own_stack+0x54/0x74
[ 252.537543] do_softirq_own_stack+0x44/0x74
[ 252.537650] __irq_exit_rcu+0x6c/0xbc
[ 252.537747] irq_exit+0x10/0x20
[ 252.537839] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 252.537937] timer_interrupt+0x64/0x178
[ 252.538025] Decrementer_virt+0x108/0x10c
[ 252.538120] _raw_spin_unlock_irqrestore+0x28/0x58
[ 252.538232] free_to_partial_list+0x100/0x3c8
[ 252.538342] kfree+0x15c/0x1bc
[ 252.538439] skb_kfree_head+0x68/0x6c
[ 252.538548] skb_free_head+0xbc/0xc0
[ 252.538628] skb_release_data+0x1c4/0x1d4
[ 252.538714] skb_release_all+0x50/0x70
[ 252.538796] __kfree_skb+0x2c/0x4c
[ 252.538875] kfree_skb_reason+0x34/0x4c
[ 252.538958] kfree_skb+0x28/0x40
[ 252.539039] unix_stream_read_generic+0x9ac/0xae0
[ 252.539138] unix_stream_recvmsg+0x118/0x11c
[ 252.539234] sock_recvmsg_nosec+0x5c/0x88
[ 252.539329] ____sys_recvmsg+0xc4/0x270
[ 252.539427] ___sys_recvmsg+0x90/0xd4
[ 252.539532] __sys_recvmsg+0xb0/0xf8
[ 252.539637] sys_recvmsg+0x50/0x78
[ 252.539740] system_call_exception+0x15c/0x1c0
[ 252.539850] ret_from_syscall+0x0/0x2c
[ 252.539980] write to 0xeeda9094 of 1 bytes by task 0 on cpu 0:
[ 252.540053] tmigr_cpu_activate+0xe8/0x12c
[ 252.540156] timer_clear_idle+0x60/0x80
[ 252.540262] tick_nohz_restart_sched_tick+0x3c/0x170
[ 252.540365] tick_nohz_idle_exit+0xe0/0x158
[ 252.540465] do_idle+0x54/0x11c
[ 252.540547] cpu_startup_entry+0x30/0x34
[ 252.540634] kernel_init+0x0/0x1a4
[ 252.540732] console_on_rootfs+0x0/0xc8
[ 252.540814] 0x3610
[ 252.540926] Reported by Kernel Concurrency Sanitizer on:
[ 252.540981] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 252.541076] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 252.541137] ==================================================================
[ 269.361258] ==================================================================
[ 269.424130] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 269.551580] read to 0xc34987dc of 4 bytes by task 1577 on cpu 0:
[ 269.616042] interrupt_async_enter_prepare+0x64/0xc4
[ 269.680588] do_IRQ+0x18/0x2c
[ 269.745159] HardwareInterrupt_virt+0x108/0x10c
[ 269.810375] ___sys_recvmsg+0xa8/0xd4
[ 269.875466] 0x1
[ 269.939950] kcsan_setup_watchpoint+0x300/0x4cc
[ 270.005262] copy_iovec_from_user+0xb0/0x10c
[ 270.070322] __import_iovec+0xfc/0x22c
[ 270.134934] import_iovec+0x50/0x84
[ 270.199533] copy_msghdr_from_user+0xa0/0xd4
[ 270.264728] ___sys_recvmsg+0x6c/0xd4
[ 270.330041] __sys_recvmsg+0xb0/0xf8
[ 270.395115] sys_recvmsg+0x50/0x78
[ 270.459977] system_call_exception+0x15c/0x1c0
[ 270.525143] ret_from_syscall+0x0/0x2c
[ 270.653525] write to 0xc34987dc of 4 bytes by task 1577 on cpu 0:
[ 270.717547] copy_iovec_from_user+0xb0/0x10c
[ 270.780806] __import_iovec+0xfc/0x22c
[ 270.843348] import_iovec+0x50/0x84
[ 270.905420] copy_msghdr_from_user+0xa0/0xd4
[ 270.966956] ___sys_recvmsg+0x6c/0xd4
[ 271.027596] __sys_recvmsg+0xb0/0xf8
[ 271.087124] sys_recvmsg+0x50/0x78
[ 271.145899] system_call_exception+0x15c/0x1c0
[ 271.204429] ret_from_syscall+0x0/0x2c
[ 271.320364] Reported by Kernel Concurrency Sanitizer on:
[ 271.379532] CPU: 0 PID: 1577 Comm: urxvt Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 271.439191] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 271.498416] ==================================================================
[ 276.865543] CPU-temp: 59.9 C
[ 276.865623] , Case: 35.8 C
[ 276.968161] , Fan: 8 (tuned +1)
[ 279.054669] ==================================================================
[ 279.111269] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 279.223825] read to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 279.280806] interrupt_async_enter_prepare+0x64/0xc4
[ 279.337796] do_IRQ+0x18/0x2c
[ 279.394353] HardwareInterrupt_virt+0x108/0x10c
[ 279.451258] 0x1
[ 279.507766] 0x1000
[ 279.563800] kcsan_setup_watchpoint+0x300/0x4cc
[ 279.620285] copy_iovec_from_user+0xb0/0x10c
[ 279.676778] __import_iovec+0xfc/0x22c
[ 279.733472] import_iovec+0x50/0x84
[ 279.789929] copy_msghdr_from_user+0xa0/0xd4
[ 279.846778] ___sys_recvmsg+0x6c/0xd4
[ 279.903213] __sys_recvmsg+0xb0/0xf8
[ 279.959331] sys_recvmsg+0x50/0x78
[ 280.015040] system_call_exception+0x15c/0x1c0
[ 280.071038] ret_from_syscall+0x0/0x2c
[ 280.183559] write to 0xc1fb65dc of 4 bytes by task 1568 on cpu 0:
[ 280.241201] copy_iovec_from_user+0xb0/0x10c
[ 280.298804] __import_iovec+0xfc/0x22c
[ 280.356543] import_iovec+0x50/0x84
[ 280.414376] copy_msghdr_from_user+0xa0/0xd4
[ 280.472566] ___sys_recvmsg+0x6c/0xd4
[ 280.531236] __sys_recvmsg+0xb0/0xf8
[ 280.589458] sys_recvmsg+0x50/0x78
[ 280.647220] system_call_exception+0x15c/0x1c0
[ 280.704265] ret_from_syscall+0x0/0x2c
[ 280.815096] Reported by Kernel Concurrency Sanitizer on:
[ 280.870689] CPU: 0 PID: 1568 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 280.927061] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 280.983547] ==================================================================
[ 333.820031] CPU-temp: 60.1 C
[ 333.820104] , Case: 36.0 C
[ 333.922934] , Fan: 9 (tuned +1)
[ 386.720306] ==================================================================
[ 386.780763] BUG: KCSAN: data-race in __run_timer_base / next_expiry_recalc
[ 386.900308] write to 0xeedc4918 of 4 bytes by interrupt on cpu 1:
[ 386.961089] next_expiry_recalc+0xbc/0x15c
[ 387.022044] __run_timer_base+0x278/0x38c
[ 387.083095] run_timer_base+0x5c/0x7c
[ 387.144161] run_timer_softirq+0x34/0x98
[ 387.205064] __do_softirq+0x1dc/0x218
[ 387.265807] do_softirq_own_stack+0x54/0x74
[ 387.326741] do_softirq_own_stack+0x44/0x74
[ 387.386848] __irq_exit_rcu+0x6c/0xbc
[ 387.446427] irq_exit+0x10/0x20
[ 387.505765] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 387.565965] timer_interrupt+0x64/0x178
[ 387.625952] Decrementer_virt+0x108/0x10c
[ 387.685840] default_idle_call+0x38/0x48
[ 387.745740] do_idle+0xfc/0x11c
[ 387.805480] cpu_startup_entry+0x30/0x34
[ 387.865333] start_secondary+0x504/0x854
[ 387.925068] 0x3338
[ 388.042760] read to 0xeedc4918 of 4 bytes by interrupt on cpu 0:
[ 388.101842] __run_timer_base+0x4c/0x38c
[ 388.160468] timer_expire_remote+0x48/0x68
[ 388.218450] tmigr_handle_remote_up+0x1f4/0x394
[ 388.275754] __walk_groups+0x74/0xc8
[ 388.333193] tmigr_handle_remote+0x13c/0x198
[ 388.391077] run_timer_softirq+0x94/0x98
[ 388.448233] __do_softirq+0x1dc/0x218
[ 388.504471] do_softirq_own_stack+0x54/0x74
[ 388.560085] do_softirq_own_stack+0x44/0x74
[ 388.614865] __irq_exit_rcu+0x6c/0xbc
[ 388.669169] irq_exit+0x10/0x20
[ 388.723070] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 388.777663] timer_interrupt+0x64/0x178
[ 388.832063] Decrementer_virt+0x108/0x10c
[ 388.886823] default_idle_call+0x38/0x48
[ 388.941375] do_idle+0xfc/0x11c
[ 388.995612] cpu_startup_entry+0x30/0x34
[ 389.049972] kernel_init+0x0/0x1a4
[ 389.104285] console_on_rootfs+0x0/0xc8
[ 389.158566] 0x3610
[ 389.265473] Reported by Kernel Concurrency Sanitizer on:
[ 389.319778] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 389.375176] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 389.430835] ==================================================================
[ 452.659321] pagealloc: memory corruption
[ 452.756403] fffdfff0: 00 00 00 00 ....
[ 452.854833] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 452.953923] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 453.053902] Call Trace:
[ 453.150878] [f1919c00] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 453.251275] [f1919c20] [c0be4ee8] dump_stack+0x20/0x34
[ 453.350119] [f1919c30] [c02c47c0] __kernel_unpoison_pages+0x198/0x1a8
[ 453.451915] [f1919c80] [c029b62c] post_alloc_hook+0x8c/0xf0
[ 453.553600] [f1919cb0] [c029b6b4] prep_new_page+0x24/0x5c
[ 453.654442] [f1919cd0] [c029c9dc] get_page_from_freelist+0x564/0x660
[ 453.755561] [f1919d60] [c029dfcc] __alloc_pages+0x114/0x8dc
[ 453.856815] [f1919e20] [c02764f0] folio_prealloc.isra.0+0x44/0xec
[ 453.959273] [f1919e40] [c027be28] handle_mm_fault+0x488/0xed0
[ 454.057617] [f1919ed0] [c00340f4] ___do_page_fault+0x4d8/0x630
[ 454.154895] [f1919f10] [c003446c] do_page_fault+0x28/0x40
[ 454.251719] [f1919f30] [c000433c] DataAccess_virt+0x124/0x17c
[ 454.349211] --- interrupt: 300 at 0x413008
[ 454.445748] NIP: 00413008 LR: 00412fe8 CTR: 00000000
[ 454.542365] REGS: f1919f40 TRAP: 0300 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 454.638976] MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
[ 454.733294] DAR: 8d7de010 DSISR: 42000000
GPR00: 00412fe8 afa78860 a7dc6700 6b871010 3c500000 20884462 00000003 003301e4
GPR08: 21f6e000 21f6d000 00000000 408258ea 20882462 0042ff68 00000000 40882462
GPR16: ffffffff 00000000 00000002 00000000 00000001 00000000 00430018 00000001
GPR24: ffffffff ffffffff 3c500000 0000005a 6b871010 00000000 00437cd0 00001000
[ 455.228075] NIP [00413008] 0x413008
[ 455.327281] LR [00412fe8] 0x412fe8
[ 455.422923] --- interrupt: 300
[ 455.523201] page: refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x31069
[ 455.624640] flags: 0x80000000(zone=2)
[ 455.725989] page_type: 0xffffffff()
[ 455.826265] raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
[ 455.931213] raw: 00000000
[ 456.032785] page dumped because: pagealloc: corrupted page details
[ 456.137755] page_owner info is not present (never set?)
[ 471.812481] ==================================================================
[ 471.875913] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 472.002063] read (marked) to 0xefbfb770 of 4 bytes by task 39 on cpu 0:
[ 472.066742] lru_gen_look_around+0x320/0x634
[ 472.130601] folio_referenced_one+0x32c/0x404
[ 472.194198] rmap_walk_anon+0x1c4/0x24c
[ 472.257718] rmap_walk+0x70/0x7c
[ 472.320908] folio_referenced+0x194/0x1ec
[ 472.384159] shrink_folio_list+0x6a8/0xd28
[ 472.447385] evict_folios+0xcc0/0x1204
[ 472.510527] try_to_shrink_lruvec+0x214/0x2f0
[ 472.573863] shrink_one+0x104/0x1e8
[ 472.637032] shrink_node+0x314/0xc3c
[ 472.700496] balance_pgdat+0x498/0x914
[ 472.763930] kswapd+0x304/0x398
[ 472.827248] kthread+0x174/0x178
[ 472.890132] start_kernel_thread+0x10/0x14
[ 473.015917] write to 0xefbfb770 of 4 bytes by task 1594 on cpu 1:
[ 473.080139] list_add+0x58/0x94
[ 473.143681] evict_folios+0xb04/0x1204
[ 473.207333] try_to_shrink_lruvec+0x214/0x2f0
[ 473.271180] shrink_one+0x104/0x1e8
[ 473.334921] shrink_node+0x314/0xc3c
[ 473.398514] do_try_to_free_pages+0x500/0x7e4
[ 473.462735] try_to_free_pages+0x150/0x18c
[ 473.526742] __alloc_pages+0x460/0x8dc
[ 473.590118] folio_prealloc.isra.0+0x44/0xec
[ 473.652888] handle_mm_fault+0x488/0xed0
[ 473.714904] ___do_page_fault+0x4d8/0x630
[ 473.776247] do_page_fault+0x28/0x40
[ 473.837398] DataAccess_virt+0x124/0x17c
[ 473.957872] Reported by Kernel Concurrency Sanitizer on:
[ 474.018336] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 474.079266] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 474.140486] ==================================================================
[ 476.045778] ==================================================================
[ 476.107390] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 476.230084] read (marked) to 0xef9ba594 of 4 bytes by task 1593 on cpu 0:
[ 476.292384] lru_gen_look_around+0x320/0x634
[ 476.354216] folio_referenced_one+0x32c/0x404
[ 476.416032] rmap_walk_anon+0x1c4/0x24c
[ 476.477599] rmap_walk+0x70/0x7c
[ 476.538677] folio_referenced+0x194/0x1ec
[ 476.599863] shrink_folio_list+0x6a8/0xd28
[ 476.660728] evict_folios+0xcc0/0x1204
[ 476.721348] try_to_shrink_lruvec+0x214/0x2f0
[ 476.781560] shrink_one+0x104/0x1e8
[ 476.841011] shrink_node+0x314/0xc3c
[ 476.899794] do_try_to_free_pages+0x500/0x7e4
[ 476.958094] try_to_free_pages+0x150/0x18c
[ 477.015971] __alloc_pages+0x460/0x8dc
[ 477.073511] folio_prealloc.isra.0+0x44/0xec
[ 477.131177] handle_mm_fault+0x488/0xed0
[ 477.187936] ___do_page_fault+0x4d8/0x630
[ 477.244819] do_page_fault+0x28/0x40
[ 477.301705] DataAccess_virt+0x124/0x17c
[ 477.413345] write to 0xef9ba594 of 4 bytes by task 39 on cpu 1:
[ 477.469994] list_add+0x58/0x94
[ 477.525372] evict_folios+0xb04/0x1204
[ 477.580264] try_to_shrink_lruvec+0x214/0x2f0
[ 477.634933] shrink_one+0x104/0x1e8
[ 477.689145] shrink_node+0x314/0xc3c
[ 477.742465] balance_pgdat+0x498/0x914
[ 477.795104] kswapd+0x304/0x398
[ 477.847128] kthread+0x174/0x178
[ 477.898527] start_kernel_thread+0x10/0x14
[ 478.000334] Reported by Kernel Concurrency Sanitizer on:
[ 478.052065] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 478.105114] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 478.158491] ==================================================================
[ 484.836016] ==================================================================
[ 484.890251] BUG: KCSAN: data-race in __mod_memcg_lruvec_state / mem_cgroup_css_rstat_flush
[ 484.999385] read (marked) to 0xeedd91ac of 4 bytes by task 1593 on cpu 0:
[ 485.055331] mem_cgroup_css_rstat_flush+0xcc/0x518
[ 485.111764] cgroup_rstat_flush_locked+0x528/0x538
[ 485.168325] cgroup_rstat_flush+0x38/0x5c
[ 485.224702] do_flush_stats+0x78/0x9c
[ 485.281044] mem_cgroup_flush_stats+0x7c/0x80
[ 485.337605] zswap_shrinker_count+0xb8/0x150
[ 485.393845] do_shrink_slab+0x7c/0x540
[ 485.449674] shrink_slab+0x1f0/0x384
[ 485.505456] shrink_one+0x140/0x1e8
[ 485.560938] shrink_node+0x314/0xc3c
[ 485.616173] do_try_to_free_pages+0x500/0x7e4
[ 485.671835] try_to_free_pages+0x150/0x18c
[ 485.727443] __alloc_pages+0x460/0x8dc
[ 485.782944] folio_prealloc.isra.0+0x44/0xec
[ 485.838574] handle_mm_fault+0x488/0xed0
[ 485.894091] ___do_page_fault+0x4d8/0x630
[ 485.949620] do_page_fault+0x28/0x40
[ 486.005049] DataAccess_virt+0x124/0x17c
[ 486.115237] write to 0xeedd91ac of 4 bytes by task 39 on cpu 1:
[ 486.171210] __mod_memcg_lruvec_state+0x8c/0x154
[ 486.227322] __mod_lruvec_state+0x58/0x78
[ 486.282611] lru_gen_update_size+0x130/0x240
[ 486.337329] lru_gen_del_folio+0x104/0x140
[ 486.391280] evict_folios+0xaf8/0x1204
[ 486.445636] try_to_shrink_lruvec+0x214/0x2f0
[ 486.499529] shrink_one+0x104/0x1e8
[ 486.552893] shrink_node+0x314/0xc3c
[ 486.605603] balance_pgdat+0x498/0x914
[ 486.657986] kswapd+0x304/0x398
[ 486.709948] kthread+0x174/0x178
[ 486.761693] start_kernel_thread+0x10/0x14
[ 486.865145] Reported by Kernel Concurrency Sanitizer on:
[ 486.917476] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 486.970887] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 487.024556] ==================================================================
[ 488.445808] ==================================================================
[ 488.500314] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 488.608881] read (marked) to 0xef383a00 of 4 bytes by task 1594 on cpu 0:
[ 488.664363] lru_gen_look_around+0x320/0x634
[ 488.720003] folio_referenced_one+0x32c/0x404
[ 488.775696] rmap_walk_anon+0x1c4/0x24c
[ 488.831310] rmap_walk+0x70/0x7c
[ 488.886546] folio_referenced+0x194/0x1ec
[ 488.941958] shrink_folio_list+0x6a8/0xd28
[ 488.997442] evict_folios+0xcc0/0x1204
[ 489.052550] try_to_shrink_lruvec+0x214/0x2f0
[ 489.107616] shrink_one+0x104/0x1e8
[ 489.162617] shrink_node+0x314/0xc3c
[ 489.217347] do_try_to_free_pages+0x500/0x7e4
[ 489.272219] try_to_free_pages+0x150/0x18c
[ 489.327292] __alloc_pages+0x460/0x8dc
[ 489.382392] folio_prealloc.isra.0+0x44/0xec
[ 489.437664] handle_mm_fault+0x488/0xed0
[ 489.493033] ___do_page_fault+0x4d8/0x630
[ 489.548450] do_page_fault+0x28/0x40
[ 489.603743] DataAccess_virt+0x124/0x17c
[ 489.712459] write to 0xef383a00 of 4 bytes by task 39 on cpu 1:
[ 489.766735] list_del+0x2c/0x5c
[ 489.820297] lru_gen_del_folio+0x110/0x140
[ 489.874513] evict_folios+0xaf8/0x1204
[ 489.927811] try_to_shrink_lruvec+0x214/0x2f0
[ 489.980494] shrink_one+0x104/0x1e8
[ 490.032600] shrink_node+0x314/0xc3c
[ 490.084017] balance_pgdat+0x498/0x914
[ 490.135319] kswapd+0x304/0x398
[ 490.186592] kthread+0x174/0x178
[ 490.237688] start_kernel_thread+0x10/0x14
[ 490.339293] Reported by Kernel Concurrency Sanitizer on:
[ 490.390696] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 490.443194] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 490.496203] ==================================================================
[ 504.870324] ==================================================================
[ 504.926179] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 505.035013] read to 0xc121b328 of 8 bytes by task 39 on cpu 0:
[ 505.089891] zswap_store+0x118/0xa18
[ 505.145917] swap_writepage+0x4c/0xe8
[ 505.200945] pageout+0x1dc/0x304
[ 505.256018] shrink_folio_list+0xa70/0xd28
[ 505.311460] evict_folios+0xcc0/0x1204
[ 505.366557] try_to_shrink_lruvec+0x214/0x2f0
[ 505.422439] shrink_one+0x104/0x1e8
[ 505.476800] shrink_node+0x314/0xc3c
[ 505.530919] balance_pgdat+0x498/0x914
[ 505.585030] kswapd+0x304/0x398
[ 505.639149] kthread+0x174/0x178
[ 505.692932] start_kernel_thread+0x10/0x14
[ 505.800244] write to 0xc121b328 of 8 bytes by task 1593 on cpu 1:
[ 505.854808] zswap_update_total_size+0x58/0xe8
[ 505.910040] zswap_entry_free+0xdc/0x1c0
[ 505.964971] zswap_load+0x190/0x19c
[ 506.019793] swap_read_folio+0xbc/0x450
[ 506.074754] swap_cluster_readahead+0x2f8/0x338
[ 506.129791] swapin_readahead+0x430/0x438
[ 506.184612] do_swap_page+0x1e0/0x9bc
[ 506.238597] handle_mm_fault+0xecc/0xed0
[ 506.291968] ___do_page_fault+0x4d8/0x630
[ 506.344759] do_page_fault+0x28/0x40
[ 506.398273] DataAccess_virt+0x124/0x17c
[ 506.503169] Reported by Kernel Concurrency Sanitizer on:
[ 506.555788] CPU: 1 PID: 1593 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 506.609554] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 506.662427] ==================================================================
[ 510.124486] ==================================================================
[ 510.180131] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 510.291131] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 510.347527] hrtimer_active+0xb0/0x100
[ 510.403984] task_tick_fair+0xc8/0xcc
[ 510.460204] scheduler_tick+0x6c/0xcc
[ 510.516434] update_process_times+0xc8/0x120
[ 510.572773] tick_nohz_handler+0x1ac/0x270
[ 510.629081] __hrtimer_run_queues+0x170/0x1d8
[ 510.685810] hrtimer_interrupt+0x168/0x350
[ 510.742347] timer_interrupt+0x108/0x178
[ 510.798808] Decrementer_virt+0x108/0x10c
[ 510.855184] memcg_rstat_updated+0x154/0x15c
[ 510.911753] __mod_memcg_lruvec_state+0x118/0x154
[ 510.968523] __mod_lruvec_state+0x58/0x78
[ 511.025058] __lruvec_stat_mod_folio+0x88/0x8c
[ 511.081447] folio_remove_rmap_ptes+0xc8/0x150
[ 511.137516] unmap_page_range+0x6f8/0x8bc
[ 511.193560] unmap_vmas+0x11c/0x174
[ 511.249316] unmap_region+0x134/0x1dc
[ 511.304910] do_vmi_align_munmap+0x3ac/0x4ac
[ 511.360666] do_vmi_munmap+0x114/0x11c
[ 511.416401] __vm_munmap+0xcc/0x124
[ 511.472115] sys_munmap+0x40/0x64
[ 511.528049] system_call_exception+0x15c/0x1c0
[ 511.584233] ret_from_syscall+0x0/0x2c
[ 511.695258] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 511.751441] __hrtimer_run_queues+0x1cc/0x1d8
[ 511.807288] hrtimer_interrupt+0x168/0x350
[ 511.862980] timer_interrupt+0x108/0x178
[ 511.917466] Decrementer_virt+0x108/0x10c
[ 511.972362] find_stack+0x198/0x1dc
[ 512.026447] do_raw_spin_lock+0xbc/0x11c
[ 512.080033] _raw_spin_lock+0x24/0x3c
[ 512.133252] __pte_offset_map_lock+0x58/0xb8
[ 512.186376] page_vma_mapped_walk+0x1e0/0x468
[ 512.239590] remove_migration_pte+0xf4/0x334
[ 512.292790] rmap_walk_anon+0x1c4/0x24c
[ 512.345898] rmap_walk+0x70/0x7c
[ 512.398564] remove_migration_ptes+0x98/0x9c
[ 512.451480] migrate_pages_batch+0x8ec/0xb38
[ 512.504414] migrate_pages+0x290/0x77c
[ 512.557249] compact_zone+0xb48/0xf04
[ 512.609972] compact_node+0xe8/0x158
[ 512.662532] kcompactd+0x2c0/0x2d8
[ 512.715068] kthread+0x174/0x178
[ 512.767460] start_kernel_thread+0x10/0x14
[ 512.871299] Reported by Kernel Concurrency Sanitizer on:
[ 512.923314] CPU: 0 PID: 31 Comm: kcompactd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 512.976594] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 513.030308] ==================================================================
[ 528.568529] ==================================================================
[ 528.623563] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 528.733089] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 528.788901] hrtimer_active+0xb0/0x100
[ 528.844762] task_tick_fair+0xc8/0xcc
[ 528.900519] scheduler_tick+0x6c/0xcc
[ 528.956040] update_process_times+0xc8/0x120
[ 529.011842] tick_nohz_handler+0x1ac/0x270
[ 529.068353] __hrtimer_run_queues+0x170/0x1d8
[ 529.123288] hrtimer_interrupt+0x168/0x350
[ 529.177586] timer_interrupt+0x108/0x178
[ 529.231317] Decrementer_virt+0x108/0x10c
[ 529.285354] memcg_rstat_updated+0x2c/0x15c
[ 529.338748] __mod_memcg_lruvec_state+0x30/0x154
[ 529.391722] __mod_lruvec_state+0x58/0x78
[ 529.444551] __lruvec_stat_mod_folio+0x88/0x8c
[ 529.498429] folio_remove_rmap_ptes+0xc8/0x150
[ 529.551038] unmap_page_range+0x6f8/0x8bc
[ 529.603804] unmap_vmas+0x11c/0x174
[ 529.656712] unmap_region+0x134/0x1dc
[ 529.709663] do_vmi_align_munmap+0x3ac/0x4ac
[ 529.762012] do_vmi_munmap+0x114/0x11c
[ 529.814038] __vm_munmap+0xcc/0x124
[ 529.866185] sys_munmap+0x40/0x64
[ 529.918142] system_call_exception+0x15c/0x1c0
[ 529.970373] ret_from_syscall+0x0/0x2c
[ 530.073406] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 530.125836] __hrtimer_run_queues+0x1cc/0x1d8
[ 530.178436] hrtimer_interrupt+0x168/0x350
[ 530.230954] timer_interrupt+0x108/0x178
[ 530.283567] Decrementer_virt+0x108/0x10c
[ 530.336311] 0xc4a28800
[ 530.388668] cgroup_rstat_updated+0x50/0x150
[ 530.441621] memcg_rstat_updated+0x7c/0x15c
[ 530.494654] __mod_memcg_lruvec_state+0x118/0x154
[ 530.547963] __mod_lruvec_state+0x58/0x78
[ 530.601108] __lruvec_stat_mod_folio+0x88/0x8c
[ 530.654289] folio_remove_rmap_ptes+0xc8/0x150
[ 530.707564] unmap_page_range+0x6f8/0x8bc
[ 530.760503] unmap_vmas+0x11c/0x174
[ 530.812737] unmap_region+0x134/0x1dc
[ 530.864783] do_vmi_align_munmap+0x3ac/0x4ac
[ 530.916971] do_vmi_munmap+0x114/0x11c
[ 530.969005] __vm_munmap+0xcc/0x124
[ 531.020979] sys_munmap+0x40/0x64
[ 531.072850] system_call_exception+0x15c/0x1c0
[ 531.125022] ret_from_syscall+0x0/0x2c
[ 531.228289] Reported by Kernel Concurrency Sanitizer on:
[ 531.280569] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 531.334009] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 531.388022] ==================================================================
[ 563.307241] ==================================================================
[ 563.362164] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 563.472308] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 563.528696] hrtimer_active+0xb0/0x100
[ 563.585352] task_tick_fair+0xc8/0xcc
[ 563.642002] scheduler_tick+0x6c/0xcc
[ 563.698393] update_process_times+0xc8/0x120
[ 563.754995] tick_nohz_handler+0x1ac/0x270
[ 563.811358] __hrtimer_run_queues+0x170/0x1d8
[ 563.867091] hrtimer_interrupt+0x168/0x350
[ 563.922175] timer_interrupt+0x108/0x178
[ 563.976509] Decrementer_virt+0x108/0x10c
[ 564.031245] percpu_counter_add_batch+0x1dc/0x1fc
[ 564.085623] percpu_counter_add+0x44/0x68
[ 564.139133] handle_mm_fault+0x86c/0xed0
[ 564.192221] ___do_page_fault+0x4d8/0x630
[ 564.245005] do_page_fault+0x28/0x40
[ 564.297817] DataAccess_virt+0x124/0x17c
[ 564.403062] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 564.456530] __hrtimer_run_queues+0x1cc/0x1d8
[ 564.510280] hrtimer_interrupt+0x168/0x350
[ 564.563961] timer_interrupt+0x108/0x178
[ 564.617565] Decrementer_virt+0x108/0x10c
[ 564.671173] 0x595
[ 564.724345] memchr_inv+0x100/0x188
[ 564.777722] __kernel_unpoison_pages+0xe0/0x1a8
[ 564.831361] post_alloc_hook+0x8c/0xf0
[ 564.884944] prep_new_page+0x24/0x5c
[ 564.938342] get_page_from_freelist+0x564/0x660
[ 564.991991] __alloc_pages+0x114/0x8dc
[ 565.045672] folio_prealloc.isra.0+0x44/0xec
[ 565.099752] handle_mm_fault+0x488/0xed0
[ 565.153686] ___do_page_fault+0x4d8/0x630
[ 565.207797] do_page_fault+0x28/0x40
[ 565.261822] DataAccess_virt+0x124/0x17c
[ 565.369310] Reported by Kernel Concurrency Sanitizer on:
[ 565.423579] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 565.479243] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 565.534848] ==================================================================
[ 566.720422] ==================================================================
[ 566.776545] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 566.888607] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 566.945636] hrtimer_active+0xb0/0x100
[ 567.002396] task_tick_fair+0xc8/0xcc
[ 567.058903] scheduler_tick+0x6c/0xcc
[ 567.115129] update_process_times+0xc8/0x120
[ 567.171522] tick_nohz_handler+0x1ac/0x270
[ 567.227935] __hrtimer_run_queues+0x170/0x1d8
[ 567.284401] hrtimer_interrupt+0x168/0x350
[ 567.340786] timer_interrupt+0x108/0x178
[ 567.397215] Decrementer_virt+0x108/0x10c
[ 567.453799] kcsan_setup_watchpoint+0x300/0x4cc
[ 567.510581] stack_trace_save+0x40/0xa4
[ 567.567366] save_stack+0xa4/0xec
[ 567.624009] __set_page_owner+0x38/0x2dc
[ 567.680879] prep_new_page+0x24/0x5c
[ 567.737592] get_page_from_freelist+0x564/0x660
[ 567.794672] __alloc_pages+0x114/0x8dc
[ 567.851607] folio_prealloc.isra.0+0x44/0xec
[ 567.908433] handle_mm_fault+0x488/0xed0
[ 567.964553] ___do_page_fault+0x4d8/0x630
[ 568.020061] do_page_fault+0x28/0x40
[ 568.074778] DataAccess_virt+0x124/0x17c
[ 568.184134] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 568.239095] __hrtimer_run_queues+0x1cc/0x1d8
[ 568.293623] hrtimer_interrupt+0x168/0x350
[ 568.347815] timer_interrupt+0x108/0x178
[ 568.402063] Decrementer_virt+0x108/0x10c
[ 568.456590] memchr_inv+0x100/0x188
[ 568.511078] __kernel_unpoison_pages+0xe0/0x1a8
[ 568.565651] post_alloc_hook+0x8c/0xf0
[ 568.620041] prep_new_page+0x24/0x5c
[ 568.674241] get_page_from_freelist+0x564/0x660
[ 568.728680] __alloc_pages+0x114/0x8dc
[ 568.783144] folio_prealloc.isra.0+0x44/0xec
[ 568.837644] handle_mm_fault+0x488/0xed0
[ 568.892186] ___do_page_fault+0x4d8/0x630
[ 568.946782] do_page_fault+0x28/0x40
[ 569.001443] DataAccess_virt+0x124/0x17c
[ 569.110268] Reported by Kernel Concurrency Sanitizer on:
[ 569.165538] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 569.221571] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 569.277546] ==================================================================
[ 573.083473] ==================================================================
[ 573.140478] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 573.253599] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 573.311185] hrtimer_active+0xb0/0x100
[ 573.368855] task_tick_fair+0xc8/0xcc
[ 573.426338] scheduler_tick+0x6c/0xcc
[ 573.483586] update_process_times+0xc8/0x120
[ 573.540944] tick_nohz_handler+0x1ac/0x270
[ 573.598207] __hrtimer_run_queues+0x170/0x1d8
[ 573.655508] hrtimer_interrupt+0x168/0x350
[ 573.712905] timer_interrupt+0x108/0x178
[ 573.770161] Decrementer_virt+0x108/0x10c
[ 573.827391] __mod_node_page_state+0xf0/0x120
[ 573.884763] __mod_lruvec_state+0x2c/0x78
[ 573.942017] __lruvec_stat_mod_folio+0x88/0x8c
[ 573.999248] folio_remove_rmap_ptes+0xc8/0x150
[ 574.055832] unmap_page_range+0x6f8/0x8bc
[ 574.111688] unmap_vmas+0x11c/0x174
[ 574.166627] unmap_region+0x134/0x1dc
[ 574.221884] do_vmi_align_munmap+0x3ac/0x4ac
[ 574.276683] do_vmi_munmap+0x114/0x11c
[ 574.330669] __vm_munmap+0xcc/0x124
[ 574.384227] sys_munmap+0x40/0x64
[ 574.437248] system_call_exception+0x15c/0x1c0
[ 574.490657] ret_from_syscall+0x0/0x2c
[ 574.596853] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 574.650843] __hrtimer_run_queues+0x1cc/0x1d8
[ 574.705065] hrtimer_interrupt+0x168/0x350
[ 574.759258] timer_interrupt+0x108/0x178
[ 574.813360] Decrementer_virt+0x108/0x10c
[ 574.867513] 0xc1f18020
[ 574.921225] __mod_node_page_state+0x7c/0x120
[ 574.975368] __mod_lruvec_state+0x3c/0x78
[ 575.029458] __lruvec_stat_mod_folio+0x88/0x8c
[ 575.083714] folio_add_new_anon_rmap+0x130/0x19c
[ 575.138111] handle_mm_fault+0x87c/0xed0
[ 575.192365] ___do_page_fault+0x4d8/0x630
[ 575.246563] do_page_fault+0x28/0x40
[ 575.300625] DataAccess_virt+0x124/0x17c
[ 575.407905] Reported by Kernel Concurrency Sanitizer on:
[ 575.462192] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 575.517670] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 575.573511] ==================================================================
[ 579.993169] ==================================================================
[ 580.049442] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 580.161663] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 580.218764] hrtimer_active+0xb0/0x100
[ 580.275622] task_tick_fair+0xc8/0xcc
[ 580.332267] scheduler_tick+0x6c/0xcc
[ 580.388652] update_process_times+0xc8/0x120
[ 580.445227] tick_nohz_handler+0x1ac/0x270
[ 580.502867] __hrtimer_run_queues+0x170/0x1d8
[ 580.559642] hrtimer_interrupt+0x168/0x350
[ 580.616166] timer_interrupt+0x108/0x178
[ 580.672611] Decrementer_virt+0x108/0x10c
[ 580.730396] 0xffffffff
[ 580.786775] page_mapcount+0x2c/0xa8
[ 580.843024] unmap_page_range+0x700/0x8bc
[ 580.899830] unmap_vmas+0x11c/0x174
[ 580.956114] unmap_region+0x134/0x1dc
[ 581.011260] do_vmi_align_munmap+0x3ac/0x4ac
[ 581.065927] do_vmi_munmap+0x114/0x11c
[ 581.119728] __vm_munmap+0xcc/0x124
[ 581.173851] sys_munmap+0x40/0x64
[ 581.227159] system_call_exception+0x15c/0x1c0
[ 581.280190] ret_from_syscall+0x0/0x2c
[ 581.384626] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 581.438036] __hrtimer_run_queues+0x1cc/0x1d8
[ 581.491426] hrtimer_interrupt+0x168/0x350
[ 581.544824] timer_interrupt+0x108/0x178
[ 581.598099] Decrementer_virt+0x108/0x10c
[ 581.651273] flush_dcache_icache_folio+0x94/0x1a0
[ 581.704651] set_ptes+0xcc/0x144
[ 581.757983] handle_mm_fault+0x634/0xed0
[ 581.811404] ___do_page_fault+0x4d8/0x630
[ 581.864837] do_page_fault+0x28/0x40
[ 581.918179] DataAccess_virt+0x124/0x17c
[ 582.024420] Reported by Kernel Concurrency Sanitizer on:
[ 582.078308] CPU: 0 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 582.133644] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 582.189451] ==================================================================
[ 641.910995] ==================================================================
[ 641.966187] BUG: KCSAN: data-race in interrupt_async_enter_prepare / set_fd_set
[ 642.076270] read to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 642.132235] interrupt_async_enter_prepare+0x64/0xc4
[ 642.188074] timer_interrupt+0x1c/0x178
[ 642.243862] Decrementer_virt+0x108/0x10c
[ 642.299563] 0xfefefefe
[ 642.354267] 0x0
[ 642.408407] kcsan_setup_watchpoint+0x300/0x4cc
[ 642.463244] set_fd_set+0xa4/0xec
[ 642.517966] core_sys_select+0x1ec/0x240
[ 642.572793] sys_pselect6_time32+0x190/0x1b4
[ 642.627633] system_call_exception+0x15c/0x1c0
[ 642.682584] ret_from_syscall+0x0/0x2c
[ 642.791857] write to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 642.847530] set_fd_set+0xa4/0xec
[ 642.902848] core_sys_select+0x1ec/0x240
[ 642.958519] sys_pselect6_time32+0x190/0x1b4
[ 643.014008] system_call_exception+0x15c/0x1c0
[ 643.069680] ret_from_syscall+0x0/0x2c
[ 643.179351] Reported by Kernel Concurrency Sanitizer on:
[ 643.234027] CPU: 0 PID: 1525 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 643.289155] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 643.345096] ==================================================================
[ 789.051163] ==================================================================
[ 789.106819] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 789.217527] write to 0xeedd91a0 of 4 bytes by task 40 on cpu 0:
[ 789.273728] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 789.330051] cgroup_rstat_flush_locked+0x528/0x538
[ 789.386476] cgroup_rstat_flush+0x38/0x5c
[ 789.442576] do_flush_stats+0x78/0x9c
[ 789.498516] flush_memcg_stats_dwork+0x34/0x70
[ 789.554606] process_scheduled_works+0x350/0x494
[ 789.610721] worker_thread+0x2a4/0x300
[ 789.666832] kthread+0x174/0x178
[ 789.722710] start_kernel_thread+0x10/0x14
[ 789.834825] write to 0xeedd91a0 of 4 bytes by task 1594 on cpu 1:
[ 789.892152] memcg_rstat_updated+0xd8/0x15c
[ 789.949397] __mod_memcg_lruvec_state+0x118/0x154
[ 790.006733] __mod_lruvec_state+0x58/0x78
[ 790.064148] __lruvec_stat_mod_folio+0x88/0x8c
[ 790.121707] folio_add_new_anon_rmap+0x130/0x19c
[ 790.179460] handle_mm_fault+0x87c/0xed0
[ 790.237134] ___do_page_fault+0x4d8/0x630
[ 790.294833] do_page_fault+0x28/0x40
[ 790.352533] DataAccess_virt+0x124/0x17c
[ 790.466485] value changed: 0x00000032 -> 0x00000000
[ 790.580686] Reported by Kernel Concurrency Sanitizer on:
[ 790.638575] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 790.697513] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 790.756623] ==================================================================
[ 801.198092] ==================================================================
[ 801.258682] BUG: KCSAN: data-race in memcg_rstat_updated / memcg_rstat_updated
[ 801.378522] read to 0xeedd91a0 of 4 bytes by interrupt on cpu 1:
[ 801.439371] memcg_rstat_updated+0xcc/0x15c
[ 801.499726] __mod_memcg_state+0xf4/0xf8
[ 801.559395] mod_memcg_state+0x3c/0x74
[ 801.618309] mem_cgroup_charge_skmem+0x54/0xf0
[ 801.676767] __sk_mem_raise_allocated+0xa0/0x418
[ 801.735810] __sk_mem_schedule+0x60/0xb8
[ 801.794018] sk_rmem_schedule+0x90/0xb4
[ 801.851523] tcp_try_rmem_schedule+0x3e8/0x59c
[ 801.908923] tcp_data_queue+0x234/0x1138
[ 801.965807] tcp_rcv_established+0x5c0/0x6f0
[ 802.022610] tcp_v4_do_rcv+0x138/0x3b0
[ 802.079313] tcp_v4_rcv+0xc0c/0xe20
[ 802.135981] ip_protocol_deliver_rcu+0xa4/0x2a4
[ 802.193162] ip_local_deliver+0x1d8/0x1dc
[ 802.250162] ip_sublist_rcv_finish+0x94/0xa4
[ 802.307089] ip_list_rcv_finish.constprop.0+0x6c/0x1c4
[ 802.364412] ip_list_rcv+0x80/0x1a0
[ 802.421375] __netif_receive_skb_list_ptype+0x68/0x118
[ 802.478877] __netif_receive_skb_list_core+0x80/0x158
[ 802.536042] netif_receive_skb_list_internal+0x1f0/0x3e4
[ 802.593554] gro_normal_list+0x60/0x8c
[ 802.650642] napi_complete_done+0x108/0x284
[ 802.707472] gem_poll+0x1400/0x1638
[ 802.764247] __napi_poll.constprop.0+0x64/0x228
[ 802.821469] net_rx_action+0x3bc/0x5ac
[ 802.878388] __do_softirq+0x1dc/0x218
[ 802.935163] do_softirq_own_stack+0x54/0x74
[ 802.992141] do_softirq_own_stack+0x44/0x74
[ 803.048409] __irq_exit_rcu+0x6c/0xbc
[ 803.103980] irq_exit+0x10/0x20
[ 803.158845] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 803.213749] do_IRQ+0x24/0x2c
[ 803.268820] HardwareInterrupt_virt+0x108/0x10c
[ 803.323524] get_page_from_freelist+0x564/0x660
[ 803.377514] 0xc4a28800
[ 803.430781] kcsan_setup_watchpoint+0x300/0x4cc
[ 803.484260] memcg_rstat_updated+0xd8/0x15c
[ 803.537584] __mod_memcg_lruvec_state+0x118/0x154
[ 803.591269] __mod_lruvec_state+0x58/0x78
[ 803.644970] __lruvec_stat_mod_folio+0x88/0x8c
[ 803.698607] folio_add_new_anon_rmap+0x130/0x19c
[ 803.752290] handle_mm_fault+0x87c/0xed0
[ 803.805839] ___do_page_fault+0x4d8/0x630
[ 803.859528] do_page_fault+0x28/0x40
[ 803.913090] DataAccess_virt+0x124/0x17c
[ 804.019591] write to 0xeedd91a0 of 4 bytes by task 1594 on cpu 1:
[ 804.073476] memcg_rstat_updated+0xd8/0x15c
[ 804.127161] __mod_memcg_lruvec_state+0x118/0x154
[ 804.180876] __mod_lruvec_state+0x58/0x78
[ 804.234425] __lruvec_stat_mod_folio+0x88/0x8c
[ 804.288016] folio_add_new_anon_rmap+0x130/0x19c
[ 804.341587] handle_mm_fault+0x87c/0xed0
[ 804.395136] ___do_page_fault+0x4d8/0x630
[ 804.448881] do_page_fault+0x28/0x40
[ 804.502451] DataAccess_virt+0x124/0x17c
[ 804.609130] value changed: 0x00000012 -> 0x00000013
[ 804.715953] Reported by Kernel Concurrency Sanitizer on:
[ 804.769360] CPU: 1 PID: 1594 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 804.823212] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 804.876762] ==================================================================
[ 842.725847] ==================================================================
[ 842.780124] BUG: KCSAN: data-race in filldir64 / interrupt_async_enter_prepare
[ 842.887232] read to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 0:
[ 842.941211] interrupt_async_enter_prepare+0x64/0xc4
[ 842.995309] timer_interrupt+0x1c/0x178
[ 843.049347] Decrementer_virt+0x108/0x10c
[ 843.103290] 0xeee9b9f8
[ 843.156782] page_address+0x60/0x134
[ 843.210476] kcsan_setup_watchpoint+0x300/0x4cc
[ 843.264485] filldir64+0x10c/0x2d4
[ 843.318271] dir_emit_dots+0x168/0x1a4
[ 843.372123] proc_task_readdir+0x6c/0x340
[ 843.426051] iterate_dir+0xe4/0x248
[ 843.479886] sys_getdents64+0xb0/0x1fc
[ 843.533912] system_call_exception+0x15c/0x1c0
[ 843.588011] ret_from_syscall+0x0/0x2c
[ 843.695515] write to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 0:
[ 843.750187] filldir64+0x10c/0x2d4
[ 843.804568] dir_emit_dots+0x168/0x1a4
[ 843.858790] proc_task_readdir+0x6c/0x340
[ 843.913275] iterate_dir+0xe4/0x248
[ 843.967382] sys_getdents64+0xb0/0x1fc
[ 844.021271] system_call_exception+0x15c/0x1c0
[ 844.075329] ret_from_syscall+0x0/0x2c
[ 844.182846] Reported by Kernel Concurrency Sanitizer on:
[ 844.237183] CPU: 0 PID: 1608 Comm: htop Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 844.292805] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 844.348677] ==================================================================
[ 857.632000] ==================================================================
[ 857.689040] BUG: KCSAN: data-race in ____sys_recvmsg / interrupt_async_enter_prepare
[ 857.803287] read to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 857.860911] interrupt_async_enter_prepare+0x64/0xc4
[ 857.918431] timer_interrupt+0x1c/0x178
[ 857.975859] Decrementer_virt+0x108/0x10c
[ 858.033192] 0xf33c1b3c
[ 858.090110] 0x4000
[ 858.146531] kcsan_setup_watchpoint+0x300/0x4cc
[ 858.203514] ____sys_recvmsg+0x1a0/0x270
[ 858.260435] ___sys_recvmsg+0x90/0xd4
[ 858.317191] __sys_recvmsg+0xb0/0xf8
[ 858.373786] sys_recvmsg+0x50/0x78
[ 858.430107] system_call_exception+0x15c/0x1c0
[ 858.486693] ret_from_syscall+0x0/0x2c
[ 858.599379] write to 0xc2efda1c of 4 bytes by task 1525 on cpu 0:
[ 858.656889] ____sys_recvmsg+0x1a0/0x270
[ 858.713762] ___sys_recvmsg+0x90/0xd4
[ 858.770135] __sys_recvmsg+0xb0/0xf8
[ 858.826333] sys_recvmsg+0x50/0x78
[ 858.882338] system_call_exception+0x15c/0x1c0
[ 858.938542] ret_from_syscall+0x0/0x2c
[ 859.050306] Reported by Kernel Concurrency Sanitizer on:
[ 859.107157] CPU: 0 PID: 1525 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 859.164937] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 859.223053] ==================================================================
[ 899.064182] ==================================================================
[ 899.125213] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 899.246007] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 899.306586] hrtimer_active+0xb0/0x100
[ 899.366160] task_tick_fair+0xc8/0xcc
[ 899.424917] scheduler_tick+0x6c/0xcc
[ 899.483903] update_process_times+0xc8/0x120
[ 899.542400] tick_nohz_handler+0x1ac/0x270
[ 899.600361] __hrtimer_run_queues+0x170/0x1d8
[ 899.658073] hrtimer_interrupt+0x168/0x350
[ 899.715431] timer_interrupt+0x108/0x178
[ 899.772539] Decrementer_virt+0x108/0x10c
[ 899.829644] 0x6e02
[ 899.886308] HUF_compress1X_usingCTable_internal.isra.0+0xfe8/0x11c0
[ 899.944629] HUF_compress4X_usingCTable_internal.isra.0+0x1ac/0x1d0
[ 900.002386] HUF_compressCTable_internal.isra.0+0xbc/0xc0
[ 900.060166] HUF_compress_internal.isra.0+0x17c/0x45c
[ 900.117911] HUF_compress4X_repeat+0x80/0xbc
[ 900.175716] ZSTD_compressLiterals+0x230/0x350
[ 900.233376] ZSTD_entropyCompressSeqStore.constprop.0+0x130/0x3c4
[ 900.291780] ZSTD_compressBlock_internal+0x150/0x240
[ 900.350171] ZSTD_compressContinue_internal+0xab4/0xb88
[ 900.408568] ZSTD_compressEnd+0x50/0x1e4
[ 900.466700] ZSTD_compressStream2+0x360/0x8b8
[ 900.524437] ZSTD_compressStream2_simpleArgs+0x7c/0xd8
[ 900.581862] ZSTD_compress2+0xbc/0x13c
[ 900.639007] zstd_compress_cctx+0x68/0x9c
[ 900.696102] __zstd_compress+0x70/0xc4
[ 900.753102] zstd_scompress+0x44/0x74
[ 900.810045] scomp_acomp_comp_decomp+0x328/0x4e4
[ 900.867222] scomp_acomp_compress+0x28/0x48
[ 900.924057] zswap_store+0x834/0xa18
[ 900.980844] swap_writepage+0x4c/0xe8
[ 901.037488] pageout+0x1dc/0x304
[ 901.093196] shrink_folio_list+0xa70/0xd28
[ 901.148454] evict_folios+0xcc0/0x1204
[ 901.202977] try_to_shrink_lruvec+0x214/0x2f0
[ 901.258168] shrink_one+0x104/0x1e8
[ 901.312462] shrink_node+0x314/0xc3c
[ 901.365852] do_try_to_free_pages+0x500/0x7e4
[ 901.419109] try_to_free_pages+0x150/0x18c
[ 901.471981] __alloc_pages+0x460/0x8dc
[ 901.524637] folio_prealloc.isra.0+0x44/0xec
[ 901.577526] handle_mm_fault+0x488/0xed0
[ 901.630288] ___do_page_fault+0x4d8/0x630
[ 901.683476] do_page_fault+0x28/0x40
[ 901.736432] DataAccess_virt+0x124/0x17c
[ 901.842006] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 901.896048] __hrtimer_run_queues+0x1cc/0x1d8
[ 901.950088] hrtimer_interrupt+0x168/0x350
[ 902.004081] timer_interrupt+0x108/0x178
[ 902.057964] Decrementer_virt+0x108/0x10c
[ 902.111847] 0xd
[ 902.164887] ZSTD_compressBlock_doubleFast+0x1358/0x2854
[ 902.218615] ZSTD_buildSeqStore+0x3b8/0x3bc
[ 902.272298] ZSTD_compressBlock_internal+0x44/0x240
[ 902.326319] ZSTD_compressContinue_internal+0xab4/0xb88
[ 902.380552] ZSTD_compressEnd+0x50/0x1e4
[ 902.434501] ZSTD_compressStream2+0x360/0x8b8
[ 902.488294] ZSTD_compressStream2_simpleArgs+0x7c/0xd8
[ 902.542191] ZSTD_compress2+0xbc/0x13c
[ 902.595500] zstd_compress_cctx+0x68/0x9c
[ 902.648223] __zstd_compress+0x70/0xc4
[ 902.700112] zstd_scompress+0x44/0x74
[ 902.751241] scomp_acomp_comp_decomp+0x328/0x4e4
[ 902.803142] scomp_acomp_compress+0x28/0x48
[ 902.854101] zswap_store+0x834/0xa18
[ 902.904406] swap_writepage+0x4c/0xe8
[ 902.954293] pageout+0x1dc/0x304
[ 903.003615] shrink_folio_list+0xa70/0xd28
[ 903.053351] evict_folios+0xcc0/0x1204
[ 903.103206] try_to_shrink_lruvec+0x214/0x2f0
[ 903.153455] shrink_one+0x104/0x1e8
[ 903.203317] shrink_node+0x314/0xc3c
[ 903.252906] balance_pgdat+0x498/0x914
[ 903.302390] kswapd+0x304/0x398
[ 903.351652] kthread+0x174/0x178
[ 903.400956] start_kernel_thread+0x10/0x14
[ 903.498731] Reported by Kernel Concurrency Sanitizer on:
[ 903.548555] CPU: 0 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 903.599232] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 903.650208] ==================================================================
[ 906.388161] ==================================================================
[ 906.438415] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 906.537584] read (marked) to 0xef8a86b8 of 4 bytes by task 1337 on cpu 0:
[ 906.588237] lru_gen_look_around+0x320/0x634
[ 906.639064] folio_referenced_one+0x32c/0x404
[ 906.690180] rmap_walk_anon+0x1c4/0x24c
[ 906.741310] rmap_walk+0x70/0x7c
[ 906.792053] folio_referenced+0x194/0x1ec
[ 906.843086] shrink_folio_list+0x6a8/0xd28
[ 906.894189] evict_folios+0xcc0/0x1204
[ 906.945320] try_to_shrink_lruvec+0x214/0x2f0
[ 906.996523] shrink_one+0x104/0x1e8
[ 907.047743] shrink_node+0x314/0xc3c
[ 907.098786] do_try_to_free_pages+0x500/0x7e4
[ 907.150110] try_to_free_pages+0x150/0x18c
[ 907.201486] __alloc_pages+0x460/0x8dc
[ 907.252798] folio_alloc.constprop.0+0x30/0x50
[ 907.304295] __filemap_get_folio+0x164/0x1e4
[ 907.355984] ext4_da_write_begin+0x158/0x24c
[ 907.407354] generic_perform_write+0x114/0x2f0
[ 907.459021] ext4_buffered_write_iter+0x94/0x194
[ 907.510768] ext4_file_write_iter+0x1e0/0x828
[ 907.562389] do_iter_readv_writev+0x1a4/0x23c
[ 907.613926] vfs_writev+0x124/0x2a0
[ 907.665300] do_writev+0xc8/0x1bc
[ 907.716518] sys_writev+0x50/0x78
[ 907.767598] system_call_exception+0x15c/0x1c0
[ 907.818951] ret_from_syscall+0x0/0x2c
[ 907.920788] write to 0xef8a86b8 of 4 bytes by task 1611 on cpu 1:
[ 907.972293] list_del+0x2c/0x5c
[ 908.023363] lru_gen_del_folio+0x110/0x140
[ 908.074604] evict_folios+0xaf8/0x1204
[ 908.125907] try_to_shrink_lruvec+0x214/0x2f0
[ 908.177343] shrink_one+0x104/0x1e8
[ 908.228612] shrink_node+0x314/0xc3c
[ 908.279487] do_try_to_free_pages+0x500/0x7e4
[ 908.330410] try_to_free_pages+0x150/0x18c
[ 908.381248] __alloc_pages+0x460/0x8dc
[ 908.432012] folio_prealloc.isra.0+0x44/0xec
[ 908.482927] handle_mm_fault+0x488/0xed0
[ 908.533908] ___do_page_fault+0x4d8/0x630
[ 908.585056] do_page_fault+0x28/0x40
[ 908.636089] DataAccess_virt+0x124/0x17c
[ 908.737702] Reported by Kernel Concurrency Sanitizer on:
[ 908.789208] CPU: 1 PID: 1611 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 908.841703] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 908.894834] ==================================================================
[ 917.245693] ==================================================================
[ 917.299728] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 917.408432] write to 0xeedd91a0 of 4 bytes by task 2 on cpu 0:
[ 917.463602] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 917.518709] cgroup_rstat_flush_locked+0x528/0x538
[ 917.573889] cgroup_rstat_flush+0x38/0x5c
[ 917.628921] do_flush_stats+0x78/0x9c
[ 917.684000] mem_cgroup_flush_stats+0x7c/0x80
[ 917.739357] zswap_shrinker_count+0xb8/0x150
[ 917.794928] do_shrink_slab+0x7c/0x540
[ 917.850431] shrink_slab+0x1f0/0x384
[ 917.905863] shrink_one+0x140/0x1e8
[ 917.960830] shrink_node+0x314/0xc3c
[ 918.014963] do_try_to_free_pages+0x500/0x7e4
[ 918.068723] try_to_free_pages+0x150/0x18c
[ 918.121805] __alloc_pages+0x460/0x8dc
[ 918.175295] __alloc_pages_bulk+0x140/0x340
[ 918.228022] __vmalloc_node_range+0x310/0x530
[ 918.280599] copy_process+0x608/0x3324
[ 918.332468] kernel_clone+0x78/0x2d0
[ 918.383718] kernel_thread+0xbc/0xe8
[ 918.434646] kthreadd+0x200/0x284
[ 918.485366] start_kernel_thread+0x10/0x14
[ 918.587160] read to 0xeedd91a0 of 4 bytes by task 39 on cpu 1:
[ 918.639042] memcg_rstat_updated+0xcc/0x15c
[ 918.690798] __mod_memcg_lruvec_state+0x118/0x154
[ 918.742670] __mod_lruvec_state+0x58/0x78
[ 918.794343] lru_gen_update_size+0x130/0x240
[ 918.846290] lru_gen_add_folio+0x198/0x288
[ 918.898076] move_folios_to_lru+0x29c/0x350
[ 918.949848] evict_folios+0xd20/0x1204
[ 919.001524] try_to_shrink_lruvec+0x214/0x2f0
[ 919.053494] shrink_one+0x104/0x1e8
[ 919.105116] shrink_node+0x314/0xc3c
[ 919.156616] balance_pgdat+0x498/0x914
[ 919.207970] kswapd+0x304/0x398
[ 919.259058] kthread+0x174/0x178
[ 919.309981] start_kernel_thread+0x10/0x14
[ 919.411884] Reported by Kernel Concurrency Sanitizer on:
[ 919.463717] CPU: 1 PID: 39 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 919.516723] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 919.570035] ==================================================================
[ 927.578462] Key type dns_resolver registered
[ 928.915260] Key type cifs.idmap registered
[ 929.094635] CIFS: Attempting to mount //192.168.2.3/yea_home
[ 933.757206] ==================================================================
[ 933.814618] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 933.929568] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 933.988103] hrtimer_active+0xb0/0x100
[ 934.046727] task_tick_fair+0xc8/0xcc
[ 934.104691] scheduler_tick+0x6c/0xcc
[ 934.162283] update_process_times+0xc8/0x120
[ 934.220063] tick_nohz_handler+0x1ac/0x270
[ 934.277793] __hrtimer_run_queues+0x170/0x1d8
[ 934.335613] hrtimer_interrupt+0x168/0x350
[ 934.393444] timer_interrupt+0x108/0x178
[ 934.451240] Decrementer_virt+0x108/0x10c
[ 934.509027] 0xc11d8420
[ 934.566483] 0x29f00
[ 934.623270] kcsan_setup_watchpoint+0x300/0x4cc
[ 934.680057] page_ext_get+0x98/0xc0
[ 934.736043] __reset_page_owner+0x3c/0x234
[ 934.791487] free_unref_page_prepare+0x124/0x1dc
[ 934.847571] free_unref_folios+0xcc/0x208
[ 934.902681] folios_put_refs+0x1c8/0x1cc
[ 934.956979] free_pages_and_swap_cache+0x1c8/0x1d0
[ 935.011280] tlb_flush_mmu+0x200/0x288
[ 935.065230] unmap_page_range+0x4f8/0x8bc
[ 935.118995] unmap_vmas+0x11c/0x174
[ 935.172707] exit_mmap+0x170/0x2e0
[ 935.226475] __mmput+0x4c/0x188
[ 935.279858] mmput+0x74/0x94
[ 935.332902] do_exit+0x55c/0xd08
[ 935.385817] do_group_exit+0x58/0xfc
[ 935.438665] get_signal+0x73c/0x8c0
[ 935.491638] do_notify_resume+0x94/0x47c
[ 935.544891] interrupt_exit_user_prepare_main+0xa8/0xac
[ 935.598584] interrupt_exit_user_prepare+0x54/0x74
[ 935.651886] interrupt_return+0x14/0x190
[ 935.757849] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 935.812083] __hrtimer_run_queues+0x1cc/0x1d8
[ 935.866163] hrtimer_interrupt+0x168/0x350
[ 935.920317] timer_interrupt+0x108/0x178
[ 935.974671] Decrementer_virt+0x108/0x10c
[ 936.029242] mmput+0x74/0x94
[ 936.083487] __reset_page_owner+0x20c/0x234
[ 936.138013] free_unref_page_prepare+0x124/0x1dc
[ 936.192475] free_unref_folios+0xcc/0x208
[ 936.246380] folios_put_refs+0x1c8/0x1cc
[ 936.300183] free_pages_and_swap_cache+0x1c8/0x1d0
[ 936.354241] tlb_flush_mmu+0x200/0x288
[ 936.408213] unmap_page_range+0x4f8/0x8bc
[ 936.462314] unmap_vmas+0x11c/0x174
[ 936.516131] exit_mmap+0x170/0x2e0
[ 936.569830] __mmput+0x4c/0x188
[ 936.623246] mmput+0x74/0x94
[ 936.676396] do_exit+0x55c/0xd08
[ 936.729625] do_group_exit+0x58/0xfc
[ 936.782887] get_signal+0x73c/0x8c0
[ 936.836245] do_notify_resume+0x94/0x47c
[ 936.889731] interrupt_exit_user_prepare_main+0xa8/0xac
[ 936.943717] interrupt_exit_user_prepare+0x54/0x74
[ 936.997344] interrupt_return+0x14/0x190
[ 937.102654] Reported by Kernel Concurrency Sanitizer on:
[ 937.155242] CPU: 0 PID: 1611 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 937.208309] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 937.261046] ==================================================================
[ 952.256115] ==================================================================
[ 952.307600] BUG: KCSAN: data-race in _copy_to_user / interrupt_async_enter_prepare
[ 952.408873] read to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 1:
[ 952.459599] interrupt_async_enter_prepare+0x64/0xc4
[ 952.510398] timer_interrupt+0x1c/0x178
[ 952.560756] Decrementer_virt+0x108/0x10c
[ 952.611137] 0xf37c9c18
[ 952.661389] 0x0
[ 952.711105] kcsan_setup_watchpoint+0x300/0x4cc
[ 952.761473] _copy_to_user+0x58/0xdc
[ 952.811719] cp_statx+0x348/0x384
[ 952.861700] do_statx+0xc8/0xfc
[ 952.911329] sys_statx+0x8c/0xc8
[ 952.960860] system_call_exception+0x15c/0x1c0
[ 953.010711] ret_from_syscall+0x0/0x2c
[ 953.110024] write to 0xc4b6e5dc of 4 bytes by task 1608 on cpu 1:
[ 953.160452] _copy_to_user+0x58/0xdc
[ 953.210974] cp_statx+0x348/0x384
[ 953.261269] do_statx+0xc8/0xfc
[ 953.311306] sys_statx+0x8c/0xc8
[ 953.361267] system_call_exception+0x15c/0x1c0
[ 953.411405] ret_from_syscall+0x0/0x2c
[ 953.510221] Reported by Kernel Concurrency Sanitizer on:
[ 953.560401] CPU: 1 PID: 1608 Comm: htop Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 953.611794] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 953.663592] ==================================================================
[-- Attachment #4: dmesg_69-rc4_g4_02 --]
[-- Type: application/octet-stream, Size: 76408 bytes --]
[ 114.850479] kernfs_refresh_inode+0x40/0x1c0
[ 114.911781] kernfs_iop_getattr+0x84/0xd0
[ 114.971637] vfs_getattr_nosec+0x138/0x18c
[ 115.030664] vfs_getattr+0x88/0x90
[ 115.088781] vfs_statx+0xa8/0x25c
[ 115.146327] do_statx+0xb4/0xfc
[ 115.203307] sys_statx+0x8c/0xc8
[ 115.259711] system_call_exception+0x15c/0x1c0
[ 115.316465] ret_from_syscall+0x0/0x2c
[ 115.429725] write to 0xc1887ce8 of 2 bytes by task 590 on cpu 1:
[ 115.487354] kernfs_refresh_inode+0x40/0x1c0
[ 115.545724] kernfs_iop_permission+0x74/0xbc
[ 115.604075] inode_permission+0x84/0x20c
[ 115.662475] link_path_walk+0x114/0x4c0
[ 115.720560] path_lookupat+0x78/0x21c
[ 115.778366] path_openat+0x1d8/0xe98
[ 115.836052] do_filp_open+0x88/0xec
[ 115.893683] do_sys_openat2+0x9c/0xf8
[ 115.951309] do_sys_open+0x48/0x74
[ 116.008532] sys_openat+0x5c/0x88
[ 116.065613] system_call_exception+0x15c/0x1c0
[ 116.123132] ret_from_syscall+0x0/0x2c
[ 116.237575] Reported by Kernel Concurrency Sanitizer on:
[ 116.295758] CPU: 1 PID: 590 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 116.355514] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 116.415730] ==================================================================
[ 117.050295] Adding 8388604k swap on /dev/sdb6. Priority:-2 extents:1 across:8388604k
[ 118.414158] EXT4-fs (sdc5): mounting ext2 file system using the ext4 subsystem
[ 118.550248] EXT4-fs (sdc5): mounted filesystem e4e8af9e-0f0d-44f9-b983-71bf61d782de r/w without journal. Quota mode: disabled.
[ 118.671048] ext2 filesystem being mounted at /boot supports timestamps until 2038-01-19 (0x7fffffff)
[ 118.800234] BTRFS: device label tmp devid 1 transid 2856 /dev/sda6 (8:6) scanned by mount (916)
[ 118.932560] BTRFS info (device sda6): first mount of filesystem 65162d91-887e-4e48-a356-fbf7093eefb5
[ 119.056738] BTRFS info (device sda6): using xxhash64 (xxhash64-generic) checksum algorithm
[ 119.180037] BTRFS info (device sda6): using free-space-tree
[ 122.613242] ==================================================================
[ 122.613372] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 122.613531] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 122.613588] hrtimer_active+0xb0/0x100
[ 122.613683] task_tick_fair+0xc8/0xcc
[ 122.613766] scheduler_tick+0x6c/0xcc
[ 122.613831] update_process_times+0xc8/0x120
[ 122.613920] tick_nohz_handler+0x1ac/0x270
[ 122.614000] __hrtimer_run_queues+0x170/0x1d8
[ 122.614094] hrtimer_interrupt+0x168/0x350
[ 122.614188] timer_interrupt+0x108/0x178
[ 122.614256] Decrementer_virt+0x108/0x10c
[ 122.614332] 0x84004482
[ 122.614385] rcu_all_qs+0x58/0x17c
[ 122.614459] __cond_resched+0x50/0x58
[ 122.614530] console_conditional_schedule+0x38/0x50
[ 122.614622] fbcon_redraw+0x1a4/0x24c
[ 122.614688] fbcon_scroll+0xe0/0x1dc
[ 122.614754] con_scroll+0x19c/0x1dc
[ 122.614820] lf+0x64/0xfc
[ 122.614878] do_con_write+0x9e0/0x263c
[ 122.614950] con_write+0x34/0x64
[ 122.615017] do_output_char+0x1cc/0x2f4
[ 122.615103] n_tty_write+0x4c8/0x574
[ 122.615188] file_tty_write.isra.0+0x284/0x300
[ 122.615270] tty_write+0x34/0x58
[ 122.615344] redirected_tty_write+0xdc/0xe4
[ 122.615426] vfs_write+0x2b8/0x318
[ 122.615500] ksys_write+0xb8/0x134
[ 122.615572] sys_write+0x4c/0x74
[ 122.615643] system_call_exception+0x15c/0x1c0
[ 122.615732] ret_from_syscall+0x0/0x2c
[ 122.615817] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 122.615869] __hrtimer_run_queues+0x12c/0x1d8
[ 122.615963] hrtimer_interrupt+0x168/0x350
[ 122.616057] timer_interrupt+0x108/0x178
[ 122.616123] Decrementer_virt+0x108/0x10c
[ 122.616197] memchr_inv+0x100/0x188
[ 122.616281] __kernel_unpoison_pages+0xe0/0x1a8
[ 122.616354] post_alloc_hook+0x8c/0xf0
[ 122.616446] prep_new_page+0x24/0x5c
[ 122.616533] get_page_from_freelist+0x564/0x660
[ 122.616629] __alloc_pages+0x114/0x8dc
[ 122.616722] folio_prealloc.isra.0+0x9c/0xec
[ 122.616825] do_wp_page+0x5cc/0xb98
[ 122.616889] handle_mm_fault+0xd88/0xed0
[ 122.616956] ___do_page_fault+0x4d8/0x630
[ 122.617051] do_page_fault+0x28/0x40
[ 122.617145] DataAccess_virt+0x124/0x17c
[ 122.617242] Reported by Kernel Concurrency Sanitizer on:
[ 122.617276] CPU: 0 PID: 563 Comm: (udev-worker) Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 122.617354] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 122.617395] ==================================================================
[ 129.152749] CPU-temp: 59.3 C
[ 129.152824] , Case: 35.6 C
[ 129.252654] , Fan: 6 (tuned +1)
[ 145.249842] ==================================================================
[ 145.249975] BUG: KCSAN: data-race in copy_iovec_from_user / interrupt_async_enter_prepare
[ 145.250148] read to 0xc29df19c of 4 bytes by task 1355 on cpu 0:
[ 145.250221] interrupt_async_enter_prepare+0x64/0xc4
[ 145.250314] timer_interrupt+0x1c/0x178
[ 145.250399] Decrementer_virt+0x108/0x10c
[ 145.250495] ___slab_alloc+0x31c/0x5dc
[ 145.250602] 0xf3841c88
[ 145.250679] kcsan_setup_watchpoint+0x300/0x4cc
[ 145.250768] copy_iovec_from_user+0x44/0x10c
[ 145.250873] iovec_from_user+0xd0/0xdc
[ 145.250980] __import_iovec+0x118/0x22c
[ 145.251087] import_iovec+0x50/0x84
[ 145.251191] vfs_writev+0xac/0x2a0
[ 145.251283] do_writev+0xc8/0x1bc
[ 145.251371] sys_writev+0x50/0x78
[ 145.251463] system_call_exception+0x15c/0x1c0
[ 145.251571] ret_from_syscall+0x0/0x2c
[ 145.251700] write to 0xc29df19c of 4 bytes by task 1355 on cpu 0:
[ 145.251772] copy_iovec_from_user+0x44/0x10c
[ 145.251878] iovec_from_user+0xd0/0xdc
[ 145.251983] __import_iovec+0x118/0x22c
[ 145.252090] import_iovec+0x50/0x84
[ 145.252194] vfs_writev+0xac/0x2a0
[ 145.252283] do_writev+0xc8/0x1bc
[ 145.252371] sys_writev+0x50/0x78
[ 145.252461] system_call_exception+0x15c/0x1c0
[ 145.252567] ret_from_syscall+0x0/0x2c
[ 145.252691] Reported by Kernel Concurrency Sanitizer on:
[ 145.252745] CPU: 0 PID: 1355 Comm: syslogd Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 145.252839] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 145.252899] ==================================================================
[ 147.179793] b43legacy-phy0: Loading firmware version 0x127, patch level 14 (2005-04-18 02:36:27)
[ 147.267106] b43legacy-phy0 debug: Chip initialized
[ 147.312848] b43legacy-phy0 debug: 30-bit DMA initialized
[ 147.324745] b43legacy-phy0 debug: Wireless interface started
[ 147.336810] b43legacy-phy0 debug: Adding Interface type 2
[ 147.360298] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.360401] b43legacy-phy0 debug: RX: Packet dropped
[ 147.407501] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.407603] b43legacy-phy0 debug: RX: Packet dropped
[ 147.413213] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.413303] b43legacy-phy0 debug: RX: Packet dropped
[ 147.418268] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.418363] b43legacy-phy0 debug: RX: Packet dropped
[ 147.427312] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.427414] b43legacy-phy0 debug: RX: Packet dropped
[ 147.445950] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.446049] b43legacy-phy0 debug: RX: Packet dropped
[ 147.481984] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.482104] b43legacy-phy0 debug: RX: Packet dropped
[ 147.486390] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.486487] b43legacy-phy0 debug: RX: Packet dropped
[ 147.488969] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.489087] b43legacy-phy0 debug: RX: Packet dropped
[ 147.534423] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 147.534517] b43legacy-phy0 debug: RX: Packet dropped
[ 147.538166] b43legacy-phy0 debug: RX: Packet dropped
[ 147.545897] b43legacy-phy0 debug: RX: Packet dropped
[ 147.625904] b43legacy-phy0 debug: RX: Packet dropped
[ 147.631379] b43legacy-phy0 debug: RX: Packet dropped
[ 147.684197] b43legacy-phy0 debug: RX: Packet dropped
[ 147.709147] b43legacy-phy0 debug: RX: Packet dropped
[ 147.735089] b43legacy-phy0 debug: RX: Packet dropped
[ 147.748795] b43legacy-phy0 debug: RX: Packet dropped
[ 148.203300] NET: Registered PF_PACKET protocol family
[ 156.352809] ==================================================================
[ 156.352954] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 156.353130] read to 0xc32dc29c of 4 bytes by task 1486 on cpu 1:
[ 156.353204] interrupt_async_enter_prepare+0x64/0xc4
[ 156.353300] timer_interrupt+0x1c/0x178
[ 156.353386] Decrementer_virt+0x108/0x10c
[ 156.353483] 0x1841d4a2
[ 156.353558] 0x6d8169f5
[ 156.353625] kcsan_setup_watchpoint+0x300/0x4cc
[ 156.353715] raw_copy_to_user+0x74/0xb4
[ 156.353819] _copy_to_iter+0x120/0x694
[ 156.353925] get_random_bytes_user+0x128/0x1a0
[ 156.354016] sys_getrandom+0x108/0x110
[ 156.354103] system_call_exception+0x15c/0x1c0
[ 156.354213] ret_from_syscall+0x0/0x2c
[ 156.354343] write to 0xc32dc29c of 4 bytes by task 1486 on cpu 1:
[ 156.354416] raw_copy_to_user+0x74/0xb4
[ 156.354520] _copy_to_iter+0x120/0x694
[ 156.354626] get_random_bytes_user+0x128/0x1a0
[ 156.354715] sys_getrandom+0x108/0x110
[ 156.354802] system_call_exception+0x15c/0x1c0
[ 156.354908] ret_from_syscall+0x0/0x2c
[ 156.355034] Reported by Kernel Concurrency Sanitizer on:
[ 156.355088] CPU: 1 PID: 1486 Comm: sshd Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 156.355182] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 156.355242] ==================================================================
[ 161.546024] ==================================================================
[ 161.546124] BUG: KCSAN: data-race in rcu_all_qs / rcu_report_qs_rdp
[ 161.546228] write (marked) to 0xeedc9c11 of 1 bytes by interrupt on cpu 1:
[ 161.546284] rcu_report_qs_rdp+0x15c/0x18c
[ 161.546350] rcu_core+0x1f0/0xa88
[ 161.546415] rcu_core_si+0x20/0x3c
[ 161.546480] __do_softirq+0x1dc/0x218
[ 161.546570] do_softirq_own_stack+0x54/0x74
[ 161.546657] do_softirq_own_stack+0x44/0x74
[ 161.546741] __irq_exit_rcu+0x6c/0xbc
[ 161.546817] irq_exit+0x10/0x20
[ 161.546887] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 161.546963] timer_interrupt+0x64/0x178
[ 161.547026] Decrementer_virt+0x108/0x10c
[ 161.547098] 0x0
[ 161.547144] 0xffffffff
[ 161.547188] kcsan_setup_watchpoint+0x300/0x4cc
[ 161.547255] rcu_all_qs+0x58/0x17c
[ 161.547324] __cond_resched+0x50/0x58
[ 161.547391] console_conditional_schedule+0x38/0x50
[ 161.547477] fbcon_redraw+0x1a4/0x24c
[ 161.547543] fbcon_scroll+0xe0/0x1dc
[ 161.547607] con_scroll+0x19c/0x1dc
[ 161.547671] lf+0x64/0xfc
[ 161.547727] do_con_write+0x9e0/0x263c
[ 161.547797] con_write+0x34/0x64
[ 161.547862] do_output_char+0x1cc/0x2f4
[ 161.547948] n_tty_write+0x4c8/0x574
[ 161.548030] file_tty_write.isra.0+0x284/0x300
[ 161.548110] tty_write+0x34/0x58
[ 161.548182] redirected_tty_write+0xdc/0xe4
[ 161.548261] vfs_write+0x2b8/0x318
[ 161.548333] ksys_write+0xb8/0x134
[ 161.548403] sys_write+0x4c/0x74
[ 161.548471] system_call_exception+0x15c/0x1c0
[ 161.548559] ret_from_syscall+0x0/0x2c
[ 161.548646] read to 0xeedc9c11 of 1 bytes by task 1558 on cpu 1:
[ 161.548697] rcu_all_qs+0x58/0x17c
[ 161.548767] __cond_resched+0x50/0x58
[ 161.548832] console_conditional_schedule+0x38/0x50
[ 161.548919] fbcon_redraw+0x1a4/0x24c
[ 161.548982] fbcon_scroll+0xe0/0x1dc
[ 161.549046] con_scroll+0x19c/0x1dc
[ 161.549108] lf+0x64/0xfc
[ 161.549164] do_con_write+0x9e0/0x263c
[ 161.549233] con_write+0x34/0x64
[ 161.549299] do_output_char+0x1cc/0x2f4
[ 161.549378] n_tty_write+0x4c8/0x574
[ 161.549460] file_tty_write.isra.0+0x284/0x300
[ 161.549539] tty_write+0x34/0x58
[ 161.549611] redirected_tty_write+0xdc/0xe4
[ 161.549689] vfs_write+0x2b8/0x318
[ 161.549759] ksys_write+0xb8/0x134
[ 161.549829] sys_write+0x4c/0x74
[ 161.549898] system_call_exception+0x15c/0x1c0
[ 161.549982] ret_from_syscall+0x0/0x2c
[ 161.550064] Reported by Kernel Concurrency Sanitizer on:
[ 161.550097] CPU: 1 PID: 1558 Comm: ebegin Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 161.550169] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 161.550208] ==================================================================
[ 178.005079] CPU-temp: 59.6 C
[ 178.005153] , Case: 35.7 C
[ 178.005217] , Fan: 7 (tuned +1)
[ 237.396120] ==================================================================
[ 237.396262] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 237.396447] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 237.396524] tmigr_cpu_activate+0xe8/0x12c
[ 237.396632] timer_clear_idle+0x60/0x80
[ 237.396746] tick_nohz_restart_sched_tick+0x3c/0x170
[ 237.396852] tick_nohz_idle_exit+0xe0/0x158
[ 237.396955] do_idle+0x54/0x11c
[ 237.397042] cpu_startup_entry+0x30/0x34
[ 237.397131] start_secondary+0x504/0x854
[ 237.397231] 0x3338
[ 237.397347] read to 0xeedc6094 of 1 bytes by interrupt on cpu 0:
[ 237.397423] tmigr_next_groupevt+0x60/0xd8
[ 237.397528] tmigr_handle_remote_up+0x94/0x394
[ 237.397636] __walk_groups+0x74/0xc8
[ 237.397735] tmigr_handle_remote+0x13c/0x198
[ 237.397843] run_timer_softirq+0x94/0x98
[ 237.397952] __do_softirq+0x1dc/0x218
[ 237.398068] do_softirq_own_stack+0x54/0x74
[ 237.398182] do_softirq_own_stack+0x44/0x74
[ 237.398292] __irq_exit_rcu+0x6c/0xbc
[ 237.398392] irq_exit+0x10/0x20
[ 237.398488] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 237.398590] timer_interrupt+0x64/0x178
[ 237.398679] Decrementer_virt+0x108/0x10c
[ 237.398778] default_idle_call+0x38/0x48
[ 237.398871] do_idle+0xfc/0x11c
[ 237.398955] cpu_startup_entry+0x30/0x34
[ 237.399044] kernel_init+0x0/0x1a4
[ 237.399146] console_on_rootfs+0x0/0xc8
[ 237.399231] 0x3610
[ 237.399343] value changed: 0x00 -> 0x01
[ 237.399449] Reported by Kernel Concurrency Sanitizer on:
[ 237.399505] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 237.399603] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 237.399665] ==================================================================
[ 243.045849] CPU-temp: 59.9 C
[ 243.045914] , Case: 35.8 C
[ 243.046057] , Fan: 8 (tuned +1)
[ 249.349141] ==================================================================
[ 249.349270] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 249.349443] read to 0xeeda9094 of 1 bytes by interrupt on cpu 1:
[ 249.349518] tmigr_next_groupevt+0x60/0xd8
[ 249.349621] tmigr_handle_remote_up+0x94/0x394
[ 249.349724] __walk_groups+0x74/0xc8
[ 249.349819] tmigr_handle_remote+0x13c/0x198
[ 249.349922] run_timer_softirq+0x94/0x98
[ 249.350030] __do_softirq+0x1dc/0x218
[ 249.350140] do_softirq_own_stack+0x54/0x74
[ 249.350248] do_softirq_own_stack+0x44/0x74
[ 249.350354] __irq_exit_rcu+0x6c/0xbc
[ 249.350451] irq_exit+0x10/0x20
[ 249.350543] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 249.350639] timer_interrupt+0x64/0x178
[ 249.350724] Decrementer_virt+0x108/0x10c
[ 249.350818] default_idle_call+0x38/0x48
[ 249.350907] do_idle+0xfc/0x11c
[ 249.350987] cpu_startup_entry+0x30/0x34
[ 249.351072] start_secondary+0x504/0x854
[ 249.351167] 0x3338
[ 249.351280] write to 0xeeda9094 of 1 bytes by task 0 on cpu 0:
[ 249.351352] tmigr_cpu_activate+0xe8/0x12c
[ 249.351454] timer_clear_idle+0x60/0x80
[ 249.351560] tick_nohz_restart_sched_tick+0x3c/0x170
[ 249.351661] tick_nohz_idle_exit+0xe0/0x158
[ 249.351759] do_idle+0x54/0x11c
[ 249.351839] cpu_startup_entry+0x30/0x34
[ 249.351925] kernel_init+0x0/0x1a4
[ 249.352022] console_on_rootfs+0x0/0xc8
[ 249.352103] 0x3610
[ 249.352210] Reported by Kernel Concurrency Sanitizer on:
[ 249.352263] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 249.352356] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 249.352416] ==================================================================
[ 275.591448] CPU-temp: 60.1 C
[ 275.591517] , Case: 36.0 C
[ 275.591661] , Fan: 9 (tuned +1)
[ 278.327717] net_ratelimit: 8 callbacks suppressed
[ 278.327781] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 278.327899] b43legacy-phy0 debug: RX: Packet dropped
[ 373.933764] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 373.933867] b43legacy-phy0 debug: RX: Packet dropped
[ 720.759460] ==================================================================
[ 720.759601] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 720.759781] read to 0xeedc6094 of 1 bytes by task 0 on cpu 0:
[ 720.759855] tmigr_next_groupevt+0x60/0xd8
[ 720.759965] tmigr_update_events+0x29c/0x328
[ 720.760069] tmigr_inactive_up+0x180/0x288
[ 720.760171] __walk_groups+0x74/0xc8
[ 720.760269] tmigr_cpu_deactivate+0x110/0x178
[ 720.760375] __get_next_timer_interrupt+0x32c/0x34c
[ 720.760489] timer_base_try_to_set_idle+0x50/0x94
[ 720.760601] tick_nohz_idle_stop_tick+0x150/0x4fc
[ 720.760704] do_idle+0xf8/0x11c
[ 720.760787] cpu_startup_entry+0x30/0x34
[ 720.760875] kernel_init+0x0/0x1a4
[ 720.760976] console_on_rootfs+0x0/0xc8
[ 720.761059] 0x3610
[ 720.761178] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 720.761252] tmigr_cpu_activate+0xe8/0x12c
[ 720.761357] timer_clear_idle+0x60/0x80
[ 720.761463] tick_nohz_restart_sched_tick+0x3c/0x170
[ 720.761565] tick_nohz_idle_exit+0xe0/0x158
[ 720.761667] do_idle+0x54/0x11c
[ 720.761747] cpu_startup_entry+0x30/0x34
[ 720.761835] start_secondary+0x504/0x854
[ 720.761932] 0x3338
[ 720.762041] Reported by Kernel Concurrency Sanitizer on:
[ 720.762097] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 720.762193] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 720.762255] ==================================================================
[ 751.213814] ==================================================================
[ 751.266545] BUG: KCSAN: data-race in interrupt_async_enter_prepare / set_fd_set
[ 751.372865] read to 0xc29db6dc of 4 bytes by task 1541 on cpu 0:
[ 751.427255] interrupt_async_enter_prepare+0x64/0xc4
[ 751.481946] do_IRQ+0x18/0x2c
[ 751.536487] HardwareInterrupt_virt+0x108/0x10c
[ 751.591584] 0xfefefefe
[ 751.646400] 0x0
[ 751.700756] kcsan_setup_watchpoint+0x300/0x4cc
[ 751.755834] set_fd_set+0x60/0xec
[ 751.810703] core_sys_select+0x1ec/0x240
[ 751.865731] sys_pselect6_time32+0x190/0x1b4
[ 751.920851] system_call_exception+0x15c/0x1c0
[ 751.976313] ret_from_syscall+0x0/0x2c
[ 752.086926] write to 0xc29db6dc of 4 bytes by task 1541 on cpu 0:
[ 752.143313] set_fd_set+0x60/0xec
[ 752.199552] core_sys_select+0x1ec/0x240
[ 752.255574] sys_pselect6_time32+0x190/0x1b4
[ 752.311346] system_call_exception+0x15c/0x1c0
[ 752.367176] ret_from_syscall+0x0/0x2c
[ 752.478262] Reported by Kernel Concurrency Sanitizer on:
[ 752.534822] CPU: 0 PID: 1541 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 752.592536] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 752.650552] ==================================================================
[ 771.386274] b43legacy-phy0 warning: Unexpected value for chanstat (0x7C00)
[ 771.476892] b43legacy-phy0 debug: RX: Packet dropped
[ 772.110509] ==================================================================
[ 772.170664] BUG: KCSAN: data-race in tmigr_cpu_activate / tmigr_next_groupevt
[ 772.291413] write to 0xeedc6094 of 1 bytes by task 0 on cpu 1:
[ 772.352754] tmigr_cpu_activate+0xe8/0x12c
[ 772.413919] timer_clear_idle+0x60/0x80
[ 772.475037] tick_nohz_restart_sched_tick+0x3c/0x170
[ 772.536604] tick_nohz_idle_exit+0xe0/0x158
[ 772.598085] do_idle+0x54/0x11c
[ 772.659168] cpu_startup_entry+0x30/0x34
[ 772.719700] start_secondary+0x504/0x854
[ 772.779445] 0x3338
[ 772.895403] read to 0xeedc6094 of 1 bytes by interrupt on cpu 0:
[ 772.954414] tmigr_next_groupevt+0x60/0xd8
[ 773.013453] tmigr_handle_remote_up+0x94/0x394
[ 773.072167] __walk_groups+0x74/0xc8
[ 773.130690] tmigr_handle_remote+0x13c/0x198
[ 773.189549] run_timer_softirq+0x94/0x98
[ 773.248284] __do_softirq+0x1dc/0x218
[ 773.306765] do_softirq_own_stack+0x54/0x74
[ 773.365384] do_softirq_own_stack+0x44/0x74
[ 773.423759] __irq_exit_rcu+0x6c/0xbc
[ 773.481931] irq_exit+0x10/0x20
[ 773.540045] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 773.598635] timer_interrupt+0x64/0x178
[ 773.656878] Decrementer_virt+0x108/0x10c
[ 773.714842] default_idle_call+0x38/0x48
[ 773.772963] do_idle+0xfc/0x11c
[ 773.831032] cpu_startup_entry+0x30/0x34
[ 773.889479] kernel_init+0x0/0x1a4
[ 773.947933] console_on_rootfs+0x0/0xc8
[ 774.006554] 0x3610
[ 774.123373] Reported by Kernel Concurrency Sanitizer on:
[ 774.182980] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 774.244373] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 774.305784] ==================================================================
[ 908.288449] ==================================================================
[ 908.349201] BUG: KCSAN: data-race in __run_timer_base / next_expiry_recalc
[ 908.467956] read to 0xeedc4918 of 4 bytes by interrupt on cpu 0:
[ 908.527641] __run_timer_base+0x4c/0x38c
[ 908.586652] timer_expire_remote+0x48/0x68
[ 908.645495] tmigr_handle_remote_up+0x1f4/0x394
[ 908.704257] __walk_groups+0x74/0xc8
[ 908.762829] tmigr_handle_remote+0x13c/0x198
[ 908.821961] run_timer_softirq+0x94/0x98
[ 908.880952] __do_softirq+0x1dc/0x218
[ 908.939760] do_softirq_own_stack+0x54/0x74
[ 908.998778] do_softirq_own_stack+0x44/0x74
[ 909.057271] __irq_exit_rcu+0x6c/0xbc
[ 909.115657] irq_exit+0x10/0x20
[ 909.173786] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 909.232717] timer_interrupt+0x64/0x178
[ 909.291195] Decrementer_virt+0x108/0x10c
[ 909.349294] default_idle_call+0x38/0x48
[ 909.407348] do_idle+0xfc/0x11c
[ 909.465156] cpu_startup_entry+0x30/0x34
[ 909.523064] kernel_init+0x0/0x1a4
[ 909.580804] console_on_rootfs+0x0/0xc8
[ 909.638593] 0x3610
[ 909.751912] write to 0xeedc4918 of 4 bytes by interrupt on cpu 1:
[ 909.808835] next_expiry_recalc+0xbc/0x15c
[ 909.864998] __run_timer_base+0x278/0x38c
[ 909.920308] run_timer_base+0x5c/0x7c
[ 909.974831] run_timer_softirq+0x34/0x98
[ 910.028542] __do_softirq+0x1dc/0x218
[ 910.081628] do_softirq_own_stack+0x54/0x74
[ 910.134578] do_softirq_own_stack+0x44/0x74
[ 910.186699] __irq_exit_rcu+0x6c/0xbc
[ 910.238904] irq_exit+0x10/0x20
[ 910.290634] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 910.343100] timer_interrupt+0x64/0x178
[ 910.395429] Decrementer_virt+0x108/0x10c
[ 910.447741] default_idle_call+0x38/0x48
[ 910.500014] do_idle+0xfc/0x11c
[ 910.552097] cpu_startup_entry+0x30/0x34
[ 910.604699] start_secondary+0x504/0x854
[ 910.656958] 0x3338
[ 910.759460] Reported by Kernel Concurrency Sanitizer on:
[ 910.811642] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 910.864781] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 910.918205] ==================================================================
[ 948.875808] ==================================================================
[ 948.928873] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 949.036459] read to 0xc29d939c of 4 bytes by task 1584 on cpu 0:
[ 949.091302] interrupt_async_enter_prepare+0x64/0xc4
[ 949.145797] timer_interrupt+0x1c/0x178
[ 949.199947] Decrementer_virt+0x108/0x10c
[ 949.254144] 0x8
[ 949.307879] 0xc51a8020
[ 949.361476] kcsan_setup_watchpoint+0x300/0x4cc
[ 949.415617] raw_copy_to_user+0x74/0xb4
[ 949.469747] _copy_to_iter+0x120/0x694
[ 949.523836] simple_copy_to_iter+0x78/0x80
[ 949.578000] __skb_datagram_iter+0x88/0x334
[ 949.632420] skb_copy_datagram_iter+0x4c/0x78
[ 949.686676] unix_stream_read_actor+0x58/0x8c
[ 949.740203] unix_stream_read_generic+0x808/0xae0
[ 949.792946] unix_stream_recvmsg+0x118/0x11c
[ 949.844851] sock_recvmsg_nosec+0x5c/0x88
[ 949.897131] ____sys_recvmsg+0xc4/0x270
[ 949.948720] ___sys_recvmsg+0x90/0xd4
[ 949.999685] __sys_recvmsg+0xb0/0xf8
[ 950.050220] sys_recvmsg+0x50/0x78
[ 950.100272] system_call_exception+0x15c/0x1c0
[ 950.150591] ret_from_syscall+0x0/0x2c
[ 950.250668] write to 0xc29d939c of 4 bytes by task 1584 on cpu 0:
[ 950.301716] raw_copy_to_user+0x74/0xb4
[ 950.352436] _copy_to_iter+0x120/0x694
[ 950.403091] simple_copy_to_iter+0x78/0x80
[ 950.453773] __skb_datagram_iter+0x88/0x334
[ 950.504795] skb_copy_datagram_iter+0x4c/0x78
[ 950.556085] unix_stream_read_actor+0x58/0x8c
[ 950.607130] unix_stream_read_generic+0x808/0xae0
[ 950.657834] unix_stream_recvmsg+0x118/0x11c
[ 950.708078] sock_recvmsg_nosec+0x5c/0x88
[ 950.758405] ____sys_recvmsg+0xc4/0x270
[ 950.808713] ___sys_recvmsg+0x90/0xd4
[ 950.858949] __sys_recvmsg+0xb0/0xf8
[ 950.909091] sys_recvmsg+0x50/0x78
[ 950.959103] system_call_exception+0x15c/0x1c0
[ 951.009386] ret_from_syscall+0x0/0x2c
[ 951.109902] Reported by Kernel Concurrency Sanitizer on:
[ 951.160864] CPU: 0 PID: 1584 Comm: wmaker Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 951.212548] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 951.264588] ==================================================================
[ 1037.010310] ==================================================================
[ 1037.063153] BUG: KCSAN: data-race in blk_finish_plug / blk_time_get_ns
[ 1037.168081] read to 0xc15b1d30 of 4 bytes by interrupt on cpu 1:
[ 1037.221981] blk_time_get_ns+0x24/0xf4
[ 1037.275976] __blk_mq_end_request+0x58/0xe8
[ 1037.330011] scsi_end_request+0x120/0x2d4
[ 1037.383796] scsi_io_completion+0x290/0x6b4
[ 1037.439234] scsi_finish_command+0x160/0x1a4
[ 1037.494753] scsi_complete+0xf0/0x128
[ 1037.549618] blk_complete_reqs+0xb4/0xd8
[ 1037.603095] blk_done_softirq+0x68/0xa4
[ 1037.656486] __do_softirq+0x1dc/0x218
[ 1037.709877] do_softirq_own_stack+0x54/0x74
[ 1037.763446] do_softirq_own_stack+0x44/0x74
[ 1037.816890] __irq_exit_rcu+0x6c/0xbc
[ 1037.870073] irq_exit+0x10/0x20
[ 1037.922396] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 1037.974802] do_IRQ+0x24/0x2c
[ 1038.026293] HardwareInterrupt_virt+0x108/0x10c
[ 1038.078675] 0x1dffff0
[ 1038.129889] 0x1dffff0
[ 1038.179967] kcsan_setup_watchpoint+0x300/0x4cc
[ 1038.230224] blk_finish_plug+0x48/0x6c
[ 1038.280185] read_pages+0xf0/0x214
[ 1038.329697] page_cache_ra_unbounded+0x120/0x244
[ 1038.379653] do_page_cache_ra+0x90/0xb8
[ 1038.429513] force_page_cache_ra+0x12c/0x130
[ 1038.479826] page_cache_sync_ra+0xc4/0xdc
[ 1038.529986] filemap_get_pages+0x1a4/0x708
[ 1038.580050] filemap_read+0x204/0x4c0
[ 1038.629911] blkdev_read_iter+0x1e8/0x25c
[ 1038.679901] vfs_read+0x29c/0x2f4
[ 1038.729784] ksys_read+0xb8/0x134
[ 1038.779468] sys_read+0x4c/0x74
[ 1038.828948] system_call_exception+0x15c/0x1c0
[ 1038.878919] ret_from_syscall+0x0/0x2c
[ 1038.978089] write to 0xc15b1d30 of 4 bytes by task 1615 on cpu 1:
[ 1039.028773] blk_finish_plug+0x48/0x6c
[ 1039.079459] read_pages+0xf0/0x214
[ 1039.130155] page_cache_ra_unbounded+0x120/0x244
[ 1039.181231] do_page_cache_ra+0x90/0xb8
[ 1039.232200] force_page_cache_ra+0x12c/0x130
[ 1039.283238] page_cache_sync_ra+0xc4/0xdc
[ 1039.334278] filemap_get_pages+0x1a4/0x708
[ 1039.384945] filemap_read+0x204/0x4c0
[ 1039.435002] blkdev_read_iter+0x1e8/0x25c
[ 1039.485191] vfs_read+0x29c/0x2f4
[ 1039.535226] ksys_read+0xb8/0x134
[ 1039.585232] sys_read+0x4c/0x74
[ 1039.634967] system_call_exception+0x15c/0x1c0
[ 1039.685109] ret_from_syscall+0x0/0x2c
[ 1039.785036] Reported by Kernel Concurrency Sanitizer on:
[ 1039.835612] CPU: 1 PID: 1615 Comm: blkid Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1039.887246] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1039.939286] ==================================================================
[ 1051.674902] ==================================================================
[ 1051.728499] BUG: KCSAN: data-race in interrupt_async_enter_prepare / raw_copy_to_user
[ 1051.836119] read to 0xc29db6dc of 4 bytes by task 1541 on cpu 1:
[ 1051.890846] interrupt_async_enter_prepare+0x64/0xc4
[ 1051.945445] timer_interrupt+0x1c/0x178
[ 1051.999296] Decrementer_virt+0x108/0x10c
[ 1052.052489] 0x8
[ 1052.104560] 0xc51a79c0
[ 1052.156840] kcsan_setup_watchpoint+0x300/0x4cc
[ 1052.209000] raw_copy_to_user+0x74/0xb4
[ 1052.260652] _copy_to_iter+0x120/0x694
[ 1052.311927] simple_copy_to_iter+0x78/0x80
[ 1052.362945] __skb_datagram_iter+0x214/0x334
[ 1052.413927] skb_copy_datagram_iter+0x4c/0x78
[ 1052.464757] unix_stream_read_actor+0x58/0x8c
[ 1052.515586] unix_stream_read_generic+0x808/0xae0
[ 1052.566377] unix_stream_recvmsg+0x118/0x11c
[ 1052.617046] sock_recvmsg_nosec+0x5c/0x88
[ 1052.667661] ____sys_recvmsg+0xc4/0x270
[ 1052.718310] ___sys_recvmsg+0x90/0xd4
[ 1052.768927] __sys_recvmsg+0xb0/0xf8
[ 1052.819350] sys_recvmsg+0x50/0x78
[ 1052.870273] system_call_exception+0x15c/0x1c0
[ 1052.921322] ret_from_syscall+0x0/0x2c
[ 1053.022476] write to 0xc29db6dc of 4 bytes by task 1541 on cpu 1:
[ 1053.073773] raw_copy_to_user+0x74/0xb4
[ 1053.124738] _copy_to_iter+0x120/0x694
[ 1053.175625] simple_copy_to_iter+0x78/0x80
[ 1053.226967] __skb_datagram_iter+0x214/0x334
[ 1053.278171] skb_copy_datagram_iter+0x4c/0x78
[ 1053.330087] unix_stream_read_actor+0x58/0x8c
[ 1053.381320] unix_stream_read_generic+0x808/0xae0
[ 1053.432375] unix_stream_recvmsg+0x118/0x11c
[ 1053.483113] sock_recvmsg_nosec+0x5c/0x88
[ 1053.533812] ____sys_recvmsg+0xc4/0x270
[ 1053.584454] ___sys_recvmsg+0x90/0xd4
[ 1053.635043] __sys_recvmsg+0xb0/0xf8
[ 1053.685732] sys_recvmsg+0x50/0x78
[ 1053.736246] system_call_exception+0x15c/0x1c0
[ 1053.787073] ret_from_syscall+0x0/0x2c
[ 1053.888526] Reported by Kernel Concurrency Sanitizer on:
[ 1053.940064] CPU: 1 PID: 1541 Comm: Xvnc Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1053.992784] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1054.045899] ==================================================================
[ 1075.301806] ==================================================================
[ 1075.356564] BUG: KCSAN: data-race in __hrtimer_run_queues / hrtimer_active
[ 1075.466084] read to 0xeeda8c60 of 4 bytes by interrupt on cpu 1:
[ 1075.521666] hrtimer_active+0xb0/0x100
[ 1075.576934] task_tick_fair+0xc8/0xcc
[ 1075.631997] scheduler_tick+0x6c/0xcc
[ 1075.686924] update_process_times+0xc8/0x120
[ 1075.742171] tick_nohz_handler+0x1ac/0x270
[ 1075.797428] __hrtimer_run_queues+0x170/0x1d8
[ 1075.852820] hrtimer_interrupt+0x168/0x350
[ 1075.908457] timer_interrupt+0x108/0x178
[ 1075.964201] Decrementer_virt+0x108/0x10c
[ 1076.019855] percpu_ref_tryget_many.constprop.0+0xf8/0x11c
[ 1076.076096] css_tryget+0x38/0x60
[ 1076.132179] get_mem_cgroup_from_mm+0x138/0x144
[ 1076.188426] __mem_cgroup_charge+0x2c/0x88
[ 1076.244053] folio_prealloc.isra.0+0x84/0xec
[ 1076.299063] handle_mm_fault+0x488/0xed0
[ 1076.353307] ___do_page_fault+0x4d8/0x630
[ 1076.408033] do_page_fault+0x28/0x40
[ 1076.461833] DataAccess_virt+0x124/0x17c
[ 1076.567260] write to 0xeeda8c60 of 4 bytes by interrupt on cpu 0:
[ 1076.620584] __hrtimer_run_queues+0x1cc/0x1d8
[ 1076.673635] hrtimer_interrupt+0x168/0x350
[ 1076.726768] timer_interrupt+0x108/0x178
[ 1076.779810] Decrementer_virt+0x108/0x10c
[ 1076.833162] 0x595
[ 1076.885990] __kernel_unpoison_pages+0xe0/0x1a8
[ 1076.939390] post_alloc_hook+0x8c/0xf0
[ 1076.992752] prep_new_page+0x24/0x5c
[ 1077.045983] get_page_from_freelist+0x564/0x660
[ 1077.099651] __alloc_pages+0x114/0x8dc
[ 1077.153211] folio_prealloc.isra.0+0x44/0xec
[ 1077.206973] handle_mm_fault+0x488/0xed0
[ 1077.260843] ___do_page_fault+0x4d8/0x630
[ 1077.314829] do_page_fault+0x28/0x40
[ 1077.368660] DataAccess_virt+0x124/0x17c
[ 1077.476086] Reported by Kernel Concurrency Sanitizer on:
[ 1077.530829] CPU: 0 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1077.586833] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1077.643130] ==================================================================
[ 1082.516165] pagealloc: memory corruption
[ 1082.613096] fffdfff0: 00 00 00 00 ....
[ 1082.710010] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1082.807840] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1082.905938] Call Trace:
[ 1083.002796] [f2cf5c00] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 1083.103663] [f2cf5c20] [c0be4ee8] dump_stack+0x20/0x34
[ 1083.203141] [f2cf5c30] [c02c47c0] __kernel_unpoison_pages+0x198/0x1a8
[ 1083.304417] [f2cf5c80] [c029b62c] post_alloc_hook+0x8c/0xf0
[ 1083.406281] [f2cf5cb0] [c029b6b4] prep_new_page+0x24/0x5c
[ 1083.508295] [f2cf5cd0] [c029c9dc] get_page_from_freelist+0x564/0x660
[ 1083.610055] [f2cf5d60] [c029dfcc] __alloc_pages+0x114/0x8dc
[ 1083.712330] [f2cf5e20] [c02764f0] folio_prealloc.isra.0+0x44/0xec
[ 1083.817046] [f2cf5e40] [c027be28] handle_mm_fault+0x488/0xed0
[ 1083.919976] [f2cf5ed0] [c00340f4] ___do_page_fault+0x4d8/0x630
[ 1084.024052] [f2cf5f10] [c003446c] do_page_fault+0x28/0x40
[ 1084.126551] [f2cf5f30] [c000433c] DataAccess_virt+0x124/0x17c
[ 1084.229750] --- interrupt: 300 at 0xb13008
[ 1084.332833] NIP: 00b13008 LR: 00b12fe8 CTR: 00000000
[ 1084.436540] REGS: f2cf5f40 TRAP: 0300 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 1084.538670] MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 20882464 XER: 00000000
[ 1084.643896] DAR: 8fa70010 DSISR: 42000000
GPR00: 00b12fe8 afd69f00 a7fed700 6ba98010 3c500000 20884462 00000003 00a301e4
GPR08: 23fd9000 23fd8000 00000000 4088429a 20882462 00b2ff68 00000000 40882462
GPR16: ffffffff 00000000 00000002 00000000 00000002 00000000 00b30018 00000001
GPR24: ffffffff ffffffff 3c500000 0000005a 6ba98010 00000000 00b37cd0 00001000
[ 1085.165724] NIP [00b13008] 0xb13008
[ 1085.267098] LR [00b12fe8] 0xb12fe8
[ 1085.368411] --- interrupt: 300
[ 1085.470618] page: refcount:1 mapcount:0 mapping:00000000 index:0x1 pfn:0x31069
[ 1085.577511] flags: 0x80000000(zone=2)
[ 1085.682232] page_type: 0xffffffff()
[ 1085.788198] raw: 80000000 00000100 00000122 00000000 00000001 00000000 ffffffff 00000001
[ 1085.894169] raw: 00000000
[ 1085.998995] page dumped because: pagealloc: corrupted page details
[ 1086.105882] page_owner info is not present (never set?)
[ 1103.172608] ==================================================================
[ 1103.237300] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1103.365582] read (marked) to 0xefa6fa40 of 4 bytes by task 1619 on cpu 0:
[ 1103.430899] lru_gen_look_around+0x320/0x634
[ 1103.495970] folio_referenced_one+0x32c/0x404
[ 1103.561131] rmap_walk_anon+0x1c4/0x24c
[ 1103.626212] rmap_walk+0x70/0x7c
[ 1103.690974] folio_referenced+0x194/0x1ec
[ 1103.755894] shrink_folio_list+0x6a8/0xd28
[ 1103.820531] evict_folios+0xcc0/0x1204
[ 1103.884712] try_to_shrink_lruvec+0x214/0x2f0
[ 1103.949008] shrink_one+0x104/0x1e8
[ 1104.013172] shrink_node+0x314/0xc3c
[ 1104.077234] do_try_to_free_pages+0x500/0x7e4
[ 1104.141517] try_to_free_pages+0x150/0x18c
[ 1104.205712] __alloc_pages+0x460/0x8dc
[ 1104.269801] folio_prealloc.isra.0+0x44/0xec
[ 1104.334098] handle_mm_fault+0x488/0xed0
[ 1104.398190] ___do_page_fault+0x4d8/0x630
[ 1104.462229] do_page_fault+0x28/0x40
[ 1104.526125] DataAccess_virt+0x124/0x17c
[ 1104.653866] write to 0xefa6fa40 of 4 bytes by task 40 on cpu 1:
[ 1104.718744] list_add+0x58/0x94
[ 1104.783166] evict_folios+0xb04/0x1204
[ 1104.847662] try_to_shrink_lruvec+0x214/0x2f0
[ 1104.912124] shrink_one+0x104/0x1e8
[ 1104.975841] shrink_node+0x314/0xc3c
[ 1105.038693] balance_pgdat+0x498/0x914
[ 1105.100896] kswapd+0x304/0x398
[ 1105.162235] kthread+0x174/0x178
[ 1105.223310] start_kernel_thread+0x10/0x14
[ 1105.343563] Reported by Kernel Concurrency Sanitizer on:
[ 1105.403874] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1105.464743] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1105.526020] ==================================================================
[ 1107.514623] ==================================================================
[ 1107.576537] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1107.699840] read (marked) to 0xef8320ec of 4 bytes by task 40 on cpu 1:
[ 1107.762376] lru_gen_look_around+0x320/0x634
[ 1107.824312] folio_referenced_one+0x32c/0x404
[ 1107.886238] rmap_walk_anon+0x1c4/0x24c
[ 1107.947942] rmap_walk+0x70/0x7c
[ 1108.009135] folio_referenced+0x194/0x1ec
[ 1108.070477] shrink_folio_list+0x6a8/0xd28
[ 1108.131506] evict_folios+0xcc0/0x1204
[ 1108.192277] try_to_shrink_lruvec+0x214/0x2f0
[ 1108.252645] shrink_one+0x104/0x1e8
[ 1108.312276] shrink_node+0x314/0xc3c
[ 1108.371237] balance_pgdat+0x498/0x914
[ 1108.429451] kswapd+0x304/0x398
[ 1108.487098] kthread+0x174/0x178
[ 1108.544273] start_kernel_thread+0x10/0x14
[ 1108.658034] write to 0xef8320ec of 4 bytes by task 1619 on cpu 0:
[ 1108.715833] list_add+0x58/0x94
[ 1108.773051] evict_folios+0xb04/0x1204
[ 1108.829735] try_to_shrink_lruvec+0x214/0x2f0
[ 1108.886174] shrink_one+0x104/0x1e8
[ 1108.942365] shrink_node+0x314/0xc3c
[ 1108.997602] do_try_to_free_pages+0x500/0x7e4
[ 1109.052504] try_to_free_pages+0x150/0x18c
[ 1109.107028] __alloc_pages+0x460/0x8dc
[ 1109.161106] folio_prealloc.isra.0+0x44/0xec
[ 1109.214621] handle_mm_fault+0x488/0xed0
[ 1109.267410] ___do_page_fault+0x4d8/0x630
[ 1109.319824] do_page_fault+0x28/0x40
[ 1109.371670] DataAccess_virt+0x124/0x17c
[ 1109.474176] Reported by Kernel Concurrency Sanitizer on:
[ 1109.526294] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1109.579602] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1109.633233] ==================================================================
[ 1112.175937] ==================================================================
[ 1112.230216] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1112.338269] read (marked) to 0xef0fa554 of 4 bytes by task 1620 on cpu 1:
[ 1112.393682] lru_gen_look_around+0x320/0x634
[ 1112.448808] folio_referenced_one+0x32c/0x404
[ 1112.503987] rmap_walk_anon+0x1c4/0x24c
[ 1112.559086] rmap_walk+0x70/0x7c
[ 1112.613757] folio_referenced+0x194/0x1ec
[ 1112.668584] shrink_folio_list+0x6a8/0xd28
[ 1112.723455] evict_folios+0xcc0/0x1204
[ 1112.778287] try_to_shrink_lruvec+0x214/0x2f0
[ 1112.833316] shrink_one+0x104/0x1e8
[ 1112.888249] shrink_node+0x314/0xc3c
[ 1112.942681] do_try_to_free_pages+0x500/0x7e4
[ 1112.997037] try_to_free_pages+0x150/0x18c
[ 1113.051448] __alloc_pages+0x460/0x8dc
[ 1113.105779] folio_prealloc.isra.0+0x44/0xec
[ 1113.160200] handle_mm_fault+0x488/0xed0
[ 1113.214729] ___do_page_fault+0x4d8/0x630
[ 1113.269341] do_page_fault+0x28/0x40
[ 1113.323895] DataAccess_virt+0x124/0x17c
[ 1113.433274] write to 0xef0fa554 of 4 bytes by task 40 on cpu 0:
[ 1113.488967] list_add+0x58/0x94
[ 1113.543902] evict_folios+0xb04/0x1204
[ 1113.598280] try_to_shrink_lruvec+0x214/0x2f0
[ 1113.652213] shrink_one+0x104/0x1e8
[ 1113.705362] shrink_node+0x314/0xc3c
[ 1113.758812] balance_pgdat+0x498/0x914
[ 1113.811578] kswapd+0x304/0x398
[ 1113.863739] kthread+0x174/0x178
[ 1113.915313] start_kernel_thread+0x10/0x14
[ 1114.017462] Reported by Kernel Concurrency Sanitizer on:
[ 1114.069359] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1114.122557] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1114.176028] ==================================================================
[ 1114.925709] ==================================================================
[ 1114.980036] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 1115.089080] write to 0xeedbbd40 of 4 bytes by task 1620 on cpu 1:
[ 1115.144741] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 1115.200501] cgroup_rstat_flush_locked+0x528/0x538
[ 1115.256431] cgroup_rstat_flush+0x38/0x5c
[ 1115.312176] do_flush_stats+0x78/0x9c
[ 1115.367879] mem_cgroup_flush_stats+0x7c/0x80
[ 1115.423757] zswap_shrinker_count+0xb8/0x150
[ 1115.479357] do_shrink_slab+0x7c/0x540
[ 1115.534529] shrink_slab+0x1f0/0x384
[ 1115.589688] shrink_one+0x140/0x1e8
[ 1115.644520] shrink_node+0x314/0xc3c
[ 1115.699123] do_try_to_free_pages+0x500/0x7e4
[ 1115.754139] try_to_free_pages+0x150/0x18c
[ 1115.809094] __alloc_pages+0x460/0x8dc
[ 1115.863928] folio_prealloc.isra.0+0x44/0xec
[ 1115.918893] handle_mm_fault+0x488/0xed0
[ 1115.973762] ___do_page_fault+0x4d8/0x630
[ 1116.028624] do_page_fault+0x28/0x40
[ 1116.083430] DataAccess_virt+0x124/0x17c
[ 1116.192920] write to 0xeedbbd40 of 4 bytes by task 40 on cpu 0:
[ 1116.248673] memcg_rstat_updated+0xd8/0x15c
[ 1116.304041] __mod_memcg_lruvec_state+0x118/0x154
[ 1116.358966] __mod_lruvec_state+0x58/0x78
[ 1116.413060] lru_gen_update_size+0x130/0x240
[ 1116.466608] lru_gen_add_folio+0x198/0x288
[ 1116.520444] move_folios_to_lru+0x29c/0x350
[ 1116.573667] evict_folios+0xd20/0x1204
[ 1116.626394] try_to_shrink_lruvec+0x214/0x2f0
[ 1116.678850] shrink_one+0x104/0x1e8
[ 1116.730711] shrink_node+0x314/0xc3c
[ 1116.782307] balance_pgdat+0x498/0x914
[ 1116.833820] kswapd+0x304/0x398
[ 1116.885406] kthread+0x174/0x178
[ 1116.936809] start_kernel_thread+0x10/0x14
[ 1117.039674] value changed: 0x00000018 -> 0x00000000
[ 1117.142997] Reported by Kernel Concurrency Sanitizer on:
[ 1117.195578] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1117.249142] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1117.302991] ==================================================================
[ 1118.378999] ==================================================================
[ 1118.433585] BUG: KCSAN: data-race in list_del / lru_gen_look_around
[ 1118.542375] read (marked) to 0xef2e6d64 of 4 bytes by task 1620 on cpu 1:
[ 1118.598040] lru_gen_look_around+0x320/0x634
[ 1118.653916] folio_referenced_one+0x32c/0x404
[ 1118.709922] rmap_walk_anon+0x1c4/0x24c
[ 1118.765527] rmap_walk+0x70/0x7c
[ 1118.820441] folio_referenced+0x194/0x1ec
[ 1118.875594] shrink_folio_list+0x6a8/0xd28
[ 1118.930737] evict_folios+0xcc0/0x1204
[ 1118.985757] try_to_shrink_lruvec+0x214/0x2f0
[ 1119.041134] shrink_one+0x104/0x1e8
[ 1119.096511] shrink_node+0x314/0xc3c
[ 1119.151747] do_try_to_free_pages+0x500/0x7e4
[ 1119.207404] try_to_free_pages+0x150/0x18c
[ 1119.263057] __alloc_pages+0x460/0x8dc
[ 1119.318628] folio_prealloc.isra.0+0x44/0xec
[ 1119.374089] handle_mm_fault+0x488/0xed0
[ 1119.428844] ___do_page_fault+0x4d8/0x630
[ 1119.482993] do_page_fault+0x28/0x40
[ 1119.536380] DataAccess_virt+0x124/0x17c
[ 1119.642844] write to 0xef2e6d64 of 4 bytes by task 40 on cpu 0:
[ 1119.695760] list_del+0x2c/0x5c
[ 1119.748250] lru_gen_del_folio+0x110/0x140
[ 1119.800516] evict_folios+0xaf8/0x1204
[ 1119.852574] try_to_shrink_lruvec+0x214/0x2f0
[ 1119.904997] shrink_one+0x104/0x1e8
[ 1119.957279] shrink_node+0x314/0xc3c
[ 1120.009316] balance_pgdat+0x498/0x914
[ 1120.061307] kswapd+0x304/0x398
[ 1120.113069] kthread+0x174/0x178
[ 1120.164720] start_kernel_thread+0x10/0x14
[ 1120.268265] Reported by Kernel Concurrency Sanitizer on:
[ 1120.320735] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1120.374216] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1120.428137] ==================================================================
[ 1122.332197] ==================================================================
[ 1122.387140] BUG: KCSAN: data-race in list_add / lru_gen_look_around
[ 1122.496688] read (marked) to 0xef4c94b8 of 4 bytes by task 40 on cpu 0:
[ 1122.552654] lru_gen_look_around+0x320/0x634
[ 1122.608217] folio_referenced_one+0x32c/0x404
[ 1122.663598] rmap_walk_anon+0x1c4/0x24c
[ 1122.718522] rmap_walk+0x70/0x7c
[ 1122.772986] folio_referenced+0x194/0x1ec
[ 1122.827581] shrink_folio_list+0x6a8/0xd28
[ 1122.882182] evict_folios+0xcc0/0x1204
[ 1122.936818] try_to_shrink_lruvec+0x214/0x2f0
[ 1122.991642] shrink_one+0x104/0x1e8
[ 1123.046317] shrink_node+0x314/0xc3c
[ 1123.100786] balance_pgdat+0x498/0x914
[ 1123.155167] kswapd+0x304/0x398
[ 1123.209542] kthread+0x174/0x178
[ 1123.263856] start_kernel_thread+0x10/0x14
[ 1123.372926] write to 0xef4c94b8 of 4 bytes by task 1620 on cpu 1:
[ 1123.428774] list_add+0x58/0x94
[ 1123.483944] evict_folios+0xb04/0x1204
[ 1123.539181] try_to_shrink_lruvec+0x214/0x2f0
[ 1123.594297] shrink_one+0x104/0x1e8
[ 1123.649039] shrink_node+0x314/0xc3c
[ 1123.702982] do_try_to_free_pages+0x500/0x7e4
[ 1123.756502] try_to_free_pages+0x150/0x18c
[ 1123.809341] __alloc_pages+0x460/0x8dc
[ 1123.862617] folio_prealloc.isra.0+0x44/0xec
[ 1123.915388] handle_mm_fault+0x488/0xed0
[ 1123.967668] ___do_page_fault+0x4d8/0x630
[ 1124.019509] do_page_fault+0x28/0x40
[ 1124.070795] DataAccess_virt+0x124/0x17c
[ 1124.173021] Reported by Kernel Concurrency Sanitizer on:
[ 1124.225247] CPU: 1 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1124.278439] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1124.332099] ==================================================================
[ 1127.208932] ==================================================================
[ 1127.263097] BUG: KCSAN: data-race in mem_cgroup_css_rstat_flush / memcg_rstat_updated
[ 1127.371973] write to 0xeedd8d40 of 4 bytes by task 1619 on cpu 0:
[ 1127.427413] mem_cgroup_css_rstat_flush+0x44c/0x518
[ 1127.482791] cgroup_rstat_flush_locked+0x528/0x538
[ 1127.538283] cgroup_rstat_flush+0x38/0x5c
[ 1127.593429] do_flush_stats+0x78/0x9c
[ 1127.648480] mem_cgroup_flush_stats+0x7c/0x80
[ 1127.703760] zswap_shrinker_count+0xb8/0x150
[ 1127.759088] do_shrink_slab+0x7c/0x540
[ 1127.814363] shrink_slab+0x1f0/0x384
[ 1127.869577] shrink_one+0x140/0x1e8
[ 1127.924251] shrink_node+0x314/0xc3c
[ 1127.978437] do_try_to_free_pages+0x500/0x7e4
[ 1128.032843] try_to_free_pages+0x150/0x18c
[ 1128.087271] __alloc_pages+0x460/0x8dc
[ 1128.141597] folio_prealloc.isra.0+0x44/0xec
[ 1128.195997] handle_mm_fault+0x488/0xed0
[ 1128.250490] ___do_page_fault+0x4d8/0x630
[ 1128.305050] do_page_fault+0x28/0x40
[ 1128.359559] DataAccess_virt+0x124/0x17c
[ 1128.468744] write to 0xeedd8d40 of 4 bytes by task 40 on cpu 1:
[ 1128.524270] memcg_rstat_updated+0xd8/0x15c
[ 1128.579455] __mod_memcg_lruvec_state+0x118/0x154
[ 1128.634197] __mod_lruvec_state+0x58/0x78
[ 1128.688182] lru_gen_update_size+0x130/0x240
[ 1128.741579] lru_gen_add_folio+0x198/0x288
[ 1128.795328] move_folios_to_lru+0x29c/0x350
[ 1128.848471] evict_folios+0xd20/0x1204
[ 1128.901122] try_to_shrink_lruvec+0x214/0x2f0
[ 1128.953550] shrink_one+0x104/0x1e8
[ 1129.005393] shrink_node+0x314/0xc3c
[ 1129.057004] balance_pgdat+0x498/0x914
[ 1129.108555] kswapd+0x304/0x398
[ 1129.160143] kthread+0x174/0x178
[ 1129.211721] start_kernel_thread+0x10/0x14
[ 1129.314534] value changed: 0x0000000d -> 0x00000000
[ 1129.417903] Reported by Kernel Concurrency Sanitizer on:
[ 1129.470489] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1129.524180] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1129.578250] ==================================================================
[ 1132.350890] kworker/u9:1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0
[ 1132.439055] CPU: 1 PID: 39 Comm: kworker/u9:1 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1132.530157] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1132.620439] Workqueue: events_freezable_pwr_efficient disk_events_workfn (events_freezable_pwr_ef)
[ 1132.712862] Call Trace:
[ 1132.805472] [f100dc50] [c0be4e84] dump_stack_lvl+0x88/0xcc (unreliable)
[ 1132.902185] [f100dc70] [c0be4ee8] dump_stack+0x20/0x34
[ 1132.997462] [f100dc80] [c029de40] warn_alloc+0x100/0x178
[ 1133.091658] [f100dce0] [c029e234] __alloc_pages+0x37c/0x8dc
[ 1133.187093] [f100dda0] [c029e884] __page_frag_alloc_align+0x74/0x194
[ 1133.280854] [f100ddd0] [c09bafc0] __netdev_alloc_skb+0x108/0x234
[ 1133.375951] [f100de00] [bef1a5a8] setup_rx_descbuffer+0x5c/0x258 [b43legacy]
[ 1133.471342] [f100de40] [bef1c43c] b43legacy_dma_rx+0x3e4/0x488 [b43legacy]
[ 1133.566247] [f100deb0] [bef0b034] b43legacy_interrupt_tasklet+0x7bc/0x7f0 [b43legacy]
[ 1133.661223] [f100df50] [c006f8c8] tasklet_action_common.isra.0+0xb0/0xe8
[ 1133.756602] [f100df80] [c0c1fc8c] __do_softirq+0x1dc/0x218
[ 1133.853423] [f100dff0] [c00091d8] do_softirq_own_stack+0x54/0x74
[ 1133.950509] [f10dd760] [c00091c8] do_softirq_own_stack+0x44/0x74
[ 1134.045886] [f10dd780] [c006f114] __irq_exit_rcu+0x6c/0xbc
[ 1134.141538] [f10dd790] [c006f588] irq_exit+0x10/0x20
[ 1134.235241] [f10dd7a0] [c0008b58] interrupt_async_exit_prepare.isra.0+0x18/0x2c
[ 1134.328250] [f10dd7b0] [c000917c] do_IRQ+0x24/0x2c
[ 1134.421852] [f10dd7d0] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
[ 1134.518090] --- interrupt: 500 at _raw_spin_unlock_irq+0x30/0x48
[ 1134.611842] NIP: c0c1f49c LR: c0c1f490 CTR: 00000000
[ 1134.705301] REGS: f10dd7e0 TRAP: 0500 Not tainted (6.9.0-rc4-PMacG4-dirty)
[ 1134.800041] MSR: 00209032 <EE,ME,IR,DR,RI> CR: 84882802 XER: 00000000
[ 1134.895506]
GPR00: c0c1f490 f10dd8a0 c1c28020 c49d6828 00016828 0001682b 00000003 c12399ec
GPR08: 00000000 00009032 0000001d f10dd860 24882802 00000000 00000001 00000000
GPR16: 00000800 00000800 00000000 00000000 00000002 00000004 00000004 00000000
GPR24: c49d6850 00000004 00000000 00000007 00000001 c49d6850 f10ddbb4 c49d6828
[ 1135.378017] NIP [c0c1f49c] _raw_spin_unlock_irq+0x30/0x48
[ 1135.473742] LR [c0c1f490] _raw_spin_unlock_irq+0x24/0x48
[ 1135.570964] --- interrupt: 500
[ 1135.667558] [f10dd8c0] [c0246150] evict_folios+0xc74/0x1204
[ 1135.766055] [f10dd9d0] [c02468f4] try_to_shrink_lruvec+0x214/0x2f0
[ 1135.865435] [f10dda50] [c0246ad4] shrink_one+0x104/0x1e8
[ 1135.964504] [f10dda90] [c0248eb8] shrink_node+0x314/0xc3c
[ 1136.063967] [f10ddb20] [c024a98c] do_try_to_free_pages+0x500/0x7e4
[ 1136.164791] [f10ddba0] [c024b110] try_to_free_pages+0x150/0x18c
[ 1136.265414] [f10ddc20] [c029e318] __alloc_pages+0x460/0x8dc
[ 1136.364886] [f10ddce0] [c06088ac] alloc_pages.constprop.0+0x30/0x50
[ 1136.465171] [f10ddd00] [c0608ad4] blk_rq_map_kern+0x208/0x404
[ 1136.564679] [f10ddd50] [c089c048] scsi_execute_cmd+0x350/0x534
[ 1136.663635] [f10dddc0] [c08b77cc] sr_check_events+0x108/0x4bc
[ 1136.764635] [f10dde40] [c08fb620] cdrom_update_events+0x54/0xb8
[ 1136.865074] [f10dde60] [c08fb6b4] cdrom_check_events+0x30/0x70
[ 1136.965069] [f10dde80] [c08b7c44] sr_block_check_events+0x60/0x90
[ 1137.064917] [f10ddea0] [c0630444] disk_check_events+0x68/0x168
[ 1137.165414] [f10ddee0] [c063056c] disk_events_workfn+0x28/0x40
[ 1137.267952] [f10ddf00] [c008df0c] process_scheduled_works+0x350/0x494
[ 1137.368522] [f10ddf70] [c008ee2c] worker_thread+0x2a4/0x300
[ 1137.469521] [f10ddfc0] [c009b87c] kthread+0x174/0x178
[ 1137.569313] [f10ddff0] [c001c304] start_kernel_thread+0x10/0x14
[ 1137.670144] Mem-Info:
[ 1137.769084] active_anon:292700 inactive_anon:181968 isolated_anon:0
active_file:6404 inactive_file:5560 isolated_file:0
unevictable:0 dirty:11 writeback:0
slab_reclaimable:1183 slab_unreclaimable:6185
mapped:7898 shmem:133 pagetables:675
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:1193 free_pcp:778 free_cma:0
[ 1138.591873] Node 0 active_anon:1170800kB inactive_anon:727872kB active_file:25616kB inactive_file:22240kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:31592kB dirty:44kB writeback:0kB shmem:532kB writeback_tmp:0kB kernel_stack:952kB pagetables:2700kB sec_pagetables:0kB all_unreclaimable? no
[ 1138.817095] DMA free:0kB boost:7564kB min:10928kB low:11768kB high:12608kB reserved_highatomic:0KB active_anon:568836kB inactive_anon:92340kB active_file:12kB inactive_file:1248kB unevictable:0kB writepending:40kB present:786432kB managed:709428kB mlocked:0kB bounce:0kB free_pcp:3112kB local_pcp:1844kB free_cma:0kB
[ 1139.054054] lowmem_reserve[]: 0 0 1280 1280
[ 1139.168685] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
[ 1139.288155] 39962 total pagecache pages
[ 1139.403030] 27865 pages in swap cache
[ 1139.518121] Free swap = 8240252kB
[ 1139.632092] Total swap = 8388604kB
[ 1139.745755] 524288 pages RAM
[ 1139.860425] 327680 pages HighMem/MovableOnly
[ 1139.972892] 19251 pages reserved
[ 1140.086052] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086495] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086627] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086729] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086811] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086897] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.086981] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087066] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087125] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087233] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087318] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087401] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087484] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087568] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087651] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087753] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087836] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.087920] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088003] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088087] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088171] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088277] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088364] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088448] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088530] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088615] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088699] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088806] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088891] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.088974] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089059] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089142] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089226] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089331] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089414] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089498] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089584] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089665] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089748] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089852] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.089935] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090019] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090103] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090187] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090292] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090377] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090461] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090544] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090628] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090713] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090817] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090903] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.090987] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091071] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091156] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091240] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091345] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091430] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1140.091515] b43legacy-phy0 debug: DMA RX: setup_rx_descbuffer() failed
[ 1145.532381] ==================================================================
[ 1145.608894] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1145.760471] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1145.836461] zswap_update_total_size+0x58/0xe8
[ 1145.912507] zswap_store+0x5a8/0xa18
[ 1145.989718] swap_writepage+0x4c/0xe8
[ 1146.065657] pageout+0x1dc/0x304
[ 1146.141299] shrink_folio_list+0xa70/0xd28
[ 1146.217154] evict_folios+0xcc0/0x1204
[ 1146.292889] try_to_shrink_lruvec+0x214/0x2f0
[ 1146.369041] shrink_one+0x104/0x1e8
[ 1146.446060] shrink_node+0x314/0xc3c
[ 1146.520298] balance_pgdat+0x498/0x914
[ 1146.594835] kswapd+0x304/0x398
[ 1146.667816] kthread+0x174/0x178
[ 1146.740277] start_kernel_thread+0x10/0x14
[ 1146.883255] read to 0xc121b328 of 8 bytes by task 1620 on cpu 0:
[ 1146.954655] zswap_store+0x118/0xa18
[ 1147.026298] swap_writepage+0x4c/0xe8
[ 1147.098668] pageout+0x1dc/0x304
[ 1147.169358] shrink_folio_list+0xa70/0xd28
[ 1147.240046] evict_folios+0xcc0/0x1204
[ 1147.310128] try_to_shrink_lruvec+0x214/0x2f0
[ 1147.380323] shrink_one+0x104/0x1e8
[ 1147.449989] shrink_node+0x314/0xc3c
[ 1147.519311] do_try_to_free_pages+0x500/0x7e4
[ 1147.588985] try_to_free_pages+0x150/0x18c
[ 1147.658439] __alloc_pages+0x460/0x8dc
[ 1147.727688] folio_prealloc.isra.0+0x44/0xec
[ 1147.796963] handle_mm_fault+0x488/0xed0
[ 1147.866127] ___do_page_fault+0x4d8/0x630
[ 1147.935298] do_page_fault+0x28/0x40
[ 1148.003939] DataAccess_virt+0x124/0x17c
[ 1148.140405] Reported by Kernel Concurrency Sanitizer on:
[ 1148.209378] CPU: 0 PID: 1620 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1148.279898] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1148.350632] ==================================================================
[ 1153.340372] ==================================================================
[ 1153.412514] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1153.554905] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1153.626481] zswap_update_total_size+0x58/0xe8
[ 1153.697496] zswap_store+0x5a8/0xa18
[ 1153.768192] swap_writepage+0x4c/0xe8
[ 1153.839021] pageout+0x1dc/0x304
[ 1153.910909] shrink_folio_list+0xa70/0xd28
[ 1153.980463] evict_folios+0xcc0/0x1204
[ 1154.050937] try_to_shrink_lruvec+0x214/0x2f0
[ 1154.120486] shrink_one+0x104/0x1e8
[ 1154.191056] shrink_node+0x314/0xc3c
[ 1154.260876] balance_pgdat+0x498/0x914
[ 1154.327067] kswapd+0x304/0x398
[ 1154.389843] kthread+0x174/0x178
[ 1154.448891] start_kernel_thread+0x10/0x14
[ 1154.558693] read to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1154.613044] zswap_store+0x118/0xa18
[ 1154.666450] swap_writepage+0x4c/0xe8
[ 1154.719823] pageout+0x1dc/0x304
[ 1154.773083] shrink_folio_list+0xa70/0xd28
[ 1154.826726] evict_folios+0xcc0/0x1204
[ 1154.880407] try_to_shrink_lruvec+0x214/0x2f0
[ 1154.934376] shrink_one+0x104/0x1e8
[ 1154.988131] shrink_node+0x314/0xc3c
[ 1155.041052] do_try_to_free_pages+0x500/0x7e4
[ 1155.093526] try_to_free_pages+0x150/0x18c
[ 1155.145467] __alloc_pages+0x460/0x8dc
[ 1155.197157] folio_prealloc.isra.0+0x44/0xec
[ 1155.248720] handle_mm_fault+0x488/0xed0
[ 1155.300028] ___do_page_fault+0x4d8/0x630
[ 1155.351434] do_page_fault+0x28/0x40
[ 1155.402778] DataAccess_virt+0x124/0x17c
[ 1155.504632] Reported by Kernel Concurrency Sanitizer on:
[ 1155.556251] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1155.608663] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1155.661629] ==================================================================
[ 1159.860944] ==================================================================
[ 1159.914891] BUG: KCSAN: data-race in __mod_memcg_lruvec_state / mem_cgroup_css_rstat_flush
[ 1160.023991] read (marked) to 0xeedd8f80 of 4 bytes by task 1619 on cpu 0:
[ 1160.079774] mem_cgroup_css_rstat_flush+0x394/0x518
[ 1160.135661] cgroup_rstat_flush_locked+0x528/0x538
[ 1160.191359] cgroup_rstat_flush+0x38/0x5c
[ 1160.246745] do_flush_stats+0x78/0x9c
[ 1160.302181] mem_cgroup_flush_stats+0x7c/0x80
[ 1160.357857] zswap_shrinker_count+0xb8/0x150
[ 1160.413527] do_shrink_slab+0x7c/0x540
[ 1160.469078] shrink_slab+0x1f0/0x384
[ 1160.524481] shrink_one+0x140/0x1e8
[ 1160.579854] shrink_node+0x314/0xc3c
[ 1160.634981] do_try_to_free_pages+0x500/0x7e4
[ 1160.690290] try_to_free_pages+0x150/0x18c
[ 1160.745600] __alloc_pages+0x460/0x8dc
[ 1160.800804] __read_swap_cache_async+0xd0/0x24c
[ 1160.856176] swap_cluster_readahead+0x2cc/0x338
[ 1160.911816] swapin_readahead+0x430/0x438
[ 1160.967167] do_swap_page+0x1e0/0x9bc
[ 1161.022385] handle_mm_fault+0xecc/0xed0
[ 1161.077696] ___do_page_fault+0x4d8/0x630
[ 1161.132806] do_page_fault+0x28/0x40
[ 1161.187151] DataAccess_virt+0x124/0x17c
[ 1161.293119] write to 0xeedd8f80 of 4 bytes by task 40 on cpu 1:
[ 1161.347088] __mod_memcg_lruvec_state+0xdc/0x154
[ 1161.400803] __mod_lruvec_state+0x58/0x78
[ 1161.453851] lru_gen_update_size+0x130/0x240
[ 1161.506703] lru_gen_del_folio+0x104/0x140
[ 1161.559074] evict_folios+0xaf8/0x1204
[ 1161.611409] try_to_shrink_lruvec+0x214/0x2f0
[ 1161.664014] shrink_one+0x104/0x1e8
[ 1161.716690] shrink_node+0x314/0xc3c
[ 1161.769028] balance_pgdat+0x498/0x914
[ 1161.821319] kswapd+0x304/0x398
[ 1161.873340] kthread+0x174/0x178
[ 1161.925118] start_kernel_thread+0x10/0x14
[ 1162.028727] Reported by Kernel Concurrency Sanitizer on:
[ 1162.081278] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1162.135074] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1162.189178] ==================================================================
[ 1167.537551] ==================================================================
[ 1167.592244] BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
[ 1167.702971] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1167.758691] zswap_update_total_size+0x58/0xe8
[ 1167.815688] zswap_entry_free+0xdc/0x1c0
[ 1167.872100] zswap_load+0x190/0x19c
[ 1167.927754] swap_read_folio+0xbc/0x450
[ 1167.984430] swap_cluster_readahead+0x2f8/0x338
[ 1168.040390] swapin_readahead+0x430/0x438
[ 1168.097280] do_swap_page+0x1e0/0x9bc
[ 1168.153152] handle_mm_fault+0xecc/0xed0
[ 1168.210362] ___do_page_fault+0x4d8/0x630
[ 1168.266601] do_page_fault+0x28/0x40
[ 1168.322623] DataAccess_virt+0x124/0x17c
[ 1168.434517] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1168.491480] zswap_update_total_size+0x58/0xe8
[ 1168.547866] zswap_store+0x5a8/0xa18
[ 1168.604934] swap_writepage+0x4c/0xe8
[ 1168.660335] pageout+0x1dc/0x304
[ 1168.714767] shrink_folio_list+0xa70/0xd28
[ 1168.768845] evict_folios+0xcc0/0x1204
[ 1168.823468] try_to_shrink_lruvec+0x214/0x2f0
[ 1168.878212] shrink_one+0x104/0x1e8
[ 1168.931092] shrink_node+0x314/0xc3c
[ 1168.984636] balance_pgdat+0x498/0x914
[ 1169.036606] kswapd+0x304/0x398
[ 1169.087855] kthread+0x174/0x178
[ 1169.139562] start_kernel_thread+0x10/0x14
[ 1169.242777] Reported by Kernel Concurrency Sanitizer on:
[ 1169.294617] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1169.348458] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1169.401904] ==================================================================
[ 1183.009768] ==================================================================
[ 1183.064956] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1183.174114] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1183.229430] zswap_store+0x118/0xa18
[ 1183.284521] swap_writepage+0x4c/0xe8
[ 1183.339893] pageout+0x1dc/0x304
[ 1183.395281] shrink_folio_list+0xa70/0xd28
[ 1183.450670] evict_folios+0xcc0/0x1204
[ 1183.506068] try_to_shrink_lruvec+0x214/0x2f0
[ 1183.562182] shrink_one+0x104/0x1e8
[ 1183.617580] shrink_node+0x314/0xc3c
[ 1183.673440] balance_pgdat+0x498/0x914
[ 1183.730115] kswapd+0x304/0x398
[ 1183.784757] kthread+0x174/0x178
[ 1183.839371] start_kernel_thread+0x10/0x14
[ 1183.947992] write to 0xc121b328 of 8 bytes by task 1619 on cpu 1:
[ 1184.002593] zswap_update_total_size+0x58/0xe8
[ 1184.058037] zswap_entry_free+0xdc/0x1c0
[ 1184.113370] zswap_load+0x190/0x19c
[ 1184.167695] swap_read_folio+0xbc/0x450
[ 1184.223285] swap_cluster_readahead+0x2f8/0x338
[ 1184.278473] swapin_readahead+0x430/0x438
[ 1184.333386] do_swap_page+0x1e0/0x9bc
[ 1184.388168] handle_mm_fault+0xecc/0xed0
[ 1184.443913] ___do_page_fault+0x4d8/0x630
[ 1184.499751] do_page_fault+0x28/0x40
[ 1184.554853] DataAccess_virt+0x124/0x17c
[ 1184.663890] Reported by Kernel Concurrency Sanitizer on:
[ 1184.717341] CPU: 1 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1184.772860] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1184.827366] ==================================================================
[ 1190.455160] ==================================================================
[ 1190.509181] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1190.616279] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1190.671318] zswap_update_total_size+0x58/0xe8
[ 1190.726030] zswap_entry_free+0xdc/0x1c0
[ 1190.781260] zswap_load+0x190/0x19c
[ 1190.835946] swap_read_folio+0xbc/0x450
[ 1190.890448] swap_cluster_readahead+0x2f8/0x338
[ 1190.945200] swapin_readahead+0x430/0x438
[ 1191.000452] do_swap_page+0x1e0/0x9bc
[ 1191.055327] handle_mm_fault+0xecc/0xed0
[ 1191.110193] ___do_page_fault+0x4d8/0x630
[ 1191.166183] do_page_fault+0x28/0x40
[ 1191.220277] DataAccess_virt+0x124/0x17c
[ 1191.328296] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1191.383248] zswap_store+0x118/0xa18
[ 1191.439465] swap_writepage+0x4c/0xe8
[ 1191.493796] pageout+0x1dc/0x304
[ 1191.548296] shrink_folio_list+0xa70/0xd28
[ 1191.603645] evict_folios+0xcc0/0x1204
[ 1191.658098] try_to_shrink_lruvec+0x214/0x2f0
[ 1191.712976] shrink_one+0x104/0x1e8
[ 1191.768774] shrink_node+0x314/0xc3c
[ 1191.823924] balance_pgdat+0x498/0x914
[ 1191.878609] kswapd+0x304/0x398
[ 1191.933283] kthread+0x174/0x178
[ 1191.988300] start_kernel_thread+0x10/0x14
[ 1192.097058] Reported by Kernel Concurrency Sanitizer on:
[ 1192.150417] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1192.203938] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1192.258910] ==================================================================
[ 1203.342040] ==================================================================
[ 1203.396067] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1203.503547] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1203.557855] zswap_store+0x118/0xa18
[ 1203.612576] swap_writepage+0x4c/0xe8
[ 1203.666931] pageout+0x1dc/0x304
[ 1203.721970] shrink_folio_list+0xa70/0xd28
[ 1203.776637] evict_folios+0xcc0/0x1204
[ 1203.831039] try_to_shrink_lruvec+0x214/0x2f0
[ 1203.886009] shrink_one+0x104/0x1e8
[ 1203.940864] shrink_node+0x314/0xc3c
[ 1203.996775] balance_pgdat+0x498/0x914
[ 1204.053002] kswapd+0x304/0x398
[ 1204.107500] kthread+0x174/0x178
[ 1204.162461] start_kernel_thread+0x10/0x14
[ 1204.269324] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1204.323962] zswap_update_total_size+0x58/0xe8
[ 1204.378630] zswap_entry_free+0xdc/0x1c0
[ 1204.433175] zswap_load+0x190/0x19c
[ 1204.488474] swap_read_folio+0xbc/0x450
[ 1204.542800] swap_cluster_readahead+0x2f8/0x338
[ 1204.597291] swapin_readahead+0x430/0x438
[ 1204.651656] do_swap_page+0x1e0/0x9bc
[ 1204.706654] handle_mm_fault+0xecc/0xed0
[ 1204.760974] ___do_page_fault+0x4d8/0x630
[ 1204.815926] do_page_fault+0x28/0x40
[ 1204.870354] DataAccess_virt+0x124/0x17c
[ 1204.979137] Reported by Kernel Concurrency Sanitizer on:
[ 1205.032170] CPU: 0 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1205.085728] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1205.140017] ==================================================================
[ 1206.640937] ==================================================================
[ 1206.694993] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1206.801946] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1206.856508] zswap_update_total_size+0x58/0xe8
[ 1206.911132] zswap_entry_free+0xdc/0x1c0
[ 1206.965843] zswap_load+0x190/0x19c
[ 1207.020101] swap_read_folio+0xbc/0x450
[ 1207.075221] swap_cluster_readahead+0x2f8/0x338
[ 1207.130431] swapin_readahead+0x430/0x438
[ 1207.184750] do_swap_page+0x1e0/0x9bc
[ 1207.239188] handle_mm_fault+0xecc/0xed0
[ 1207.294227] ___do_page_fault+0x4d8/0x630
[ 1207.349077] do_page_fault+0x28/0x40
[ 1207.404162] DataAccess_virt+0x124/0x17c
[ 1207.512153] read to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1207.566528] zswap_store+0x118/0xa18
[ 1207.620922] swap_writepage+0x4c/0xe8
[ 1207.675291] pageout+0x1dc/0x304
[ 1207.729477] shrink_folio_list+0xa70/0xd28
[ 1207.785130] evict_folios+0xcc0/0x1204
[ 1207.841011] try_to_shrink_lruvec+0x214/0x2f0
[ 1207.895916] shrink_one+0x104/0x1e8
[ 1207.950438] shrink_node+0x314/0xc3c
[ 1208.005265] balance_pgdat+0x498/0x914
[ 1208.060116] kswapd+0x304/0x398
[ 1208.115036] kthread+0x174/0x178
[ 1208.169594] start_kernel_thread+0x10/0x14
[ 1208.277724] Reported by Kernel Concurrency Sanitizer on:
[ 1208.331348] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1208.384839] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1208.439529] ==================================================================
[ 1213.640903] ==================================================================
[ 1213.695703] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1213.804484] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1213.860459] zswap_store+0x118/0xa18
[ 1213.915658] swap_writepage+0x4c/0xe8
[ 1213.970521] pageout+0x1dc/0x304
[ 1214.025573] shrink_folio_list+0xa70/0xd28
[ 1214.079835] evict_folios+0xcc0/0x1204
[ 1214.134082] try_to_shrink_lruvec+0x214/0x2f0
[ 1214.189919] shrink_one+0x104/0x1e8
[ 1214.246323] shrink_node+0x314/0xc3c
[ 1214.302606] balance_pgdat+0x498/0x914
[ 1214.359039] kswapd+0x304/0x398
[ 1214.415259] kthread+0x174/0x178
[ 1214.471274] start_kernel_thread+0x10/0x14
[ 1214.581789] write to 0xc121b328 of 8 bytes by task 1619 on cpu 1:
[ 1214.637849] zswap_update_total_size+0x58/0xe8
[ 1214.694311] zswap_entry_free+0xdc/0x1c0
[ 1214.750697] zswap_load+0x190/0x19c
[ 1214.806815] swap_read_folio+0xbc/0x450
[ 1214.862958] swap_cluster_readahead+0x2f8/0x338
[ 1214.919292] swapin_readahead+0x430/0x438
[ 1214.975554] do_swap_page+0x1e0/0x9bc
[ 1215.031737] handle_mm_fault+0xecc/0xed0
[ 1215.088003] ___do_page_fault+0x4d8/0x630
[ 1215.144352] do_page_fault+0x28/0x40
[ 1215.200613] DataAccess_virt+0x124/0x17c
[ 1215.311446] Reported by Kernel Concurrency Sanitizer on:
[ 1215.366431] CPU: 1 PID: 1619 Comm: stress Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1215.421814] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1215.478075] ==================================================================
[ 1218.273217] ==================================================================
[ 1218.328009] BUG: KCSAN: data-race in zswap_update_total_size / zswap_update_total_size
[ 1218.435905] write to 0xc121b328 of 8 bytes by task 1619 on cpu 0:
[ 1218.490496] zswap_update_total_size+0x58/0xe8
[ 1218.545503] zswap_store+0x5a8/0xa18
[ 1218.601334] swap_writepage+0x4c/0xe8
[ 1218.656924] pageout+0x1dc/0x304
[ 1218.711641] shrink_folio_list+0xa70/0xd28
[ 1218.768359] evict_folios+0xcc0/0x1204
[ 1218.823335] try_to_shrink_lruvec+0x214/0x2f0
[ 1218.878309] shrink_one+0x104/0x1e8
[ 1218.933755] shrink_node+0x314/0xc3c
[ 1218.989790] do_try_to_free_pages+0x500/0x7e4
[ 1219.045988] try_to_free_pages+0x150/0x18c
[ 1219.100646] __alloc_pages+0x460/0x8dc
[ 1219.155704] __read_swap_cache_async+0xd0/0x24c
[ 1219.210859] swap_cluster_readahead+0x2cc/0x338
[ 1219.266254] swapin_readahead+0x430/0x438
[ 1219.321160] do_swap_page+0x1e0/0x9bc
[ 1219.375680] handle_mm_fault+0xecc/0xed0
[ 1219.431293] ___do_page_fault+0x4d8/0x630
[ 1219.486916] do_page_fault+0x28/0x40
[ 1219.541880] DataAccess_virt+0x124/0x17c
[ 1219.651735] write to 0xc121b328 of 8 bytes by task 40 on cpu 1:
[ 1219.707148] zswap_update_total_size+0x58/0xe8
[ 1219.763713] zswap_store+0x5a8/0xa18
[ 1219.820142] swap_writepage+0x4c/0xe8
[ 1219.875386] pageout+0x1dc/0x304
[ 1219.931246] shrink_folio_list+0xa70/0xd28
[ 1219.986528] evict_folios+0xcc0/0x1204
[ 1220.040133] try_to_shrink_lruvec+0x214/0x2f0
[ 1220.094196] shrink_one+0x104/0x1e8
[ 1220.147543] shrink_node+0x314/0xc3c
[ 1220.200613] balance_pgdat+0x498/0x914
[ 1220.253663] kswapd+0x304/0x398
[ 1220.305693] kthread+0x174/0x178
[ 1220.357259] start_kernel_thread+0x10/0x14
[ 1220.460634] Reported by Kernel Concurrency Sanitizer on:
[ 1220.512814] CPU: 1 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1220.565806] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1220.619024] ==================================================================
[ 1220.909835] ==================================================================
[ 1220.964030] BUG: KCSAN: data-race in zswap_store / zswap_update_total_size
[ 1221.072982] write to 0xc121b328 of 8 bytes by task 1620 on cpu 1:
[ 1221.128360] zswap_update_total_size+0x58/0xe8
[ 1221.184098] zswap_entry_free+0xdc/0x1c0
[ 1221.239507] zswap_load+0x190/0x19c
[ 1221.295278] swap_read_folio+0xbc/0x450
[ 1221.349882] swap_cluster_readahead+0x2f8/0x338
[ 1221.404828] swapin_readahead+0x430/0x438
[ 1221.459969] do_swap_page+0x1e0/0x9bc
[ 1221.514717] handle_mm_fault+0xecc/0xed0
[ 1221.569478] ___do_page_fault+0x4d8/0x630
[ 1221.624290] do_page_fault+0x28/0x40
[ 1221.679550] DataAccess_virt+0x124/0x17c
[ 1221.788426] read to 0xc121b328 of 8 bytes by task 40 on cpu 0:
[ 1221.843562] zswap_store+0x118/0xa18
[ 1221.898855] swap_writepage+0x4c/0xe8
[ 1221.953838] pageout+0x1dc/0x304
[ 1222.008062] shrink_folio_list+0xa70/0xd28
[ 1222.062928] evict_folios+0xcc0/0x1204
[ 1222.116088] try_to_shrink_lruvec+0x214/0x2f0
[ 1222.169817] shrink_one+0x104/0x1e8
[ 1222.222571] shrink_node+0x314/0xc3c
[ 1222.274443] balance_pgdat+0x498/0x914
[ 1222.326101] kswapd+0x304/0x398
[ 1222.378276] kthread+0x174/0x178
[ 1222.429440] start_kernel_thread+0x10/0x14
[ 1222.531455] Reported by Kernel Concurrency Sanitizer on:
[ 1222.582721] CPU: 0 PID: 40 Comm: kswapd0 Not tainted 6.9.0-rc4-PMacG4-dirty #10
[ 1222.635180] Hardware name: PowerMac3,6 7455 0x80010303 PowerMac
[ 1222.688017] ==================================================================
* [PATCH 2/5] ext4: Convert bd_buddy_page to bd_buddy_folio
2024-04-16 17:28 8% ` [PATCH 1/5] ext4: Convert bd_bitmap_page to bd_bitmap_folio Matthew Wilcox (Oracle)
@ 2024-04-16 17:28 8% ` Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2024-04-16 17:28 UTC (permalink / raw)
To: Theodore Ts'o
Cc: Matthew Wilcox (Oracle), Andreas Dilger, linux-ext4, linux-fsdevel
There is no need to make this a multi-page folio, so leave all the
infrastructure around it in pages. But since we're locking it, playing
with its refcount and checking whether it's uptodate, it needs to move
to the folio API.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ext4/mballoc.c | 91 +++++++++++++++++++++++------------------------
fs/ext4/mballoc.h | 2 +-
2 files changed, 46 insertions(+), 47 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 91c015fda370..761d8d15b205 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1439,7 +1439,7 @@ static int ext4_mb_init_cache(struct page *page, char *incore, gfp_t gfp)
* Lock the buddy and bitmap pages. This make sure other parallel init_group
* on the same buddy page doesn't happen whild holding the buddy page lock.
* Return locked buddy and bitmap pages on e4b struct. If buddy and bitmap
- * are on the same page e4b->bd_buddy_page is NULL and return value is 0.
+ * are on the same page e4b->bd_buddy_folio is NULL and return value is 0.
*/
static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
ext4_group_t group, struct ext4_buddy *e4b, gfp_t gfp)
@@ -1447,10 +1447,9 @@ static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
struct inode *inode = EXT4_SB(sb)->s_buddy_cache;
int block, pnum, poff;
int blocks_per_page;
- struct page *page;
struct folio *folio;
- e4b->bd_buddy_page = NULL;
+ e4b->bd_buddy_folio = NULL;
e4b->bd_bitmap_folio = NULL;
blocks_per_page = PAGE_SIZE / sb->s_blocksize;
@@ -1476,11 +1475,12 @@ static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
}
/* blocks_per_page == 1, hence we need another page for the buddy */
- page = find_or_create_page(inode->i_mapping, block + 1, gfp);
- if (!page)
- return -ENOMEM;
- BUG_ON(page->mapping != inode->i_mapping);
- e4b->bd_buddy_page = page;
+ folio = __filemap_get_folio(inode->i_mapping, block + 1,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+ BUG_ON(folio->mapping != inode->i_mapping);
+ e4b->bd_buddy_folio = folio;
return 0;
}
@@ -1490,9 +1490,9 @@ static void ext4_mb_put_buddy_page_lock(struct ext4_buddy *e4b)
folio_unlock(e4b->bd_bitmap_folio);
folio_put(e4b->bd_bitmap_folio);
}
- if (e4b->bd_buddy_page) {
- unlock_page(e4b->bd_buddy_page);
- put_page(e4b->bd_buddy_page);
+ if (e4b->bd_buddy_folio) {
+ folio_unlock(e4b->bd_buddy_folio);
+ folio_put(e4b->bd_buddy_folio);
}
}
@@ -1507,7 +1507,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
struct ext4_group_info *this_grp;
struct ext4_buddy e4b;
- struct page *page;
struct folio *folio;
int ret = 0;
@@ -1544,7 +1543,7 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
goto err;
}
- if (e4b.bd_buddy_page == NULL) {
+ if (e4b.bd_buddy_folio == NULL) {
/*
* If both the bitmap and buddy are in
* the same page we don't need to force
@@ -1554,11 +1553,11 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
goto err;
}
/* init buddy cache */
- page = e4b.bd_buddy_page;
- ret = ext4_mb_init_cache(page, e4b.bd_bitmap, gfp);
+ folio = e4b.bd_buddy_folio;
+ ret = ext4_mb_init_cache(&folio->page, e4b.bd_bitmap, gfp);
if (ret)
goto err;
- if (!PageUptodate(page)) {
+ if (!folio_test_uptodate(folio)) {
ret = -EIO;
goto err;
}
@@ -1580,7 +1579,6 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
int block;
int pnum;
int poff;
- struct page *page;
struct folio *folio;
int ret;
struct ext4_group_info *grp;
@@ -1599,7 +1597,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
e4b->bd_info = grp;
e4b->bd_sb = sb;
e4b->bd_group = group;
- e4b->bd_buddy_page = NULL;
+ e4b->bd_buddy_folio = NULL;
e4b->bd_bitmap_folio = NULL;
if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
@@ -1665,7 +1663,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
goto err;
}
- /* Pages marked accessed already */
+ /* Folios marked accessed already */
e4b->bd_bitmap_folio = folio;
e4b->bd_bitmap = folio_address(folio) + (poff * sb->s_blocksize);
@@ -1673,48 +1671,49 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
pnum = block / blocks_per_page;
poff = block % blocks_per_page;
- page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
- if (page == NULL || !PageUptodate(page)) {
- if (page)
- put_page(page);
- page = find_or_create_page(inode->i_mapping, pnum, gfp);
- if (page) {
- if (WARN_RATELIMIT(page->mapping != inode->i_mapping,
- "ext4: buddy bitmap's page->mapping != inode->i_mapping\n")) {
+ folio = __filemap_get_folio(inode->i_mapping, pnum, FGP_ACCESSED, 0);
+ if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
+ if (!IS_ERR(folio))
+ folio_put(folio);
+ folio = __filemap_get_folio(inode->i_mapping, pnum,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
+ if (!IS_ERR(folio)) {
+ if (WARN_RATELIMIT(folio->mapping != inode->i_mapping,
+ "ext4: buddy bitmap's mapping != inode->i_mapping\n")) {
/* should never happen */
- unlock_page(page);
+ folio_unlock(folio);
ret = -EINVAL;
goto err;
}
- if (!PageUptodate(page)) {
- ret = ext4_mb_init_cache(page, e4b->bd_bitmap,
+ if (!folio_test_uptodate(folio)) {
+ ret = ext4_mb_init_cache(&folio->page, e4b->bd_bitmap,
gfp);
if (ret) {
- unlock_page(page);
+ folio_unlock(folio);
goto err;
}
}
- unlock_page(page);
+ folio_unlock(folio);
}
}
- if (page == NULL) {
- ret = -ENOMEM;
+ if (IS_ERR(folio)) {
+ ret = PTR_ERR(folio);
goto err;
}
- if (!PageUptodate(page)) {
+ if (!folio_test_uptodate(folio)) {
ret = -EIO;
goto err;
}
- /* Pages marked accessed already */
- e4b->bd_buddy_page = page;
- e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize);
+ /* Folios marked accessed already */
+ e4b->bd_buddy_folio = folio;
+ e4b->bd_buddy = folio_address(folio) + (poff * sb->s_blocksize);
return 0;
err:
- if (page)
- put_page(page);
+ if (folio)
+ folio_put(folio);
if (e4b->bd_bitmap_folio)
folio_put(e4b->bd_bitmap_folio);
@@ -1733,8 +1732,8 @@ static void ext4_mb_unload_buddy(struct ext4_buddy *e4b)
{
if (e4b->bd_bitmap_folio)
folio_put(e4b->bd_bitmap_folio);
- if (e4b->bd_buddy_page)
- put_page(e4b->bd_buddy_page);
+ if (e4b->bd_buddy_folio)
+ folio_put(e4b->bd_buddy_folio);
}
@@ -2155,7 +2154,7 @@ static void ext4_mb_use_best_found(struct ext4_allocation_context *ac,
*/
ac->ac_bitmap_page = &e4b->bd_bitmap_folio->page;
get_page(ac->ac_bitmap_page);
- ac->ac_buddy_page = e4b->bd_buddy_page;
+ ac->ac_buddy_page = &e4b->bd_buddy_folio->page;
get_page(ac->ac_buddy_page);
/* store last allocated for subsequent stream allocation */
if (ac->ac_flags & EXT4_MB_STREAM_ALLOC) {
@@ -3888,7 +3887,7 @@ static void ext4_free_data_in_buddy(struct super_block *sb,
/* No more items in the per group rb tree
* balance refcounts from ext4_mb_free_metadata()
*/
- put_page(e4b.bd_buddy_page);
+ folio_put(e4b.bd_buddy_folio);
folio_put(e4b.bd_bitmap_folio);
}
ext4_unlock_group(sb, entry->efd_group);
@@ -6312,7 +6311,7 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
BUG_ON(!ext4_handle_valid(handle));
BUG_ON(e4b->bd_bitmap_folio == NULL);
- BUG_ON(e4b->bd_buddy_page == NULL);
+ BUG_ON(e4b->bd_buddy_folio == NULL);
new_node = &new_entry->efd_node;
cluster = new_entry->efd_start_cluster;
@@ -6323,7 +6322,7 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
* otherwise we'll refresh it from
* on-disk bitmap and lose not-yet-available
* blocks */
- get_page(e4b->bd_buddy_page);
+ folio_get(e4b->bd_buddy_folio);
folio_get(e4b->bd_bitmap_folio);
}
while (*n) {
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 4725e5c9e482..720fb277abd2 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -215,7 +215,7 @@ struct ext4_allocation_context {
#define AC_STATUS_BREAK 3
struct ext4_buddy {
- struct page *bd_buddy_page;
+ struct folio *bd_buddy_folio;
void *bd_buddy;
struct folio *bd_bitmap_folio;
void *bd_bitmap;
--
2.43.0
* [PATCH 1/5] ext4: Convert bd_bitmap_page to bd_bitmap_folio
@ 2024-04-16 17:28 8% ` Matthew Wilcox (Oracle)
2024-04-16 17:28 8% ` [PATCH 2/5] ext4: Convert bd_buddy_page to bd_buddy_folio Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2024-04-16 17:28 UTC (permalink / raw)
To: Theodore Ts'o
Cc: Matthew Wilcox (Oracle), Andreas Dilger, linux-ext4, linux-fsdevel
There is no need to make this a multi-page folio, so leave all the
infrastructure around it in pages. But since we're locking it, playing
with its refcount and checking whether it's uptodate, it needs to move
to the folio API.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ext4/mballoc.c | 98 ++++++++++++++++++++++++-----------------------
fs/ext4/mballoc.h | 2 +-
2 files changed, 52 insertions(+), 48 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 12b3f196010b..91c015fda370 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1448,9 +1448,10 @@ static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
int block, pnum, poff;
int blocks_per_page;
struct page *page;
+ struct folio *folio;
e4b->bd_buddy_page = NULL;
- e4b->bd_bitmap_page = NULL;
+ e4b->bd_bitmap_folio = NULL;
blocks_per_page = PAGE_SIZE / sb->s_blocksize;
/*
@@ -1461,12 +1462,13 @@ static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
block = group * 2;
pnum = block / blocks_per_page;
poff = block % blocks_per_page;
- page = find_or_create_page(inode->i_mapping, pnum, gfp);
- if (!page)
- return -ENOMEM;
- BUG_ON(page->mapping != inode->i_mapping);
- e4b->bd_bitmap_page = page;
- e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
+ folio = __filemap_get_folio(inode->i_mapping, pnum,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+ BUG_ON(folio->mapping != inode->i_mapping);
+ e4b->bd_bitmap_folio = folio;
+ e4b->bd_bitmap = folio_address(folio) + (poff * sb->s_blocksize);
if (blocks_per_page >= 2) {
/* buddy and bitmap are on the same page */
@@ -1484,9 +1486,9 @@ static int ext4_mb_get_buddy_page_lock(struct super_block *sb,
static void ext4_mb_put_buddy_page_lock(struct ext4_buddy *e4b)
{
- if (e4b->bd_bitmap_page) {
- unlock_page(e4b->bd_bitmap_page);
- put_page(e4b->bd_bitmap_page);
+ if (e4b->bd_bitmap_folio) {
+ folio_unlock(e4b->bd_bitmap_folio);
+ folio_put(e4b->bd_bitmap_folio);
}
if (e4b->bd_buddy_page) {
unlock_page(e4b->bd_buddy_page);
@@ -1506,6 +1508,7 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
struct ext4_group_info *this_grp;
struct ext4_buddy e4b;
struct page *page;
+ struct folio *folio;
int ret = 0;
might_sleep();
@@ -1532,11 +1535,11 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp)
goto err;
}
- page = e4b.bd_bitmap_page;
- ret = ext4_mb_init_cache(page, NULL, gfp);
+ folio = e4b.bd_bitmap_folio;
+ ret = ext4_mb_init_cache(&folio->page, NULL, gfp);
if (ret)
goto err;
- if (!PageUptodate(page)) {
+ if (!folio_test_uptodate(folio)) {
ret = -EIO;
goto err;
}
@@ -1578,6 +1581,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
int pnum;
int poff;
struct page *page;
+ struct folio *folio;
int ret;
struct ext4_group_info *grp;
struct ext4_sb_info *sbi = EXT4_SB(sb);
@@ -1596,7 +1600,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
e4b->bd_sb = sb;
e4b->bd_group = group;
e4b->bd_buddy_page = NULL;
- e4b->bd_bitmap_page = NULL;
+ e4b->bd_bitmap_folio = NULL;
if (unlikely(EXT4_MB_GRP_NEED_INIT(grp))) {
/*
@@ -1617,53 +1621,53 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
pnum = block / blocks_per_page;
poff = block % blocks_per_page;
- /* we could use find_or_create_page(), but it locks page
- * what we'd like to avoid in fast path ... */
- page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
- if (page == NULL || !PageUptodate(page)) {
- if (page)
+ /* Avoid locking the folio in the fast path ... */
+ folio = __filemap_get_folio(inode->i_mapping, pnum, FGP_ACCESSED, 0);
+ if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
+ if (!IS_ERR(folio))
/*
- * drop the page reference and try
- * to get the page with lock. If we
+ * drop the folio reference and try
+ * to get the folio with lock. If we
* are not uptodate that implies
- * somebody just created the page but
- * is yet to initialize the same. So
+ * somebody just created the folio but
+ * is yet to initialize it. So
* wait for it to initialize.
*/
- put_page(page);
- page = find_or_create_page(inode->i_mapping, pnum, gfp);
- if (page) {
- if (WARN_RATELIMIT(page->mapping != inode->i_mapping,
- "ext4: bitmap's paging->mapping != inode->i_mapping\n")) {
+ folio_put(folio);
+ folio = __filemap_get_folio(inode->i_mapping, pnum,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
+ if (!IS_ERR(folio)) {
+ if (WARN_RATELIMIT(folio->mapping != inode->i_mapping,
+ "ext4: bitmap's mapping != inode->i_mapping\n")) {
/* should never happen */
- unlock_page(page);
+ folio_unlock(folio);
ret = -EINVAL;
goto err;
}
- if (!PageUptodate(page)) {
- ret = ext4_mb_init_cache(page, NULL, gfp);
+ if (!folio_test_uptodate(folio)) {
+ ret = ext4_mb_init_cache(&folio->page, NULL, gfp);
if (ret) {
- unlock_page(page);
+ folio_unlock(folio);
goto err;
}
- mb_cmp_bitmaps(e4b, page_address(page) +
+ mb_cmp_bitmaps(e4b, folio_address(folio) +
(poff * sb->s_blocksize));
}
- unlock_page(page);
+ folio_unlock(folio);
}
}
- if (page == NULL) {
- ret = -ENOMEM;
+ if (IS_ERR(folio)) {
+ ret = PTR_ERR(folio);
goto err;
}
- if (!PageUptodate(page)) {
+ if (!folio_test_uptodate(folio)) {
ret = -EIO;
goto err;
}
/* Pages marked accessed already */
- e4b->bd_bitmap_page = page;
- e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
+ e4b->bd_bitmap_folio = folio;
+ e4b->bd_bitmap = folio_address(folio) + (poff * sb->s_blocksize);
block++;
pnum = block / blocks_per_page;
@@ -1711,8 +1715,8 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
err:
if (page)
put_page(page);
- if (e4b->bd_bitmap_page)
- put_page(e4b->bd_bitmap_page);
+ if (e4b->bd_bitmap_folio)
+ folio_put(e4b->bd_bitmap_folio);
e4b->bd_buddy = NULL;
e4b->bd_bitmap = NULL;
@@ -1727,8 +1731,8 @@ static int ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
static void ext4_mb_unload_buddy(struct ext4_buddy *e4b)
{
- if (e4b->bd_bitmap_page)
- put_page(e4b->bd_bitmap_page);
+ if (e4b->bd_bitmap_folio)
+ folio_put(e4b->bd_bitmap_folio);
if (e4b->bd_buddy_page)
put_page(e4b->bd_buddy_page);
}
@@ -2149,7 +2153,7 @@ static void ext4_mb_use_best_found(struct ext4_allocation_context *ac,
* double allocate blocks. The reference is dropped
* in ext4_mb_release_context
*/
- ac->ac_bitmap_page = e4b->bd_bitmap_page;
+ ac->ac_bitmap_page = &e4b->bd_bitmap_folio->page;
get_page(ac->ac_bitmap_page);
ac->ac_buddy_page = e4b->bd_buddy_page;
get_page(ac->ac_buddy_page);
@@ -3885,7 +3889,7 @@ static void ext4_free_data_in_buddy(struct super_block *sb,
* balance refcounts from ext4_mb_free_metadata()
*/
put_page(e4b.bd_buddy_page);
- put_page(e4b.bd_bitmap_page);
+ folio_put(e4b.bd_bitmap_folio);
}
ext4_unlock_group(sb, entry->efd_group);
ext4_mb_unload_buddy(&e4b);
@@ -6307,7 +6311,7 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
struct rb_node *parent = NULL, *new_node;
BUG_ON(!ext4_handle_valid(handle));
- BUG_ON(e4b->bd_bitmap_page == NULL);
+ BUG_ON(e4b->bd_bitmap_folio == NULL);
BUG_ON(e4b->bd_buddy_page == NULL);
new_node = &new_entry->efd_node;
@@ -6320,7 +6324,7 @@ ext4_mb_free_metadata(handle_t *handle, struct ext4_buddy *e4b,
* on-disk bitmap and lose not-yet-available
* blocks */
get_page(e4b->bd_buddy_page);
- get_page(e4b->bd_bitmap_page);
+ folio_get(e4b->bd_bitmap_folio);
}
while (*n) {
parent = *n;
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 56938532b4ce..4725e5c9e482 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -217,7 +217,7 @@ struct ext4_allocation_context {
struct ext4_buddy {
struct page *bd_buddy_page;
void *bd_buddy;
- struct page *bd_bitmap_page;
+ struct folio *bd_bitmap_folio;
void *bd_bitmap;
struct ext4_group_info *bd_info;
struct super_block *bd_sb;
--
2.43.0
* Re: riscv32 EXT4 splat, 6.8 regression?
@ 2024-04-16 8:25 0% ` Christian Brauner
From: Christian Brauner @ 2024-04-16 8:25 UTC (permalink / raw)
To: Björn Töpel, Nam Cao, Mike Rapoport
Cc: Andreas Dilger, Al Viro, linux-fsdevel, Jan Kara,
Linux Kernel Mailing List, linux-riscv, Theodore Ts'o,
Ext4 Developers List, Conor Dooley, Matthew Wilcox (Oracle),
Anders Roxell
[Adding Mike who's knowledgeable in this area]
On Mon, Apr 15, 2024 at 06:04:50PM +0200, Björn Töpel wrote:
> Christian Brauner <brauner@kernel.org> writes:
>
> > On Sun, Apr 14, 2024 at 04:08:11PM +0200, Björn Töpel wrote:
> >> Andreas Dilger <adilger@dilger.ca> writes:
> >>
> >> > On Apr 13, 2024, at 8:15 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> >> >>
> >> >> On Sat, Apr 13, 2024 at 07:46:03PM -0600, Andreas Dilger wrote:
> >> >>
> >> >>> As to whether the 0xfffff000 address itself is valid for riscv32 is
> >> >>> outside my realm, but given that RAM is cheap it doesn't seem unlikely
> >> >>> to have 4GB+ of RAM and want to use it all. The riscv32 might consider
> >> >>> reserving this page address from allocation to avoid similar issues in
> >> >>> other parts of the code, as is done with the NULL/0 page address.
> >> >>
> >> >> Not a chance. *Any* page mapped there is a serious bug on any 32bit
> >> >> box. Recall what ERR_PTR() is...
> >> >>
> >> >> On any architecture the virtual addresses in range (unsigned long)-512..
> >> >> (unsigned long)-1 must never resolve to valid kernel objects.
> >> >> In other words, any kind of wraparound here is asking for an oops on
> >> >> attempts to access the elements of buffer - kernel dereference of
> >> >> (char *)0xfffff000 on a 32bit box is already a bug.
> >> >>
> >> >> It might be getting an invalid pointer, but arithmetical overflows
> >> >> are irrelevant.
> >> >
> >> > The original bug report stated that search_buf = 0xfffff000 on entry,
> >> > and I'd quoted that at the start of my email:
> >> >
> >> > On Apr 12, 2024, at 8:57 AM, Björn Töpel <bjorn@kernel.org> wrote:
> >> >> What I see in ext4_search_dir() is that search_buf is 0xfffff000, and at
> >> >> some point the address wraps to zero, and boom. I doubt that 0xfffff000
> >> >> is a sane address.
> >> >
> >> > Now that you mention ERR_PTR() it definitely makes sense that this last
> >> > page HAS to be excluded.
> >> >
> >> > So some other bug is passing the bad pointer to this code before this
> >> > error, or the arch is not correctly excluding this page from allocation.
> >>
> >> Yeah, something is off for sure.
> >>
> >> (FWIW, I manage to hit this for Linus' master as well.)
> >>
> >> I added a print (close to trace_mm_filemap_add_to_page_cache()), and for
> >> this BT:
> >>
> >> [<c01e8b34>] __filemap_add_folio+0x322/0x508
> >> [<c01e8d6e>] filemap_add_folio+0x54/0xce
> >> [<c01ea076>] __filemap_get_folio+0x156/0x2aa
> >> [<c02df346>] __getblk_slow+0xcc/0x302
> >> [<c02df5f2>] bdev_getblk+0x76/0x7a
> >> [<c03519da>] ext4_getblk+0xbc/0x2c4
> >> [<c0351cc2>] ext4_bread_batch+0x56/0x186
> >> [<c036bcaa>] __ext4_find_entry+0x156/0x578
> >> [<c036c152>] ext4_lookup+0x86/0x1f4
> >> [<c02a3252>] __lookup_slow+0x8e/0x142
> >> [<c02a6d70>] walk_component+0x104/0x174
> >> [<c02a793c>] path_lookupat+0x78/0x182
> >> [<c02a8c7c>] filename_lookup+0x96/0x158
> >> [<c02a8d76>] kern_path+0x38/0x56
> >> [<c0c1cb7a>] init_mount+0x5c/0xac
> >> [<c0c2ba4c>] devtmpfs_mount+0x44/0x7a
> >> [<c0c01cce>] prepare_namespace+0x226/0x27c
> >> [<c0c011c6>] kernel_init_freeable+0x286/0x2a8
> >> [<c0b97ab8>] kernel_init+0x2a/0x156
> >> [<c0ba22ca>] ret_from_fork+0xe/0x20
> >>
> >> I get a folio where folio_address(folio) == 0xfffff000 (which is
> >> broken).
> >>
> >> Need to go into the weeds here...
> >
> > I don't see anything obvious that could explain this right away. Did you
> > manage to reproduce this on any other architecture and/or filesystem?
> >
> > Fwiw, iirc there were a bunch of fs/buffer.c changes that came in
> > through the mm/ layer between v6.7 and v6.8 that might also be
> > interesting. But really I'm poking in the dark currently.
>
> Thanks for getting back! Spent some more time on it today.
>
> It seems that the buddy allocator *can* return a page with a VA that can
> wrap (0xfffff000 -- pointed out by Nam and myself).
>
> Further, it seems like riscv32 indeed inserts a page like that to the
> buddy allocator, when the memblock is free'd:
>
> | [<c024961c>] __free_one_page+0x2a4/0x3ea
> | [<c024a448>] __free_pages_ok+0x158/0x3cc
> | [<c024b1a4>] __free_pages_core+0xe8/0x12c
> | [<c0c1435a>] memblock_free_pages+0x1a/0x22
> | [<c0c17676>] memblock_free_all+0x1ee/0x278
> | [<c0c050b0>] mem_init+0x10/0xa4
> | [<c0c1447c>] mm_core_init+0x11a/0x2da
> | [<c0c00bb6>] start_kernel+0x3c4/0x6de
>
> Here, a page with VA 0xfffff000 is added to the freelist. We were just
> lucky (unlucky?) that page was used for the page cache.
>
> A nasty patch like:
> --8<--
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 549e76af8f82..a6a6abbe71b0 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2566,6 +2566,9 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
> void __init memblock_free_pages(struct page *page, unsigned long pfn,
> unsigned int order)
> {
> + if ((long)page_address(page) == 0xfffff000L) {
> + return; // leak it
> + }
>
> if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
> int nid = early_pfn_to_nid(pfn);
> --8<--
>
> ...and it's gone.
>
> I need to think more about what a proper fix is. Regardless; Christian,
> Al, and Ted can all relax. ;-)
>
>
> Björn
On Tue, Apr 16, 2024 at 08:44:17AM +0200, Nam Cao wrote:
> On 2024-04-15 Björn Töpel wrote:
> > Thanks for getting back! Spent some more time on it today.
> >
> > It seems that the buddy allocator *can* return a page with a VA that can
> > wrap (0xfffff000 -- pointed out by Nam and myself).
> >
> > Further, it seems like riscv32 indeed inserts a page like that to the
> > buddy allocator, when the memblock is free'd:
> >
> > | [<c024961c>] __free_one_page+0x2a4/0x3ea
> > | [<c024a448>] __free_pages_ok+0x158/0x3cc
> > | [<c024b1a4>] __free_pages_core+0xe8/0x12c
> > | [<c0c1435a>] memblock_free_pages+0x1a/0x22
> > | [<c0c17676>] memblock_free_all+0x1ee/0x278
> > | [<c0c050b0>] mem_init+0x10/0xa4
> > | [<c0c1447c>] mm_core_init+0x11a/0x2da
> > | [<c0c00bb6>] start_kernel+0x3c4/0x6de
> >
> > Here, a page with VA 0xfffff000 is added to the freelist. We were just
> > lucky (unlucky?) that page was used for the page cache.
>
> I just educated myself about memory mapping last night, so the below
> may be complete nonsense. Take it with a grain of salt.
>
> In riscv's setup_bootmem(), we have this line:
> max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
>
> I think this is the root cause: max_low_pfn indicates the last page
> to be mapped. Problem is: nothing prevents PFN_DOWN(phys_ram_end) from
> getting mapped to the last page (0xfffff000). If max_low_pfn is mapped
> to the last page, we get the reported problem.
>
> There seems to be some code to make sure the last page is not used
> (the call to memblock_set_current_limit() right above this line). It is
> unclear to me why this still lets the problem slip through.
>
> The fix is simple: never let max_low_pfn get mapped to the last page.
> The below patch fixes the problem for me. But I am not entirely sure if
> this is the correct fix, further investigation needed.
>
> Best regards,
> Nam
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index fa34cf55037b..17cab0a52726 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -251,7 +251,8 @@ static void __init setup_bootmem(void)
> }
>
> min_low_pfn = PFN_UP(phys_ram_base);
> - max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
> + max_low_pfn = PFN_DOWN(memblock_get_current_limit());
> + max_pfn = PFN_DOWN(phys_ram_end);
> high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
>
> dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
* Re: riscv32 EXT4 splat, 6.8 regression?
@ 2024-04-16 8:25 0% ` Christian Brauner
0 siblings, 0 replies; 200+ results
From: Christian Brauner @ 2024-04-16 8:25 UTC (permalink / raw)
To: Björn Töpel, Nam Cao, Mike Rapoport
Cc: Andreas Dilger, Al Viro, linux-fsdevel, Jan Kara,
Linux Kernel Mailing List, linux-riscv, Theodore Ts'o,
Ext4 Developers List, Conor Dooley, Matthew Wilcox (Oracle),
Anders Roxell
[Adding Mike who's knowledgeable in this area]
On Mon, Apr 15, 2024 at 06:04:50PM +0200, Björn Töpel wrote:
> Christian Brauner <brauner@kernel.org> writes:
>
> > On Sun, Apr 14, 2024 at 04:08:11PM +0200, Björn Töpel wrote:
> >> Andreas Dilger <adilger@dilger.ca> writes:
> >>
> >> > On Apr 13, 2024, at 8:15 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> >> >>
> >> >> On Sat, Apr 13, 2024 at 07:46:03PM -0600, Andreas Dilger wrote:
> >> >>
> >> >>> As to whether the 0xfffff000 address itself is valid for riscv32 is
> >> >>> outside my realm, but given that RAM is cheap it doesn't seem unlikely
> >> >>> to have 4GB+ of RAM and want to use it all. The riscv32 might consider
> >> >>> reserving this page address from allocation to avoid similar issues in
> >> >>> other parts of the code, as is done with the NULL/0 page address.
> >> >>
> >> >> Not a chance. *Any* page mapped there is a serious bug on any 32bit
> >> >> box. Recall what ERR_PTR() is...
> >> >>
> >> >> On any architecture the virtual addresses in range (unsigned long)-512..
> >> >> (unsigned long)-1 must never resolve to valid kernel objects.
> >> >> In other words, any kind of wraparound here is asking for an oops on
> >> >> attempts to access the elements of buffer - kernel dereference of
> >> >> (char *)0xfffff000 on a 32bit box is already a bug.
> >> >>
> >> >> It might be getting an invalid pointer, but arithmetical overflows
> >> >> are irrelevant.
> >> >
> >> > The original bug report stated that search_buf = 0xfffff000 on entry,
> >> > and I'd quoted that at the start of my email:
> >> >
> >> > On Apr 12, 2024, at 8:57 AM, Björn Töpel <bjorn@kernel.org> wrote:
> >> >> What I see in ext4_search_dir() is that search_buf is 0xfffff000, and at
> >> >> some point the address wraps to zero, and boom. I doubt that 0xfffff000
> >> >> is a sane address.
> >> >
> >> > Now that you mention ERR_PTR() it definitely makes sense that this last
> >> > page HAS to be excluded.
> >> >
> >> > So some other bug is passing the bad pointer to this code before this
> >> > error, or the arch is not correctly excluding this page from allocation.
> >>
> >> Yeah, something is off for sure.
> >>
> >> (FWIW, I manage to hit this for Linus' master as well.)
> >>
> >> I added a print (close to trace_mm_filemap_add_to_page_cache()), and for
> >> this BT:
> >>
> >> [<c01e8b34>] __filemap_add_folio+0x322/0x508
> >> [<c01e8d6e>] filemap_add_folio+0x54/0xce
> >> [<c01ea076>] __filemap_get_folio+0x156/0x2aa
> >> [<c02df346>] __getblk_slow+0xcc/0x302
> >> [<c02df5f2>] bdev_getblk+0x76/0x7a
> >> [<c03519da>] ext4_getblk+0xbc/0x2c4
> >> [<c0351cc2>] ext4_bread_batch+0x56/0x186
> >> [<c036bcaa>] __ext4_find_entry+0x156/0x578
> >> [<c036c152>] ext4_lookup+0x86/0x1f4
> >> [<c02a3252>] __lookup_slow+0x8e/0x142
> >> [<c02a6d70>] walk_component+0x104/0x174
> >> [<c02a793c>] path_lookupat+0x78/0x182
> >> [<c02a8c7c>] filename_lookup+0x96/0x158
> >> [<c02a8d76>] kern_path+0x38/0x56
> >> [<c0c1cb7a>] init_mount+0x5c/0xac
> >> [<c0c2ba4c>] devtmpfs_mount+0x44/0x7a
> >> [<c0c01cce>] prepare_namespace+0x226/0x27c
> >> [<c0c011c6>] kernel_init_freeable+0x286/0x2a8
> >> [<c0b97ab8>] kernel_init+0x2a/0x156
> >> [<c0ba22ca>] ret_from_fork+0xe/0x20
> >>
> >> I get a folio where folio_address(folio) == 0xfffff000 (which is
> >> broken).
> >>
> >> Need to go into the weeds here...
> >
> > I don't see anything obvious that could explain this right away. Did you
> > manage to reproduce this on any other architecture and/or filesystem?
> >
> > Fwiw, iirc there were a bunch of fs/buffer.c changes that came in
> > through the mm/ layer between v6.7 and v6.8 that might also be
> > interesting. But really I'm poking in the dark currently.
>
> Thanks for getting back! Spent some more time on it today.
>
> It seems that the buddy allocator *can* return a page with a VA that can
> wrap (0xfffff000 -- pointed out by Nam and myself).
>
> Further, it seems like riscv32 indeed inserts a page like that to the
> buddy allocator, when the memblock is free'd:
>
> | [<c024961c>] __free_one_page+0x2a4/0x3ea
> | [<c024a448>] __free_pages_ok+0x158/0x3cc
> | [<c024b1a4>] __free_pages_core+0xe8/0x12c
> | [<c0c1435a>] memblock_free_pages+0x1a/0x22
> | [<c0c17676>] memblock_free_all+0x1ee/0x278
> | [<c0c050b0>] mem_init+0x10/0xa4
> | [<c0c1447c>] mm_core_init+0x11a/0x2da
> | [<c0c00bb6>] start_kernel+0x3c4/0x6de
>
> Here, a page with VA 0xfffff000 is added to the freelist. We were just
> lucky (unlucky?) that page was used for the page cache.
>
> A nasty patch like:
> --8<--
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 549e76af8f82..a6a6abbe71b0 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2566,6 +2566,9 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
> void __init memblock_free_pages(struct page *page, unsigned long pfn,
> unsigned int order)
> {
> + if ((long)page_address(page) == 0xfffff000L) {
> + return; // leak it
> + }
>
> if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
> int nid = early_pfn_to_nid(pfn);
> --8<--
>
> ...and it's gone.
>
> I need to think more about what a proper fix is. Regardless; Christian,
> Al, and Ted can all relax. ;-)
>
>
> Björn
On Tue, Apr 16, 2024 at 08:44:17AM +0200, Nam Cao wrote:
> On 2024-04-15 Björn Töpel wrote:
> > Thanks for getting back! Spent some more time on it today.
> >
> > It seems that the buddy allocator *can* return a page with a VA that can
> > wrap (0xfffff000 -- pointed out by Nam and myself).
> >
> > Further, it seems like riscv32 indeed inserts a page like that to the
> > buddy allocator, when the memblock is free'd:
> >
> > | [<c024961c>] __free_one_page+0x2a4/0x3ea
> > | [<c024a448>] __free_pages_ok+0x158/0x3cc
> > | [<c024b1a4>] __free_pages_core+0xe8/0x12c
> > | [<c0c1435a>] memblock_free_pages+0x1a/0x22
> > | [<c0c17676>] memblock_free_all+0x1ee/0x278
> > | [<c0c050b0>] mem_init+0x10/0xa4
> > | [<c0c1447c>] mm_core_init+0x11a/0x2da
> > | [<c0c00bb6>] start_kernel+0x3c4/0x6de
> >
> > Here, a page with VA 0xfffff000 is added to the freelist. We were just
> > lucky (unlucky?) that page was used for the page cache.
>
> I just educated myself about memory mapping last night, so the below
> may be complete nonsense. Take it with a grain of salt.
>
> In riscv's setup_bootmem(), we have this line:
> max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
>
> I think this is the root cause: max_low_pfn indicates the last page
> to be mapped. Problem is: nothing prevents PFN_DOWN(phys_ram_end) from
> getting mapped to the last page (0xfffff000). If max_low_pfn is mapped
> to the last page, we get the reported problem.
>
> There seems to be some code to make sure the last page is not used
> (the call to memblock_set_current_limit() right above this line). It is
> unclear to me why this still lets the problem slip through.
>
> The fix is simple: never let max_low_pfn get mapped to the last page.
> The below patch fixes the problem for me. But I am not entirely sure if
> this is the correct fix, further investigation needed.
>
> Best regards,
> Nam
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index fa34cf55037b..17cab0a52726 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -251,7 +251,8 @@ static void __init setup_bootmem(void)
> }
>
> min_low_pfn = PFN_UP(phys_ram_base);
> - max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
> + max_low_pfn = PFN_DOWN(memblock_get_current_limit());
> + max_pfn = PFN_DOWN(phys_ram_end);
> high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));
>
> dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
* Re: riscv32 EXT4 splat, 6.8 regression?
2024-04-15 13:04 0% ` Christian Brauner
@ 2024-04-15 16:04 0% ` Björn Töpel
-1 siblings, 0 replies; 200+ results
From: Björn Töpel @ 2024-04-15 16:04 UTC (permalink / raw)
To: Christian Brauner
Cc: Andreas Dilger, Al Viro, Nam Cao, linux-fsdevel, Jan Kara,
Linux Kernel Mailing List, linux-riscv, Theodore Ts'o,
Ext4 Developers List, Conor Dooley, Matthew Wilcox (Oracle),
Anders Roxell
Christian Brauner <brauner@kernel.org> writes:
> On Sun, Apr 14, 2024 at 04:08:11PM +0200, Björn Töpel wrote:
>> Andreas Dilger <adilger@dilger.ca> writes:
>>
>> > On Apr 13, 2024, at 8:15 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
>> >>
>> >> On Sat, Apr 13, 2024 at 07:46:03PM -0600, Andreas Dilger wrote:
>> >>
>> >>> As to whether the 0xfffff000 address itself is valid for riscv32 is
>> >>> outside my realm, but given that RAM is cheap it doesn't seem unlikely
>> >>> to have 4GB+ of RAM and want to use it all. The riscv32 might consider
>> >>> reserving this page address from allocation to avoid similar issues in
>> >>> other parts of the code, as is done with the NULL/0 page address.
>> >>
>> >> Not a chance. *Any* page mapped there is a serious bug on any 32bit
>> >> box. Recall what ERR_PTR() is...
>> >>
>> >> On any architecture the virtual addresses in range (unsigned long)-512..
>> >> (unsigned long)-1 must never resolve to valid kernel objects.
>> >> In other words, any kind of wraparound here is asking for an oops on
>> >> attempts to access the elements of buffer - kernel dereference of
>> >> (char *)0xfffff000 on a 32bit box is already a bug.
>> >>
>> >> It might be getting an invalid pointer, but arithmetical overflows
>> >> are irrelevant.
>> >
>> > The original bug report stated that search_buf = 0xfffff000 on entry,
>> > and I'd quoted that at the start of my email:
>> >
>> > On Apr 12, 2024, at 8:57 AM, Björn Töpel <bjorn@kernel.org> wrote:
>> >> What I see in ext4_search_dir() is that search_buf is 0xfffff000, and at
>> >> some point the address wraps to zero, and boom. I doubt that 0xfffff000
>> >> is a sane address.
>> >
>> > Now that you mention ERR_PTR() it definitely makes sense that this last
>> > page HAS to be excluded.
>> >
>> > So some other bug is passing the bad pointer to this code before this
>> > error, or the arch is not correctly excluding this page from allocation.
>>
>> Yeah, something is off for sure.
>>
>> (FWIW, I manage to hit this for Linus' master as well.)
>>
>> I added a print (close to trace_mm_filemap_add_to_page_cache()), and for
>> this BT:
>>
>> [<c01e8b34>] __filemap_add_folio+0x322/0x508
>> [<c01e8d6e>] filemap_add_folio+0x54/0xce
>> [<c01ea076>] __filemap_get_folio+0x156/0x2aa
>> [<c02df346>] __getblk_slow+0xcc/0x302
>> [<c02df5f2>] bdev_getblk+0x76/0x7a
>> [<c03519da>] ext4_getblk+0xbc/0x2c4
>> [<c0351cc2>] ext4_bread_batch+0x56/0x186
>> [<c036bcaa>] __ext4_find_entry+0x156/0x578
>> [<c036c152>] ext4_lookup+0x86/0x1f4
>> [<c02a3252>] __lookup_slow+0x8e/0x142
>> [<c02a6d70>] walk_component+0x104/0x174
>> [<c02a793c>] path_lookupat+0x78/0x182
>> [<c02a8c7c>] filename_lookup+0x96/0x158
>> [<c02a8d76>] kern_path+0x38/0x56
>> [<c0c1cb7a>] init_mount+0x5c/0xac
>> [<c0c2ba4c>] devtmpfs_mount+0x44/0x7a
>> [<c0c01cce>] prepare_namespace+0x226/0x27c
>> [<c0c011c6>] kernel_init_freeable+0x286/0x2a8
>> [<c0b97ab8>] kernel_init+0x2a/0x156
>> [<c0ba22ca>] ret_from_fork+0xe/0x20
>>
>> I get a folio where folio_address(folio) == 0xfffff000 (which is
>> broken).
>>
>> Need to go into the weeds here...
>
> I don't see anything obvious that could explain this right away. Did you
> manage to reproduce this on any other architecture and/or filesystem?
>
> Fwiw, iirc there were a bunch of fs/buffer.c changes that came in
> through the mm/ layer between v6.7 and v6.8 that might also be
> interesting. But really I'm poking in the dark currently.
Thanks for getting back! Spent some more time on it today.
It seems that the buddy allocator *can* return a page with a VA that can
wrap (0xfffff000 -- pointed out by Nam and myself).
Further, it seems like riscv32 indeed inserts a page like that to the
buddy allocator, when the memblock is free'd:
| [<c024961c>] __free_one_page+0x2a4/0x3ea
| [<c024a448>] __free_pages_ok+0x158/0x3cc
| [<c024b1a4>] __free_pages_core+0xe8/0x12c
| [<c0c1435a>] memblock_free_pages+0x1a/0x22
| [<c0c17676>] memblock_free_all+0x1ee/0x278
| [<c0c050b0>] mem_init+0x10/0xa4
| [<c0c1447c>] mm_core_init+0x11a/0x2da
| [<c0c00bb6>] start_kernel+0x3c4/0x6de
Here, a page with VA 0xfffff000 is added to the freelist. We were just
lucky (unlucky?) that page was used for the page cache.
A nasty patch like:
--8<--
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 549e76af8f82..a6a6abbe71b0 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2566,6 +2566,9 @@ void __init set_dma_reserve(unsigned long new_dma_reserve)
void __init memblock_free_pages(struct page *page, unsigned long pfn,
unsigned int order)
{
+ if ((long)page_address(page) == 0xfffff000L) {
+ return; // leak it
+ }
if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
int nid = early_pfn_to_nid(pfn);
--8<--
...and it's gone.
I need to think more about what a proper fix is. Regardless; Christian,
Al, and Ted can all relax. ;-)
Björn
* Re: riscv32 EXT4 splat, 6.8 regression?
2024-04-14 14:08 6% ` Björn Töpel
@ 2024-04-15 13:04 0% ` Christian Brauner
-1 siblings, 0 replies; 200+ results
From: Christian Brauner @ 2024-04-15 13:04 UTC (permalink / raw)
To: Björn Töpel
Cc: Andreas Dilger, Al Viro, Nam Cao, linux-fsdevel, Jan Kara,
Linux Kernel Mailing List, linux-riscv, Theodore Ts'o,
Ext4 Developers List, Conor Dooley, Matthew Wilcox (Oracle)
On Sun, Apr 14, 2024 at 04:08:11PM +0200, Björn Töpel wrote:
> Andreas Dilger <adilger@dilger.ca> writes:
>
> > On Apr 13, 2024, at 8:15 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> >>
> >> On Sat, Apr 13, 2024 at 07:46:03PM -0600, Andreas Dilger wrote:
> >>
> >>> As to whether the 0xfffff000 address itself is valid for riscv32 is
> >>> outside my realm, but given that RAM is cheap it doesn't seem unlikely
> >>> to have 4GB+ of RAM and want to use it all. The riscv32 might consider
> >>> reserving this page address from allocation to avoid similar issues in
> >>> other parts of the code, as is done with the NULL/0 page address.
> >>
> >> Not a chance. *Any* page mapped there is a serious bug on any 32bit
> >> box. Recall what ERR_PTR() is...
> >>
> >> On any architecture the virtual addresses in range (unsigned long)-512..
> >> (unsigned long)-1 must never resolve to valid kernel objects.
> >> In other words, any kind of wraparound here is asking for an oops on
> >> attempts to access the elements of buffer - kernel dereference of
> >> (char *)0xfffff000 on a 32bit box is already a bug.
> >>
> >> It might be getting an invalid pointer, but arithmetical overflows
> >> are irrelevant.
> >
> > The original bug report stated that search_buf = 0xfffff000 on entry,
> > and I'd quoted that at the start of my email:
> >
> > On Apr 12, 2024, at 8:57 AM, Björn Töpel <bjorn@kernel.org> wrote:
> >> What I see in ext4_search_dir() is that search_buf is 0xfffff000, and at
> >> some point the address wraps to zero, and boom. I doubt that 0xfffff000
> >> is a sane address.
> >
> > Now that you mention ERR_PTR() it definitely makes sense that this last
> > page HAS to be excluded.
> >
> > So some other bug is passing the bad pointer to this code before this
> > error, or the arch is not correctly excluding this page from allocation.
>
> Yeah, something is off for sure.
>
> (FWIW, I manage to hit this for Linus' master as well.)
>
> I added a print (close to trace_mm_filemap_add_to_page_cache()), and for
> this BT:
>
> [<c01e8b34>] __filemap_add_folio+0x322/0x508
> [<c01e8d6e>] filemap_add_folio+0x54/0xce
> [<c01ea076>] __filemap_get_folio+0x156/0x2aa
> [<c02df346>] __getblk_slow+0xcc/0x302
> [<c02df5f2>] bdev_getblk+0x76/0x7a
> [<c03519da>] ext4_getblk+0xbc/0x2c4
> [<c0351cc2>] ext4_bread_batch+0x56/0x186
> [<c036bcaa>] __ext4_find_entry+0x156/0x578
> [<c036c152>] ext4_lookup+0x86/0x1f4
> [<c02a3252>] __lookup_slow+0x8e/0x142
> [<c02a6d70>] walk_component+0x104/0x174
> [<c02a793c>] path_lookupat+0x78/0x182
> [<c02a8c7c>] filename_lookup+0x96/0x158
> [<c02a8d76>] kern_path+0x38/0x56
> [<c0c1cb7a>] init_mount+0x5c/0xac
> [<c0c2ba4c>] devtmpfs_mount+0x44/0x7a
> [<c0c01cce>] prepare_namespace+0x226/0x27c
> [<c0c011c6>] kernel_init_freeable+0x286/0x2a8
> [<c0b97ab8>] kernel_init+0x2a/0x156
> [<c0ba22ca>] ret_from_fork+0xe/0x20
>
> I get a folio where folio_address(folio) == 0xfffff000 (which is
> broken).
>
> Need to go into the weeds here...
I don't see anything obvious that could explain this right away. Did you
manage to reproduce this on any other architecture and/or filesystem?
Fwiw, iirc there were a bunch of fs/buffer.c changes that came in
through the mm/ layer between v6.7 and v6.8 that might also be
interesting. But really I'm poking in the dark currently.
* [djwong-xfs:health-monitoring] [xfs] ac96cb4f2f: aim7.jobs-per-min -68.8% regression
@ 2024-04-15 8:02 2% kernel test robot
0 siblings, 0 replies; 200+ results
From: kernel test robot @ 2024-04-15 8:02 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: oe-lkp, lkp, oliver.sang
Hello,
kernel test robot noticed a -68.8% regression of aim7.jobs-per-min on:
commit: ac96cb4f2f2ee36c1ff06e47104db12a3389431b ("xfs: present wait time statistics")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git health-monitoring
testcase: aim7
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:
disk: 1BRD_48G
fs: xfs
test: disk_cp
load: 3000
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202404151528.f2a07da9-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240415/202404151528.f2a07da9-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-13/performance/1BRD_48G/xfs/x86_64-rhel-8.3/3000/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_cp/aim7
commit:
ed4cedcef5 ("thread_with_file: Lift from bcachefs")
ac96cb4f2f ("xfs: present wait time statistics")
ed4cedcef505729c ac96cb4f2f2ee36c1ff06e47104
---------------- ---------------------------
%stddev %change %stddev
\ | \
5.468e+09 +15.4% 6.311e+09 cpuidle..time
4319925 +27.6% 5513779 cpuidle..usage
75.01 -62.3% 28.25 iostat.cpu.idle
24.03 ± 4% +197.1% 71.40 iostat.cpu.system
109.30 ± 2% +111.6% 231.28 uptime.boot
11723 ± 3% +7.8% 12643 ± 2% uptime.idle
2.00 ± 57% +28083.3% 563.67 ± 29% perf-c2c.DRAM.local
28.33 ± 27% +92270.0% 26171 perf-c2c.DRAM.remote
24.33 ± 20% +1.3e+05% 30822 perf-c2c.HITM.local
5.67 ± 49% +3.5e+05% 19739 perf-c2c.HITM.remote
30.00 ± 22% +1.7e+05% 50561 perf-c2c.HITM.total
75.00 -62.3% 28.28 vmstat.cpu.id
24.02 ± 4% +197.1% 71.35 vmstat.cpu.sy
442.36 ± 22% -64.7% 156.00 ± 12% vmstat.io.bo
32.05 ± 8% +195.6% 94.74 ± 2% vmstat.procs.r
38848 ± 2% -65.5% 13409 vmstat.system.cs
145904 +15.5% 168573 vmstat.system.in
74.20 -46.8 27.45 mpstat.cpu.all.idle%
0.24 ± 2% -0.1 0.17 mpstat.cpu.all.irq%
0.07 -0.0 0.03 mpstat.cpu.all.soft%
24.51 ± 4% +47.5 72.00 mpstat.cpu.all.sys%
0.98 ± 2% -0.6 0.35 ± 3% mpstat.cpu.all.usr%
25.17 ± 45% +166.2% 67.00 ± 15% mpstat.max_utilization.seconds
40.86 ± 11% +99.7% 81.60 mpstat.max_utilization_pct
327179 ± 2% -68.8% 102061 aim7.jobs-per-min
55.24 ± 2% +219.7% 176.62 aim7.time.elapsed_time
55.24 ± 2% +219.7% 176.62 aim7.time.elapsed_time.max
53499 ± 3% +282.3% 204548 ± 3% aim7.time.involuntary_context_switches
172192 ± 2% +66.3% 286351 aim7.time.minor_page_faults
3106 ± 4% +197.9% 9255 aim7.time.percent_of_cpu_this_job_got
1691 ± 7% +864.9% 16320 ± 2% aim7.time.system_time
444140 -7.7% 409980 aim7.time.voluntary_context_switches
154987 ± 6% +369.1% 727058 meminfo.Active
154139 ± 6% +371.0% 726055 meminfo.Active(anon)
847.58 +18.4% 1003 meminfo.Active(file)
56386 ± 5% +152.3% 142248 ± 12% meminfo.AnonHugePages
3337249 +18.1% 3940093 meminfo.Cached
3276183 +18.7% 3889737 meminfo.Committed_AS
9556 ± 3% +327.8% 40880 ± 2% meminfo.Dirty
8084 ± 5% +368.3% 37859 ± 2% meminfo.Inactive(file)
77533 ± 4% -18.0% 63615 ± 3% meminfo.Mapped
188224 ± 4% +303.6% 759619 meminfo.Shmem
4546 ± 34% +3356.1% 157117 ± 33% numa-meminfo.node0.Active
3972 ± 31% +3833.5% 156273 ± 33% numa-meminfo.node0.Active(anon)
5022 ± 8% +307.3% 20456 ± 3% numa-meminfo.node0.Dirty
4337 ± 8% +339.4% 19058 ± 3% numa-meminfo.node0.Inactive(file)
8541 ± 13% +1821.5% 164124 ± 32% numa-meminfo.node0.Shmem
150457 ± 5% +278.9% 570014 ± 9% numa-meminfo.node1.Active
150186 ± 6% +279.4% 569854 ± 9% numa-meminfo.node1.Active(anon)
5072 ± 18% +302.2% 20402 numa-meminfo.node1.Dirty
4255 ± 19% +340.2% 18731 numa-meminfo.node1.Inactive(file)
179979 ± 4% +230.9% 595621 ± 9% numa-meminfo.node1.Shmem
993.90 ± 31% +3828.4% 39044 ± 33% numa-vmstat.node0.nr_active_anon
1197 ± 9% +332.1% 5172 ± 2% numa-vmstat.node0.nr_dirty
1062 ± 13% +354.1% 4823 ± 2% numa-vmstat.node0.nr_inactive_file
2135 ± 13% +1821.2% 41028 ± 32% numa-vmstat.node0.nr_shmem
993.90 ± 31% +3828.4% 39044 ± 33% numa-vmstat.node0.nr_zone_active_anon
1063 ± 12% +353.7% 4822 ± 2% numa-vmstat.node0.nr_zone_inactive_file
1202 ± 9% +330.1% 5172 ± 2% numa-vmstat.node0.nr_zone_write_pending
37248 ± 5% +282.4% 142420 ± 9% numa-vmstat.node1.nr_active_anon
1271 ± 10% +310.2% 5213 numa-vmstat.node1.nr_dirty
1056 ± 10% +352.8% 4784 numa-vmstat.node1.nr_inactive_file
44801 ± 4% +232.4% 148902 ± 9% numa-vmstat.node1.nr_shmem
37248 ± 5% +282.4% 142420 ± 9% numa-vmstat.node1.nr_zone_active_anon
1062 ± 11% +350.4% 4784 numa-vmstat.node1.nr_zone_inactive_file
1272 ± 10% +309.7% 5211 numa-vmstat.node1.nr_zone_write_pending
38384 ± 5% +372.7% 181438 proc-vmstat.nr_active_anon
200806 +1.0% 202810 proc-vmstat.nr_anon_pages
2446 ± 6% +319.4% 10262 proc-vmstat.nr_dirty
834376 +18.1% 985049 proc-vmstat.nr_file_pages
2094 ± 5% +353.5% 9498 proc-vmstat.nr_inactive_file
69864 +2.0% 71282 proc-vmstat.nr_kernel_stack
19823 ± 4% -17.9% 16277 ± 3% proc-vmstat.nr_mapped
47009 ± 4% +303.9% 189872 proc-vmstat.nr_shmem
36676 +3.7% 38033 proc-vmstat.nr_slab_reclaimable
94134 +2.5% 96452 proc-vmstat.nr_slab_unreclaimable
38384 ± 5% +372.7% 181438 proc-vmstat.nr_zone_active_anon
2094 ± 5% +353.5% 9498 proc-vmstat.nr_zone_inactive_file
2445 ± 6% +319.6% 10261 proc-vmstat.nr_zone_write_pending
12243 ± 46% +645.3% 91243 ± 7% proc-vmstat.numa_hint_faults
6005 ± 15% +377.2% 28657 ± 14% proc-vmstat.numa_hint_faults_local
134149 +1.3% 135864 proc-vmstat.numa_other
7876 ± 91% +643.2% 58542 ± 14% proc-vmstat.numa_pages_migrated
99045 ± 23% +135.3% 233054 ± 14% proc-vmstat.numa_pte_updates
81005 ± 2% +83.1% 148349 proc-vmstat.pgactivate
528487 +85.6% 980677 proc-vmstat.pgfault
7876 ± 91% +643.2% 58542 ± 14% proc-vmstat.pgmigrate_success
19896 ± 6% +307.3% 81032 proc-vmstat.pgreuse
1610 +6.1% 1708 proc-vmstat.unevictable_pgs_culled
1.36 +30.7% 1.77 perf-stat.i.MPKI
7.44e+09 ± 2% -7.4% 6.888e+09 perf-stat.i.branch-instructions
1.49 ± 2% -0.9 0.59 ± 2% perf-stat.i.branch-miss-rate%
43273500 ± 2% -52.3% 20626143 ± 2% perf-stat.i.branch-misses
17.75 ± 2% +11.0 28.80 perf-stat.i.cache-miss-rate%
3.134e+08 ± 2% -39.2% 1.904e+08 perf-stat.i.cache-references
39569 ± 2% -66.0% 13458 perf-stat.i.context-switches
1.96 ± 6% +282.2% 7.49 perf-stat.i.cpi
8.347e+10 ± 4% +188.3% 2.406e+11 perf-stat.i.cpu-cycles
1354 ± 3% +7.4% 1455 perf-stat.i.cpu-migrations
1667 ± 4% +149.4% 4159 ± 2% perf-stat.i.cycles-between-cache-misses
3.745e+10 ± 2% -18.2% 3.062e+10 perf-stat.i.instructions
0.65 ± 4% -67.5% 0.21 perf-stat.i.ipc
8512 ± 3% -38.8% 5212 perf-stat.i.minor-faults
8525 ± 3% -38.8% 5216 perf-stat.i.page-faults
1.58 +17.1% 1.85 perf-stat.overall.MPKI
0.57 -0.3 0.30 perf-stat.overall.branch-miss-rate%
18.86 +10.9 29.74 perf-stat.overall.cache-miss-rate%
2.24 ± 6% +251.5% 7.86 perf-stat.overall.cpi
1413 ± 6% +200.2% 4243 ± 2% perf-stat.overall.cycles-between-cache-misses
0.45 ± 6% -71.7% 0.13 perf-stat.overall.ipc
7.371e+09 ± 2% -6.8% 6.871e+09 perf-stat.ps.branch-instructions
42203246 ± 2% -51.6% 20432598 ± 2% perf-stat.ps.branch-misses
3.112e+08 ± 2% -38.9% 1.902e+08 perf-stat.ps.cache-references
39155 ± 2% -65.8% 13394 perf-stat.ps.context-switches
125712 +1.2% 127271 perf-stat.ps.cpu-clock
8.28e+10 ± 4% +189.8% 2.4e+11 perf-stat.ps.cpu-cycles
1341 ± 3% +8.0% 1449 perf-stat.ps.cpu-migrations
3.711e+10 ± 2% -17.7% 3.054e+10 perf-stat.ps.instructions
8200 ± 3% -36.9% 5171 perf-stat.ps.minor-faults
8212 ± 3% -37.0% 5175 perf-stat.ps.page-faults
125712 +1.2% 127271 perf-stat.ps.task-clock
2.086e+12 +160.2% 5.428e+12 perf-stat.total.instructions
2783 ± 9% +1.7e+05% 4691883 ± 23% sched_debug.cfs_rq:/.avg_vruntime.avg
38308 ± 18% +12466.4% 4814029 ± 22% sched_debug.cfs_rq:/.avg_vruntime.max
56.45 ± 63% +7.6e+06% 4281742 ± 24% sched_debug.cfs_rq:/.avg_vruntime.min
5209 ± 15% +865.1% 50276 ± 6% sched_debug.cfs_rq:/.avg_vruntime.stddev
0.12 ± 20% +349.7% 0.56 ± 7% sched_debug.cfs_rq:/.h_nr_running.avg
22.61 ±129% +1.4e+05% 31613 ± 41% sched_debug.cfs_rq:/.left_deadline.avg
2257 ±156% +1.2e+05% 2711296 ± 33% sched_debug.cfs_rq:/.left_deadline.max
213.22 ±145% +1.3e+05% 286346 ± 34% sched_debug.cfs_rq:/.left_deadline.stddev
22.31 ±132% +1.4e+05% 31612 ± 41% sched_debug.cfs_rq:/.left_vruntime.avg
2222 ±160% +1.2e+05% 2711257 ± 33% sched_debug.cfs_rq:/.left_vruntime.max
210.03 ±148% +1.4e+05% 286343 ± 34% sched_debug.cfs_rq:/.left_vruntime.stddev
146.55 ± 14% -60.1% 58.46 ± 8% sched_debug.cfs_rq:/.load_avg.avg
2518 ± 41% -72.1% 703.57 ± 13% sched_debug.cfs_rq:/.load_avg.max
416.11 ± 15% -66.8% 137.98 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
2783 ± 9% +1.7e+05% 4691883 ± 23% sched_debug.cfs_rq:/.min_vruntime.avg
38308 ± 18% +12466.4% 4814029 ± 22% sched_debug.cfs_rq:/.min_vruntime.max
56.45 ± 63% +7.6e+06% 4281742 ± 24% sched_debug.cfs_rq:/.min_vruntime.min
5209 ± 15% +865.1% 50276 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.12 ± 20% +343.2% 0.55 ± 8% sched_debug.cfs_rq:/.nr_running.avg
31.39 ± 59% -59.4% 12.74 ± 25% sched_debug.cfs_rq:/.removed.load_avg.avg
1024 -63.9% 369.39 ± 10% sched_debug.cfs_rq:/.removed.load_avg.max
168.25 ± 26% -61.5% 64.79 ± 12% sched_debug.cfs_rq:/.removed.load_avg.stddev
473.33 ± 20% -53.8% 218.47 ± 28% sched_debug.cfs_rq:/.removed.runnable_avg.max
74.71 ± 35% -55.6% 33.18 ± 19% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
473.33 ± 20% -53.9% 218.43 ± 28% sched_debug.cfs_rq:/.removed.util_avg.max
74.71 ± 35% -55.6% 33.18 ± 19% sched_debug.cfs_rq:/.removed.util_avg.stddev
22.31 ±132% +1.4e+05% 31612 ± 41% sched_debug.cfs_rq:/.right_vruntime.avg
2222 ±160% +1.2e+05% 2711257 ± 33% sched_debug.cfs_rq:/.right_vruntime.max
210.03 ±148% +1.4e+05% 286343 ± 34% sched_debug.cfs_rq:/.right_vruntime.stddev
247.04 ± 6% +142.1% 598.20 ± 3% sched_debug.cfs_rq:/.runnable_avg.avg
297.44 ± 6% -25.8% 220.72 ± 4% sched_debug.cfs_rq:/.runnable_avg.stddev
245.35 ± 6% +137.8% 583.34 ± 3% sched_debug.cfs_rq:/.util_avg.avg
296.47 ± 6% -28.9% 210.90 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
13.65 ± 44% +2422.1% 344.40 ± 10% sched_debug.cfs_rq:/.util_est.avg
513.33 ± 15% +108.9% 1072 ± 12% sched_debug.cfs_rq:/.util_est.max
75.58 ± 28% +216.9% 239.55 ± 5% sched_debug.cfs_rq:/.util_est.stddev
4423 ± 18% +9382.7% 419497 ± 10% sched_debug.cpu.avg_idle.min
223580 ± 6% -49.6% 112717 ± 6% sched_debug.cpu.avg_idle.stddev
53706 ± 5% +131.6% 124408 ± 9% sched_debug.cpu.clock.avg
53714 ± 5% +131.7% 124451 ± 9% sched_debug.cpu.clock.max
53696 ± 5% +131.6% 124360 ± 9% sched_debug.cpu.clock.min
4.42 ± 14% +481.3% 25.68 ± 12% sched_debug.cpu.clock.stddev
53561 ± 5% +131.7% 124124 ± 9% sched_debug.cpu.clock_task.avg
53702 ± 5% +131.5% 124302 ± 9% sched_debug.cpu.clock_task.max
44775 ± 6% +157.4% 115247 ± 10% sched_debug.cpu.clock_task.min
342.20 ± 16% +759.9% 2942 ± 8% sched_debug.cpu.curr->pid.avg
3218 +134.3% 7540 ± 5% sched_debug.cpu.curr->pid.max
960.58 ± 7% +110.3% 2020 ± 5% sched_debug.cpu.curr->pid.stddev
0.00 ± 32% +65.9% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
0.12 ± 19% +346.8% 0.55 ± 8% sched_debug.cpu.nr_running.avg
1092 ± 3% +663.8% 8347 ± 15% sched_debug.cpu.nr_switches.avg
11977 ± 21% +186.3% 34291 ± 13% sched_debug.cpu.nr_switches.max
141.67 ± 11% +4398.1% 6372 ± 20% sched_debug.cpu.nr_switches.min
1714 ± 12% +100.2% 3431 ± 13% sched_debug.cpu.nr_switches.stddev
0.01 +1.9e+05% 14.50 ± 6% sched_debug.cpu.nr_uninterruptible.avg
26.00 ± 26% +171.3% 70.53 ± 20% sched_debug.cpu.nr_uninterruptible.max
-15.33 +83.7% -28.17 sched_debug.cpu.nr_uninterruptible.min
5.50 ± 10% +217.2% 17.44 ± 21% sched_debug.cpu.nr_uninterruptible.stddev
53700 ± 5% +131.6% 124361 ± 9% sched_debug.cpu_clk
52467 ± 5% +134.7% 123128 ± 9% sched_debug.ktime
54556 ± 5% +129.7% 125292 ± 9% sched_debug.sched_clk
64.57 ± 5% -63.4 1.20 ± 6% perf-profile.calltrace.cycles-pp.read
63.78 ± 5% -62.7 1.11 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
63.67 ± 5% -62.6 1.10 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
63.32 ± 5% -62.3 1.06 ± 6% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
62.98 ± 5% -62.0 1.02 ± 6% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
62.11 ± 5% -61.2 0.93 ± 7% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
61.93 ± 5% -61.0 0.90 ± 7% perf-profile.calltrace.cycles-pp.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64
29.02 ± 6% -28.4 0.61 ± 9% perf-profile.calltrace.cycles-pp.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
26.46 ± 4% -26.5 0.00 perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
26.36 ± 4% -26.4 0.00 perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
22.09 ± 6% -22.1 0.00 perf-profile.calltrace.cycles-pp.touch_atime.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
22.03 ± 6% -22.0 0.00 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
12.74 ± 4% -11.7 1.05 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
7.47 ± 4% -6.9 0.56 ± 2% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
6.27 ± 15% -6.3 0.00 perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
6.18 ± 15% -6.2 0.00 perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
6.60 ± 65% -5.0 1.61 ± 5% perf-profile.calltrace.cycles-pp.unlink
6.58 ± 65% -5.0 1.61 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
6.58 ± 65% -5.0 1.61 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
6.52 ± 66% -4.9 1.60 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
6.52 ± 66% -4.9 1.60 ± 5% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
4.73 ± 10% -4.7 0.00 perf-profile.calltrace.cycles-pp.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
6.18 ± 69% -4.7 1.48 ± 6% perf-profile.calltrace.cycles-pp.down_write.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.18 ± 69% -4.7 1.48 ± 6% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write.do_unlinkat.__x64_sys_unlink.do_syscall_64
6.07 ± 71% -4.6 1.44 ± 6% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.do_unlinkat.__x64_sys_unlink
5.53 ± 78% -4.3 1.20 ± 7% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.do_unlinkat
1.79 ± 10% -1.0 0.83 ± 5% perf-profile.calltrace.cycles-pp.creat64
1.77 ± 10% -0.9 0.83 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.77 ± 10% -0.9 0.83 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
1.74 ± 10% -0.9 0.82 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.74 ± 10% -0.9 0.82 ± 5% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.73 ± 10% -0.9 0.82 ± 5% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.73 ± 10% -0.9 0.82 ± 5% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
1.71 ± 10% -0.9 0.82 ± 5% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
1.23 ± 12% -0.8 0.47 ± 44% perf-profile.calltrace.cycles-pp.down_write.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
1.22 ± 12% -0.8 0.47 ± 44% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write.open_last_lookups.path_openat.do_filp_open
0.47 ± 47% +1.1 1.60 ± 8% perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified
0.47 ± 47% +1.1 1.61 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.48 ± 46% +1.1 1.62 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
0.00 +1.5 1.51 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve
0.00 +1.5 1.52 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc
0.00 +1.6 1.56 ± 8% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time
2.16 ± 5% +3.9 6.07 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
0.00 +4.1 4.15 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified
0.00 +4.2 4.16 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +4.2 4.18 ± 2% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
1.50 ± 8% +4.5 6.00 ± 2% perf-profile.calltrace.cycles-pp.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write
1.16 ± 12% +4.8 5.96 ± 2% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write
16.03 ± 5% +73.1 89.14 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
22.33 ± 5% +73.3 95.62 perf-profile.calltrace.cycles-pp.write
21.55 ± 5% +74.0 95.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
21.45 ± 5% +74.1 95.51 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
21.18 ± 5% +74.3 95.47 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
20.84 ± 4% +74.6 95.44 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
19.71 ± 4% +75.6 95.31 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.90 ± 6% +85.1 88.04 perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
2.08 ± 6% +85.9 87.94 perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.00 +86.4 86.44 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.00 +86.7 86.66 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter
0.56 ± 6% +86.8 87.38 perf-profile.calltrace.cycles-pp.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +87.1 87.08 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
64.85 ± 5% -63.6 1.23 ± 6% perf-profile.children.cycles-pp.read
63.35 ± 5% -62.3 1.07 ± 6% perf-profile.children.cycles-pp.ksys_read
63.01 ± 5% -62.0 1.03 ± 6% perf-profile.children.cycles-pp.vfs_read
62.12 ± 5% -61.2 0.93 ± 7% perf-profile.children.cycles-pp.xfs_file_read_iter
61.97 ± 5% -61.1 0.90 ± 7% perf-profile.children.cycles-pp.xfs_file_buffered_read
29.08 ± 6% -28.5 0.62 ± 9% perf-profile.children.cycles-pp.filemap_read
27.41 ± 4% -27.0 0.39 ± 3% perf-profile.children.cycles-pp.xfs_ilock
26.44 ± 4% -26.2 0.24 ± 5% perf-profile.children.cycles-pp.down_read
22.11 ± 6% -22.0 0.16 ± 7% perf-profile.children.cycles-pp.touch_atime
22.08 ± 6% -21.9 0.16 ± 7% perf-profile.children.cycles-pp.atime_needs_update
12.80 ± 4% -11.7 1.06 ± 2% perf-profile.children.cycles-pp.iomap_write_iter
7.51 ± 4% -6.9 0.56 ± 2% perf-profile.children.cycles-pp.iomap_write_begin
7.06 ± 13% -6.8 0.26 perf-profile.children.cycles-pp.xfs_iunlock
8.53 ± 50% -6.4 2.10 ± 3% perf-profile.children.cycles-pp.down_write
6.22 ± 15% -6.2 0.02 ±141% perf-profile.children.cycles-pp.up_read
7.40 ± 59% -5.4 2.03 ± 3% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
6.94 ± 63% -5.1 1.86 ± 3% perf-profile.children.cycles-pp.rwsem_optimistic_spin
6.60 ± 65% -5.0 1.61 ± 5% perf-profile.children.cycles-pp.unlink
6.52 ± 66% -4.9 1.60 ± 5% perf-profile.children.cycles-pp.__x64_sys_unlink
6.52 ± 66% -4.9 1.60 ± 5% perf-profile.children.cycles-pp.do_unlinkat
6.30 ± 69% -4.7 1.57 ± 4% perf-profile.children.cycles-pp.osq_lock
4.75 ± 10% -4.5 0.25 ± 18% perf-profile.children.cycles-pp.filemap_get_pages
4.50 ± 4% -4.2 0.33 ± 2% perf-profile.children.cycles-pp.__filemap_get_folio
3.09 ± 6% -2.8 0.25 ± 3% perf-profile.children.cycles-pp.iomap_write_end
2.70 ± 5% -2.5 0.20 ± 2% perf-profile.children.cycles-pp.__iomap_write_begin
2.36 ± 5% -2.2 0.15 ± 2% perf-profile.children.cycles-pp.filemap_add_folio
2.42 ± 7% -2.2 0.24 ± 20% perf-profile.children.cycles-pp.filemap_get_read_batch
1.90 ± 7% -1.7 0.21 ± 3% perf-profile.children.cycles-pp.common_startup_64
1.90 ± 7% -1.7 0.21 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
1.90 ± 7% -1.7 0.21 ± 3% perf-profile.children.cycles-pp.do_idle
1.88 ± 7% -1.7 0.21 ± 4% perf-profile.children.cycles-pp.start_secondary
1.78 ± 7% -1.6 0.13 ± 4% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.74 ± 5% -1.6 0.11 ± 3% perf-profile.children.cycles-pp.__filemap_add_folio
1.82 ± 7% -1.6 0.21 ± 4% perf-profile.children.cycles-pp.cpuidle_idle_call
1.91 ± 5% -1.5 0.38 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.70 ± 7% -1.5 0.19 ± 3% perf-profile.children.cycles-pp.cpuidle_enter
1.70 ± 7% -1.5 0.19 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
1.67 ± 7% -1.5 0.19 ± 4% perf-profile.children.cycles-pp.acpi_idle_enter
1.66 ± 7% -1.5 0.19 ± 4% perf-profile.children.cycles-pp.acpi_safe_halt
1.54 ± 6% -1.4 0.16 ± 3% perf-profile.children.cycles-pp.__close
1.54 ± 6% -1.4 0.16 ± 3% perf-profile.children.cycles-pp.__x64_sys_close
1.52 ± 6% -1.4 0.16 ± 3% perf-profile.children.cycles-pp.__fput
1.51 ± 6% -1.4 0.15 ± 2% perf-profile.children.cycles-pp.dput
1.50 ± 6% -1.4 0.15 ± 2% perf-profile.children.cycles-pp.__dentry_kill
1.47 ± 6% -1.3 0.15 ± 3% perf-profile.children.cycles-pp.evict
1.46 ± 6% -1.3 0.15 ± 3% perf-profile.children.cycles-pp.truncate_inode_pages_range
1.34 ± 7% -1.2 0.14 ± 5% perf-profile.children.cycles-pp.zero_user_segments
1.30 ± 7% -1.2 0.13 ± 5% perf-profile.children.cycles-pp.memset_orig
1.25 ± 5% -1.2 0.09 perf-profile.children.cycles-pp.filemap_dirty_folio
1.26 ± 5% -1.0 0.26 ± 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.10 ± 8% -1.0 0.11 ± 4% perf-profile.children.cycles-pp.copy_page_to_iter
1.81 ± 10% -1.0 0.84 ± 5% perf-profile.children.cycles-pp.do_sys_openat2
1.79 ± 10% -1.0 0.83 ± 5% perf-profile.children.cycles-pp.creat64
1.78 ± 10% -1.0 0.83 ± 5% perf-profile.children.cycles-pp.do_filp_open
1.78 ± 10% -0.9 0.83 ± 5% perf-profile.children.cycles-pp.path_openat
1.74 ± 10% -0.9 0.82 ± 5% perf-profile.children.cycles-pp.__x64_sys_creat
1.03 ± 6% -0.9 0.14 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
1.00 ± 8% -0.9 0.10 ± 3% perf-profile.children.cycles-pp._copy_to_iter
1.71 ± 10% -0.9 0.82 ± 5% perf-profile.children.cycles-pp.open_last_lookups
0.93 ± 6% -0.8 0.11 ± 3% perf-profile.children.cycles-pp.xlog_cil_commit
0.87 ± 4% -0.8 0.06 ± 7% perf-profile.children.cycles-pp.folio_alloc
0.79 ± 5% -0.7 0.06 ± 6% perf-profile.children.cycles-pp.alloc_pages_mpol
0.71 ± 5% -0.7 0.06 ± 9% perf-profile.children.cycles-pp.__alloc_pages
0.86 ± 5% -0.7 0.20 ± 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.84 ± 4% -0.6 0.20 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.72 ± 7% -0.6 0.10 ± 5% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.71 ± 8% -0.6 0.08 perf-profile.children.cycles-pp.xas_load
0.72 ± 8% -0.6 0.09 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.70 ± 6% -0.6 0.08 ± 10% perf-profile.children.cycles-pp.rw_verify_area
0.66 ± 4% -0.6 0.05 perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
0.65 ± 8% -0.6 0.06 ± 7% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
0.61 ± 6% -0.6 0.06 perf-profile.children.cycles-pp.filemap_get_entry
0.60 ± 6% -0.5 0.05 ± 7% perf-profile.children.cycles-pp.__folio_mark_dirty
0.64 ± 5% -0.5 0.14 ± 3% perf-profile.children.cycles-pp.up_write
0.53 ± 6% -0.5 0.06 ± 6% perf-profile.children.cycles-pp.security_file_permission
0.72 ± 8% -0.5 0.25 ± 3% perf-profile.children.cycles-pp.ret_from_fork
0.72 ± 8% -0.5 0.25 ± 3% perf-profile.children.cycles-pp.ret_from_fork_asm
0.53 ± 7% -0.5 0.06 ± 6% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.71 ± 8% -0.5 0.25 ± 3% perf-profile.children.cycles-pp.kthread
0.52 ± 7% -0.5 0.06 perf-profile.children.cycles-pp.__cond_resched
0.52 ± 7% -0.5 0.06 ± 8% perf-profile.children.cycles-pp.__fdget_pos
0.50 ± 5% -0.5 0.04 ± 44% perf-profile.children.cycles-pp.folios_put_refs
0.51 ± 7% -0.4 0.06 ± 7% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.59 ± 5% -0.4 0.16 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.67 ± 8% -0.4 0.24 ± 3% perf-profile.children.cycles-pp.worker_thread
0.86 ± 8% -0.4 0.43 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.48 ± 7% -0.4 0.05 ± 8% perf-profile.children.cycles-pp.iomap_iter_advance
0.56 ± 5% -0.4 0.14 ± 3% perf-profile.children.cycles-pp.tick_nohz_handler
0.63 ± 9% -0.4 0.23 ± 3% perf-profile.children.cycles-pp.process_one_work
0.46 ± 7% -0.4 0.07 ± 7% perf-profile.children.cycles-pp.__schedule
0.45 ± 7% -0.4 0.06 ± 8% perf-profile.children.cycles-pp.fault_in_readable
0.60 ± 8% -0.4 0.22 ± 2% perf-profile.children.cycles-pp.xfs_inodegc_worker
0.43 ± 5% -0.4 0.05 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.59 ± 9% -0.4 0.22 ± 2% perf-profile.children.cycles-pp.xfs_inactive
0.50 ± 5% -0.4 0.13 ± 2% perf-profile.children.cycles-pp.update_process_times
0.40 ± 6% -0.4 0.04 ± 44% perf-profile.children.cycles-pp.apparmor_file_permission
0.41 ± 7% -0.4 0.06 perf-profile.children.cycles-pp.schedule
0.36 ± 6% -0.3 0.06 perf-profile.children.cycles-pp.load_balance
0.35 ± 7% -0.3 0.05 ± 8% perf-profile.children.cycles-pp.pick_next_task_fair
0.34 ± 7% -0.3 0.05 perf-profile.children.cycles-pp.newidle_balance
0.33 ± 10% -0.3 0.06 perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.32 ± 7% -0.3 0.05 ± 7% perf-profile.children.cycles-pp.find_busiest_group
0.31 ± 6% -0.3 0.05 perf-profile.children.cycles-pp.update_sd_lb_stats
0.29 ± 7% -0.2 0.05 perf-profile.children.cycles-pp.update_sg_lb_stats
0.28 ± 9% -0.2 0.05 perf-profile.children.cycles-pp.xfs_buf_read_map
0.36 ± 8% -0.2 0.14 ± 3% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.27 ± 9% -0.2 0.05 perf-profile.children.cycles-pp.xfs_buf_get_map
0.32 ± 7% -0.2 0.10 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.24 ± 7% -0.2 0.05 perf-profile.children.cycles-pp.xlog_cil_insert_items
0.44 ± 7% -0.2 0.27 ± 3% perf-profile.children.cycles-pp.lookup_open
0.35 ± 7% -0.2 0.19 ± 3% perf-profile.children.cycles-pp.xfs_generic_create
0.27 ± 9% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.vfs_unlink
0.26 ± 9% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.xfs_remove
0.26 ± 9% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.xfs_vn_unlink
0.34 ± 7% -0.2 0.18 ± 2% perf-profile.children.cycles-pp.xfs_create
0.23 ± 10% -0.1 0.08 perf-profile.children.cycles-pp.xfs_inactive_truncate
0.21 ± 8% -0.1 0.08 perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.18 ± 6% -0.1 0.06 perf-profile.children.cycles-pp.task_tick_fair
0.06 -0.0 0.05 perf-profile.children.cycles-pp.main
0.06 -0.0 0.05 perf-profile.children.cycles-pp.run_builtin
0.06 ± 11% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_vn_lookup
0.06 ± 9% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_lookup
0.05 ± 8% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_dir_lookup
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.xfs_iget
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.xfs_iget_cache_hit
0.00 +0.1 0.07 perf-profile.children.cycles-pp.local_clock
0.00 +0.1 0.07 perf-profile.children.cycles-pp.xfs_icreate
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.xfs_lock_two_inodes
0.00 +0.1 0.08 perf-profile.children.cycles-pp.xfs_trans_alloc_dir
0.00 +0.3 0.29 ± 4% perf-profile.children.cycles-pp.time_stats_update_one
0.66 ± 11% +1.0 1.65 ± 8% perf-profile.children.cycles-pp.xfs_trans_reserve
0.64 ± 12% +1.0 1.65 ± 8% perf-profile.children.cycles-pp.xfs_log_reserve
0.66 ± 11% +1.0 1.67 ± 8% perf-profile.children.cycles-pp.xfs_trans_alloc
2.22 ± 5% +3.9 6.08 ± 2% perf-profile.children.cycles-pp.xfs_file_write_checks
95.43 +3.9 99.29 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
95.26 +4.0 99.27 perf-profile.children.cycles-pp.do_syscall_64
1.53 ± 8% +4.5 6.01 ± 2% perf-profile.children.cycles-pp.kiocb_modified
1.16 ± 12% +4.8 5.96 ± 2% perf-profile.children.cycles-pp.xfs_vn_update_time
22.63 ± 5% +73.1 95.68 perf-profile.children.cycles-pp.write
16.06 ± 5% +73.1 89.14 perf-profile.children.cycles-pp.iomap_file_buffered_write
21.25 ± 5% +74.3 95.50 perf-profile.children.cycles-pp.ksys_write
20.91 ± 4% +74.6 95.47 perf-profile.children.cycles-pp.vfs_write
19.76 ± 4% +75.6 95.31 perf-profile.children.cycles-pp.xfs_file_buffered_write
2.94 ± 6% +85.1 88.04 perf-profile.children.cycles-pp.iomap_iter
2.17 ± 6% +85.8 87.96 perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.59 ± 7% +86.8 87.42 perf-profile.children.cycles-pp.xfs_ilock_for_iomap
0.16 ± 11% +92.4 92.54 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.36 ± 6% +92.4 92.80 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +93.3 93.28 perf-profile.children.cycles-pp.__time_stats_update
26.23 ± 4% -26.0 0.22 ± 5% perf-profile.self.cycles-pp.down_read
21.75 ± 6% -21.6 0.14 ± 9% perf-profile.self.cycles-pp.atime_needs_update
6.18 ± 15% -6.2 0.01 ±223% perf-profile.self.cycles-pp.up_read
6.25 ± 70% -4.7 1.56 ± 4% perf-profile.self.cycles-pp.osq_lock
2.07 ± 8% -1.9 0.20 ± 22% perf-profile.self.cycles-pp.filemap_get_read_batch
1.74 ± 7% -1.6 0.12 ± 3% perf-profile.self.cycles-pp.iomap_set_range_uptodate
1.29 ± 7% -1.2 0.13 ± 5% perf-profile.self.cycles-pp.memset_orig
1.06 ± 7% -1.0 0.08 ± 5% perf-profile.self.cycles-pp.filemap_read
0.97 ± 6% -0.9 0.05 ± 7% perf-profile.self.cycles-pp.down_write
0.98 ± 8% -0.9 0.10 ± 3% perf-profile.self.cycles-pp._copy_to_iter
0.77 ± 7% -0.7 0.09 ± 5% perf-profile.self.cycles-pp.acpi_safe_halt
0.71 ± 7% -0.6 0.10 ± 5% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
0.54 ± 8% -0.5 0.06 perf-profile.self.cycles-pp.vfs_write
0.52 ± 7% -0.5 0.06 perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.56 ± 6% -0.4 0.12 ± 3% perf-profile.self.cycles-pp.up_write
0.50 ± 7% -0.4 0.05 ± 7% perf-profile.self.cycles-pp.__fdget_pos
0.48 ± 6% -0.4 0.04 ± 44% perf-profile.self.cycles-pp.vfs_read
0.85 ± 8% -0.4 0.43 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.46 ± 7% -0.4 0.05 perf-profile.self.cycles-pp.iomap_iter_advance
0.44 ± 8% -0.4 0.06 ± 8% perf-profile.self.cycles-pp.fault_in_readable
0.45 ± 7% -0.3 0.18 ± 2% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.28 ± 7% -0.2 0.10 perf-profile.self.cycles-pp.xfs_iunlock
0.20 ± 9% -0.1 0.08 ± 4% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.11 ± 7% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.xfs_ilock_for_iomap
0.21 ± 3% +0.0 0.26 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.06 perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.2 0.20 ± 2% perf-profile.self.cycles-pp.__time_stats_update
0.00 +0.3 0.25 ± 5% perf-profile.self.cycles-pp.time_stats_update_one
0.16 ± 11% +92.4 92.54 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [relevance 2%]
* Re: riscv32 EXT4 splat, 6.8 regression?
@ 2024-04-14 14:08 6% ` Björn Töpel
0 siblings, 0 replies; 200+ results
From: Björn Töpel @ 2024-04-14 14:08 UTC (permalink / raw)
To: Andreas Dilger, Al Viro
Cc: Nam Cao, linux-fsdevel, Christian Brauner, Jan Kara,
Linux Kernel Mailing List, linux-riscv, Theodore Ts'o,
Ext4 Developers List, Conor Dooley, Matthew Wilcox (Oracle)
Andreas Dilger <adilger@dilger.ca> writes:
> On Apr 13, 2024, at 8:15 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
>>
>> On Sat, Apr 13, 2024 at 07:46:03PM -0600, Andreas Dilger wrote:
>>
>>> As to whether the 0xfffff000 address itself is valid for riscv32 is
>>> outside my realm, but given that RAM is cheap it doesn't seem unlikely
>>> to have 4GB+ of RAM and want to use it all. The riscv32 might consider
>>> reserving this page address from allocation to avoid similar issues in
>>> other parts of the code, as is done with the NULL/0 page address.
>>
>> Not a chance. *Any* page mapped there is a serious bug on any 32bit
>> box. Recall what ERR_PTR() is...
>>
>> On any architecture the virtual addresses in range (unsigned long)-512..
>> (unsigned long)-1 must never resolve to valid kernel objects.
>> In other words, any kind of wraparound here is asking for an oops on
>> attempts to access the elements of buffer - kernel dereference of
>> (char *)0xfffff000 on a 32bit box is already a bug.
>>
>> It might be getting an invalid pointer, but arithmetical overflows
>> are irrelevant.
>
> The original bug report stated that search_buf = 0xfffff000 on entry,
> and I'd quoted that at the start of my email:
>
> On Apr 12, 2024, at 8:57 AM, Björn Töpel <bjorn@kernel.org> wrote:
>> What I see in ext4_search_dir() is that search_buf is 0xfffff000, and at
>> some point the address wraps to zero, and boom. I doubt that 0xfffff000
>> is a sane address.
>
> Now that you mention ERR_PTR() it definitely makes sense that this last
> page HAS to be excluded.
>
> So some other bug is passing the bad pointer to this code before this
> error, or the arch is not correctly excluding this page from allocation.
Yeah, something is off for sure.
(FWIW, I managed to hit this on Linus' master as well.)
I added a print (close to trace_mm_filemap_add_to_page_cache()), and for
this backtrace:
[<c01e8b34>] __filemap_add_folio+0x322/0x508
[<c01e8d6e>] filemap_add_folio+0x54/0xce
[<c01ea076>] __filemap_get_folio+0x156/0x2aa
[<c02df346>] __getblk_slow+0xcc/0x302
[<c02df5f2>] bdev_getblk+0x76/0x7a
[<c03519da>] ext4_getblk+0xbc/0x2c4
[<c0351cc2>] ext4_bread_batch+0x56/0x186
[<c036bcaa>] __ext4_find_entry+0x156/0x578
[<c036c152>] ext4_lookup+0x86/0x1f4
[<c02a3252>] __lookup_slow+0x8e/0x142
[<c02a6d70>] walk_component+0x104/0x174
[<c02a793c>] path_lookupat+0x78/0x182
[<c02a8c7c>] filename_lookup+0x96/0x158
[<c02a8d76>] kern_path+0x38/0x56
[<c0c1cb7a>] init_mount+0x5c/0xac
[<c0c2ba4c>] devtmpfs_mount+0x44/0x7a
[<c0c01cce>] prepare_namespace+0x226/0x27c
[<c0c011c6>] kernel_init_freeable+0x286/0x2a8
[<c0b97ab8>] kernel_init+0x2a/0x156
[<c0ba22ca>] ret_from_fork+0xe/0x20
I get a folio where folio_address(folio) == 0xfffff000 (which is
broken).
Need to go into the weeds here...
Björn
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply [relevance 6%]
* [PATCH 03/11] grow_dev_folio(): we only want ->bd_inode->i_mapping there
@ 2024-04-11 14:53 14% ` Al Viro
0 siblings, 0 replies; 200+ results
From: Al Viro @ 2024-04-11 14:53 UTC (permalink / raw)
To: Christian Brauner
Cc: Jan Kara, Yu Kuai, hch, axboe, linux-fsdevel, linux-block,
yi.zhang, yangerkun, yukuai (C)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
fs/buffer.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index d5a0932ae68d..78a4e95ba2f2 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1034,12 +1034,12 @@ static sector_t folio_init_buffers(struct folio *folio,
static bool grow_dev_folio(struct block_device *bdev, sector_t block,
pgoff_t index, unsigned size, gfp_t gfp)
{
- struct inode *inode = bdev->bd_inode;
+ struct address_space *mapping = bdev->bd_mapping;
struct folio *folio;
struct buffer_head *bh;
sector_t end_block = 0;
- folio = __filemap_get_folio(inode->i_mapping, index,
+ folio = __filemap_get_folio(mapping, index,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
if (IS_ERR(folio))
return false;
@@ -1073,10 +1073,10 @@ static bool grow_dev_folio(struct block_device *bdev, sector_t block,
* lock to be atomic wrt __find_get_block(), which does not
* run under the folio lock.
*/
- spin_lock(&inode->i_mapping->i_private_lock);
+ spin_lock(&mapping->i_private_lock);
link_dev_buffers(folio, bh);
end_block = folio_init_buffers(folio, bdev, size);
- spin_unlock(&inode->i_mapping->i_private_lock);
+ spin_unlock(&mapping->i_private_lock);
unlock:
folio_unlock(folio);
folio_put(folio);
--
2.39.2
^ permalink raw reply related [relevance 14%]
* [PATCH v14 7/8] udmabuf: Pin the pages using memfd_pin_folios() API
2024-04-11 6:59 4% [PATCH v14 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-04-11 6:59 4% ` [PATCH v14 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
2024-04-11 6:59 4% ` [PATCH v14 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
@ 2024-04-11 6:59 5% ` Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-11 6:59 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Hugh Dickins, Peter Xu, Jason Gunthorpe,
Gerd Hoffmann, Dongwon Kim, Junxiao Chang
Using memfd_pin_folios() ensures that the pages are pinned
correctly via FOLL_PIN. It also ensures that we don't
accidentally break features such as memory hot-unplug, as the
API does not allow pinning pages in the movable zone.
Using this new API also simplifies the code as we no longer have
to deal with extracting individual pages from their mappings or
handle shmem and hugetlb cases separately.
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 153 +++++++++++++++++++-------------------
1 file changed, 78 insertions(+), 75 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index a8f3af61f7f2..afa8bfd2a2a9 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
+ struct list_head unpin_list;
+};
+
+struct udmabuf_folio {
+ struct folio *folio;
+ struct list_head list;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -153,17 +159,43 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
return put_sg_table(at->dev, sg, direction);
}
+static void unpin_all_folios(struct list_head *unpin_list)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ while (!list_empty(unpin_list)) {
+ ubuf_folio = list_first_entry(unpin_list,
+ struct udmabuf_folio, list);
+ unpin_folio(ubuf_folio->folio);
+
+ list_del(&ubuf_folio->list);
+ kfree(ubuf_folio);
+ }
+}
+
+static int add_to_unpin_list(struct list_head *unpin_list,
+ struct folio *folio)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+ if (!ubuf_folio)
+ return -ENOMEM;
+
+ ubuf_folio->folio = folio;
+ list_add_tail(&ubuf_folio->list, unpin_list);
+ return 0;
+}
+
static void release_udmabuf(struct dma_buf *buf)
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
- pgoff_t pg;
if (ubuf->sg)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
- for (pg = 0; pg < ubuf->pagecount; pg++)
- folio_put(ubuf->folios[pg]);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
@@ -218,64 +250,6 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- struct hstate *hpstate = hstate_file(memfd);
- pgoff_t mapidx = offset >> huge_page_shift(hpstate);
- pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
- pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct folio *folio = NULL;
- pgoff_t pgidx;
-
- mapidx <<= huge_page_order(hpstate);
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!folio) {
- folio = __filemap_get_folio(memfd->f_mapping,
- mapidx,
- FGP_ACCESSED, 0);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
- }
-
- folio_get(folio);
- ubuf->folios[*pgbuf] = folio;
- ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
- (*pgbuf)++;
- if (++subpgoff == maxsubpgs) {
- folio_put(folio);
- folio = NULL;
- subpgoff = 0;
- mapidx += pages_per_huge_page(hpstate);
- }
- }
-
- if (folio)
- folio_put(folio);
-
- return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct folio *folio = NULL;
-
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
-
- ubuf->folios[*pgbuf] = folio;
- (*pgbuf)++;
- }
-
- return 0;
-}
-
static int check_memfd_seals(struct file *memfd)
{
int seals;
@@ -321,16 +295,19 @@ static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- pgoff_t pgcnt, pgbuf = 0, pglimit;
+ pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+ long nr_folios, ret = -EINVAL;
struct file *memfd = NULL;
+ struct folio **folios;
struct udmabuf *ubuf;
- int ret = -EINVAL;
- u32 i, flags;
+ u32 i, j, k, flags;
+ loff_t end;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
if (!ubuf)
return -ENOMEM;
+ INIT_LIST_HEAD(&ubuf->unpin_list);
pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
@@ -366,17 +343,44 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
- if (is_file_hugepages(memfd))
- ret = handle_hugetlb_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- else
- ret = handle_shmem_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- if (ret < 0)
+ folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
goto err;
+ }
+ end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+
+ nr_folios = ret;
+ pgoff >>= PAGE_SHIFT;
+ for (j = 0, k = 0; j < pgcnt; j++) {
+ ubuf->folios[pgbuf] = folios[k];
+ ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+
+ if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+ ret = add_to_unpin_list(&ubuf->unpin_list,
+ folios[k]);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+ }
+
+ pgbuf++;
+ if (++pgoff == folio_nr_pages(folios[k])) {
+ pgoff = 0;
+ if (++k == nr_folios)
+ break;
+ }
+ }
+
+ kfree(folios);
fput(memfd);
}
@@ -388,10 +392,9 @@ static long udmabuf_create(struct miscdevice *device,
return ret;
err:
- while (pgbuf > 0)
- folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
--
2.43.0
^ permalink raw reply related [relevance 5%]
* [PATCH v14 6/8] udmabuf: Convert udmabuf driver to use folios
2024-04-11 6:59 4% [PATCH v14 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-04-11 6:59 4% ` [PATCH v14 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
@ 2024-04-11 6:59 4% ` Vivek Kasireddy
2024-04-11 6:59 5% ` [PATCH v14 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-11 6:59 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Hugh Dickins, Peter Xu, Jason Gunthorpe,
Gerd Hoffmann, Dongwon Kim, Junxiao Chang
This is mainly a preparatory patch to use memfd_pin_folios() API
for pinning folios. Using folios instead of pages makes sense as
the udmabuf driver needs to handle both shmem and hugetlb cases.
And, using the memfd_pin_folios() API makes this easier as we no
longer need to separately handle shmem vs hugetlb cases in the
udmabuf driver.
Note that vmap_udmabuf() still needs a list of pages, so in
this case we collect all the head pages into a local array.
Other changes in this patch include the addition of helpers for
checking the memfd seals and exporting dmabuf. Moving code from
udmabuf_create() into these helpers improves readability given
that udmabuf_create() is a bit long.
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 140 ++++++++++++++++++++++----------------
1 file changed, 83 insertions(+), 57 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 274defd3fa3e..a8f3af61f7f2 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -26,7 +26,7 @@ MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is
struct udmabuf {
pgoff_t pagecount;
- struct page **pages;
+ struct folio **folios;
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
@@ -42,7 +42,7 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
- pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn = folio_pfn(ubuf->folios[pgoff]);
pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
return vmf_insert_pfn(vma, vmf->address, pfn);
@@ -68,11 +68,21 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
{
struct udmabuf *ubuf = buf->priv;
+ struct page **pages;
void *vaddr;
+ pgoff_t pg;
dma_resv_assert_held(buf->resv);
- vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
+ pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ for (pg = 0; pg < ubuf->pagecount; pg++)
+ pages[pg] = &ubuf->folios[pg]->page;
+
+ vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+ kfree(pages);
if (!vaddr)
return -EINVAL;
@@ -107,7 +117,8 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
goto err_alloc;
for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
- sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+ sg_set_folio(sgl, ubuf->folios[i], PAGE_SIZE,
+ ubuf->offsets[i]);
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
@@ -152,9 +163,9 @@ static void release_udmabuf(struct dma_buf *buf)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
for (pg = 0; pg < ubuf->pagecount; pg++)
- put_page(ubuf->pages[pg]);
+ folio_put(ubuf->folios[pg]);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
}
@@ -215,36 +226,33 @@ static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t mapidx = offset >> huge_page_shift(hpstate);
pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct page *hpage = NULL;
- struct folio *folio;
+ struct folio *folio = NULL;
pgoff_t pgidx;
mapidx <<= huge_page_order(hpstate);
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!hpage) {
+ if (!folio) {
folio = __filemap_get_folio(memfd->f_mapping,
mapidx,
FGP_ACCESSED, 0);
if (IS_ERR(folio))
return PTR_ERR(folio);
-
- hpage = &folio->page;
}
- get_page(hpage);
- ubuf->pages[*pgbuf] = hpage;
+ folio_get(folio);
+ ubuf->folios[*pgbuf] = folio;
ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
(*pgbuf)++;
if (++subpgoff == maxsubpgs) {
- put_page(hpage);
- hpage = NULL;
+ folio_put(folio);
+ folio = NULL;
subpgoff = 0;
mapidx += pages_per_huge_page(hpstate);
}
}
- if (hpage)
- put_page(hpage);
+ if (folio)
+ folio_put(folio);
return 0;
}
@@ -254,31 +262,69 @@ static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t *pgbuf)
{
pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio = NULL;
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(memfd->f_mapping,
- pgoff + pgidx);
- if (IS_ERR(page))
- return PTR_ERR(page);
+ folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
- ubuf->pages[*pgbuf] = page;
+ ubuf->folios[*pgbuf] = folio;
(*pgbuf)++;
}
return 0;
}
+static int check_memfd_seals(struct file *memfd)
+{
+ int seals;
+
+ if (!memfd)
+ return -EBADFD;
+
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+ seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
+ if (seals == -EINVAL)
+ return -EBADFD;
+
+ if ((seals & SEALS_WANTED) != SEALS_WANTED ||
+ (seals & SEALS_DENIED) != 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int export_udmabuf(struct udmabuf *ubuf,
+ struct miscdevice *device,
+ u32 flags)
+{
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+ exp_info.size = ubuf->pagecount << PAGE_SHIFT;
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+ buf = dma_buf_export(&exp_info);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
+ return dma_buf_fd(buf, flags);
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
struct file *memfd = NULL;
struct udmabuf *ubuf;
- struct dma_buf *buf;
- pgoff_t pgcnt, pgbuf = 0, pglimit;
- int seals, ret = -EINVAL;
+ int ret = -EINVAL;
u32 i, flags;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -299,9 +345,9 @@ static long udmabuf_create(struct miscdevice *device,
if (!ubuf->pagecount)
goto err;
- ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
GFP_KERNEL);
- if (!ubuf->pages) {
+ if (!ubuf->folios) {
ret = -ENOMEM;
goto err;
}
@@ -314,18 +360,9 @@ static long udmabuf_create(struct miscdevice *device,
pgbuf = 0;
for (i = 0; i < head->count; i++) {
- ret = -EBADFD;
memfd = fget(list[i].memfd);
- if (!memfd)
- goto err;
- if (!shmem_file(memfd) && !is_file_hugepages(memfd))
- goto err;
- seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
- if (seals == -EINVAL)
- goto err;
- ret = -EINVAL;
- if ((seals & SEALS_WANTED) != SEALS_WANTED ||
- (seals & SEALS_DENIED) != 0)
+ ret = check_memfd_seals(memfd);
+ if (ret < 0)
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
@@ -341,33 +378,22 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
fput(memfd);
- memfd = NULL;
}
- exp_info.ops = &udmabuf_ops;
- exp_info.size = ubuf->pagecount << PAGE_SHIFT;
- exp_info.priv = ubuf;
- exp_info.flags = O_RDWR;
-
- ubuf->device = device;
- buf = dma_buf_export(&exp_info);
- if (IS_ERR(buf)) {
- ret = PTR_ERR(buf);
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+ ret = export_udmabuf(ubuf, device, flags);
+ if (ret < 0)
goto err;
- }
- flags = 0;
- if (head->flags & UDMABUF_FLAGS_CLOEXEC)
- flags |= O_CLOEXEC;
- return dma_buf_fd(buf, flags);
+ return ret;
err:
while (pgbuf > 0)
- put_page(ubuf->pages[--pgbuf]);
+ folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
return ret;
}
--
2.43.0
^ permalink raw reply related [relevance 4%]
* [PATCH v14 5/8] udmabuf: Add back support for mapping hugetlb pages
2024-04-11 6:59 4% [PATCH v14 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
@ 2024-04-11 6:59 4% ` Vivek Kasireddy
2024-04-11 6:59 4% ` [PATCH v14 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
2024-04-11 6:59 5% ` [PATCH v14 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-11 6:59 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
Dongwon Kim, Junxiao Chang
A user or admin can configure a VMM (Qemu) Guest's memory to be
backed by hugetlb pages for various reasons. However, a Guest OS
would still allocate (and pin) buffers that are backed by regular
4k sized pages. In order to map these buffers and create dma-bufs
for them on the Host, we first need to find the hugetlb pages where
the buffer allocations are located, determine the offsets of the
individual chunks within those pages, and use this information to
populate a scatterlist.
Testcase: default_hugepagesz=2M hugepagesz=2M hugepages=2500 options
were passed to the Host kernel and Qemu was launched with these
relevant options: qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-display gtk,gl=on
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
Replacing -display gtk,gl=on with -display gtk,gl=off above would
exercise the mmap handler.
Cc: David Hildenbrand <david@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com> (v2)
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 122 +++++++++++++++++++++++++++++++-------
1 file changed, 101 insertions(+), 21 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 820c993c8659..274defd3fa3e 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -10,6 +10,7 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/shmem_fs.h>
+#include <linux/hugetlb.h>
#include <linux/slab.h>
#include <linux/udmabuf.h>
#include <linux/vmalloc.h>
@@ -28,6 +29,7 @@ struct udmabuf {
struct page **pages;
struct sg_table *sg;
struct miscdevice *device;
+ pgoff_t *offsets;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -41,6 +43,8 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
+
return vmf_insert_pfn(vma, vmf->address, pfn);
}
@@ -90,23 +94,29 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
{
struct udmabuf *ubuf = buf->priv;
struct sg_table *sg;
+ struct scatterlist *sgl;
+ unsigned int i = 0;
int ret;
sg = kzalloc(sizeof(*sg), GFP_KERNEL);
if (!sg)
return ERR_PTR(-ENOMEM);
- ret = sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount,
- 0, ubuf->pagecount << PAGE_SHIFT,
- GFP_KERNEL);
+
+ ret = sg_alloc_table(sg, ubuf->pagecount, GFP_KERNEL);
if (ret < 0)
- goto err;
+ goto err_alloc;
+
+ for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
+ sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
- goto err;
+ goto err_map;
return sg;
-err:
+err_map:
sg_free_table(sg);
+err_alloc:
kfree(sg);
return ERR_PTR(ret);
}
@@ -143,6 +153,7 @@ static void release_udmabuf(struct dma_buf *buf)
for (pg = 0; pg < ubuf->pagecount; pg++)
put_page(ubuf->pages[pg]);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
}
@@ -196,17 +207,77 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
+static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ struct hstate *hpstate = hstate_file(memfd);
+ pgoff_t mapidx = offset >> huge_page_shift(hpstate);
+ pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+ pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+ struct page *hpage = NULL;
+ struct folio *folio;
+ pgoff_t pgidx;
+
+ mapidx <<= huge_page_order(hpstate);
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ if (!hpage) {
+ folio = __filemap_get_folio(memfd->f_mapping,
+ mapidx,
+ FGP_ACCESSED, 0);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+
+ hpage = &folio->page;
+ }
+
+ get_page(hpage);
+ ubuf->pages[*pgbuf] = hpage;
+ ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
+ (*pgbuf)++;
+ if (++subpgoff == maxsubpgs) {
+ put_page(hpage);
+ hpage = NULL;
+ subpgoff = 0;
+ mapidx += pages_per_huge_page(hpstate);
+ }
+ }
+
+ if (hpage)
+ put_page(hpage);
+
+ return 0;
+}
+
+static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
+ struct page *page;
+
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ page = shmem_read_mapping_page(memfd->f_mapping,
+ pgoff + pgidx);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ ubuf->pages[*pgbuf] = page;
+ (*pgbuf)++;
+ }
+
+ return 0;
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
- struct address_space *mapping = NULL;
struct udmabuf *ubuf;
struct dma_buf *buf;
- pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
- struct page *page;
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
int seals, ret = -EINVAL;
u32 i, flags;
@@ -234,6 +305,12 @@ static long udmabuf_create(struct miscdevice *device,
ret = -ENOMEM;
goto err;
}
+ ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
pgbuf = 0;
for (i = 0; i < head->count; i++) {
@@ -241,8 +318,7 @@ static long udmabuf_create(struct miscdevice *device,
memfd = fget(list[i].memfd);
if (!memfd)
goto err;
- mapping = memfd->f_mapping;
- if (!shmem_mapping(mapping))
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
goto err;
seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
if (seals == -EINVAL)
@@ -251,16 +327,19 @@ static long udmabuf_create(struct miscdevice *device,
if ((seals & SEALS_WANTED) != SEALS_WANTED ||
(seals & SEALS_DENIED) != 0)
goto err;
- pgoff = list[i].offset >> PAGE_SHIFT;
- pgcnt = list[i].size >> PAGE_SHIFT;
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(mapping, pgoff + pgidx);
- if (IS_ERR(page)) {
- ret = PTR_ERR(page);
- goto err;
- }
- ubuf->pages[pgbuf++] = page;
- }
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+ if (is_file_hugepages(memfd))
+ ret = handle_hugetlb_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ else
+ ret = handle_shmem_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ if (ret < 0)
+ goto err;
+
fput(memfd);
memfd = NULL;
}
@@ -287,6 +366,7 @@ static long udmabuf_create(struct miscdevice *device,
put_page(ubuf->pages[--pgbuf]);
if (memfd)
fput(memfd);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
return ret;
--
2.43.0
^ permalink raw reply related [relevance 4%]
* [PATCH v14 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
@ 2024-04-11 6:59 4% Vivek Kasireddy
2024-04-11 6:59 4% ` [PATCH v14 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-11 6:59 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Christoph Hellwig, Andrew Morton, Daniel Vetter, Hugh Dickins,
Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
Junxiao Chang
Currently, some drivers (e.g., Udmabuf) that want to longterm-pin
the pages/folios associated with a memfd, do so by simply taking a
reference on them. This is not desirable because the pages/folios
may reside in Movable zone or CMA block.
Therefore, having drivers use memfd_pin_folios() API ensures that
the folios are appropriately pinned via FOLL_PIN for longterm DMA.
This patchset also introduces a few helpers and converts the Udmabuf
driver to use folios and memfd_pin_folios() API to longterm-pin
the folios for DMA. Two new Udmabuf selftests are also included to
test the driver and the new API.
---
Patchset overview:
Patch 1-2: GUP helpers to migrate and unpin one or more folios
Patch 3: Introduce memfd_pin_folios() API
Patch 4-5: Udmabuf driver bug fixes for Qemu + hugetlb=on, blob=true case
Patch 6-8: Convert Udmabuf to use memfd_pin_folios() and add selftests
This series is tested using the following methods:
- Run the subtests added in Patch 8
- Run Qemu (master) with the following options and a few additional
patches to Spice:
qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
- Run source ./run_vmtests.sh -t gup_test -a to check for GUP regressions
Changelog:
v13 -> v14:
- Drop the redundant comments before check_and_migrate_movable_pages()
and refer to check_and_migrate_movable_folios() comments (David)
- Use appropriate ksft_* functions for printing and KSFT_* codes for
exit() in udmabuf selftest (Shuah)
- Add Mike Kravetz's suggested-by tag in udmabuf selftest patch (Shuah)
- Collect Ack and Rb tags from David
v12 -> v13: (suggestions from David)
- Drop the sanity checks in unpin_folio()/unpin_folios() due to
unavailability of per folio anon-exclusive flag
- Export unpin_folio()/unpin_folios() using EXPORT_SYMBOL_GPL
instead of EXPORT_SYMBOL
- Have check_and_migrate_movable_pages() just call
check_and_migrate_movable_folios() instead of calling other helpers
- Slightly improve the comments and commit messages
v11 -> v12:
- Rebased and tested on mm-unstable
v10 -> v11:
- Remove the version string from the patch subject (Andrew)
- Move the changelog from the patches into the cover letter
- Rearrange the patchset to have GUP patches at the beginning
v9 -> v10:
- Introduce and use unpin_folio(), unpin_folios() and
check_and_migrate_movable_folios() helpers
- Use a list to track the folios that need to be unpinned in udmabuf
v8 -> v9: (suggestions from Matthew)
- Drop the extern while declaring memfd_alloc_folio()
- Fix memfd_alloc_folio() declaration to have it return struct folio *
instead of struct page * when CONFIG_MEMFD_CREATE is not defined
- Use folio_pfn() on the folio instead of page_to_pfn() on head page
in udmabuf
- Don't split the arguments to shmem_read_folio() on multiple lines
in udmabuf
v7 -> v8: (suggestions from David)
- Have caller pass [start, end], max_folios instead of start, nr_pages
- Replace offsets array with just offset into the first page
- Add comments explaining the need for next_idx
- Pin (and return) the folio (via FOLL_PIN) only once
v6 -> v7:
- Rename this API to memfd_pin_folios() and make it return folios
and offsets instead of pages (David)
- Don't continue processing the folios in the batch returned by
filemap_get_folios_contig() if they do not have correct next_idx
- Add the R-b tag from Christoph
v5 -> v6: (suggestions from Christoph)
- Rename this API to memfd_pin_user_pages() to make it clear that it
is intended for memfds
- Move the memfd page allocation helper from gup.c to memfd.c
- Fix indentation errors in memfd_pin_user_pages()
- For contiguous ranges of folios, use a helper such as
filemap_get_folios_contig() to lookup the page cache in batches
- Split the processing of hugetlb or shmem pages into helpers to
simplify the code in udmabuf_create()
v4 -> v5: (suggestions from David)
- For hugetlb case, ensure that we only obtain head pages from the
mapping by using __filemap_get_folio() instead of find_get_page_flags()
- Handle -EEXIST when two or more potential users try to simultaneously
add a huge page to the mapping by forcing them to retry on failure
v3 -> v4:
- Remove the local variable "page" and instead use 3 return statements
in alloc_file_page() (David)
- Add the R-b tag from David
v2 -> v3: (suggestions from David)
- Enclose the huge page allocation code with #ifdef CONFIG_HUGETLB_PAGE
(Build error reported by kernel test robot <lkp@intel.com>)
- Don't forget memalloc_pin_restore() on non-migration related errors
- Improve the readability of the cleanup code associated with
non-migration related errors
- Augment the comments by describing FOLL_LONGTERM like behavior
- Include the R-b tag from Jason
v1 -> v2:
- Drop gup_flags and improve comments and commit message (David)
- Allocate a page if we cannot find one in the page cache, for the
hugetlbfs case as well (David)
- Don't unpin pages if there is a migration related failure (David)
- Drop the unnecessary nr_pages <= 0 check (Jason)
- Have the caller of the API pass in file * instead of fd (Jason)
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Vivek Kasireddy (8):
mm/gup: Introduce unpin_folio/unpin_folios helpers
mm/gup: Introduce check_and_migrate_movable_folios()
mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
udmabuf: Use vmf_insert_pfn and VM_PFNMAP for handling mmap
udmabuf: Add back support for mapping hugetlb pages
udmabuf: Convert udmabuf driver to use folios
udmabuf: Pin the pages using memfd_pin_folios() API
selftests/udmabuf: Add tests to verify data after page migration
drivers/dma-buf/udmabuf.c | 231 +++++++++----
include/linux/memfd.h | 5 +
include/linux/mm.h | 5 +
mm/gup.c | 307 +++++++++++++++---
mm/memfd.c | 35 ++
.../selftests/drivers/dma-buf/udmabuf.c | 214 ++++++++++--
6 files changed, 659 insertions(+), 138 deletions(-)
--
2.43.0
^ permalink raw reply [relevance 4%]
* Re: [RFC PATCH] mm: move xa forward when run across zombie page
@ 2024-04-11 7:04 0% ` Zhaoyang Huang
0 siblings, 0 replies; 200+ results
From: Zhaoyang Huang @ 2024-04-11 7:04 UTC (permalink / raw)
To: Dave Chinner
Cc: Matthew Wilcox, zhaoyang.huang, Andrew Morton, linux-mm,
linux-kernel, steve.kang, baocong.liu, linux-fsdevel,
Brian Foster, Christoph Hellwig, David Hildenbrand
On Tue, Nov 1, 2022 at 3:17 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Thu, Oct 20, 2022 at 10:52:14PM +0100, Matthew Wilcox wrote:
> > On Thu, Oct 20, 2022 at 09:04:24AM +1100, Dave Chinner wrote:
> > > On Wed, Oct 19, 2022 at 04:23:10PM +0100, Matthew Wilcox wrote:
> > > > On Wed, Oct 19, 2022 at 09:30:42AM +1100, Dave Chinner wrote:
> > > > > This is reading and writing the same amount of file data at the
> > > > > application level, but once the data has been written and kicked out
> > > > > of the page cache it seems to require an awful lot more read IO to
> > > > > get it back to the application. i.e. this looks like mmap() is
> > > > > readahead thrashing severely, and eventually it livelocks with this
> > > > > sort of report:
> > > > >
> > > > > [175901.982484] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> > > > > [175901.985095] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15): P25728
> > > > > [175901.987996] (detected by 0, t=97399871 jiffies, g=15891025, q=1972622 ncpus=32)
> > > > > [175901.991698] task:test_write state:R running task stack:12784 pid:25728 ppid: 25696 flags:0x00004002
> > > > > [175901.995614] Call Trace:
> > > > > [175901.996090] <TASK>
> > > > > [175901.996594] ? __schedule+0x301/0xa30
> > > > > [175901.997411] ? sysvec_apic_timer_interrupt+0xb/0x90
> > > > > [175901.998513] ? sysvec_apic_timer_interrupt+0xb/0x90
> > > > > [175901.999578] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> > > > > [175902.000714] ? xas_start+0x53/0xc0
> > > > > [175902.001484] ? xas_load+0x24/0xa0
> > > > > [175902.002208] ? xas_load+0x5/0xa0
> > > > > [175902.002878] ? __filemap_get_folio+0x87/0x340
> > > > > [175902.003823] ? filemap_fault+0x139/0x8d0
> > > > > [175902.004693] ? __do_fault+0x31/0x1d0
> > > > > [175902.005372] ? __handle_mm_fault+0xda9/0x17d0
> > > > > [175902.006213] ? handle_mm_fault+0xd0/0x2a0
> > > > > [175902.006998] ? exc_page_fault+0x1d9/0x810
> > > > > [175902.007789] ? asm_exc_page_fault+0x22/0x30
> > > > > [175902.008613] </TASK>
> > > > >
> > > > > Given that filemap_fault on XFS is probably trying to map large
> > > > > folios, I do wonder if this is a result of some kind of race with
> > > > > teardown of a large folio...
> > > >
> > > > It doesn't matter whether we're trying to map a large folio; it
> > > > matters whether a large folio was previously created in the cache.
> > > > Through the magic of readahead, it may well have been. I suspect
> > > > it's not teardown of a large folio, but splitting. Removing a
> > > > page from the page cache stores to the pointer in the XArray
> > > > first (either NULL or a shadow entry), then decrements the refcount.
> > > >
> > > > We must be observing a frozen folio. There are a number of places
> > > > in the MM which freeze a folio, but the obvious one is splitting.
> > > > That looks like this:
> > > >
> > > > local_irq_disable();
> > > > if (mapping) {
> > > > xas_lock(&xas);
> > > > (...)
> > > > if (folio_ref_freeze(folio, 1 + extra_pins)) {
> > >
> > > But the lookup is not doing anything to prevent the split on the
> > > frozen page from making progress, right? It's not holding any folio
> > > references, and it's not holding the mapping tree lock, either. So
> > > how does the lookup in progress prevent the page split from making
> > > progress?
> >
> > My thinking was that it keeps hammering the ->refcount field in
> > struct folio. That might prevent a thread on a different socket
> > from making forward progress. In contrast, spinlocks are designed
> > to be fair under contention, so by spinning on an actual lock, we'd
> > remove contention on the folio.
> >
> > But I think the tests you've done refute that theory. I'm all out of
> > ideas at the moment. Either we have a frozen folio from somebody who
> > doesn't hold the lock, or we have someone who's left a frozen folio in
> > the page cache. I'm leaning towards that explanation at the moment,
> > but I don't have a good suggestion for debugging.
>
> It's something else. I got gdb attached to qemu and single stepped
> the looping lookup. The context I caught this time is truncate after
> unlink:
>
> (gdb) bt
> #0 find_get_entry (mark=<optimized out>, max=<optimized out>, xas=<optimized out>) at mm/filemap.c:2014
> #1 find_lock_entries (mapping=mapping@entry=0xffff8882445e2118, start=start@entry=25089, end=end@entry=18446744073709551614,
> fbatch=fbatch@entry=0xffffc900082a7dd8, indices=indices@entry=0xffffc900082a7d60) at mm/filemap.c:2095
> #2 0xffffffff8128f024 in truncate_inode_pages_range (mapping=mapping@entry=0xffff8882445e2118, lstart=lstart@entry=0, lend=lend@entry=-1)
> at mm/truncate.c:364
> #3 0xffffffff8128f452 in truncate_inode_pages (lstart=0, mapping=0xffff8882445e2118) at mm/truncate.c:452
> #4 0xffffffff8136335d in evict (inode=inode@entry=0xffff8882445e1f78) at fs/inode.c:666
> #5 0xffffffff813636cc in iput_final (inode=0xffff8882445e1f78) at fs/inode.c:1747
> #6 0xffffffff81355b8b in do_unlinkat (dfd=dfd@entry=10, name=0xffff88834170e000) at fs/namei.c:4326
> #7 0xffffffff81355cc3 in __do_sys_unlinkat (flag=<optimized out>, pathname=<optimized out>, dfd=<optimized out>) at fs/namei.c:4362
> #8 __se_sys_unlinkat (flag=<optimized out>, pathname=<optimized out>, dfd=<optimized out>) at fs/namei.c:4355
> #9 __x64_sys_unlinkat (regs=<optimized out>) at fs/namei.c:4355
> #10 0xffffffff81e92e35 in do_syscall_x64 (nr=<optimized out>, regs=0xffffc900082a7f58) at arch/x86/entry/common.c:50
> #11 do_syscall_64 (regs=0xffffc900082a7f58, nr=<optimized out>) at arch/x86/entry/common.c:80
> #12 0xffffffff82000087 in entry_SYSCALL_64 () at arch/x86/entry/entry_64.S:120
> #13 0x0000000000000000 in ?? ()
>
> The find_lock_entries() call is being asked to start at index
> 25089, and we are spinning on a folio we find because
> folio_try_get_rcu(folio) is failing - the folio ref count is zero.
>
> The xas state on lookup is:
>
> (gdb) p *xas
> $6 = {xa = 0xffff8882445e2120, xa_index = 25092, xa_shift = 0 '\000', xa_sibs = 0 '\000', xa_offset = 4 '\004', xa_pad = 0 '\000',
> xa_node = 0xffff888144c15918, xa_alloc = 0x0 <fixed_percpu_data>, xa_update = 0x0 <fixed_percpu_data>, xa_lru = 0x0 <fixed_percpu_data>
>
> indicating that we are trying to look up index 25092 (3 pages
> further in than the start of the batch), and the folio that this
> keeps returning is this:
>
> (gdb) p *folio
> $7 = {{{flags = 24769796876795904, {lru = {next = 0xffffea0005690008, prev = 0xffff88823ffd5f50}, {__filler = 0xffffea0005690008,
> mlock_count = 1073569616}}, mapping = 0x0 <fixed_percpu_data>, index = 18688, private = 0x8 <fixed_percpu_data+8>, _mapcount = {
> counter = -129}, _refcount = {counter = 0}, memcg_data = 0}, page = {flags = 24769796876795904, {{{lru = {next = 0xffffea0005690008,
> prev = 0xffff88823ffd5f50}, {__filler = 0xffffea0005690008, mlock_count = 1073569616}, buddy_list = {
> next = 0xffffea0005690008, prev = 0xffff88823ffd5f50}, pcp_list = {next = 0xffffea0005690008, prev = 0xffff88823ffd5f50}},
> mapping = 0x0 <fixed_percpu_data>, index = 18688, private = 8}, {pp_magic = 18446719884544507912, pp = 0xffff88823ffd5f50,
> _pp_mapping_pad = 0, dma_addr = 18688, {dma_addr_upper = 8, pp_frag_count = {counter = 8}}}, {
> compound_head = 18446719884544507912, compound_dtor = 80 'P', compound_order = 95 '_', compound_mapcount = {counter = -30590},
> compound_pincount = {counter = 0}, compound_nr = 0}, {_compound_pad_1 = 18446719884544507912,
> _compound_pad_2 = 18446612691733536592, deferred_list = {next = 0x0 <fixed_percpu_data>,
> prev = 0x4900 <irq_stack_backing_store+10496>}}, {_pt_pad_1 = 18446719884544507912, pmd_huge_pte = 0xffff88823ffd5f50,
> _pt_pad_2 = 0, {pt_mm = 0x4900 <irq_stack_backing_store+10496>, pt_frag_refcount = {counter = 18688}},
> ptl = 0x8 <fixed_percpu_data+8>}, {pgmap = 0xffffea0005690008, zone_device_data = 0xffff88823ffd5f50}, callback_head = {
> next = 0xffffea0005690008, func = 0xffff88823ffd5f50}}, {_mapcount = {counter = -129}, page_type = 4294967167}, _refcount = {
> counter = 0}, memcg_data = 0}}, _flags_1 = 24769796876795904, __head = 0, _folio_dtor = 3 '\003', _folio_order = 8 '\b',
> _total_mapcount = {counter = -1}, _pincount = {counter = 0}, _folio_nr_pages = 0}
> (gdb)
>
> The folio has a NULL mapping, and an index of 18688, which means
> even if it was not a folio that has been invalidated or freed, the
> index is way outside the range we are looking for.
>
> If I step it round the lookup loop, xas does not change, and the
> same folio is returned every time through the loop. Perhaps
> the mapping tree itself might be corrupt???
>
> It's simple enough to stop the machine once it has become stuck to
> observe the iteration and dump structures, just tell me what you
> need to know from here...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
This bug has emerged again, and I would like to propose a reproduction
sequence for it which has nothing to do with the scheduler (this could
be wrong, and sorry for wasting your time if so):
Thread_isolate:
1. alloc_contig_range->isolate_migratepages_block isolates a number of
pages onto cc->migratepages
(the folio has refcount: 1 + n (alloc_pages, page_cache))
2. alloc_contig_range->migrate_pages->folio_ref_freeze(folio, 1 +
extra_pins) sets the folio's refcount to 0
3. alloc_contig_range->migrate_pages->xas_split splits the folio,
storing a folio in each slot from slot[offset] to slot[offset + sibs]
Thread_truncate:
4. enters the livelock via the chain below
rcu_read_lock();
find_get_entry
folio = xas_find
if(!folio_try_get_rcu)
xas_reset;
rcu_read_unlock();
4'. alloc_contig_range->migrate_pages->__split_huge_page, which would
set the folio's refcount to 2 and break the livelock, but is blocked by
contention on lruvec->lock
If the above call chain makes sense, could we solve this with the
modification below, which makes split_folio and __split_huge_page
atomic with respect to each other by taking lruvec->lock earlier than
we do now?
int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order)
{
+ lruvec = folio_lruvec_lock(folio);
if (mapping) {
int nr = folio_nr_pages(folio);
xas_split(&xas, folio, folio_order(folio));
if (folio_test_pmd_mappable(folio) &&
new_order < HPAGE_PMD_ORDER) {
if (folio_test_swapbacked(folio)) {
__lruvec_stat_mod_folio(folio,
NR_SHMEM_THPS, -nr);
} else {
__lruvec_stat_mod_folio(folio,
NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
}
}
}
__split_huge_page(page, list, end, new_order);
+ folio_lruvec_unlock(folio);
* Re: [PATCH vfs.all 22/26] block: stash a bdev_file to read/write raw blcok_device
@ 2024-04-09 11:53 7% ` Yu Kuai
From: Yu Kuai @ 2024-04-09 11:53 UTC (permalink / raw)
To: Christian Brauner, Yu Kuai
Cc: jack, hch, viro, axboe, linux-fsdevel, linux-block, yi.zhang,
yangerkun, yukuai (C)
Hi,
On 2024/04/09 18:23, Christian Brauner wrote:
>> +static int __stash_bdev_file(struct block_device *bdev)
>
> I've said that on the previous version. I think that this is really
> error prone and seems overall like an unpleasant solution. I would
> really like to avoid going down that route.
Yes, I see your point, and it's indeed reasonable.
>
> I think a chunk of this series is good though, specifically the simple
> conversions of individual filesystems where file_inode() or f_mapping
> makes sense. There are a few exceptions where we might be better off
> replacing the current APIs with something else (I think Al touched on
> that somewhere further down the thread).
>
> I'd suggest splitting the straightforward bd_inode removals into a
> separate series that I can take.
>
> Thanks for working on all of this. It's certainly a contentious area.
How about the following simple patch to expose bdev_mapping() to
fs/buffer.c for now?
Thanks,
Kuai
diff --git a/block/blk.h b/block/blk.h
index a34bb590cce6..f8bcb43a12c6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -428,7 +428,6 @@ static inline int blkdev_zone_mgmt_ioctl(struct block_device *bdev,
#endif /* CONFIG_BLK_DEV_ZONED */
struct inode *bdev_inode(struct block_device *bdev);
-struct address_space *bdev_mapping(struct block_device *bdev);
struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
void bdev_add(struct block_device *bdev, dev_t dev);
diff --git a/fs/buffer.c b/fs/buffer.c
index 4f73d23c2c46..e2bd19e3fe48 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -189,8 +189,8 @@ EXPORT_SYMBOL(end_buffer_write_sync);
static struct buffer_head *
__find_get_block_slow(struct block_device *bdev, sector_t block)
{
- struct inode *bd_inode = bdev->bd_inode;
- struct address_space *bd_mapping = bd_inode->i_mapping;
+ struct address_space *bd_mapping = bdev_mapping(bdev);
+ struct inode *bd_inode = bd_mapping->host;
struct buffer_head *ret = NULL;
pgoff_t index;
struct buffer_head *bh;
@@ -1034,12 +1034,12 @@ static sector_t folio_init_buffers(struct folio *folio,
static bool grow_dev_folio(struct block_device *bdev, sector_t block,
pgoff_t index, unsigned size, gfp_t gfp)
{
- struct inode *inode = bdev->bd_inode;
+ struct address_space *bd_mapping = bdev_mapping(bdev);
struct folio *folio;
struct buffer_head *bh;
sector_t end_block = 0;
- folio = __filemap_get_folio(inode->i_mapping, index,
+ folio = __filemap_get_folio(bd_mapping, index,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
if (IS_ERR(folio))
return false;
@@ -1073,10 +1073,10 @@ static bool grow_dev_folio(struct block_device *bdev, sector_t block,
* lock to be atomic wrt __find_get_block(), which does not
* run under the folio lock.
*/
- spin_lock(&inode->i_mapping->i_private_lock);
+ spin_lock(&bd_mapping->i_private_lock);
link_dev_buffers(folio, bh);
end_block = folio_init_buffers(folio, bdev, size);
- spin_unlock(&inode->i_mapping->i_private_lock);
+ spin_unlock(&bd_mapping->i_private_lock);
unlock:
folio_unlock(folio);
folio_put(folio);
@@ -1463,7 +1463,7 @@ __bread_gfp(struct block_device *bdev, sector_t block,
{
struct buffer_head *bh;
- gfp |= mapping_gfp_constraint(bdev->bd_inode->i_mapping, ~__GFP_FS);
+ gfp |= mapping_gfp_constraint(bdev_mapping(bdev), ~__GFP_FS);
/*
* Prefer looping in the allocator rather than here, at least that
@@ -1696,8 +1696,8 @@ EXPORT_SYMBOL(create_empty_buffers);
*/
void clean_bdev_aliases(struct block_device *bdev, sector_t block, sector_t len)
{
- struct inode *bd_inode = bdev->bd_inode;
- struct address_space *bd_mapping = bd_inode->i_mapping;
+ struct address_space *bd_mapping = bdev_mapping(bdev);
+ struct inode *bd_inode = bd_mapping->host;
struct folio_batch fbatch;
pgoff_t index = ((loff_t)block << bd_inode->i_blkbits) / PAGE_SIZE;
pgoff_t end;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bc840e0fb6e5..bbae55535d53 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1527,6 +1527,7 @@ void blkdev_put_no_open(struct block_device *bdev);
struct block_device *I_BDEV(struct inode *inode);
struct block_device *file_bdev(struct file *bdev_file);
+struct address_space *bdev_mapping(struct block_device *bdev);
bool disk_live(struct gendisk *disk);
unsigned int block_size(struct block_device *bdev);
> .
>
* Re: [PATCH v6 00/37] Memory allocation profiling
2024-04-05 13:37 0% ` Klara Modin
@ 2024-04-05 14:14 0% ` Suren Baghdasaryan
From: Suren Baghdasaryan @ 2024-04-05 14:14 UTC (permalink / raw)
To: Klara Modin
Cc: akpm, kent.overstreet, mhocko, vbabka, hannes, roman.gushchin,
mgorman, dave, willy, liam.howlett, penguin-kernel, corbet, void,
peterz, juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
On Fri, Apr 5, 2024 at 6:37 AM Klara Modin <klarasmodin@gmail.com> wrote:
>
> Hi,
>
> On 2024-03-21 17:36, Suren Baghdasaryan wrote:
> > Overview:
> > Low overhead [1] per-callsite memory allocation profiling. Not just for
> > debug kernels, overhead low enough to be deployed in production.
> >
> > Example output:
> > root@moria-kvm:~# sort -rn /proc/allocinfo
> > 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> > 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> > 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> > 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> > 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> > 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> > 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> > 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> > 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> > 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> > 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
> > ...
> >
> > Since v5 [2]:
> > - Added Reviewed-by and Acked-by, per Vlastimil Babka and Miguel Ojeda
> > - Changed pgalloc_tag_{add|sub} to use number of pages instead of order, per Matthew Wilcox
> > - Changed pgalloc_tag_sub_bytes to pgalloc_tag_sub_pages and adjusted the usage, per Matthew Wilcox
> > - Moved static key check before prepare_slab_obj_exts_hook(), per Vlastimil Babka
> > - Fixed RUST helper, per Miguel Ojeda
> > - Fixed documentation, per Randy Dunlap
> > - Rebased over mm-unstable
> >
> > Usage:
> > kconfig options:
> > - CONFIG_MEM_ALLOC_PROFILING
> > - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> > - CONFIG_MEM_ALLOC_PROFILING_DEBUG
> > adds warnings for allocations that weren't accounted because of a
> > missing annotation
> >
> > sysctl:
> > /proc/sys/vm/mem_profiling
> >
> > Runtime info:
> > /proc/allocinfo
> >
> > Notes:
> >
> > [1]: Overhead
> > To measure the overhead we are comparing the following configurations:
> > (1) Baseline with CONFIG_MEMCG_KMEM=n
> > (2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> > CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n)
> > (3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> > CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y)
> > (4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
> > CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
> > (5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
> > (6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> > CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
> > (7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> > CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
> >
> > Performance overhead:
> > To evaluate performance we implemented an in-kernel test executing
> > multiple get_free_page/free_page and kmalloc/kfree calls with allocation
> > sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
> > affinity set to a specific CPU to minimize the noise. Below are results
> > from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
> > 56 core Intel Xeon:
> >
> > kmalloc pgalloc
> > (1 baseline) 6.764s 16.902s
> > (2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
> > (3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
> > (4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
> > (5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
> > (6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
> > (7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
> >
> > Memory overhead:
> > Kernel size:
> >
> > text data bss dec diff
> > (1) 26515311 18890222 17018880 62424413
> > (2) 26524728 19423818 16740352 62688898 264485
> > (3) 26524724 19423818 16740352 62688894 264481
> > (4) 26524728 19423818 16740352 62688898 264485
> > (5) 26541782 18964374 16957440 62463596 39183
> >
> > Memory consumption on a 56 core Intel CPU with 125GB of memory:
> > Code tags: 192 kB
> > PageExts: 262144 kB (256MB)
> > SlabExts: 9876 kB (9.6MB)
> > PcpuExts: 512 kB (0.5MB)
> >
> > Total overhead is 0.2% of total memory.
> >
> > Benchmarks:
> >
> > Hackbench tests run 100 times:
> > hackbench -s 512 -l 200 -g 15 -f 25 -P
> > baseline disabled profiling enabled profiling
> > avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
> > stdev 0.0137 0.0188 0.0077
> >
> >
> > hackbench -l 10000
> > baseline disabled profiling enabled profiling
> > avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
> > stdev 0.0933 0.0286 0.0489
> >
> > stress-ng tests:
> > stress-ng --class memory --seq 4 -t 60
> > stress-ng --class cpu --seq 4 -t 60
> > Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
> >
> > [2] https://lore.kernel.org/all/20240306182440.2003814-1-surenb@google.com/
>
> If I enable this, I consistently get percpu allocation failures. I can
> occasionally reproduce it in qemu. I've attached the logs and my config,
> please let me know if there's anything else that could be relevant.
Thanks for the report!
In debug_alloc_profiling.log I see:
[ 7.445127] percpu: limit reached, disable warning
That's probably the reason. I'll take a closer look at the cause of
that and how we can fix it.
In qemu-alloc3.log I see a couple of warnings:
[ 1.111620] alloc_tag was not set
[ 1.111880] WARNING: CPU: 0 PID: 164 at
include/linux/alloc_tag.h:118 kfree (./include/linux/alloc_tag.h:118
(discriminator 1) ./include/linux/alloc_tag.h:161 (discriminator 1)
mm/slub.c:2043 ...
[ 1.161710] alloc_tag was not cleared (got tag for fs/squashfs/cache.c:413)
[ 1.162289] WARNING: CPU: 0 PID: 195 at
include/linux/alloc_tag.h:109 kmalloc_trace_noprof
(./include/linux/alloc_tag.h:109 (discriminator 1)
./include/linux/alloc_tag.h:149 (discriminator 1) ...
This means we missed instrumenting some allocation. Can you please
check whether disabling CONFIG_MEM_ALLOC_PROFILING_DEBUG fixes the QEMU case?
In the meantime I'll try to reproduce and fix this.
Thanks,
Suren.
>
> Kind regards,
> Klara Modin
* Re: [PATCH v6 00/37] Memory allocation profiling
2024-03-21 16:36 3% [PATCH v6 00/37] Memory allocation profiling Suren Baghdasaryan
` (2 preceding siblings ...)
2024-03-21 20:41 0% ` [PATCH v6 00/37] Memory allocation profiling Andrew Morton
@ 2024-04-05 13:37 0% ` Klara Modin
2024-04-05 14:14 0% ` Suren Baghdasaryan
From: Klara Modin @ 2024-04-05 13:37 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
Hi,
On 2024-03-21 17:36, Suren Baghdasaryan wrote:
> Overview:
> Low overhead [1] per-callsite memory allocation profiling. Not just for
> debug kernels, overhead low enough to be deployed in production.
>
> Example output:
> root@moria-kvm:~# sort -rn /proc/allocinfo
> 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
> ...
>
> Since v5 [2]:
> - Added Reviewed-by and Acked-by, per Vlastimil Babka and Miguel Ojeda
> - Changed pgalloc_tag_{add|sub} to use number of pages instead of order, per Matthew Wilcox
> - Changed pgalloc_tag_sub_bytes to pgalloc_tag_sub_pages and adjusted the usage, per Matthew Wilcox
> - Moved static key check before prepare_slab_obj_exts_hook(), per Vlastimil Babka
> - Fixed RUST helper, per Miguel Ojeda
> - Fixed documentation, per Randy Dunlap
> - Rebased over mm-unstable
>
> Usage:
> kconfig options:
> - CONFIG_MEM_ALLOC_PROFILING
> - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> - CONFIG_MEM_ALLOC_PROFILING_DEBUG
> adds warnings for allocations that weren't accounted because of a
> missing annotation
>
> sysctl:
> /proc/sys/vm/mem_profiling
>
> Runtime info:
> /proc/allocinfo
>
> Notes:
>
> [1]: Overhead
> To measure the overhead we are comparing the following configurations:
> (1) Baseline with CONFIG_MEMCG_KMEM=n
> (2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n)
> (3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y)
> (4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
> CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
> (5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
> (6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
> (7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
> CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
>
> Performance overhead:
> To evaluate performance we implemented an in-kernel test executing
> multiple get_free_page/free_page and kmalloc/kfree calls with allocation
> sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
> affinity set to a specific CPU to minimize the noise. Below are results
> from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
> 56 core Intel Xeon:
>
> kmalloc pgalloc
> (1 baseline) 6.764s 16.902s
> (2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
> (3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
> (4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
> (5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
> (6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
> (7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
>
> Memory overhead:
> Kernel size:
>
> text data bss dec diff
> (1) 26515311 18890222 17018880 62424413
> (2) 26524728 19423818 16740352 62688898 264485
> (3) 26524724 19423818 16740352 62688894 264481
> (4) 26524728 19423818 16740352 62688898 264485
> (5) 26541782 18964374 16957440 62463596 39183
>
> Memory consumption on a 56 core Intel CPU with 125GB of memory:
> Code tags: 192 kB
> PageExts: 262144 kB (256MB)
> SlabExts: 9876 kB (9.6MB)
> PcpuExts: 512 kB (0.5MB)
>
> Total overhead is 0.2% of total memory.
>
> Benchmarks:
>
> Hackbench tests run 100 times:
> hackbench -s 512 -l 200 -g 15 -f 25 -P
> baseline disabled profiling enabled profiling
> avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
> stdev 0.0137 0.0188 0.0077
>
>
> hackbench -l 10000
> baseline disabled profiling enabled profiling
> avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
> stdev 0.0933 0.0286 0.0489
>
> stress-ng tests:
> stress-ng --class memory --seq 4 -t 60
> stress-ng --class cpu --seq 4 -t 60
> Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
>
> [2] https://lore.kernel.org/all/20240306182440.2003814-1-surenb@google.com/
If I enable this, I consistently get percpu allocation failures. I can
occasionally reproduce it in qemu. I've attached the logs and my config,
please let me know if there's anything else that could be relevant.
Kind regards,
Klara Modin
[-- Attachment #2: debug_alloc_profiling.log.gz --]
[-- Type: application/gzip, Size: 28378 bytes --]
[-- Attachment #3: config.gz --]
[-- Type: application/gzip, Size: 38465 bytes --]
[-- Attachment #4: qemu-alloc3.log.gz --]
[-- Type: application/gzip, Size: 14651 bytes --]
^ permalink raw reply [relevance 0%]
* [PATCH 06/11] KVM: guest_memfd: Add hook for initializing memory
2024-04-04 18:50 12% ` [PATCH 04/11] filemap: add FGP_CREAT_ONLY Paolo Bonzini
@ 2024-04-04 18:50 5% ` Paolo Bonzini
2024-04-22 10:53 0% ` Xu Yilun
2 siblings, 1 reply; 200+ results
From: Paolo Bonzini @ 2024-04-04 18:50 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc, michael.roth, isaku.yamahata
guest_memfd pages are generally expected to be in some arch-defined
initial state prior to using them for guest memory. For SEV-SNP this
initial state is 'private', or 'guest-owned', and requires additional
operations to move these pages into a 'private' state by updating the
corresponding entries in the RMP table.
Allow for an arch-defined hook to handle updates of this sort, and go
ahead and implement one for x86 so KVM implementations like AMD SVM can
register a kvm_x86_ops callback to handle these updates for SEV-SNP
guests.
The preparation callback is always called when allocating/grabbing
folios via gmem, and it is up to the architecture to keep track of
whether or not the pages are already in the expected state (e.g. the RMP
table in the case of SEV-SNP).
In some cases, it is necessary to defer the preparation of the pages to
handle things like in-place encryption of initial guest memory payloads
before marking these pages as 'private'/'guest-owned'. Add an argument
(always true for now) to kvm_gmem_get_folio() that allows for the
preparation callback to be bypassed. To detect possible issues in
the way userspace initializes memory, it is only possible to add an
unprepared page if it is not already included in the filemap.
Link: https://lore.kernel.org/lkml/ZLqVdvsF11Ddo7Dq@google.com/
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20231230172351.574091-5-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 6 +++
include/linux/kvm_host.h | 5 +++
virt/kvm/Kconfig | 4 ++
virt/kvm/guest_memfd.c | 65 ++++++++++++++++++++++++++++--
6 files changed, 78 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5187fcf4b610..d26fcad13e36 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -139,6 +139,7 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
KVM_X86_OP_OPTIONAL(get_untagged_addr)
KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
+KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
#undef KVM_X86_OP
#undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 01c69840647e..f101fab0040e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1809,6 +1809,7 @@ struct kvm_x86_ops {
gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
+ int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
};
struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2d2619d3eee4..972524ddcfdb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13598,6 +13598,12 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order)
+{
+ return static_call(kvm_x86_gmem_prepare)(kvm, pfn, gfn, max_order);
+}
+#endif
int kvm_spec_ctrl_test_value(u64 value)
{
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 48f31dcd318a..33ed3b884a6b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2445,4 +2445,9 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
}
#endif /* CONFIG_KVM_PRIVATE_MEM */
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
+bool kvm_arch_gmem_prepare_needed(struct kvm *kvm);
+#endif
+
#endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 29b73eedfe74..ca870157b2ed 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -109,3 +109,7 @@ config KVM_GENERIC_PRIVATE_MEM
select KVM_GENERIC_MEMORY_ATTRIBUTES
select KVM_PRIVATE_MEM
bool
+
+config HAVE_KVM_GMEM_PREPARE
+ bool
+ depends on KVM_PRIVATE_MEM
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index e5b3cd02b651..486748e65f36 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -13,12 +13,60 @@ struct kvm_gmem {
struct list_head entry;
};
-static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+bool __weak kvm_arch_gmem_prepare_needed(struct kvm *kvm)
+{
+ return false;
+}
+#endif
+
+static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+{
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+ struct list_head *gmem_list = &inode->i_mapping->i_private_list;
+ struct kvm_gmem *gmem;
+
+ list_for_each_entry(gmem, gmem_list, entry) {
+ struct kvm_memory_slot *slot;
+ struct kvm *kvm = gmem->kvm;
+ struct page *page;
+ kvm_pfn_t pfn;
+ gfn_t gfn;
+ int rc;
+
+ if (!kvm_arch_gmem_prepare_needed(kvm))
+ continue;
+
+ slot = xa_load(&gmem->bindings, index);
+ if (!slot)
+ continue;
+
+ page = folio_file_page(folio, index);
+ pfn = page_to_pfn(page);
+ gfn = slot->base_gfn + index - slot->gmem.pgoff;
+ rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, compound_order(compound_head(page)));
+ if (rc) {
+ pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx, error %d.\n",
+ index, rc);
+ return rc;
+ }
+ }
+
+#endif
+ return 0;
+}
+
+static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
{
struct folio *folio;
+ fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;
+
+ if (!prepare)
+ fgp_flags |= FGP_CREAT_ONLY;
/* TODO: Support huge pages. */
- folio = filemap_grab_folio(inode->i_mapping, index);
+ folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
+ mapping_gfp_mask(inode->i_mapping));
if (IS_ERR_OR_NULL(folio))
return folio;
@@ -41,6 +89,15 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
folio_mark_uptodate(folio);
}
+ if (prepare) {
+ int r = kvm_gmem_prepare_folio(inode, index, folio);
+ if (r < 0) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return ERR_PTR(r);
+ }
+ }
+
/*
* Ignore accessed, referenced, and dirty flags. The memory is
* unevictable and there is no storage to write back to.
@@ -145,7 +202,7 @@ static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
break;
}
- folio = kvm_gmem_get_folio(inode, index);
+ folio = kvm_gmem_get_folio(inode, index, true);
if (IS_ERR_OR_NULL(folio)) {
r = folio ? PTR_ERR(folio) : -ENOMEM;
break;
@@ -505,7 +562,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
goto out_fput;
}
- folio = kvm_gmem_get_folio(file_inode(file), index);
+ folio = kvm_gmem_get_folio(file_inode(file), index, true);
if (!folio) {
r = -ENOMEM;
goto out_fput;
--
2.43.0
^ permalink raw reply related [relevance 5%]
* [PATCH 04/11] filemap: add FGP_CREAT_ONLY
@ 2024-04-04 18:50 12% ` Paolo Bonzini
2024-04-25 5:52 0% ` Paolo Bonzini
2024-04-04 18:50 5% ` [PATCH 06/11] KVM: guest_memfd: Add hook for initializing memory Paolo Bonzini
2 siblings, 1 reply; 200+ results
From: Paolo Bonzini @ 2024-04-04 18:50 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: seanjc, michael.roth, isaku.yamahata, Matthew Wilcox, Yosry Ahmed
KVM would like to add an ioctl to encrypt and install a page into private
memory (i.e. into a guest_memfd), in preparation for launching an
encrypted guest.
This API should be used only once per page (unless there are failures),
so we want to rule out the possibility of operating on a page that is
already in the guest_memfd's filemap. Overwriting the page is almost
certainly a sign of a bug, so we might as well forbid it.
Therefore, introduce a new flag for __filemap_get_folio (to be passed
together with FGP_CREAT) that allows *adding* a new page to the filemap
but not returning an existing one.
An alternative possibility would be to force KVM users to initialize
the whole filemap in one go, but that is complicated by the fact that
the filemap includes pages of different kinds, including some that are
per-vCPU rather than per-VM. Basically the result would be closer to
a system call that multiplexes multiple ioctls, than to something
cleaner like readv/writev.
Races between callers that pass FGP_CREAT_ONLY are uninteresting to
the filemap code: one of the racers wins and one fails with EEXIST,
similar to calling open(2) with O_CREAT|O_EXCL. It doesn't matter to
filemap.c if the missing synchronization is in the kernel or in userspace,
and in fact it could even be intentional. (In the case of KVM it turns
out that a mutex is taken around these calls for unrelated reasons,
so there can be no races.)
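The open(2) analogy above can be made concrete with a small userspace sketch. This is not kernel code and `create_exclusive()` is a hypothetical helper; it only demonstrates the O_CREAT|O_EXCL semantics that FGP_CREAT_ONLY mirrors: the second creator of the same name loses the race and gets EEXIST.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical userspace analogue of FGP_CREAT_ONLY: create a file
 * exclusively; a second caller racing on the same path fails with
 * EEXIST, just as a second FGP_CREAT_ONLY caller on the same index
 * would get ERR_PTR(-EEXIST) from __filemap_get_folio(). */
static int create_exclusive(const char *path)
{
	int fd = open(path, O_CREAT | O_EXCL | O_RDWR, 0600);

	if (fd < 0)
		return -errno;	/* -EEXIST if someone else won the race */
	return fd;
}
```

As in the filemap case, it does not matter where the callers synchronize (or whether they do at all): exactly one creation succeeds.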
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/linux/pagemap.h | 2 ++
mm/filemap.c | 4 ++++
2 files changed, 6 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index f879c1d54da7..a8c0685e8c08 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -587,6 +587,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
* * %FGP_CREAT - If no folio is present then a new folio is allocated,
* added to the page cache and the VM's LRU list. The folio is
* returned locked.
+ * * %FGP_CREAT_ONLY - Fail if a folio is present
* * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
* folio is already in cache. If the folio was allocated, unlock it
* before returning so the caller can do the same dance.
@@ -607,6 +608,7 @@ typedef unsigned int __bitwise fgf_t;
#define FGP_NOWAIT ((__force fgf_t)0x00000020)
#define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
#define FGP_STABLE ((__force fgf_t)0x00000080)
+#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
#define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
#define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
diff --git a/mm/filemap.c b/mm/filemap.c
index 7437b2bd75c1..e7440e189ebd 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1863,6 +1863,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio = NULL;
if (!folio)
goto no_page;
+ if (fgp_flags & FGP_CREAT_ONLY) {
+ folio_put(folio);
+ return ERR_PTR(-EEXIST);
+ }
if (fgp_flags & FGP_LOCK) {
if (fgp_flags & FGP_NOWAIT) {
--
2.43.0
^ permalink raw reply related [relevance 12%]
* [PATCH v13 7/8] udmabuf: Pin the pages using memfd_pin_folios() API
2024-04-04 7:26 4% [PATCH v13 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-04-04 7:26 4% ` [PATCH v13 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
2024-04-04 7:26 4% ` [PATCH v13 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
@ 2024-04-04 7:26 5% ` Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-04 7:26 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Mike Kravetz, Hugh Dickins, Peter Xu,
Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim, Junxiao Chang
Using memfd_pin_folios() will ensure that the pages are pinned
correctly using FOLL_PIN. And, this also ensures that we don't
accidentally break features such as memory hotunplug as it would
not allow pinning pages in the movable zone.
Using this new API also simplifies the code as we no longer have
to extract individual pages from their mappings or handle the
shmem and hugetlb cases separately.
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 153 +++++++++++++++++++-------------------
1 file changed, 78 insertions(+), 75 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index a8f3af61f7f2..afa8bfd2a2a9 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
+ struct list_head unpin_list;
+};
+
+struct udmabuf_folio {
+ struct folio *folio;
+ struct list_head list;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -153,17 +159,43 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
return put_sg_table(at->dev, sg, direction);
}
+static void unpin_all_folios(struct list_head *unpin_list)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ while (!list_empty(unpin_list)) {
+ ubuf_folio = list_first_entry(unpin_list,
+ struct udmabuf_folio, list);
+ unpin_folio(ubuf_folio->folio);
+
+ list_del(&ubuf_folio->list);
+ kfree(ubuf_folio);
+ }
+}
+
+static int add_to_unpin_list(struct list_head *unpin_list,
+ struct folio *folio)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+ if (!ubuf_folio)
+ return -ENOMEM;
+
+ ubuf_folio->folio = folio;
+ list_add_tail(&ubuf_folio->list, unpin_list);
+ return 0;
+}
+
static void release_udmabuf(struct dma_buf *buf)
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
- pgoff_t pg;
if (ubuf->sg)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
- for (pg = 0; pg < ubuf->pagecount; pg++)
- folio_put(ubuf->folios[pg]);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
@@ -218,64 +250,6 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- struct hstate *hpstate = hstate_file(memfd);
- pgoff_t mapidx = offset >> huge_page_shift(hpstate);
- pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
- pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct folio *folio = NULL;
- pgoff_t pgidx;
-
- mapidx <<= huge_page_order(hpstate);
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!folio) {
- folio = __filemap_get_folio(memfd->f_mapping,
- mapidx,
- FGP_ACCESSED, 0);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
- }
-
- folio_get(folio);
- ubuf->folios[*pgbuf] = folio;
- ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
- (*pgbuf)++;
- if (++subpgoff == maxsubpgs) {
- folio_put(folio);
- folio = NULL;
- subpgoff = 0;
- mapidx += pages_per_huge_page(hpstate);
- }
- }
-
- if (folio)
- folio_put(folio);
-
- return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct folio *folio = NULL;
-
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
-
- ubuf->folios[*pgbuf] = folio;
- (*pgbuf)++;
- }
-
- return 0;
-}
-
static int check_memfd_seals(struct file *memfd)
{
int seals;
@@ -321,16 +295,19 @@ static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- pgoff_t pgcnt, pgbuf = 0, pglimit;
+ pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+ long nr_folios, ret = -EINVAL;
struct file *memfd = NULL;
+ struct folio **folios;
struct udmabuf *ubuf;
- int ret = -EINVAL;
- u32 i, flags;
+ u32 i, j, k, flags;
+ loff_t end;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
if (!ubuf)
return -ENOMEM;
+ INIT_LIST_HEAD(&ubuf->unpin_list);
pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
@@ -366,17 +343,44 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
- if (is_file_hugepages(memfd))
- ret = handle_hugetlb_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- else
- ret = handle_shmem_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- if (ret < 0)
+ folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
goto err;
+ }
+ end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+
+ nr_folios = ret;
+ pgoff >>= PAGE_SHIFT;
+ for (j = 0, k = 0; j < pgcnt; j++) {
+ ubuf->folios[pgbuf] = folios[k];
+ ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+
+ if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+ ret = add_to_unpin_list(&ubuf->unpin_list,
+ folios[k]);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+ }
+
+ pgbuf++;
+ if (++pgoff == folio_nr_pages(folios[k])) {
+ pgoff = 0;
+ if (++k == nr_folios)
+ break;
+ }
+ }
+
+ kfree(folios);
fput(memfd);
}
@@ -388,10 +392,9 @@ static long udmabuf_create(struct miscdevice *device,
return ret;
err:
- while (pgbuf > 0)
- folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
--
2.43.0
^ permalink raw reply related [relevance 5%]
* [PATCH v13 6/8] udmabuf: Convert udmabuf driver to use folios
2024-04-04 7:26 4% [PATCH v13 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-04-04 7:26 4% ` [PATCH v13 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
@ 2024-04-04 7:26 4% ` Vivek Kasireddy
2024-04-04 7:26 5% ` [PATCH v13 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-04 7:26 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Mike Kravetz, Hugh Dickins, Peter Xu,
Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim, Junxiao Chang
This is mainly a preparatory patch to use memfd_pin_folios() API
for pinning folios. Using folios instead of pages makes sense as
the udmabuf driver needs to handle both shmem and hugetlb cases.
However, the function vmap_udmabuf() still needs a list of pages;
so, we collect all the head pages into a local array in this case.
Other changes in this patch include the addition of helpers for
checking the memfd seals and exporting dmabuf. Moving code from
udmabuf_create() into these helpers improves readability given
that udmabuf_create() is a bit long.
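The seal policy that the new check_memfd_seals() helper enforces can be sketched in userspace as well. This is an illustration under the assumption that the driver wants F_SEAL_SHRINK set and F_SEAL_WRITE clear (matching the SEALS_WANTED/SEALS_DENIED macros in the patch); `check_seals()` is a hypothetical stand-in, not the driver function itself.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Mirror of the driver's policy macros, for illustration only. */
#define SEALS_WANTED F_SEAL_SHRINK
#define SEALS_DENIED F_SEAL_WRITE

/* Return 0 if the memfd carries the required seals, -1 otherwise.
 * F_GET_SEALS fails on files that do not support sealing, which
 * corresponds to the driver's -EBADFD path. */
static int check_seals(int fd)
{
	int seals = fcntl(fd, F_GET_SEALS);

	if (seals == -1)
		return -1;
	if ((seals & SEALS_WANTED) != SEALS_WANTED ||
	    (seals & SEALS_DENIED) != 0)
		return -1;
	return 0;
}
```

A memfd created with MFD_ALLOW_SEALING starts with no seals, so the check fails until F_SEAL_SHRINK is added with F_ADD_SEALS.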
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 140 ++++++++++++++++++++++----------------
1 file changed, 83 insertions(+), 57 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 274defd3fa3e..a8f3af61f7f2 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -26,7 +26,7 @@ MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is
struct udmabuf {
pgoff_t pagecount;
- struct page **pages;
+ struct folio **folios;
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
@@ -42,7 +42,7 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
- pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn = folio_pfn(ubuf->folios[pgoff]);
pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
return vmf_insert_pfn(vma, vmf->address, pfn);
@@ -68,11 +68,21 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
{
struct udmabuf *ubuf = buf->priv;
+ struct page **pages;
void *vaddr;
+ pgoff_t pg;
dma_resv_assert_held(buf->resv);
- vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
+ pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ for (pg = 0; pg < ubuf->pagecount; pg++)
+ pages[pg] = &ubuf->folios[pg]->page;
+
+ vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+ kfree(pages);
if (!vaddr)
return -EINVAL;
@@ -107,7 +117,8 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
goto err_alloc;
for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
- sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+ sg_set_folio(sgl, ubuf->folios[i], PAGE_SIZE,
+ ubuf->offsets[i]);
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
@@ -152,9 +163,9 @@ static void release_udmabuf(struct dma_buf *buf)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
for (pg = 0; pg < ubuf->pagecount; pg++)
- put_page(ubuf->pages[pg]);
+ folio_put(ubuf->folios[pg]);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
}
@@ -215,36 +226,33 @@ static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t mapidx = offset >> huge_page_shift(hpstate);
pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct page *hpage = NULL;
- struct folio *folio;
+ struct folio *folio = NULL;
pgoff_t pgidx;
mapidx <<= huge_page_order(hpstate);
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!hpage) {
+ if (!folio) {
folio = __filemap_get_folio(memfd->f_mapping,
mapidx,
FGP_ACCESSED, 0);
if (IS_ERR(folio))
return PTR_ERR(folio);
-
- hpage = &folio->page;
}
- get_page(hpage);
- ubuf->pages[*pgbuf] = hpage;
+ folio_get(folio);
+ ubuf->folios[*pgbuf] = folio;
ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
(*pgbuf)++;
if (++subpgoff == maxsubpgs) {
- put_page(hpage);
- hpage = NULL;
+ folio_put(folio);
+ folio = NULL;
subpgoff = 0;
mapidx += pages_per_huge_page(hpstate);
}
}
- if (hpage)
- put_page(hpage);
+ if (folio)
+ folio_put(folio);
return 0;
}
@@ -254,31 +262,69 @@ static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t *pgbuf)
{
pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio = NULL;
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(memfd->f_mapping,
- pgoff + pgidx);
- if (IS_ERR(page))
- return PTR_ERR(page);
+ folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
- ubuf->pages[*pgbuf] = page;
+ ubuf->folios[*pgbuf] = folio;
(*pgbuf)++;
}
return 0;
}
+static int check_memfd_seals(struct file *memfd)
+{
+ int seals;
+
+ if (!memfd)
+ return -EBADFD;
+
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+ seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
+ if (seals == -EINVAL)
+ return -EBADFD;
+
+ if ((seals & SEALS_WANTED) != SEALS_WANTED ||
+ (seals & SEALS_DENIED) != 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int export_udmabuf(struct udmabuf *ubuf,
+ struct miscdevice *device,
+ u32 flags)
+{
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+ exp_info.size = ubuf->pagecount << PAGE_SHIFT;
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+ buf = dma_buf_export(&exp_info);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
+ return dma_buf_fd(buf, flags);
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
struct file *memfd = NULL;
struct udmabuf *ubuf;
- struct dma_buf *buf;
- pgoff_t pgcnt, pgbuf = 0, pglimit;
- int seals, ret = -EINVAL;
+ int ret = -EINVAL;
u32 i, flags;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -299,9 +345,9 @@ static long udmabuf_create(struct miscdevice *device,
if (!ubuf->pagecount)
goto err;
- ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
GFP_KERNEL);
- if (!ubuf->pages) {
+ if (!ubuf->folios) {
ret = -ENOMEM;
goto err;
}
@@ -314,18 +360,9 @@ static long udmabuf_create(struct miscdevice *device,
pgbuf = 0;
for (i = 0; i < head->count; i++) {
- ret = -EBADFD;
memfd = fget(list[i].memfd);
- if (!memfd)
- goto err;
- if (!shmem_file(memfd) && !is_file_hugepages(memfd))
- goto err;
- seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
- if (seals == -EINVAL)
- goto err;
- ret = -EINVAL;
- if ((seals & SEALS_WANTED) != SEALS_WANTED ||
- (seals & SEALS_DENIED) != 0)
+ ret = check_memfd_seals(memfd);
+ if (ret < 0)
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
@@ -341,33 +378,22 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
fput(memfd);
- memfd = NULL;
}
- exp_info.ops = &udmabuf_ops;
- exp_info.size = ubuf->pagecount << PAGE_SHIFT;
- exp_info.priv = ubuf;
- exp_info.flags = O_RDWR;
-
- ubuf->device = device;
- buf = dma_buf_export(&exp_info);
- if (IS_ERR(buf)) {
- ret = PTR_ERR(buf);
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+ ret = export_udmabuf(ubuf, device, flags);
+ if (ret < 0)
goto err;
- }
- flags = 0;
- if (head->flags & UDMABUF_FLAGS_CLOEXEC)
- flags |= O_CLOEXEC;
- return dma_buf_fd(buf, flags);
+ return ret;
err:
while (pgbuf > 0)
- put_page(ubuf->pages[--pgbuf]);
+ folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
return ret;
}
--
2.43.0
^ permalink raw reply related [relevance 4%]
* [PATCH v13 5/8] udmabuf: Add back support for mapping hugetlb pages
2024-04-04 7:26 4% [PATCH v13 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
@ 2024-04-04 7:26 4% ` Vivek Kasireddy
2024-04-04 7:26 4% ` [PATCH v13 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
2024-04-04 7:26 5% ` [PATCH v13 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-04 7:26 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
Dongwon Kim, Junxiao Chang
A user or admin can configure a VMM (QEMU) Guest's memory to be
backed by hugetlb pages for various reasons. However, a Guest OS
would still allocate (and pin) buffers that are backed by regular
4k-sized pages. In order to map these buffers and create dma-bufs
for them on the Host, we first need to find the hugetlb pages where
the buffer allocations are located and then determine the offsets
of individual chunks (within those pages) and use this information
to eventually populate a scatterlist.
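The offset arithmetic described above can be sketched in plain C. This is a minimal illustration, not the driver code: `split_offset()` is a hypothetical helper that splits a byte offset into the huge-page index and the 4k-subpage offset within that huge page, the same decomposition handle_hugetlb_pages() performs with huge_page_shift()/huge_page_mask().

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4k base pages, as in the commit message */

/* Given a byte offset into a hugetlb-backed memfd and the huge page
 * shift (21 for 2M pages), compute which huge page the offset lands
 * in (mapidx) and which 4k chunk within that huge page (subpgoff). */
static void split_offset(uint64_t offset, unsigned int hpage_shift,
			 uint64_t *mapidx, uint64_t *subpgoff)
{
	uint64_t hpage_mask = (1ULL << hpage_shift) - 1;

	*mapidx = offset >> hpage_shift;			/* huge page index */
	*subpgoff = (offset & hpage_mask) >> PAGE_SHIFT;	/* 4k chunk within it */
}
```

With 2M huge pages, an offset of 2M + 12k falls in huge page 1 at subpage 3, which is the information the driver records per entry before populating the scatterlist.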
Testcase: default_hugepagesz=2M hugepagesz=2M hugepages=2500 options
were passed to the Host kernel and Qemu was launched with these
relevant options: qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-display gtk,gl=on
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
Replacing -display gtk,gl=on with -display gtk,gl=off above would
exercise the mmap handler.
Cc: David Hildenbrand <david@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com> (v2)
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 122 +++++++++++++++++++++++++++++++-------
1 file changed, 101 insertions(+), 21 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 820c993c8659..274defd3fa3e 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -10,6 +10,7 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/shmem_fs.h>
+#include <linux/hugetlb.h>
#include <linux/slab.h>
#include <linux/udmabuf.h>
#include <linux/vmalloc.h>
@@ -28,6 +29,7 @@ struct udmabuf {
struct page **pages;
struct sg_table *sg;
struct miscdevice *device;
+ pgoff_t *offsets;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -41,6 +43,8 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
+
return vmf_insert_pfn(vma, vmf->address, pfn);
}
@@ -90,23 +94,29 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
{
struct udmabuf *ubuf = buf->priv;
struct sg_table *sg;
+ struct scatterlist *sgl;
+ unsigned int i = 0;
int ret;
sg = kzalloc(sizeof(*sg), GFP_KERNEL);
if (!sg)
return ERR_PTR(-ENOMEM);
- ret = sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount,
- 0, ubuf->pagecount << PAGE_SHIFT,
- GFP_KERNEL);
+
+ ret = sg_alloc_table(sg, ubuf->pagecount, GFP_KERNEL);
if (ret < 0)
- goto err;
+ goto err_alloc;
+
+ for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
+ sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
- goto err;
+ goto err_map;
return sg;
-err:
+err_map:
sg_free_table(sg);
+err_alloc:
kfree(sg);
return ERR_PTR(ret);
}
@@ -143,6 +153,7 @@ static void release_udmabuf(struct dma_buf *buf)
for (pg = 0; pg < ubuf->pagecount; pg++)
put_page(ubuf->pages[pg]);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
}
@@ -196,17 +207,77 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
+static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ struct hstate *hpstate = hstate_file(memfd);
+ pgoff_t mapidx = offset >> huge_page_shift(hpstate);
+ pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+ pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+ struct page *hpage = NULL;
+ struct folio *folio;
+ pgoff_t pgidx;
+
+ mapidx <<= huge_page_order(hpstate);
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ if (!hpage) {
+ folio = __filemap_get_folio(memfd->f_mapping,
+ mapidx,
+ FGP_ACCESSED, 0);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+
+ hpage = &folio->page;
+ }
+
+ get_page(hpage);
+ ubuf->pages[*pgbuf] = hpage;
+ ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
+ (*pgbuf)++;
+ if (++subpgoff == maxsubpgs) {
+ put_page(hpage);
+ hpage = NULL;
+ subpgoff = 0;
+ mapidx += pages_per_huge_page(hpstate);
+ }
+ }
+
+ if (hpage)
+ put_page(hpage);
+
+ return 0;
+}
+
+static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
+ struct page *page;
+
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ page = shmem_read_mapping_page(memfd->f_mapping,
+ pgoff + pgidx);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ ubuf->pages[*pgbuf] = page;
+ (*pgbuf)++;
+ }
+
+ return 0;
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
- struct address_space *mapping = NULL;
struct udmabuf *ubuf;
struct dma_buf *buf;
- pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
- struct page *page;
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
int seals, ret = -EINVAL;
u32 i, flags;
@@ -234,6 +305,12 @@ static long udmabuf_create(struct miscdevice *device,
ret = -ENOMEM;
goto err;
}
+ ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
pgbuf = 0;
for (i = 0; i < head->count; i++) {
@@ -241,8 +318,7 @@ static long udmabuf_create(struct miscdevice *device,
memfd = fget(list[i].memfd);
if (!memfd)
goto err;
- mapping = memfd->f_mapping;
- if (!shmem_mapping(mapping))
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
goto err;
seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
if (seals == -EINVAL)
@@ -251,16 +327,19 @@ static long udmabuf_create(struct miscdevice *device,
if ((seals & SEALS_WANTED) != SEALS_WANTED ||
(seals & SEALS_DENIED) != 0)
goto err;
- pgoff = list[i].offset >> PAGE_SHIFT;
- pgcnt = list[i].size >> PAGE_SHIFT;
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(mapping, pgoff + pgidx);
- if (IS_ERR(page)) {
- ret = PTR_ERR(page);
- goto err;
- }
- ubuf->pages[pgbuf++] = page;
- }
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+ if (is_file_hugepages(memfd))
+ ret = handle_hugetlb_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ else
+ ret = handle_shmem_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ if (ret < 0)
+ goto err;
+
fput(memfd);
memfd = NULL;
}
@@ -287,6 +366,7 @@ static long udmabuf_create(struct miscdevice *device,
put_page(ubuf->pages[--pgbuf]);
if (memfd)
fput(memfd);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
return ret;
--
2.43.0
^ permalink raw reply related [relevance 4%]
* [PATCH v13 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
@ 2024-04-04 7:26 4% Vivek Kasireddy
2024-04-04 7:26 4% ` [PATCH v13 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Vivek Kasireddy @ 2024-04-04 7:26 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Christoph Hellwig, Andrew Morton, Daniel Vetter, Hugh Dickins,
Peter Xu, Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim,
Junxiao Chang
Currently, some drivers (e.g., Udmabuf) that want to longterm-pin
the pages/folios associated with a memfd do so by simply taking a
reference on them. This is not desirable because the pages/folios
may reside in Movable zone or CMA block.
Therefore, having drivers use memfd_pin_folios() API ensures that
the folios are appropriately pinned via FOLL_PIN for longterm DMA.
This patchset also introduces a few helpers and converts the Udmabuf
driver to use folios and memfd_pin_folios() API to longterm-pin
the folios for DMA. Two new Udmabuf selftests are also included to
test the driver and the new API.
---
Patchset overview:
Patch 1-2: GUP helpers to migrate and unpin one or more folios
Patch 3: Introduce memfd_pin_folios() API
Patch 4-5: Udmabuf driver bug fixes for Qemu + hugetlb=on, blob=true case
Patch 6-8: Convert Udmabuf to use memfd_pin_folios() and add selftests
This series is tested using the following methods:
- Run the subtests added in Patch 8
- Run Qemu (master) with the following options and a few additional
patches to Spice:
qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
- Run source ./run_vmtests.sh -t gup_test -a to check GUP regressions
Changelog:
v12 -> v13: (suggestions from David)
- Drop the sanity checks in unpin_folio()/unpin_folios() due to
unavailability of per folio anon-exclusive flag
- Export unpin_folio()/unpin_folios() using EXPORT_SYMBOL_GPL
instead of EXPORT_SYMBOL
- Have check_and_migrate_movable_pages() just call
check_and_migrate_movable_folios() instead of calling other helpers
- Slightly improve the comments and commit messages
v11 -> v12:
- Rebased and tested on mm-unstable
v10 -> v11:
- Remove the version string from the patch subject (Andrew)
- Move the changelog from the patches into the cover letter
- Rearrange the patchset to have GUP patches at the beginning
v9 -> v10:
- Introduce and use unpin_folio(), unpin_folios() and
check_and_migrate_movable_folios() helpers
- Use a list to track the folios that need to be unpinned in udmabuf
v8 -> v9: (suggestions from Matthew)
- Drop the extern while declaring memfd_alloc_folio()
- Fix memfd_alloc_folio() declaration to have it return struct folio *
instead of struct page * when CONFIG_MEMFD_CREATE is not defined
- Use folio_pfn() on the folio instead of page_to_pfn() on head page
in udmabuf
- Don't split the arguments to shmem_read_folio() on multiple lines
in udmabuf
v7 -> v8: (suggestions from David)
- Have caller pass [start, end], max_folios instead of start, nr_pages
- Replace offsets array with just offset into the first page
- Add comments explaining the need for next_idx
- Pin (and return) the folio (via FOLL_PIN) only once
v6 -> v7:
- Rename this API to memfd_pin_folios() and make it return folios
and offsets instead of pages (David)
- Don't continue processing the folios in the batch returned by
filemap_get_folios_contig() if they do not have correct next_idx
- Add the R-b tag from Christoph
v5 -> v6: (suggestions from Christoph)
- Rename this API to memfd_pin_user_pages() to make it clear that it
is intended for memfds
- Move the memfd page allocation helper from gup.c to memfd.c
- Fix indentation errors in memfd_pin_user_pages()
- For contiguous ranges of folios, use a helper such as
filemap_get_folios_contig() to lookup the page cache in batches
- Split the processing of hugetlb or shmem pages into helpers to
simplify the code in udmabuf_create()
v4 -> v5: (suggestions from David)
- For hugetlb case, ensure that we only obtain head pages from the
mapping by using __filemap_get_folio() instead of find_get_page_flags()
- Handle -EEXIST when two or more potential users try to simultaneously
add a huge page to the mapping by forcing them to retry on failure
v3 -> v4:
- Remove the local variable "page" and instead use 3 return statements
in alloc_file_page() (David)
- Add the R-b tag from David
v2 -> v3: (suggestions from David)
- Enclose the huge page allocation code with #ifdef CONFIG_HUGETLB_PAGE
(Build error reported by kernel test robot <lkp@intel.com>)
- Don't forget memalloc_pin_restore() on non-migration related errors
- Improve the readability of the cleanup code associated with
non-migration related errors
- Augment the comments by describing FOLL_LONGTERM like behavior
- Include the R-b tag from Jason
v1 -> v2:
- Drop gup_flags and improve comments and commit message (David)
- Allocate a page if we cannot find in page cache for the hugetlbfs
case as well (David)
- Don't unpin pages if there is a migration related failure (David)
- Drop the unnecessary nr_pages <= 0 check (Jason)
- Have the caller of the API pass in file * instead of fd (Jason)
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Vivek Kasireddy (8):
mm/gup: Introduce unpin_folio/unpin_folios helpers
mm/gup: Introduce check_and_migrate_movable_folios()
mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
udmabuf: Use vmf_insert_pfn and VM_PFNMAP for handling mmap
udmabuf: Add back support for mapping hugetlb pages
udmabuf: Convert udmabuf driver to use folios
udmabuf: Pin the pages using memfd_pin_folios() API
selftests/udmabuf: Add tests to verify data after page migration
drivers/dma-buf/udmabuf.c | 231 +++++++++----
include/linux/memfd.h | 5 +
include/linux/mm.h | 5 +
mm/gup.c | 305 +++++++++++++++---
mm/memfd.c | 35 ++
.../selftests/drivers/dma-buf/udmabuf.c | 151 ++++++++-
6 files changed, 627 insertions(+), 105 deletions(-)
--
2.43.0
* Re: [PATCH] mm/filemap: set folio->mapping to NULL before xas_store()
2024-03-26 21:05 0% ` Andrew Morton
2024-03-26 22:50 0% ` Matthew Wilcox
@ 2024-03-26 22:52 0% ` Soma
1 sibling, 0 replies; 200+ results
From: Soma @ 2024-03-26 22:52 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, Matthew Wilcox (Oracle), linux-fsdevel, linux-kernel
On Wed, Mar 27, 2024 at 6:05 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:
>
> > Functions such as __filemap_get_folio() check the truncation of
> > folios based on the mapping field. Therefore setting this field to NULL
> > earlier prevents unnecessary operations on already removed folios.
> >
> > ...
> >
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
> >
> > VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> >
> > + folio->mapping = NULL;
> > + /* Leave page->index set: truncation lookup relies upon it */
> > +
> > xas_store(&xas, shadow);
> > xas_init_marks(&xas);
> >
> > - folio->mapping = NULL;
> > - /* Leave page->index set: truncation lookup relies upon it */
> > mapping->nrpages -= nr;
> > }
>
> Seems at least harmless, but I wonder if it can really make any
> difference. Don't readers of folio->mapping lock the folio first?
Yes, the reader locks the folio.
Only __filemap_remove_folio() calls page_cache_delete(),
and it says the caller has to lock the folio or make sure
that usage is safe. In the latter case, this patch improves
efficiency a little bit.
However, I found that the latter case does not actually occur,
so either discard this patch, or apply it as a cleanup that makes
the order of operations in page_cache_delete() and
page_cache_delete_batch() the same.
Thanks,
* Re: [PATCH] mm/filemap: set folio->mapping to NULL before xas_store()
2024-03-26 21:05 0% ` Andrew Morton
@ 2024-03-26 22:50 0% ` Matthew Wilcox
2024-03-26 22:52 0% ` Soma
1 sibling, 0 replies; 200+ results
From: Matthew Wilcox @ 2024-03-26 22:50 UTC (permalink / raw)
To: Andrew Morton; +Cc: Soma Nakata, linux-mm, linux-fsdevel, linux-kernel
On Tue, Mar 26, 2024 at 02:05:33PM -0700, Andrew Morton wrote:
> On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:
> > Functions such as __filemap_get_folio() check the truncation of
> > folios based on the mapping field. Therefore setting this field to NULL
> > earlier prevents unnecessary operations on already removed folios.
> >
> > ...
> >
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
> >
> > VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> >
> > + folio->mapping = NULL;
> > + /* Leave page->index set: truncation lookup relies upon it */
> > +
> > xas_store(&xas, shadow);
> > xas_init_marks(&xas);
> >
> > - folio->mapping = NULL;
> > - /* Leave page->index set: truncation lookup relies upon it */
> > mapping->nrpages -= nr;
> > }
>
> Seems at least harmless, but I wonder if it can really make any
> difference. Don't readers of folio->mapping lock the folio first?
I can't think of anywhere that doesn't ... most of the places that check
folio->mapping have "goto unlock" as the very next line. I don't think
this patch accomplishes anything.
* Re: [PATCH] mm/filemap: set folio->mapping to NULL before xas_store()
2024-03-22 21:04 6% [PATCH] mm/filemap: set folio->mapping to NULL before xas_store() Soma Nakata
@ 2024-03-26 21:05 0% ` Andrew Morton
2024-03-26 22:50 0% ` Matthew Wilcox
2024-03-26 22:52 0% ` Soma
0 siblings, 2 replies; 200+ results
From: Andrew Morton @ 2024-03-26 21:05 UTC (permalink / raw)
To: Soma Nakata
Cc: linux-mm, Matthew Wilcox (Oracle), linux-fsdevel, linux-kernel
On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:
> Functions such as __filemap_get_folio() check the truncation of
> folios based on the mapping field. Therefore setting this field to NULL
> earlier prevents unnecessary operations on already removed folios.
>
> ...
>
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
>
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>
> + folio->mapping = NULL;
> + /* Leave page->index set: truncation lookup relies upon it */
> +
> xas_store(&xas, shadow);
> xas_init_marks(&xas);
>
> - folio->mapping = NULL;
> - /* Leave page->index set: truncation lookup relies upon it */
> mapping->nrpages -= nr;
> }
Seems at least harmless, but I wonder if it can really make any
difference. Don't readers of folio->mapping lock the folio first?
* Re: [PATCH v2 0/3] fs: aio: more folio conversion
2024-03-22 14:12 0% ` [PATCH v2 0/3] fs: aio: more folio conversion Christian Brauner
@ 2024-03-25 19:50 0% ` Matthew Wilcox
0 siblings, 0 replies; 200+ results
From: Matthew Wilcox @ 2024-03-25 19:50 UTC (permalink / raw)
To: Christian Brauner
Cc: Kefeng Wang, linux-aio, linux-fsdevel, Alexander Viro,
Benjamin LaHaise, Jan Kara, linux-kernel
On Fri, Mar 22, 2024 at 03:12:42PM +0100, Christian Brauner wrote:
> On Thu, 21 Mar 2024 21:16:37 +0800, Kefeng Wang wrote:
> > Convert to use folio throughout aio.
> >
> > v2:
> > - fix folio check returned from __filemap_get_folio()
> > - use folio_end_read() suggested by Matthew
> >
> > Kefeng Wang (3):
> > fs: aio: use a folio in aio_setup_ring()
> > fs: aio: use a folio in aio_free_ring()
> > fs: aio: convert to ring_folios and internal_folios
> >
> > [...]
>
> @Willy, can I get your RVB, please?
For the series:
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
* [PATCH 6.8 512/715] lib/stackdepot: fix first entry having a 0-handle
@ 2024-03-24 22:31 6% ` Sasha Levin
0 siblings, 0 replies; 200+ results
From: Sasha Levin @ 2024-03-24 22:31 UTC (permalink / raw)
To: linux-kernel, stable
Cc: Oscar Salvador, Marco Elver, Vlastimil Babka, Andrey Konovalov,
Alexander Potapenko, Michal Hocko, Andrew Morton, Sasha Levin
From: Oscar Salvador <osalvador@suse.de>
[ Upstream commit 3ee34eabac2abb6b1b6fcdebffe18870719ad000 ]
Patch series "page_owner: print stacks and their outstanding allocations",
v10.
page_owner is a great debug functionality tool that lets us know about all
pages that have been allocated/freed and their specific stacktrace. This
comes very handy when debugging memory leaks, since with some scripting we
can see the outstanding allocations, which might point to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship, which most of the time becomes a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding a new functionality into
page_owner. This functionality creates a new directory called
'page_owner_stacks' under '/sys/kernel/debug' with a read-only file called
'show_stacks', which prints out all the stacks followed by their
outstanding number of allocations (being that the times the stacktrace has
allocated but not freed yet). This gives us a clear and a quick overview
of stacks <-> allocated/free.
We take advantage of the new refcount_t field that stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner (free
operation) call.
Unfortunately, we cannot use the new stackdepot api STACK_DEPOT_FLAG_GET
because it does not fulfill page_owner needs, meaning we would have to
special case things, at which point it makes more sense for page_owner to do
its own {dec,inc}rementing of the stacks. E.g., using
STACK_DEPOT_FLAG_PUT, once the refcount reaches 0, such stack gets
evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within
'page_owner_stacks' directory, and by writing a value to it, the stacks
whose refcount is below such value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
This patch (of 7):
The very first entry of stack_record gets a handle of 0, but this is wrong
because stackdepot treats a 0-handle as a non-valid one; see, e.g., the
check in stack_depot_fetch().
Fix this by adding an offset of 1.
This bug has been lurking since the very beginning of stackdepot, but it
seems no one really cared. Because of that I am not adding a Fixes tag.
Link: https://lkml.kernel.org/r/20240215215907.20121-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20240215215907.20121-2-osalvador@suse.de
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Stable-dep-of: dc24559472a6 ("lib/stackdepot: off by one in depot_fetch_stack()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
lib/stackdepot.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 4a7055a63d9f8..c043a4186bc59 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -45,15 +45,16 @@
#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
+/* The pool_index is offset by 1 so the first record does not have a 0 handle. */
#define DEPOT_MAX_POOLS \
- (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
- (1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
+ (((1LL << (DEPOT_POOL_INDEX_BITS)) - 1 < DEPOT_POOLS_CAP) ? \
+ (1LL << (DEPOT_POOL_INDEX_BITS)) - 1 : DEPOT_POOLS_CAP)
/* Compact structure that stores a reference to a stack. */
union handle_parts {
depot_stack_handle_t handle;
struct {
- u32 pool_index : DEPOT_POOL_INDEX_BITS;
+ u32 pool_index : DEPOT_POOL_INDEX_BITS; /* pool_index is offset by 1 */
u32 offset : DEPOT_OFFSET_BITS;
u32 extra : STACK_DEPOT_EXTRA_BITS;
};
@@ -372,7 +373,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
stack = current_pool + pool_offset;
/* Pre-initialize handle once. */
- stack->handle.pool_index = pool_index;
+ stack->handle.pool_index = pool_index + 1;
stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
stack->handle.extra = 0;
INIT_LIST_HEAD(&stack->hash_list);
@@ -483,18 +484,19 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
const int pools_num_cached = READ_ONCE(pools_num);
union handle_parts parts = { .handle = handle };
void *pool;
+ u32 pool_index = parts.pool_index - 1;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;
lockdep_assert_not_held(&pool_lock);
- if (parts.pool_index > pools_num_cached) {
+ if (pool_index > pools_num_cached) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pools_num_cached, handle);
+ pool_index, pools_num_cached, handle);
return NULL;
}
- pool = stack_pools[parts.pool_index];
+ pool = stack_pools[pool_index];
if (WARN_ON(!pool))
return NULL;
--
2.43.0
* [PATCH] mm/filemap: set folio->mapping to NULL before xas_store()
@ 2024-03-22 21:04 6% Soma Nakata
2024-03-26 21:05 0% ` Andrew Morton
0 siblings, 1 reply; 200+ results
From: Soma Nakata @ 2024-03-22 21:04 UTC (permalink / raw)
To: linux-mm
Cc: soma.nakata01, Matthew Wilcox (Oracle),
Andrew Morton, linux-fsdevel, linux-kernel
Functions such as __filemap_get_folio() check the truncation of
folios based on the mapping field. Therefore setting this field to NULL
earlier prevents unnecessary operations on already removed folios.
Signed-off-by: Soma Nakata <soma.nakata01@gmail.com>
---
mm/filemap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 2723104cc06a..79bac7c00084 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+ folio->mapping = NULL;
+ /* Leave page->index set: truncation lookup relies upon it */
+
xas_store(&xas, shadow);
xas_init_marks(&xas);
- folio->mapping = NULL;
- /* Leave page->index set: truncation lookup relies upon it */
mapping->nrpages -= nr;
}
--
2.25.1
* Re: [PATCH v2 0/3] fs: aio: more folio conversion
2024-03-21 13:16 6% [PATCH v2 0/3] fs: aio: more folio conversion Kefeng Wang
2024-03-21 13:16 7% ` [PATCH v2 1/3] fs: aio: use a folio in aio_setup_ring() Kefeng Wang
@ 2024-03-22 14:12 0% ` Christian Brauner
2024-03-25 19:50 0% ` Matthew Wilcox
1 sibling, 1 reply; 200+ results
From: Christian Brauner @ 2024-03-22 14:12 UTC (permalink / raw)
To: Matthew Wilcox, Kefeng Wang
Cc: Christian Brauner, linux-aio, linux-fsdevel, Alexander Viro,
Benjamin LaHaise, Jan Kara, linux-kernel
On Thu, 21 Mar 2024 21:16:37 +0800, Kefeng Wang wrote:
> Convert to use folio throughout aio.
>
> v2:
> - fix folio check returned from __filemap_get_folio()
> - use folio_end_read() suggested by Matthew
>
> Kefeng Wang (3):
> fs: aio: use a folio in aio_setup_ring()
> fs: aio: use a folio in aio_free_ring()
> fs: aio: convert to ring_folios and internal_folios
>
> [...]
@Willy, can I get your RVB, please?
---
Applied to the vfs.misc branch of the vfs/vfs.git tree.
Patches in the vfs.misc branch should appear in linux-next soon.
Please report any outstanding bugs that were missed during review in a
new review to the original patch series allowing us to drop it.
It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible patch trailers will be updated.
Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.
tree: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.misc
[1/3] fs: aio: use a folio in aio_setup_ring()
https://git.kernel.org/vfs/vfs/c/39dad2b19085
[2/3] fs: aio: use a folio in aio_free_ring()
https://git.kernel.org/vfs/vfs/c/be0d43ccd350
[3/3] fs: aio: convert to ring_folios and internal_folios
https://git.kernel.org/vfs/vfs/c/6a5599ce3338
* Re: [PATCH v6 00/37] Memory allocation profiling
2024-03-21 20:41 0% ` [PATCH v6 00/37] Memory allocation profiling Andrew Morton
@ 2024-03-21 21:08 0% ` Suren Baghdasaryan
0 siblings, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-03-21 21:08 UTC (permalink / raw)
To: Andrew Morton
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
On Thu, Mar 21, 2024 at 1:42 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 21 Mar 2024 09:36:22 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
>
> > Low overhead [1] per-callsite memory allocation profiling. Not just for
> > debug kernels, overhead low enough to be deployed in production.
> >
> > Example output:
> > root@moria-kvm:~# sort -rn /proc/allocinfo
> > 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> > 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> > 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> > 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> > 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> > 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> > 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> > 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> > 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> > 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> > 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
>
> Did you consider adding a knob to permit all the data to be wiped out?
> So people can zap everything, run the chosen workload then go see what
> happened?
>
> Of course, this can be done in userspace by taking a snapshot before
> and after, then crunching on the two....
Yeah, that's exactly what I was envisioning. Don't think we need to
complicate more by adding a reset functionality unless there are other
reasons for it. Thanks!
* + memprofiling-documentation.patch added to mm-unstable branch
@ 2024-03-21 20:44 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-03-21 20:44 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, surenb, peterz,
pasha.tatashin, ojeda, keescook, gary, dennis, cl, boqun.feng,
bjorn3_gh, benno.lossin, aliceryhl, alex.gaynor, a.hindborg,
kent.overstreet, akpm
The patch titled
Subject: memprofiling: documentation
has been added to the -mm mm-unstable branch. Its filename is
memprofiling-documentation.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/memprofiling-documentation.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Kent Overstreet <kent.overstreet@linux.dev>
Subject: memprofiling: documentation
Date: Thu, 21 Mar 2024 09:36:59 -0700
Provide documentation for memory allocation profiling.
Link: https://lkml.kernel.org/r/20240321163705.3067592-38-surenb@google.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
Documentation/mm/allocation-profiling.rst | 100 ++++++++++++++++++++
Documentation/mm/index.rst | 1
2 files changed, 101 insertions(+)
--- /dev/null
+++ a/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,100 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+- CONFIG_MEM_ALLOC_PROFILING
+
+- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+
+- CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling=0|1|never
+
+ When set to "never", memory allocation profiling overhead is minimized and it
+ cannot be enabled at runtime (sysctl becomes read-only).
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output::
+
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+
+===================
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at
+runtime - e.g. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+- declares a code tag
+- stashes a pointer to it in task_struct
+- calls the real allocation function
+- and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+- switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+
+- rename it to foo_noprof()
+
+- define a macro version of foo() like so:
+
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+- Hook your data structure's init function, like any other allocation function.
+
+- Within your init function, use the convenience macro alloc_tag_record() to
+ record the alloc tag in your data structure.
+
+- Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
--- a/Documentation/mm/index.rst~memprofiling-documentation
+++ a/Documentation/mm/index.rst
@@ -26,6 +26,7 @@ see the :doc:`admin guide <../admin-guid
page_cache
shmfs
oom
+ allocation-profiling
Legacy Documentation
====================
_
Patches currently in -mm which might be from kent.overstreet@linux.dev are
fix-missing-vmalloch-includes.patch
asm-generic-ioh-kill-vmalloch-dependency.patch
mm-slub-mark-slab_free_freelist_hook-__always_inline.patch
scripts-kallysms-always-include-__start-and-__stop-symbols.patch
fs-convert-alloc_inode_sb-to-a-macro.patch
rust-add-a-rust-helper-for-krealloc.patch
mempool-hook-up-to-memory-allocation-profiling.patch
mm-percpu-introduce-pcpuobj_ext.patch
mm-percpu-add-codetag-reference-into-pcpuobj_ext.patch
mm-vmalloc-enable-memory-allocation-profiling.patch
rhashtable-plumb-through-alloc-tag.patch
maintainers-add-entries-for-code-tagging-and-memory-allocation-profiling.patch
memprofiling-documentation.patch
* + lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch added to mm-unstable branch
@ 2024-03-21 20:43 3% Andrew Morton
From: Andrew Morton @ 2024-03-21 20:43 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, peterz, pasha.tatashin,
ojeda, kent.overstreet, keescook, gary, dennis, cl, boqun.feng,
bjorn3_gh, benno.lossin, aliceryhl, alex.gaynor, a.hindborg,
surenb, akpm
The patch titled
Subject: lib: add allocation tagging support for memory allocation profiling
has been added to the -mm mm-unstable branch. Its filename is
lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Suren Baghdasaryan <surenb@google.com>
Subject: lib: add allocation tagging support for memory allocation profiling
Date: Thu, 21 Mar 2024 09:36:35 -0700
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
Link: https://lkml.kernel.org/r/20240321163705.3067592-14-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
Documentation/admin-guide/sysctl/vm.rst | 16 ++
Documentation/filesystems/proc.rst | 29 ++++
include/asm-generic/codetag.lds.h | 14 ++
include/asm-generic/vmlinux.lds.h | 3
include/linux/alloc_tag.h | 145 +++++++++++++++++++++
include/linux/sched.h | 24 +++
lib/Kconfig.debug | 25 +++
lib/Makefile | 2
lib/alloc_tag.c | 149 ++++++++++++++++++++++
scripts/module.lds.S | 7 +
10 files changed, 414 insertions(+)
--- a/Documentation/admin-guide/sysctl/vm.rst~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocati
The default value is 65530.
+mem_profiling
+==============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
memory_failure_early_kill:
==========================
--- a/Documentation/filesystems/proc.rst~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
============ ===============================================================
File Content
============ ===============================================================
+ allocinfo Memory allocations profiling information
apm Advanced power management info
bootconfig Kernel command line obtained from boot config,
and, if there were kernel parameters from the
@@ -953,6 +954,34 @@ also be allocatable although a lot of fi
reclaimed to achieve this.
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module (if it originates from a loadable module) and the function calling
+the allocation. The number of bytes allocated and number of calls at each
+location are reported.
+
+Example output.
+
+::
+
+ > sort -rn /proc/allocinfo
+ 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
+ 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
+ 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
+ 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
+ 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
+ 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ ...
+
+
meminfo
~~~~~~~
--- /dev/null
+++ a/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name) \
+ . = ALIGN(8); \
+ __start_##_name = .; \
+ KEEP(*(_name)) \
+ __stop_##_name = .;
+
+#define CODETAG_SECTIONS() \
+ SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
--- a/include/asm-generic/vmlinux.lds.h~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
* [__nosave_begin, __nosave_end] for the nosave data
*/
+#include <asm-generic/codetag.lds.h>
+
#ifndef LOAD_OFFSET
#define LOAD_OFFSET 0
#endif
@@ -366,6 +368,7 @@
. = ALIGN(8); \
BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \
BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \
+ CODETAG_SECTIONS() \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
TRACE_PRINTKS() \
--- /dev/null
+++ a/include/linux/alloc_tag.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include <linux/bug.h>
+#include <linux/codetag.h>
+#include <linux/container_of.h>
+#include <linux/preempt.h>
+#include <asm/percpu.h>
+#include <linux/cpumask.h>
+#include <linux/static_key.h>
+
+struct alloc_tag_counters {
+ u64 bytes;
+ u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. The embedded codetag uses the codetag framework.
+ */
+struct alloc_tag {
+ struct codetag ct;
+ struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+ return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+ return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+ struct alloc_tag_counters v = { 0, 0 };
+ struct alloc_tag_counters *counter;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ counter = per_cpu_ptr(tag->counters, cpu);
+ v.bytes += counter->bytes;
+ v.calls += counter->calls;
+ }
+
+ return v;
+}
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ WARN_ONCE(ref && ref->ct,
+ "alloc_tag was not cleared (got tag for %s:%u)\n",
+ ref->ct->filename, ref->ct->lineno);
+
+ WARN_ONCE(!tag, "current->alloc_tag not set");
+}
+
+static inline void alloc_tag_sub_check(union codetag_ref *ref)
+{
+ WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+}
+#else
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) {}
+static inline void alloc_tag_sub_check(union codetag_ref *ref) {}
+#endif
+
+/* Caller should verify both ref and tag to be valid */
+static inline void __alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ ref->ct = &tag->ct;
+ /*
+ * We need to increment the call counter every time we have a new
+ * allocation or when we split a large allocation into smaller ones.
+ * Each new reference for every sub-allocation needs to increment call
+ * counter because when we free each part the counter will be decremented.
+ */
+ this_cpu_inc(tag->counters->calls);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+ alloc_tag_add_check(ref, tag);
+ if (!ref || !tag)
+ return;
+
+ __alloc_tag_ref_set(ref, tag);
+ this_cpu_add(tag->counters->bytes, bytes);
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ struct alloc_tag *tag;
+
+ alloc_tag_sub_check(ref);
+ if (!ref || !ref->ct)
+ return;
+
+ tag = ct_to_alloc_tag(ref->ct);
+
+ this_cpu_sub(tag->counters->bytes, bytes);
+ this_cpu_dec(tag->counters->calls);
+
+ ref->ct = NULL;
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag)
+static inline bool mem_alloc_profiling_enabled(void) { return false; }
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+ size_t bytes) {}
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
--- a/include/linux/sched.h~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
unsigned int flags;
unsigned int ptrace;
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+ struct alloc_tag *alloc_tag;
+#endif
+
#ifdef CONFIG_SMP
int on_cpu;
struct __call_single_node wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
struct task_group *sched_task_group;
#endif
+
#ifdef CONFIG_UCLAMP_TASK
/*
* Clamp values requested for a scheduling entity.
@@ -2187,4 +2192,23 @@ static inline int sched_core_idle_cpu(in
extern void sched_set_stop_task(int cpu, struct task_struct *stop);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+ swap(current->alloc_tag, tag);
+ return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+ current->alloc_tag = old;
+}
+#else
+#define alloc_tag_save(_tag) NULL
+#define alloc_tag_restore(_tag, _old) do {} while (0)
+#endif
+
#endif
--- /dev/null
+++ a/lib/alloc_tag.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/alloc_tag.h>
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_buf.h>
+#include <linux/seq_file.h>
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+ struct codetag_iterator *iter;
+ struct codetag *ct;
+ loff_t node = *pos;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ m->private = iter;
+ if (!iter)
+ return NULL;
+
+ codetag_lock_module_list(alloc_tag_cttype, true);
+ *iter = codetag_get_ct_iter(alloc_tag_cttype);
+ while ((ct = codetag_next_ct(iter)) != NULL && node)
+ node--;
+
+ return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ struct codetag *ct = codetag_next_ct(iter);
+
+ (*pos)++;
+ if (!ct)
+ return NULL;
+
+ return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+ if (iter) {
+ codetag_lock_module_list(alloc_tag_cttype, false);
+ kfree(iter);
+ }
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+ struct alloc_tag *tag = ct_to_alloc_tag(ct);
+ struct alloc_tag_counters counter = alloc_tag_read(tag);
+ s64 bytes = counter.bytes;
+
+ seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
+ codetag_to_text(out, ct);
+ seq_buf_putc(out, ' ');
+ seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ char *bufp;
+ size_t n = seq_get_buf(m, &bufp);
+ struct seq_buf buf;
+
+ seq_buf_init(&buf, bufp, n);
+ alloc_tag_to_text(&buf, iter->ct);
+ seq_commit(m, seq_buf_used(&buf));
+ return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+ .start = allocinfo_start,
+ .next = allocinfo_next,
+ .stop = allocinfo_stop,
+ .show = allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+ proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+ struct codetag_module *cmod)
+{
+ struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+ struct alloc_tag_counters counter;
+ bool module_unused = true;
+ struct alloc_tag *tag;
+ struct codetag *ct;
+
+ for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+ if (iter.cmod != cmod)
+ continue;
+
+ tag = ct_to_alloc_tag(ct);
+ counter = alloc_tag_read(tag);
+
+ if (WARN(counter.bytes,
+ "%s:%u module %s func:%s has %llu allocated at module unload",
+ ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+ module_unused = false;
+ }
+
+ return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+ {
+ .procname = "mem_profiling",
+ .data = &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = proc_do_static_key,
+ },
+ { }
+};
+
+static int __init alloc_tag_init(void)
+{
+ const struct codetag_type_desc desc = {
+ .section = "alloc_tags",
+ .tag_size = sizeof(struct alloc_tag),
+ .module_unload = alloc_tag_module_unload,
+ };
+
+ alloc_tag_cttype = codetag_register_type(&desc);
+ if (IS_ERR_OR_NULL(alloc_tag_cttype))
+ return PTR_ERR(alloc_tag_cttype);
+
+ register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+ procfs_init();
+
+ return 0;
+}
+module_init(alloc_tag_init);
--- a/lib/Kconfig.debug~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
bool
select KALLSYMS
+config MEM_ALLOC_PROFILING
+ bool "Enable memory allocation profiling"
+ default n
+ depends on PROC_FS
+ depends on !DEBUG_FORCE_WEAK_PER_CPU
+ select CODE_TAGGING
+ help
+ Track allocation source locations and record the total allocation
+ size initiated at each code location. The mechanism can be used to
+ track memory leaks with low performance and memory overhead.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ bool "Enable memory allocation profiling by default"
+ default y
+ depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+ bool "Memory allocation profiler debugging"
+ default n
+ depends on MEM_ALLOC_PROFILING
+ select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ help
+ Adds warnings with helpful error messages for memory allocation
+ profiling.
+
source "lib/Kconfig.kasan"
source "lib/Kconfig.kfence"
source "lib/Kconfig.kmsan"
--- a/lib/Makefile~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/lib/Makefile
@@ -234,6 +234,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
lib-$(CONFIG_GENERIC_BUG) += bug.o
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
--- a/scripts/module.lds.S~lib-add-allocation-tagging-support-for-memory-allocation-profiling
+++ a/scripts/module.lds.S
@@ -9,6 +9,8 @@
#define DISCARD_EH_FRAME *(.eh_frame)
#endif
+#include <asm-generic/codetag.lds.h>
+
SECTIONS {
/DISCARD/ : {
*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
.data : {
*(.data .data.[0-9a-zA-Z_]*)
*(.data..L*)
+ CODETAG_SECTIONS()
}
.rodata : {
*(.rodata .rodata.[0-9a-zA-Z_]*)
*(.rodata..L*)
}
+#else
+ .data : {
+ CODETAG_SECTIONS()
+ }
#endif
}
_
Patches currently in -mm which might be from surenb@google.com are
mm-introduce-slabobj_ext-to-support-slab-object-extensions.patch
mm-introduce-__gfp_no_obj_ext-flag-to-selectively-prevent-slabobj_ext-creation.patch
mm-slab-introduce-slab_no_obj_ext-to-avoid-obj_ext-creation.patch
slab-objext-introduce-objext_flags-as-extension-to-page_memcg_data_flags.patch
lib-code-tagging-framework.patch
lib-code-tagging-module-support.patch
lib-prevent-module-unloading-if-memory-is-not-freed.patch
lib-add-allocation-tagging-support-for-memory-allocation-profiling.patch
lib-introduce-support-for-page-allocation-tagging.patch
lib-introduce-early-boot-parameter-to-avoid-page_ext-memory-overhead.patch
mm-percpu-increase-percpu_module_reserve-to-accommodate-allocation-tags.patch
change-alloc_pages-name-in-dma_map_ops-to-avoid-name-conflicts.patch
mm-enable-page-allocation-tagging.patch
mm-create-new-codetag-references-during-page-splitting.patch
mm-fix-non-compound-multi-order-memory-accounting-in-__free_pages.patch
mm-page_ext-enable-early_page_ext-when-config_mem_alloc_profiling_debug=y.patch
lib-add-codetag-reference-into-slabobj_ext.patch
mm-slab-add-allocation-accounting-into-slab-allocation-and-free-paths.patch
mm-slab-enable-slab-allocation-tagging-for-kmalloc-and-friends.patch
mm-percpu-enable-per-cpu-allocation-tagging.patch
lib-add-memory-allocations-report-in-show_mem.patch
codetag-debug-skip-objext-checking-when-its-for-objext-itself.patch
codetag-debug-mark-codetags-for-reserved-pages-as-empty.patch
codetag-debug-introduce-objexts_alloc_fail-to-mark-failed-slab_ext-allocations.patch
* + fix-missing-vmalloch-includes.patch added to mm-unstable branch
@ 2024-03-21 20:42 2% Andrew Morton
From: Andrew Morton @ 2024-03-21 20:42 UTC (permalink / raw)
To: mm-commits, wedsonaf, viro, vbabka, tj, surenb, peterz,
pasha.tatashin, ojeda, keescook, gary, dennis, cl, boqun.feng,
bjorn3_gh, benno.lossin, aliceryhl, alex.gaynor, a.hindborg,
kent.overstreet, akpm
The patch titled
Subject: fix missing vmalloc.h includes
has been added to the -mm mm-unstable branch. Its filename is
fix-missing-vmalloch-includes.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/fix-missing-vmalloch-includes.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Kent Overstreet <kent.overstreet@linux.dev>
Subject: fix missing vmalloc.h includes
Date: Thu, 21 Mar 2024 09:36:23 -0700
Patch series "Memory allocation profiling", v6.
Overview:
Low overhead [1] per-callsite memory allocation profiling. Not just for
debug kernels, overhead low enough to be deployed in production.
Example output:
root@moria-kvm:~# sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Usage:
kconfig options:
- CONFIG_MEM_ALLOC_PROFILING
- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- CONFIG_MEM_ALLOC_PROFILING_DEBUG
adds warnings for allocations that weren't accounted because of a
missing annotation
sysctl:
/proc/sys/vm/mem_profiling
Runtime info:
/proc/allocinfo
Notes:
[1]: Overhead
To measure the overhead we are comparing the following configurations:
(1) Baseline with CONFIG_MEMCG_KMEM=n
(2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n)
(3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y)
(4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
(5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
(6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
(7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
Performance overhead:
To evaluate performance we implemented an in-kernel test executing
multiple get_free_page/free_page and kmalloc/kfree calls with allocation
sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
affinity set to a specific CPU to minimize the noise. Below are results
from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
56 core Intel Xeon:
kmalloc pgalloc
(1 baseline) 6.764s 16.902s
(2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
(3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
(4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
(5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
(6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
(7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
Memory overhead:
Kernel size:
text data bss dec diff
(1) 26515311 18890222 17018880 62424413
(2) 26524728 19423818 16740352 62688898 264485
(3) 26524724 19423818 16740352 62688894 264481
(4) 26524728 19423818 16740352 62688898 264485
(5) 26541782 18964374 16957440 62463596 39183
Memory consumption on a 56 core Intel CPU with 125GB of memory:
Code tags: 192 kB
PageExts: 262144 kB (256MB)
SlabExts: 9876 kB (9.6MB)
PcpuExts: 512 kB (0.5MB)
Total overhead is 0.2% of total memory.
Benchmarks:
Hackbench tests run 100 times:
hackbench -s 512 -l 200 -g 15 -f 25 -P
baseline disabled profiling enabled profiling
avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
stdev 0.0137 0.0188 0.0077
hackbench -l 10000
baseline disabled profiling enabled profiling
avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
stdev 0.0933 0.0286 0.0489
stress-ng tests:
stress-ng --class memory --seq 4 -t 60
stress-ng --class cpu --seq 4 -t 60
Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
[2] https://lore.kernel.org/all/20240306182440.2003814-1-surenb@google.com/
This patch (of 37):
The next patch drops vmalloc.h from a system header in order to fix a
circular dependency; this adds it to all the files that were pulling it in
implicitly.
Link: https://lkml.kernel.org/r/20240321163705.3067592-1-surenb@google.com
Link: https://lkml.kernel.org/r/20240321163705.3067592-2-surenb@google.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alex Gaynor <alex.gaynor@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@samsung.com>
Cc: Benno Lossin <benno.lossin@proton.me>
Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Gary Guo <gary@garyguo.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/alpha/lib/checksum.c | 1 +
arch/alpha/lib/fpreg.c | 1 +
arch/alpha/lib/memcpy.c | 1 +
arch/arm/kernel/irq.c | 1 +
arch/arm/kernel/traps.c | 1 +
arch/arm64/kernel/efi.c | 1 +
arch/loongarch/include/asm/kfence.h | 1 +
arch/powerpc/kernel/iommu.c | 1 +
arch/powerpc/mm/mem.c | 1 +
arch/riscv/kernel/elf_kexec.c | 1 +
arch/riscv/kernel/probes/kprobes.c | 1 +
arch/s390/kernel/cert_store.c | 1 +
arch/s390/kernel/ipl.c | 1 +
arch/x86/include/asm/io.h | 1 +
arch/x86/kernel/cpu/sgx/main.c | 1 +
arch/x86/kernel/irq_64.c | 1 +
arch/x86/mm/fault.c | 1 +
drivers/accel/ivpu/ivpu_mmu_context.c | 1 +
drivers/gpu/drm/gma500/mmu.c | 1 +
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 1 +
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c | 1 +
drivers/gpu/drm/i915/gt/shmem_utils.c | 1 +
drivers/gpu/drm/i915/gvt/firmware.c | 1 +
drivers/gpu/drm/i915/gvt/gtt.c | 1 +
drivers/gpu/drm/i915/gvt/handlers.c | 1 +
drivers/gpu/drm/i915/gvt/mmio.c | 1 +
drivers/gpu/drm/i915/gvt/vgpu.c | 1 +
drivers/gpu/drm/i915/intel_gvt.c | 1 +
drivers/gpu/drm/imagination/pvr_vm_mips.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 1 +
drivers/gpu/drm/omapdrm/omap_gem.c | 1 +
drivers/gpu/drm/v3d/v3d_bo.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 1 +
drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c | 1 +
drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c | 1 +
drivers/net/ethernet/microsoft/mana/hw_channel.c | 1 +
drivers/platform/x86/uv_sysfs.c | 1 +
drivers/scsi/mpi3mr/mpi3mr_transport.c | 2 ++
drivers/vfio/pci/pds/dirty.c | 1 +
drivers/virt/acrn/mm.c | 1 +
drivers/virtio/virtio_mem.c | 1 +
include/linux/pds/pds_common.h | 2 ++
include/rdma/rdmavt_qp.h | 1 +
mm/debug_vm_pgtable.c | 1 +
sound/pci/hda/cs35l41_hda.c | 1 +
52 files changed, 54 insertions(+)
--- a/arch/alpha/lib/checksum.c~fix-missing-vmalloch-includes
+++ a/arch/alpha/lib/checksum.c
@@ -14,6 +14,7 @@
#include <linux/string.h>
#include <asm/byteorder.h>
+#include <asm/checksum.h>
static inline unsigned short from64to16(unsigned long x)
{
--- a/arch/alpha/lib/fpreg.c~fix-missing-vmalloch-includes
+++ a/arch/alpha/lib/fpreg.c
@@ -8,6 +8,7 @@
#include <linux/compiler.h>
#include <linux/export.h>
#include <linux/preempt.h>
+#include <asm/fpu.h>
#include <asm/thread_info.h>
#if defined(CONFIG_ALPHA_EV6) || defined(CONFIG_ALPHA_EV67)
--- a/arch/alpha/lib/memcpy.c~fix-missing-vmalloch-includes
+++ a/arch/alpha/lib/memcpy.c
@@ -18,6 +18,7 @@
#include <linux/types.h>
#include <linux/export.h>
+#include <linux/string.h>
/*
* This should be done in one go with ldq_u*2/mask/stq_u. Do it
--- a/arch/arm64/kernel/efi.c~fix-missing-vmalloch-includes
+++ a/arch/arm64/kernel/efi.c
@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/screen_info.h>
+#include <linux/vmalloc.h>
#include <asm/efi.h>
#include <asm/stacktrace.h>
--- a/arch/arm/kernel/irq.c~fix-missing-vmalloch-includes
+++ a/arch/arm/kernel/irq.c
@@ -32,6 +32,7 @@
#include <linux/kallsyms.h>
#include <linux/proc_fs.h>
#include <linux/export.h>
+#include <linux/vmalloc.h>
#include <asm/hardware/cache-l2x0.h>
#include <asm/hardware/cache-uniphier.h>
--- a/arch/arm/kernel/traps.c~fix-missing-vmalloch-includes
+++ a/arch/arm/kernel/traps.c
@@ -26,6 +26,7 @@
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
#include <linux/irq.h>
+#include <linux/vmalloc.h>
#include <linux/atomic.h>
#include <asm/cacheflush.h>
--- a/arch/loongarch/include/asm/kfence.h~fix-missing-vmalloch-includes
+++ a/arch/loongarch/include/asm/kfence.h
@@ -10,6 +10,7 @@
#define _ASM_LOONGARCH_KFENCE_H
#include <linux/kfence.h>
+#include <linux/vmalloc.h>
#include <asm/pgtable.h>
#include <asm/tlb.h>
--- a/arch/powerpc/kernel/iommu.c~fix-missing-vmalloch-includes
+++ a/arch/powerpc/kernel/iommu.c
@@ -26,6 +26,7 @@
#include <linux/iommu.h>
#include <linux/sched.h>
#include <linux/debugfs.h>
+#include <linux/vmalloc.h>
#include <asm/io.h>
#include <asm/iommu.h>
#include <asm/pci-bridge.h>
--- a/arch/powerpc/mm/mem.c~fix-missing-vmalloch-includes
+++ a/arch/powerpc/mm/mem.c
@@ -16,6 +16,7 @@
#include <linux/highmem.h>
#include <linux/suspend.h>
#include <linux/dma-direct.h>
+#include <linux/vmalloc.h>
#include <asm/swiotlb.h>
#include <asm/machdep.h>
--- a/arch/riscv/kernel/elf_kexec.c~fix-missing-vmalloch-includes
+++ a/arch/riscv/kernel/elf_kexec.c
@@ -19,6 +19,7 @@
#include <linux/libfdt.h>
#include <linux/types.h>
#include <linux/memblock.h>
+#include <linux/vmalloc.h>
#include <asm/setup.h>
int arch_kimage_file_post_load_cleanup(struct kimage *image)
--- a/arch/riscv/kernel/probes/kprobes.c~fix-missing-vmalloch-includes
+++ a/arch/riscv/kernel/probes/kprobes.c
@@ -6,6 +6,7 @@
#include <linux/extable.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
+#include <linux/vmalloc.h>
#include <asm/ptrace.h>
#include <linux/uaccess.h>
#include <asm/sections.h>
--- a/arch/s390/kernel/cert_store.c~fix-missing-vmalloch-includes
+++ a/arch/s390/kernel/cert_store.c
@@ -21,6 +21,7 @@
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
+#include <linux/vmalloc.h>
#include <crypto/sha2.h>
#include <keys/user-type.h>
#include <asm/debug.h>
--- a/arch/s390/kernel/ipl.c~fix-missing-vmalloch-includes
+++ a/arch/s390/kernel/ipl.c
@@ -20,6 +20,7 @@
#include <linux/gfp.h>
#include <linux/crash_dump.h>
#include <linux/debug_locks.h>
+#include <linux/vmalloc.h>
#include <asm/asm-extable.h>
#include <asm/diag.h>
#include <asm/ipl.h>
--- a/arch/x86/include/asm/io.h~fix-missing-vmalloch-includes
+++ a/arch/x86/include/asm/io.h
@@ -42,6 +42,7 @@
#include <asm/early_ioremap.h>
#include <asm/pgtable_types.h>
#include <asm/shared/io.h>
+#include <asm/special_insns.h>
#define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \
--- a/arch/x86/kernel/cpu/sgx/main.c~fix-missing-vmalloch-includes
+++ a/arch/x86/kernel/cpu/sgx/main.c
@@ -13,6 +13,7 @@
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
+#include <linux/vmalloc.h>
#include <asm/sgx.h>
#include "driver.h"
#include "encl.h"
--- a/arch/x86/kernel/irq_64.c~fix-missing-vmalloch-includes
+++ a/arch/x86/kernel/irq_64.c
@@ -18,6 +18,7 @@
#include <linux/uaccess.h>
#include <linux/smp.h>
#include <linux/sched/task_stack.h>
+#include <linux/vmalloc.h>
#include <asm/cpu_entry_area.h>
#include <asm/softirq_stack.h>
--- a/arch/x86/mm/fault.c~fix-missing-vmalloch-includes
+++ a/arch/x86/mm/fault.c
@@ -20,6 +20,7 @@
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
#include <linux/mm_types.h>
#include <linux/mm.h> /* find_and_lock_vma() */
+#include <linux/vmalloc.h>
#include <asm/cpufeature.h> /* boot_cpu_has, ... */
#include <asm/traps.h> /* dotraplinkage, ... */
--- a/drivers/accel/ivpu/ivpu_mmu_context.c~fix-missing-vmalloch-includes
+++ a/drivers/accel/ivpu/ivpu_mmu_context.c
@@ -6,6 +6,7 @@
#include <linux/bitfield.h>
#include <linux/highmem.h>
#include <linux/set_memory.h>
+#include <linux/vmalloc.h>
#include <drm/drm_cache.h>
--- a/drivers/gpu/drm/gma500/mmu.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/gma500/mmu.c
@@ -5,6 +5,7 @@
**************************************************************************/
#include <linux/highmem.h>
+#include <linux/vmalloc.h>
#include "mmu.h"
#include "psb_drv.h"
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -5,6 +5,7 @@
*/
#include <drm/drm_cache.h>
+#include <linux/vmalloc.h>
#include "gt/intel_gt.h"
#include "gt/intel_tlb.h"
--- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
@@ -4,6 +4,7 @@
* Copyright © 2016 Intel Corporation
*/
+#include <linux/vmalloc.h>
#include "mock_dmabuf.h"
static struct sg_table *mock_map_dma_buf(struct dma_buf_attachment *attachment,
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -7,6 +7,7 @@
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "gem/i915_gem_object.h"
--- a/drivers/gpu/drm/i915/gvt/firmware.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/firmware.c
@@ -30,6 +30,7 @@
#include <linux/firmware.h>
#include <linux/crc32.h>
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "gvt.h"
--- a/drivers/gpu/drm/i915/gvt/gtt.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/gtt.c
@@ -39,6 +39,7 @@
#include "trace.h"
#include "gt/intel_gt_regs.h"
+#include <linux/vmalloc.h>
#if defined(VERBOSE_DEBUG)
#define gvt_vdbg_mm(fmt, args...) gvt_dbg_mm(fmt, ##args)
--- a/drivers/gpu/drm/i915/gvt/handlers.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/handlers.c
@@ -52,6 +52,7 @@
#include "display/skl_watermark_regs.h"
#include "display/vlv_dsi_pll_regs.h"
#include "gt/intel_gt_regs.h"
+#include <linux/vmalloc.h>
/* XXX FIXME i915 has changed PP_XXX definition */
#define PCH_PP_STATUS _MMIO(0xc7200)
--- a/drivers/gpu/drm/i915/gvt/mmio.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/mmio.c
@@ -33,6 +33,7 @@
*
*/
+#include <linux/vmalloc.h>
#include "i915_drv.h"
#include "i915_reg.h"
#include "gvt.h"
--- a/drivers/gpu/drm/i915/gvt/vgpu.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -34,6 +34,7 @@
#include "i915_drv.h"
#include "gvt.h"
#include "i915_pvinfo.h"
+#include <linux/vmalloc.h>
void populate_pvinfo_page(struct intel_vgpu *vgpu)
{
--- a/drivers/gpu/drm/i915/intel_gvt.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/i915/intel_gvt.c
@@ -28,6 +28,7 @@
#include "gt/intel_context.h"
#include "gt/intel_ring.h"
#include "gt/shmem_utils.h"
+#include <linux/vmalloc.h>
/**
* DOC: Intel GVT-g host support
--- a/drivers/gpu/drm/imagination/pvr_vm_mips.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/imagination/pvr_vm_mips.c
@@ -14,6 +14,7 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/types.h>
+#include <linux/vmalloc.h>
/**
* pvr_vm_mips_init() - Initialise MIPS FW pagetable
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -4,6 +4,7 @@
*/
#include <linux/dma-buf.h>
+#include <linux/vmalloc.h>
#include <drm/drm.h>
#include <drm/drm_device.h>
--- a/drivers/gpu/drm/omapdrm/omap_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -9,6 +9,7 @@
#include <linux/shmem_fs.h>
#include <linux/spinlock.h>
#include <linux/pfn_t.h>
+#include <linux/vmalloc.h>
#include <drm/drm_prime.h>
#include <drm/drm_vma_manager.h>
--- a/drivers/gpu/drm/v3d/v3d_bo.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/v3d/v3d_bo.c
@@ -21,6 +21,7 @@
#include <linux/dma-buf.h>
#include <linux/pfn_t.h>
+#include <linux/vmalloc.h>
#include "v3d_drv.h"
#include "uapi/drm/v3d_drm.h"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
@@ -54,6 +54,7 @@
#include "vmwgfx_drv.h"
#include "vmwgfx_binding.h"
#include "device_include/svga3d_reg.h"
+#include <linux/vmalloc.h>
#define VMW_BINDING_RT_BIT 0
#define VMW_BINDING_PS_BIT 1
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
@@ -31,6 +31,7 @@
#include <drm/ttm/ttm_placement.h>
#include <linux/sched/signal.h>
+#include <linux/vmalloc.h>
bool vmw_supports_3d(struct vmw_private *dev_priv)
{
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c
@@ -25,6 +25,7 @@
*
**************************************************************************/
+#include <linux/vmalloc.h>
#include "vmwgfx_devcaps.h"
#include "vmwgfx_drv.h"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -53,6 +53,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/version.h>
+#include <linux/vmalloc.h>
#define VMWGFX_DRIVER_DESC "Linux drm driver for VMware graphics devices"
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -35,6 +35,7 @@
#include <linux/sync_file.h>
#include <linux/hashtable.h>
+#include <linux/vmalloc.h>
/*
* Helper macro to get dx_ctx_node if available otherwise print an error
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
@@ -31,6 +31,7 @@
#include <drm/vmwgfx_drm.h>
#include <linux/pci.h>
+#include <linux/vmalloc.h>
int vmw_getparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c~fix-missing-vmalloch-includes
+++ a/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -11,6 +11,7 @@
#include <linux/dma-buf.h>
#include <linux/scatterlist.h>
#include <linux/shmem_fs.h>
+#include <linux/vmalloc.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>
--- a/drivers/hwtracing/coresight/coresight-trbe.c~fix-missing-vmalloch-includes
+++ a/drivers/hwtracing/coresight/coresight-trbe.c
@@ -17,6 +17,7 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
+#include <linux/vmalloc.h>
#include "coresight-self-hosted-trace.h"
#include "coresight-trbe.h"
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c
@@ -15,6 +15,7 @@
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/etherdevice.h>
+#include <linux/vmalloc.h>
#include "octep_config.h"
#include "octep_main.h"
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_mbox.c
@@ -7,6 +7,7 @@
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/netdevice.h>
+#include <linux/vmalloc.h>
#include "octep_vf_config.h"
#include "octep_vf_main.h"
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c~fix-missing-vmalloch-includes
+++ a/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -3,6 +3,7 @@
#include <net/mana/gdma.h>
#include <net/mana/hw_channel.h>
+#include <linux/vmalloc.h>
static int mana_hwc_get_msg_index(struct hw_channel_context *hwc, u16 *msg_id)
{
--- a/drivers/platform/x86/uv_sysfs.c~fix-missing-vmalloch-includes
+++ a/drivers/platform/x86/uv_sysfs.c
@@ -11,6 +11,7 @@
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/kobject.h>
+#include <linux/vmalloc.h>
#include <asm/uv/bios.h>
#include <asm/uv/uv.h>
#include <asm/uv/uv_hub.h>
--- a/drivers/scsi/mpi3mr/mpi3mr_transport.c~fix-missing-vmalloch-includes
+++ a/drivers/scsi/mpi3mr/mpi3mr_transport.c
@@ -7,6 +7,8 @@
*
*/
+#include <linux/vmalloc.h>
+
#include "mpi3mr.h"
/**
--- a/drivers/vfio/pci/pds/dirty.c~fix-missing-vmalloch-includes
+++ a/drivers/vfio/pci/pds/dirty.c
@@ -3,6 +3,7 @@
#include <linux/interval_tree.h>
#include <linux/vfio.h>
+#include <linux/vmalloc.h>
#include <linux/pds/pds_common.h>
#include <linux/pds/pds_core_if.h>
--- a/drivers/virt/acrn/mm.c~fix-missing-vmalloch-includes
+++ a/drivers/virt/acrn/mm.c
@@ -12,6 +12,7 @@
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>
#include "acrn_drv.h"
--- a/drivers/virtio/virtio_mem.c~fix-missing-vmalloch-includes
+++ a/drivers/virtio/virtio_mem.c
@@ -21,6 +21,7 @@
#include <linux/bitmap.h>
#include <linux/lockdep.h>
#include <linux/log2.h>
+#include <linux/vmalloc.h>
#include <acpi/acpi_numa.h>
--- a/include/linux/pds/pds_common.h~fix-missing-vmalloch-includes
+++ a/include/linux/pds/pds_common.h
@@ -4,6 +4,8 @@
#ifndef _PDS_COMMON_H_
#define _PDS_COMMON_H_
+#include <linux/notifier.h>
+
#define PDS_CORE_DRV_NAME "pds_core"
/* the device's internal addressing uses up to 52 bits */
--- a/include/rdma/rdmavt_qp.h~fix-missing-vmalloch-includes
+++ a/include/rdma/rdmavt_qp.h
@@ -11,6 +11,7 @@
#include <rdma/ib_verbs.h>
#include <rdma/rdmavt_cq.h>
#include <rdma/rvt-abi.h>
+#include <linux/vmalloc.h>
/*
* Atomic bit definitions for r_aflags.
*/
--- a/mm/debug_vm_pgtable.c~fix-missing-vmalloch-includes
+++ a/mm/debug_vm_pgtable.c
@@ -30,6 +30,7 @@
#include <linux/start_kernel.h>
#include <linux/sched/mm.h>
#include <linux/io.h>
+#include <linux/vmalloc.h>
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
--- a/sound/pci/hda/cs35l41_hda.c~fix-missing-vmalloch-includes
+++ a/sound/pci/hda/cs35l41_hda.c
@@ -13,6 +13,7 @@
#include <sound/soc.h>
#include <linux/pm_runtime.h>
#include <linux/spi/spi.h>
+#include <linux/vmalloc.h>
#include "hda_local.h"
#include "hda_auto_parser.h"
#include "hda_jack.h"
_
Patches currently in -mm which might be from kent.overstreet@linux.dev are
fix-missing-vmalloch-includes.patch
asm-generic-ioh-kill-vmalloch-dependency.patch
mm-slub-mark-slab_free_freelist_hook-__always_inline.patch
scripts-kallysms-always-include-__start-and-__stop-symbols.patch
fs-convert-alloc_inode_sb-to-a-macro.patch
rust-add-a-rust-helper-for-krealloc.patch
mempool-hook-up-to-memory-allocation-profiling.patch
mm-percpu-introduce-pcpuobj_ext.patch
mm-percpu-add-codetag-reference-into-pcpuobj_ext.patch
mm-vmalloc-enable-memory-allocation-profiling.patch
rhashtable-plumb-through-alloc-tag.patch
maintainers-add-entries-for-code-tagging-and-memory-allocation-profiling.patch
memprofiling-documentation.patch
* Re: [PATCH v6 00/37] Memory allocation profiling
From: Andrew Morton @ 2024-03-21 20:41 UTC (permalink / raw)
To: Suren Baghdasaryan
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
On Thu, 21 Mar 2024 09:36:22 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
> Low overhead [1] per-callsite memory allocation profiling. Not just for
> debug kernels, overhead low enough to be deployed in production.
>
> Example output:
> root@moria-kvm:~# sort -rn /proc/allocinfo
> 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
Did you consider adding a knob to permit all the data to be wiped out?
So people can zap everything, run the chosen workload then go see what
happened?
Of course, this can be done in userspace by taking a snapshot before
and after, then crunching on the two....
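The before/after snapshot approach suggested here can be sketched with standard tools. A minimal example — the allocinfo lines are synthetic stand-ins for real /proc/allocinfo output ("bytes calls file:line func" format), so the numbers are illustrative only:

```shell
# Synthetic before/after snapshots in /proc/allocinfo's format.
before=$(mktemp); after=$(mktemp)
printf '1024 4 mm/filemap.c:1919 func:__filemap_get_folio\n512 2 kernel/fork.c:307 func:alloc_thread_stack_node\n' > "$before"
printf '4096 16 mm/filemap.c:1919 func:__filemap_get_folio\n512 2 kernel/fork.c:307 func:alloc_thread_stack_node\n' > "$after"

# Join the two snapshots on the callsite (fields 3 and 4) and print
# only the callsites whose byte count changed, largest delta first.
awk 'NR==FNR { key = $3 " " $4; bytes[key] = $1; next }
     { key = $3 " " $4; d = $1 - bytes[key]; if (d != 0) print d, key }' \
    "$before" "$after" | sort -rn
rm -f "$before" "$after"
```

On a real system the snapshots would be `cat /proc/allocinfo > before`, run the workload, `cat /proc/allocinfo > after`, then the same awk join.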
* [PATCH v6 37/37] memprofiling: Documentation
From: Suren Baghdasaryan @ 2024-03-21 16:36 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, surenb, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
From: Kent Overstreet <kent.overstreet@linux.dev>
Provide documentation for memory allocation profiling.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Documentation/mm/allocation-profiling.rst | 100 ++++++++++++++++++++++
Documentation/mm/index.rst | 1 +
2 files changed, 101 insertions(+)
create mode 100644 Documentation/mm/allocation-profiling.rst
diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
new file mode 100644
index 000000000000..d3b733b41ae6
--- /dev/null
+++ b/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,100 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+- CONFIG_MEM_ALLOC_PROFILING
+
+- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+
+- CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling=0|1|never
+
+ When set to "never", memory allocation profiling overhead is minimized and it
+ cannot be enabled at runtime (sysctl becomes read-only).
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output::
+
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+
+===================
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at runtime
+- i.e. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+- declares a code tag
+- stashes a pointer to it in task_struct
+- calls the real allocation function
+- and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+- switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+
+- rename it to foo_noprof()
+
+- define a macro version of foo() like so:
+
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+- Hook your data structure's init function, like any other allocation function.
+
+- Within your init function, use the convenience macro alloc_tag_record() to
+ record alloc tag in your data structure.
+
+- Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
index 31d2ac306438..48b9b559ca7b 100644
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -26,6 +26,7 @@ see the :doc:`admin guide <../admin-guide/mm/index>`.
page_cache
shmfs
oom
+ allocation-profiling
Legacy Documentation
====================
--
2.44.0.291.gc1ea87d7ee-goog
* [PATCH v6 13/37] lib: add allocation tagging support for memory allocation profiling
From: Suren Baghdasaryan @ 2024-03-21 16:36 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, surenb, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
Documentation/admin-guide/sysctl/vm.rst | 16 +++
Documentation/filesystems/proc.rst | 29 +++++
include/asm-generic/codetag.lds.h | 14 +++
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 145 +++++++++++++++++++++++
include/linux/sched.h | 24 ++++
lib/Kconfig.debug | 25 ++++
lib/Makefile | 2 +
lib/alloc_tag.c | 149 ++++++++++++++++++++++++
scripts/module.lds.S | 7 ++
10 files changed, 414 insertions(+)
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 lib/alloc_tag.c
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index c59889de122b..e86c968a7a0e 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocation.
The default value is 65530.
+mem_profiling
+==============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
memory_failure_early_kill:
==========================
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index c6a6b9df2104..5d2fc58b5b1f 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
============ ===============================================================
File Content
============ ===============================================================
+ allocinfo Memory allocations profiling information
apm Advanced power management info
bootconfig Kernel command line obtained from boot config,
and, if there were kernel parameters from the
@@ -953,6 +954,34 @@ also be allocatable although a lot of filesystem metadata may have to be
reclaimed to achieve this.
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module (if originates from a loadable module) and the function calling
+the allocation. The number of bytes allocated and number of calls at each
+location are reported.
+
+Example output.
+
+::
+
+ > sort -rn /proc/allocinfo
+ 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
+ 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
+ 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
+ 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
+ 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
+ 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ ...
+
+
meminfo
~~~~~~~
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h
new file mode 100644
index 000000000000..64f536b80380
--- /dev/null
+++ b/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name) \
+ . = ALIGN(8); \
+ __start_##_name = .; \
+ KEEP(*(_name)) \
+ __stop_##_name = .;
+
+#define CODETAG_SECTIONS() \
+ SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index f7749d0f2562..3e4497b5135a 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
* [__nosave_begin, __nosave_end] for the nosave data
*/
+#include <asm-generic/codetag.lds.h>
+
#ifndef LOAD_OFFSET
#define LOAD_OFFSET 0
#endif
@@ -366,6 +368,7 @@
. = ALIGN(8); \
BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \
BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \
+ CODETAG_SECTIONS() \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
TRACE_PRINTKS() \
diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
new file mode 100644
index 000000000000..b970ff1c80dc
--- /dev/null
+++ b/include/linux/alloc_tag.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include <linux/bug.h>
+#include <linux/codetag.h>
+#include <linux/container_of.h>
+#include <linux/preempt.h>
+#include <asm/percpu.h>
+#include <linux/cpumask.h>
+#include <linux/static_key.h>
+
+struct alloc_tag_counters {
+ u64 bytes;
+ u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. The embedded codetag uses the codetag framework.
+ */
+struct alloc_tag {
+ struct codetag ct;
+ struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+ return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+ return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+ struct alloc_tag_counters v = { 0, 0 };
+ struct alloc_tag_counters *counter;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ counter = per_cpu_ptr(tag->counters, cpu);
+ v.bytes += counter->bytes;
+ v.calls += counter->calls;
+ }
+
+ return v;
+}
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ WARN_ONCE(ref && ref->ct,
+ "alloc_tag was not cleared (got tag for %s:%u)\n",
+ ref->ct->filename, ref->ct->lineno);
+
+ WARN_ONCE(!tag, "current->alloc_tag not set");
+}
+
+static inline void alloc_tag_sub_check(union codetag_ref *ref)
+{
+ WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+}
+#else
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) {}
+static inline void alloc_tag_sub_check(union codetag_ref *ref) {}
+#endif
+
+/* Caller should verify both ref and tag to be valid */
+static inline void __alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ ref->ct = &tag->ct;
+ /*
+	 * We need to increment the call counter every time we have a new
+ * allocation or when we split a large allocation into smaller ones.
+ * Each new reference for every sub-allocation needs to increment call
+ * counter because when we free each part the counter will be decremented.
+ */
+ this_cpu_inc(tag->counters->calls);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+ alloc_tag_add_check(ref, tag);
+ if (!ref || !tag)
+ return;
+
+ __alloc_tag_ref_set(ref, tag);
+ this_cpu_add(tag->counters->bytes, bytes);
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ struct alloc_tag *tag;
+
+ alloc_tag_sub_check(ref);
+ if (!ref || !ref->ct)
+ return;
+
+ tag = ct_to_alloc_tag(ref->ct);
+
+ this_cpu_sub(tag->counters->bytes, bytes);
+ this_cpu_dec(tag->counters->calls);
+
+ ref->ct = NULL;
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag)
+static inline bool mem_alloc_profiling_enabled(void) { return false; }
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+ size_t bytes) {}
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 3c2abbc587b4..4118b3f959c3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
unsigned int flags;
unsigned int ptrace;
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+ struct alloc_tag *alloc_tag;
+#endif
+
#ifdef CONFIG_SMP
int on_cpu;
struct __call_single_node wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
struct task_group *sched_task_group;
#endif
+
#ifdef CONFIG_UCLAMP_TASK
/*
* Clamp values requested for a scheduling entity.
@@ -2187,4 +2192,23 @@ static inline int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }
extern void sched_set_stop_task(int cpu, struct task_struct *stop);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+ swap(current->alloc_tag, tag);
+ return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+ current->alloc_tag = old;
+}
+#else
+#define alloc_tag_save(_tag) NULL
+#define alloc_tag_restore(_tag, _old) do {} while (0)
+#endif
+
#endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d2dbdd45fd9a..d9a6477afdb1 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
bool
select KALLSYMS
+config MEM_ALLOC_PROFILING
+ bool "Enable memory allocation profiling"
+ default n
+ depends on PROC_FS
+ depends on !DEBUG_FORCE_WEAK_PER_CPU
+ select CODE_TAGGING
+ help
+ Track allocation source code and record total allocation size
+ initiated at that code location. The mechanism can be used to track
+ memory leaks with a low performance and memory impact.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ bool "Enable memory allocation profiling by default"
+ default y
+ depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+ bool "Memory allocation profiler debugging"
+ default n
+ depends on MEM_ALLOC_PROFILING
+ select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ help
+ Adds warnings with helpful error messages for memory allocation
+ profiling.
+
source "lib/Kconfig.kasan"
source "lib/Kconfig.kfence"
source "lib/Kconfig.kmsan"
diff --git a/lib/Makefile b/lib/Makefile
index 910335da8f13..2f4e17bfb299 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -234,6 +234,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
lib-$(CONFIG_GENERIC_BUG) += bug.o
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
new file mode 100644
index 000000000000..f09c8a422bc2
--- /dev/null
+++ b/lib/alloc_tag.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/alloc_tag.h>
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_buf.h>
+#include <linux/seq_file.h>
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+ struct codetag_iterator *iter;
+ struct codetag *ct;
+ loff_t node = *pos;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ m->private = iter;
+ if (!iter)
+ return NULL;
+
+ codetag_lock_module_list(alloc_tag_cttype, true);
+ *iter = codetag_get_ct_iter(alloc_tag_cttype);
+ while ((ct = codetag_next_ct(iter)) != NULL && node)
+ node--;
+
+ return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ struct codetag *ct = codetag_next_ct(iter);
+
+ (*pos)++;
+ if (!ct)
+ return NULL;
+
+ return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+ if (iter) {
+ codetag_lock_module_list(alloc_tag_cttype, false);
+ kfree(iter);
+ }
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+ struct alloc_tag *tag = ct_to_alloc_tag(ct);
+ struct alloc_tag_counters counter = alloc_tag_read(tag);
+ s64 bytes = counter.bytes;
+
+ seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
+ codetag_to_text(out, ct);
+ seq_buf_putc(out, ' ');
+ seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ char *bufp;
+ size_t n = seq_get_buf(m, &bufp);
+ struct seq_buf buf;
+
+ seq_buf_init(&buf, bufp, n);
+ alloc_tag_to_text(&buf, iter->ct);
+ seq_commit(m, seq_buf_used(&buf));
+ return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+ .start = allocinfo_start,
+ .next = allocinfo_next,
+ .stop = allocinfo_stop,
+ .show = allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+ proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+ struct codetag_module *cmod)
+{
+ struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+ struct alloc_tag_counters counter;
+ bool module_unused = true;
+ struct alloc_tag *tag;
+ struct codetag *ct;
+
+ for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+ if (iter.cmod != cmod)
+ continue;
+
+ tag = ct_to_alloc_tag(ct);
+ counter = alloc_tag_read(tag);
+
+ if (WARN(counter.bytes,
+ "%s:%u module %s func:%s has %llu allocated at module unload",
+ ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+ module_unused = false;
+ }
+
+ return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+ {
+ .procname = "mem_profiling",
+ .data = &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = proc_do_static_key,
+ },
+ { }
+};
+
+static int __init alloc_tag_init(void)
+{
+ const struct codetag_type_desc desc = {
+ .section = "alloc_tags",
+ .tag_size = sizeof(struct alloc_tag),
+ .module_unload = alloc_tag_module_unload,
+ };
+
+ alloc_tag_cttype = codetag_register_type(&desc);
+ if (IS_ERR_OR_NULL(alloc_tag_cttype))
+ return PTR_ERR(alloc_tag_cttype);
+
+ register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+ procfs_init();
+
+ return 0;
+}
+module_init(alloc_tag_init);
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index bf5bcf2836d8..45c67a0994f3 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -9,6 +9,8 @@
#define DISCARD_EH_FRAME *(.eh_frame)
#endif
+#include <asm-generic/codetag.lds.h>
+
SECTIONS {
/DISCARD/ : {
*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
.data : {
*(.data .data.[0-9a-zA-Z_]*)
*(.data..L*)
+ CODETAG_SECTIONS()
}
.rodata : {
*(.rodata .rodata.[0-9a-zA-Z_]*)
*(.rodata..L*)
}
+#else
+ .data : {
+ CODETAG_SECTIONS()
+ }
#endif
}
--
2.44.0.291.gc1ea87d7ee-goog
^ permalink raw reply related [relevance 3%]
* [PATCH v6 00/37] Memory allocation profiling
@ 2024-03-21 16:36 3% Suren Baghdasaryan
2024-03-21 16:36 3% ` [PATCH v6 13/37] lib: add allocation tagging support for memory " Suren Baghdasaryan
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Suren Baghdasaryan @ 2024-03-21 16:36 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, songmuchun, jbaron, aliceryhl, rientjes, minchan,
kaleshsingh, surenb, kernel-team, linux-doc, linux-kernel, iommu,
linux-arch, linux-fsdevel, linux-mm, linux-modules, kasan-dev,
cgroups
Overview:
Low overhead [1] per-callsite memory allocation profiling. Not just for
debug kernels, overhead low enough to be deployed in production.
Example output:
root@moria-kvm:~# sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Since v5 [2]:
- Added Reviewed-by and Acked-by, per Vlastimil Babka and Miguel Ojeda
- Changed pgalloc_tag_{add|sub} to use number of pages instead of order, per Matthew Wilcox
- Changed pgalloc_tag_sub_bytes to pgalloc_tag_sub_pages and adjusted the usage, per Matthew Wilcox
- Moved static key check before prepare_slab_obj_exts_hook(), per Vlastimil Babka
- Fixed RUST helper, per Miguel Ojeda
- Fixed documentation, per Randy Dunlap
- Rebased over mm-unstable
Usage:
kconfig options:
- CONFIG_MEM_ALLOC_PROFILING
- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- CONFIG_MEM_ALLOC_PROFILING_DEBUG
adds warnings for allocations that weren't accounted because of a
missing annotation
sysctl:
/proc/sys/vm/mem_profiling
Runtime info:
/proc/allocinfo
Notes:
[1]: Overhead
To measure the overhead we are comparing the following configurations:
(1) Baseline with CONFIG_MEMCG_KMEM=n
(2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n)
(3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y)
(4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
(5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
(6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
(7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
Performance overhead:
To evaluate performance we implemented an in-kernel test executing
multiple get_free_page/free_page and kmalloc/kfree calls with allocation
sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
affinity set to a specific CPU to minimize the noise. Below are results
from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
56 core Intel Xeon:
kmalloc pgalloc
(1 baseline) 6.764s 16.902s
(2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
(3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
(4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
(5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
(6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
(7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
Memory overhead:
Kernel size:
text data bss dec diff
(1) 26515311 18890222 17018880 62424413
(2) 26524728 19423818 16740352 62688898 264485
(3) 26524724 19423818 16740352 62688894 264481
(4) 26524728 19423818 16740352 62688898 264485
(5) 26541782 18964374 16957440 62463596 39183
Memory consumption on a 56 core Intel CPU with 125GB of memory:
Code tags: 192 kB
PageExts: 262144 kB (256MB)
SlabExts: 9876 kB (9.6MB)
PcpuExts: 512 kB (0.5MB)
Total overhead is 0.2% of total memory.
Benchmarks:
Hackbench tests run 100 times:
hackbench -s 512 -l 200 -g 15 -f 25 -P
baseline disabled profiling enabled profiling
avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
stdev 0.0137 0.0188 0.0077
hackbench -l 10000
baseline disabled profiling enabled profiling
avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
stdev 0.0933 0.0286 0.0489
stress-ng tests:
stress-ng --class memory --seq 4 -t 60
stress-ng --class cpu --seq 4 -t 60
Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
[2] https://lore.kernel.org/all/20240306182440.2003814-1-surenb@google.com/
Kent Overstreet (13):
fix missing vmalloc.h includes
asm-generic/io.h: Kill vmalloc.h dependency
mm/slub: Mark slab_free_freelist_hook() __always_inline
scripts/kallysms: Always include __start and __stop symbols
fs: Convert alloc_inode_sb() to a macro
rust: Add a rust helper for krealloc()
mempool: Hook up to memory allocation profiling
mm: percpu: Introduce pcpuobj_ext
mm: percpu: Add codetag reference into pcpuobj_ext
mm: vmalloc: Enable memory allocation profiling
rhashtable: Plumb through alloc tag
MAINTAINERS: Add entries for code tagging and memory allocation
profiling
memprofiling: Documentation
Suren Baghdasaryan (24):
mm: introduce slabobj_ext to support slab object extensions
mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext
creation
mm/slab: introduce SLAB_NO_OBJ_EXT to avoid obj_ext creation
slab: objext: introduce objext_flags as extension to
page_memcg_data_flags
lib: code tagging framework
lib: code tagging module support
lib: prevent module unloading if memory is not freed
lib: add allocation tagging support for memory allocation profiling
lib: introduce support for page allocation tagging
lib: introduce early boot parameter to avoid page_ext memory overhead
mm: percpu: increase PERCPU_MODULE_RESERVE to accommodate allocation
tags
change alloc_pages name in dma_map_ops to avoid name conflicts
mm: enable page allocation tagging
mm: create new codetag references during page splitting
mm: fix non-compound multi-order memory accounting in __free_pages
mm/page_ext: enable early_page_ext when
CONFIG_MEM_ALLOC_PROFILING_DEBUG=y
lib: add codetag reference into slabobj_ext
mm/slab: add allocation accounting into slab allocation and free paths
mm/slab: enable slab allocation tagging for kmalloc and friends
mm: percpu: enable per-cpu allocation tagging
lib: add memory allocations report in show_mem()
codetag: debug: skip objext checking when it's for objext itself
codetag: debug: mark codetags for reserved pages as empty
codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext
allocations
Documentation/admin-guide/sysctl/vm.rst | 16 +
Documentation/filesystems/proc.rst | 29 ++
Documentation/mm/allocation-profiling.rst | 100 ++++++
Documentation/mm/index.rst | 1 +
MAINTAINERS | 17 +
arch/alpha/kernel/pci_iommu.c | 2 +-
arch/alpha/lib/checksum.c | 1 +
arch/alpha/lib/fpreg.c | 1 +
arch/alpha/lib/memcpy.c | 1 +
arch/arm/kernel/irq.c | 1 +
arch/arm/kernel/traps.c | 1 +
arch/arm64/kernel/efi.c | 1 +
arch/loongarch/include/asm/kfence.h | 1 +
arch/mips/jazz/jazzdma.c | 2 +-
arch/powerpc/kernel/dma-iommu.c | 2 +-
arch/powerpc/kernel/iommu.c | 1 +
arch/powerpc/mm/mem.c | 1 +
arch/powerpc/platforms/ps3/system-bus.c | 4 +-
arch/powerpc/platforms/pseries/vio.c | 2 +-
arch/riscv/kernel/elf_kexec.c | 1 +
arch/riscv/kernel/probes/kprobes.c | 1 +
arch/s390/kernel/cert_store.c | 1 +
arch/s390/kernel/ipl.c | 1 +
arch/x86/include/asm/io.h | 1 +
arch/x86/kernel/amd_gart_64.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 1 +
arch/x86/kernel/irq_64.c | 1 +
arch/x86/mm/fault.c | 1 +
drivers/accel/ivpu/ivpu_mmu_context.c | 1 +
drivers/gpu/drm/gma500/mmu.c | 1 +
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 1 +
.../gpu/drm/i915/gem/selftests/mock_dmabuf.c | 1 +
drivers/gpu/drm/i915/gt/shmem_utils.c | 1 +
drivers/gpu/drm/i915/gvt/firmware.c | 1 +
drivers/gpu/drm/i915/gvt/gtt.c | 1 +
drivers/gpu/drm/i915/gvt/handlers.c | 1 +
drivers/gpu/drm/i915/gvt/mmio.c | 1 +
drivers/gpu/drm/i915/gvt/vgpu.c | 1 +
drivers/gpu/drm/i915/intel_gvt.c | 1 +
drivers/gpu/drm/imagination/pvr_vm_mips.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 1 +
drivers/gpu/drm/omapdrm/omap_gem.c | 1 +
drivers/gpu/drm/v3d/v3d_bo.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 1 +
drivers/iommu/dma-iommu.c | 2 +-
.../marvell/octeon_ep/octep_pfvf_mbox.c | 1 +
.../marvell/octeon_ep_vf/octep_vf_mbox.c | 1 +
.../net/ethernet/microsoft/mana/hw_channel.c | 1 +
drivers/parisc/ccio-dma.c | 2 +-
drivers/parisc/sba_iommu.c | 2 +-
drivers/platform/x86/uv_sysfs.c | 1 +
drivers/scsi/mpi3mr/mpi3mr_transport.c | 2 +
drivers/staging/media/atomisp/pci/hmm/hmm.c | 2 +-
drivers/vfio/pci/pds/dirty.c | 1 +
drivers/virt/acrn/mm.c | 1 +
drivers/virtio/virtio_mem.c | 1 +
drivers/xen/grant-dma-ops.c | 2 +-
drivers/xen/swiotlb-xen.c | 2 +-
include/asm-generic/codetag.lds.h | 14 +
include/asm-generic/io.h | 1 -
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 205 +++++++++++
include/linux/codetag.h | 81 +++++
include/linux/dma-map-ops.h | 2 +-
include/linux/fortify-string.h | 5 +-
include/linux/fs.h | 6 +-
include/linux/gfp.h | 126 ++++---
include/linux/gfp_types.h | 11 +
include/linux/memcontrol.h | 56 ++-
include/linux/mempool.h | 73 ++--
include/linux/mm.h | 9 +
include/linux/mm_types.h | 4 +-
include/linux/page_ext.h | 1 -
include/linux/pagemap.h | 9 +-
include/linux/pds/pds_common.h | 2 +
include/linux/percpu.h | 27 +-
include/linux/pgalloc_tag.h | 134 +++++++
include/linux/rhashtable-types.h | 11 +-
include/linux/sched.h | 24 ++
include/linux/slab.h | 179 +++++-----
include/linux/string.h | 4 +-
include/linux/vmalloc.h | 60 +++-
include/rdma/rdmavt_qp.h | 1 +
init/Kconfig | 4 +
kernel/dma/mapping.c | 4 +-
kernel/kallsyms_selftest.c | 2 +-
kernel/module/main.c | 29 +-
lib/Kconfig.debug | 31 ++
lib/Makefile | 3 +
lib/alloc_tag.c | 243 +++++++++++++
lib/codetag.c | 283 +++++++++++++++
lib/rhashtable.c | 28 +-
mm/compaction.c | 7 +-
mm/debug_vm_pgtable.c | 1 +
mm/filemap.c | 6 +-
mm/huge_memory.c | 2 +
mm/kfence/core.c | 14 +-
mm/kfence/kfence.h | 4 +-
mm/memcontrol.c | 56 +--
mm/mempolicy.c | 52 +--
mm/mempool.c | 36 +-
mm/mm_init.c | 13 +-
mm/nommu.c | 64 ++--
mm/page_alloc.c | 71 ++--
mm/page_ext.c | 13 +
mm/page_owner.c | 2 +-
mm/percpu-internal.h | 26 +-
mm/percpu.c | 120 +++----
mm/show_mem.c | 26 ++
mm/slab.h | 51 ++-
mm/slab_common.c | 6 +-
mm/slub.c | 327 +++++++++++++++---
mm/util.c | 44 +--
mm/vmalloc.c | 88 ++---
rust/helpers.c | 8 +
scripts/kallsyms.c | 13 +
scripts/module.lds.S | 7 +
sound/pci/hda/cs35l41_hda.c | 1 +
125 files changed, 2319 insertions(+), 652 deletions(-)
create mode 100644 Documentation/mm/allocation-profiling.rst
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 include/linux/codetag.h
create mode 100644 include/linux/pgalloc_tag.h
create mode 100644 lib/alloc_tag.c
create mode 100644 lib/codetag.c
base-commit: a824831a082f1d8f9b51a4c0598e633d38555fcf
--
2.44.0.291.gc1ea87d7ee-goog
* [PATCH v2 0/3] fs: aio: more folio conversion
@ 2024-03-21 13:16 6% Kefeng Wang
2024-03-21 13:16 7% ` [PATCH v2 1/3] fs: aio: use a folio in aio_setup_ring() Kefeng Wang
2024-03-22 14:12 0% ` [PATCH v2 0/3] fs: aio: more folio conversion Christian Brauner
0 siblings, 2 replies; 200+ results
From: Kefeng Wang @ 2024-03-21 13:16 UTC (permalink / raw)
To: Alexander Viro, Benjamin LaHaise, Christian Brauner, Jan Kara,
linux-kernel, Matthew Wilcox
Cc: linux-aio, linux-fsdevel, Kefeng Wang
Convert to use folio throughout aio.
v2:
- fix folio check returned from __filemap_get_folio()
- use folio_end_read() suggested by Matthew
Kefeng Wang (3):
fs: aio: use a folio in aio_setup_ring()
fs: aio: use a folio in aio_free_ring()
fs: aio: convert to ring_folios and internal_folios
fs/aio.c | 91 +++++++++++++++++++++++++++++---------------------------
1 file changed, 47 insertions(+), 44 deletions(-)
--
2.27.0
* [PATCH v2 1/3] fs: aio: use a folio in aio_setup_ring()
2024-03-21 13:16 6% [PATCH v2 0/3] fs: aio: more folio conversion Kefeng Wang
@ 2024-03-21 13:16 7% ` Kefeng Wang
2024-03-22 14:12 0% ` [PATCH v2 0/3] fs: aio: more folio conversion Christian Brauner
1 sibling, 0 replies; 200+ results
From: Kefeng Wang @ 2024-03-21 13:16 UTC (permalink / raw)
To: Alexander Viro, Benjamin LaHaise, Christian Brauner, Jan Kara,
linux-kernel, Matthew Wilcox
Cc: linux-aio, linux-fsdevel, Kefeng Wang
Use a folio throughout aio_setup_ring() to remove calls to compound_head(),
and use folio_end_read() to simultaneously mark the folio uptodate and
unlock it.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
fs/aio.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 9cdaa2faa536..60da236ad575 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -527,17 +527,19 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
}
for (i = 0; i < nr_pages; i++) {
- struct page *page;
- page = find_or_create_page(file->f_mapping,
- i, GFP_USER | __GFP_ZERO);
- if (!page)
+ struct folio *folio;
+
+ folio = __filemap_get_folio(file->f_mapping, i,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ GFP_USER | __GFP_ZERO);
+ if (IS_ERR(folio))
break;
- pr_debug("pid(%d) page[%d]->count=%d\n",
- current->pid, i, page_count(page));
- SetPageUptodate(page);
- unlock_page(page);
- ctx->ring_pages[i] = page;
+ pr_debug("pid(%d) [%d] folio->count=%d\n", current->pid, i,
+ folio_ref_count(folio));
+ folio_end_read(folio, true);
+
+ ctx->ring_pages[i] = &folio->page;
}
ctx->nr_pages = i;
--
2.27.0
* Re: [PATCH 1/3] fs: aio: use a folio in aio_setup_ring()
2024-03-21 8:27 7% ` [PATCH 1/3] fs: aio: use a folio in aio_setup_ring() Kefeng Wang
@ 2024-03-21 9:07 0% ` Kefeng Wang
0 siblings, 0 replies; 200+ results
From: Kefeng Wang @ 2024-03-21 9:07 UTC (permalink / raw)
To: Alexander Viro, Benjamin LaHaise, Christian Brauner, Jan Kara,
linux-kernel, Matthew Wilcox
Cc: linux-aio, linux-fsdevel
On 2024/3/21 16:27, Kefeng Wang wrote:
> Use a folio throughout aio_setup_ring() to remove calls to compound_head().
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> fs/aio.c | 21 ++++++++++++---------
> 1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index 9cdaa2faa536..d7f6c8705016 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -527,17 +527,20 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
> }
>
> for (i = 0; i < nr_pages; i++) {
> - struct page *page;
> - page = find_or_create_page(file->f_mapping,
> - i, GFP_USER | __GFP_ZERO);
> - if (!page)
> + struct folio *folio;
> +
> + folio = __filemap_get_folio(file->f_mapping, i,
> + FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
> + GFP_USER | __GFP_ZERO);
> + if (!folio)
Oh, this should be if (IS_ERR(folio)), will update.
> break;
> - pr_debug("pid(%d) page[%d]->count=%d\n",
> - current->pid, i, page_count(page));
> - SetPageUptodate(page);
> - unlock_page(page);
>
> - ctx->ring_pages[i] = page;
> + pr_debug("pid(%d) [%d] folio->count=%d\n", current->pid, i,
> + folio_ref_count(folio));
> + folio_mark_uptodate(folio);
> + folio_unlock(folio);
> +
> + ctx->ring_pages[i] = &folio->page;
> }
> ctx->nr_pages = i;
>
* [PATCH 1/3] fs: aio: use a folio in aio_setup_ring()
@ 2024-03-21 8:27 7% ` Kefeng Wang
2024-03-21 9:07 0% ` Kefeng Wang
0 siblings, 1 reply; 200+ results
From: Kefeng Wang @ 2024-03-21 8:27 UTC (permalink / raw)
To: Alexander Viro, Benjamin LaHaise, Christian Brauner, Jan Kara,
linux-kernel, Matthew Wilcox
Cc: linux-aio, linux-fsdevel, Kefeng Wang
Use a folio throughout aio_setup_ring() to remove calls to compound_head().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
fs/aio.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 9cdaa2faa536..d7f6c8705016 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -527,17 +527,20 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
}
for (i = 0; i < nr_pages; i++) {
- struct page *page;
- page = find_or_create_page(file->f_mapping,
- i, GFP_USER | __GFP_ZERO);
- if (!page)
+ struct folio *folio;
+
+ folio = __filemap_get_folio(file->f_mapping, i,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ GFP_USER | __GFP_ZERO);
+ if (!folio)
break;
- pr_debug("pid(%d) page[%d]->count=%d\n",
- current->pid, i, page_count(page));
- SetPageUptodate(page);
- unlock_page(page);
- ctx->ring_pages[i] = page;
+ pr_debug("pid(%d) [%d] folio->count=%d\n", current->pid, i,
+ folio_ref_count(folio));
+ folio_mark_uptodate(folio);
+ folio_unlock(folio);
+
+ ctx->ring_pages[i] = &folio->page;
}
ctx->nr_pages = i;
--
2.27.0
* Re: [PATCH v3 03/11] filemap: allocate mapping_min_order folios in the page cache
2024-03-13 17:02 14% ` [PATCH v3 03/11] filemap: allocate mapping_min_order " Pankaj Raghav (Samsung)
@ 2024-03-15 13:21 0% ` Pankaj Raghav (Samsung)
0 siblings, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-03-15 13:21 UTC (permalink / raw)
To: willy
Cc: gost.dev, chandan.babu, hare, mcgrof, djwong, linux-mm,
linux-kernel, david, akpm, Pankaj Raghav, linux-xfs,
linux-fsdevel
Hi willy,
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folio of order 0. __filemap_get_folio was trying to allocate higher
> order folios when fgp_flags had higher order hint set but it will default
> to order 0 folio if higher order memory allocation fails.
>
> Supporting mapping_min_order implies that we guarantee each folio in the
> page cache has at least an order of mapping_min_order. When adding new
> folios to the page cache we must also ensure the index used is aligned to
> the mapping_min_order as the page cache requires the index to be aligned
> to the order of the folio.
>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
> mm/filemap.c | 24 +++++++++++++++++-------
> 1 file changed, 17 insertions(+), 7 deletions(-)
Are the changes more in line with what you had in mind?
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index a1cb3ea55fb6..57889f206829 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -849,6 +849,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
> + VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> + folio);
> mapping_set_update(&xas, mapping);
>
> if (!huge) {
> @@ -1886,8 +1888,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio_wait_stable(folio);
> no_page:
> if (!folio && (fgp_flags & FGP_CREAT)) {
> - unsigned order = FGF_GET_ORDER(fgp_flags);
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> int err;
> + index = mapping_align_start_index(mapping, index);
>
> if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> gfp |= __GFP_WRITE;
> @@ -1927,7 +1931,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> break;
> folio_put(folio);
> folio = NULL;
> - } while (order-- > 0);
> + } while (order-- > min_order);
>
> if (err == -EEXIST)
> goto repeat;
> @@ -2416,13 +2420,16 @@ static int filemap_update_page(struct kiocb *iocb,
> }
>
> static int filemap_create_folio(struct file *file,
> - struct address_space *mapping, pgoff_t index,
> + struct address_space *mapping, loff_t pos,
> struct folio_batch *fbatch)
> {
> struct folio *folio;
> int error;
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + pgoff_t index;
>
> - folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
> + folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
> + min_order);
> if (!folio)
> return -ENOMEM;
>
> @@ -2440,6 +2447,8 @@ static int filemap_create_folio(struct file *file,
> * well to keep locking rules simple.
> */
> filemap_invalidate_lock_shared(mapping);
> + /* index in PAGE units but aligned to min_order number of pages. */
> + index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
> error = filemap_add_folio(mapping, folio, index,
> mapping_gfp_constraint(mapping, GFP_KERNEL));
> if (error == -EEXIST)
> @@ -2500,8 +2509,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
> if (!folio_batch_count(fbatch)) {
> if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
> return -EAGAIN;
> - err = filemap_create_folio(filp, mapping,
> - iocb->ki_pos >> PAGE_SHIFT, fbatch);
> + err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
> if (err == AOP_TRUNCATED_PAGE)
> goto retry;
> return err;
> @@ -3662,9 +3670,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
> repeat:
> folio = filemap_get_folio(mapping, index);
> if (IS_ERR(folio)) {
> - folio = filemap_alloc_folio(gfp, 0);
> + folio = filemap_alloc_folio(gfp,
> + mapping_min_folio_order(mapping));
> if (!folio)
> return ERR_PTR(-ENOMEM);
> + index = mapping_align_start_index(mapping, index);
> err = filemap_add_folio(mapping, folio, index, gfp);
> if (unlikely(err)) {
> folio_put(folio);
> --
> 2.43.0
>
--
Pankaj Raghav
^ permalink raw reply [relevance 0%]
* Re: MGLRU premature memcg OOM on slow writes
2024-03-14 22:23 0% ` Yu Zhao
@ 2024-03-15 2:38 5% ` Yafang Shao
0 siblings, 0 replies; 200+ results
From: Yafang Shao @ 2024-03-15 2:38 UTC (permalink / raw)
To: Yu Zhao
Cc: Axel Rasmussen, Chris Down, cgroups, hannes, kernel-team,
linux-kernel, linux-mm
On Fri, Mar 15, 2024 at 6:23 AM Yu Zhao <yuzhao@google.com> wrote:
>
> On Wed, Mar 13, 2024 at 11:33:21AM +0800, Yafang Shao wrote:
> > On Wed, Mar 13, 2024 at 4:11 AM Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > On Tue, Mar 12, 2024 at 02:07:04PM -0600, Yu Zhao wrote:
> > > > On Tue, Mar 12, 2024 at 09:44:19AM -0700, Axel Rasmussen wrote:
> > > > > On Mon, Mar 11, 2024 at 2:11 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > > > > >
> > > > > > On Sat, Mar 9, 2024 at 3:19 AM Axel Rasmussen <axelrasmussen@google.com> wrote:
> > > > > > >
> > > > > > > On Thu, Feb 29, 2024 at 4:30 PM Chris Down <chris@chrisdown.name> wrote:
> > > > > > > >
> > > > > > > > Axel Rasmussen writes:
> > > > > > > > >A couple of dumb questions. In your test, do you have any of the following
> > > > > > > > >configured / enabled?
> > > > > > > > >
> > > > > > > > >/proc/sys/vm/laptop_mode
> > > > > > > > >memory.low
> > > > > > > > >memory.min
> > > > > > > >
> > > > > > > > None of these are enabled. The issue is trivially reproducible by writing to
> > > > > > > > any slow device with memory.max enabled, but from the code it looks like MGLRU
> > > > > > > > is also susceptible to this on global reclaim (although it's less likely due to
> > > > > > > > page diversity).
> > > > > > > >
> > > > > > > > >Besides that, it looks like the place non-MGLRU reclaim wakes up the
> > > > > > > > >flushers is in shrink_inactive_list() (which calls wakeup_flusher_threads()).
> > > > > > > > >Since MGLRU calls shrink_folio_list() directly (from evict_folios()), I agree it
> > > > > > > > >looks like it simply will not do this.
> > > > > > > > >
> > > > > > > > >Yosry pointed out [1], where MGLRU used to call this but stopped doing that. It
> > > > > > > > >makes sense to me at least that doing writeback every time we age is too
> > > > > > > > >aggressive, but doing it in evict_folios() makes some sense to me, basically to
> > > > > > > > >copy the behavior the non-MGLRU path (shrink_inactive_list()) has.
> > > > > > > >
> > > > > > > > Thanks! We may also need reclaim_throttle(), depending on how you implement it.
> > > > > > > > Current non-MGLRU behaviour on slow storage is also highly suspect in terms of
> > > > > > > > (lack of) throttling after moving away from VMSCAN_THROTTLE_WRITEBACK, but one
> > > > > > > > thing at a time :-)
> > > > > > >
> > > > > > >
> > > > > > > Hmm, so I have a patch which I think will help with this situation,
> > > > > > > but I'm having some trouble reproducing the problem on 6.8-rc7 (so
> > > > > > > then I can verify the patch fixes it).
> > > > > >
> > > > > > We encountered the same premature OOM issue caused by numerous dirty pages.
> > > > > > The issue disappears after we revert the commit 14aa8b2d5c2e
> > > > > > "mm/mglru: don't sync disk for each aging cycle"
> > > > > >
> > > > > > To aid in replicating the issue, we've developed a straightforward
> > > > > > script, which consistently reproduces it, even on the latest kernel.
> > > > > > You can find the script provided below:
> > > > > >
> > > > > > ```
> > > > > > #!/bin/bash
> > > > > >
> > > > > > MEMCG="/sys/fs/cgroup/memory/mglru"
> > > > > > ENABLE=$1
> > > > > >
> > > > > > # Avoid waking up the flusher
> > > > > > sysctl -w vm.dirty_background_bytes=$((1024 * 1024 * 1024 *4))
> > > > > > sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024 *4))
> > > > > >
> > > > > > if [ ! -d ${MEMCG} ]; then
> > > > > > mkdir -p ${MEMCG}
> > > > > > fi
> > > > > >
> > > > > > echo $$ > ${MEMCG}/cgroup.procs
> > > > > > echo 1g > ${MEMCG}/memory.limit_in_bytes
> > > > > >
> > > > > > if [ $ENABLE -eq 0 ]; then
> > > > > > echo 0 > /sys/kernel/mm/lru_gen/enabled
> > > > > > else
> > > > > > echo 0x7 > /sys/kernel/mm/lru_gen/enabled
> > > > > > fi
> > > > > >
> > > > > > dd if=/dev/zero of=/data0/mglru.test bs=1M count=1023
> > > > > > rm -rf /data0/mglru.test
> > > > > > ```
> > > > > >
> > > > > > This issue disappears as well after we disable the mglru.
> > > > > >
> > > > > > We hope this script proves helpful in identifying and addressing the
> > > > > > root cause. We eagerly await your insights and proposed fixes.
> > > > >
> > > > > Thanks Yafang, I was able to reproduce the issue using this script.
> > > > >
> > > > > Perhaps interestingly, I was not able to reproduce it with cgroupv2
> > > > > memcgs. I know writeback semantics are quite a bit different there, so
> > > > > perhaps that explains why.
> > > > >
> > > > > Unfortunately, it also reproduces even with the commit I had in mind
> > > > > (basically stealing the "if (all isolated pages are unqueued dirty) {
> > > > > wakeup_flusher_threads(); reclaim_throttle(); }" from
> > > > > shrink_inactive_list, and adding it to MGLRU's evict_folios()). So
> > > > > I'll need to spend some more time on this; I'm planning to send
> > > > > something out for testing next week.
> > > >
> > > > Hi Chris,
> > > >
> > > > My apologies for not getting back to you sooner.
> > > >
> > > > And thanks everyone for all the input!
> > > >
> > > > My take is that Chris' premature OOM kills were NOT really due to
> > > > the flusher not waking up or missing throttling.
> > > >
> > > > Yes, these two are among the differences between the active/inactive
> > > > LRU and MGLRU, but their roles, IMO, are not as important as the LRU
> > > > positions of dirty pages. The active/inactive LRU moves dirty pages
> > > > all the way to the end of the line (reclaim happens at the front)
> > > > whereas MGLRU moves them into the middle, during direct reclaim. The
> > > > rationale for MGLRU was that this way those dirty pages would still
> > > > be counted as "inactive" (or cold).
> > > >
> > > > This theory can be quickly verified by comparing how much
> > > > nr_vmscan_immediate_reclaim grows, i.e.,
> > > >
> > > > Before the copy
> > > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > > > And then after the copy
> > > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > > >
> > > > The growth should be trivial for MGLRU and nontrivial for the
> > > > active/inactive LRU.
> > > >
> > > > If this is indeed the case, I'd appreciate very much if anyone could
> > > > try the following (I'll try it myself too later next week).
> > > >
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 4255619a1a31..020f5d98b9a1 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -4273,10 +4273,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > > > }
> > > >
> > > > /* waiting for writeback */
> > > > - if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> > > > - (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > > - gen = folio_inc_gen(lruvec, folio, true);
> > > > - list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> > > > + if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > > + DEFINE_MAX_SEQ(lruvec);
> > > > + int old_gen, new_gen = lru_gen_from_seq(max_seq);
> > > > +
> > > > + old_gen = folio_update_gen(folio, new_gen);
> > > > + lru_gen_update_size(lruvec, folio, old_gen, new_gen);
> > > > + list_move(&folio->lru, &lrugen->folios[new_gen][type][zone]);
> > >
> > > Sorry missing one line here:
> > >
> > > + folio_set_reclaim(folio);
> > >
> > > > return true;
> > > > }
> >
> > Hi Yu,
> >
> > I have validated it using the script provided for Axel, but
> > unfortunately, it still triggers an OOM error with your patch applied.
> > Here are the results with nr_vmscan_immediate_reclaim:
>
> Thanks for debunking it!
>
> > - non-MGLRU
> > $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> > nr_vmscan_immediate_reclaim 47411776
> >
> > $ ./test.sh 0
> > 1023+0 records in
> > 1023+0 records out
> > 1072693248 bytes (1.1 GB, 1023 MiB) copied, 0.538058 s, 2.0 GB/s
> >
> > $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> > nr_vmscan_immediate_reclaim 47412544
> >
> > - MGLRU
> > $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> > nr_vmscan_immediate_reclaim 47412544
> >
> > $ ./test.sh 1
> > Killed
> >
> > $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> > nr_vmscan_immediate_reclaim 115455600
>
> The delta is ~260GB; I'm still thinking about how that could happen -- is this reliably reproducible?
Yes, it is reliably reproducible on cgroup1 by running the provided script as follows:
$ ./test.sh 1
>
> > The detailed OOM info as follows,
> >
> > [Wed Mar 13 11:16:48 2024] dd invoked oom-killer:
> > gfp_mask=0x101c4a(GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE|__GFP_WRITE),
> > order=3, oom_score_adj=0
> > [Wed Mar 13 11:16:48 2024] CPU: 12 PID: 6911 Comm: dd Not tainted 6.8.0-rc6+ #24
> > [Wed Mar 13 11:16:48 2024] Hardware name: Tencent Cloud CVM, BIOS
> > seabios-1.9.1-qemu-project.org 04/01/2014
> > [Wed Mar 13 11:16:48 2024] Call Trace:
> > [Wed Mar 13 11:16:48 2024] <TASK>
> > [Wed Mar 13 11:16:48 2024] dump_stack_lvl+0x6e/0x90
> > [Wed Mar 13 11:16:48 2024] dump_stack+0x10/0x20
> > [Wed Mar 13 11:16:48 2024] dump_header+0x47/0x2d0
> > [Wed Mar 13 11:16:48 2024] oom_kill_process+0x101/0x2e0
> > [Wed Mar 13 11:16:48 2024] out_of_memory+0xfc/0x430
> > [Wed Mar 13 11:16:48 2024] mem_cgroup_out_of_memory+0x13d/0x160
> > [Wed Mar 13 11:16:48 2024] try_charge_memcg+0x7be/0x850
> > [Wed Mar 13 11:16:48 2024] ? get_mem_cgroup_from_mm+0x5e/0x420
> > [Wed Mar 13 11:16:48 2024] ? rcu_read_unlock+0x25/0x70
> > [Wed Mar 13 11:16:48 2024] __mem_cgroup_charge+0x49/0x90
> > [Wed Mar 13 11:16:48 2024] __filemap_add_folio+0x277/0x450
> > [Wed Mar 13 11:16:48 2024] ? __pfx_workingset_update_node+0x10/0x10
> > [Wed Mar 13 11:16:48 2024] filemap_add_folio+0x3c/0xa0
> > [Wed Mar 13 11:16:48 2024] __filemap_get_folio+0x13d/0x2f0
> > [Wed Mar 13 11:16:48 2024] iomap_get_folio+0x4c/0x60
> > [Wed Mar 13 11:16:48 2024] iomap_write_begin+0x1bb/0x2e0
> > [Wed Mar 13 11:16:48 2024] iomap_write_iter+0xff/0x290
> > [Wed Mar 13 11:16:48 2024] iomap_file_buffered_write+0x91/0xf0
> > [Wed Mar 13 11:16:48 2024] xfs_file_buffered_write+0x9f/0x2d0 [xfs]
> > [Wed Mar 13 11:16:48 2024] ? vfs_write+0x261/0x530
> > [Wed Mar 13 11:16:48 2024] ? debug_smp_processor_id+0x17/0x20
> > [Wed Mar 13 11:16:48 2024] xfs_file_write_iter+0xe9/0x120 [xfs]
> > [Wed Mar 13 11:16:48 2024] vfs_write+0x37d/0x530
> > [Wed Mar 13 11:16:48 2024] ksys_write+0x6d/0xf0
> > [Wed Mar 13 11:16:48 2024] __x64_sys_write+0x19/0x20
> > [Wed Mar 13 11:16:48 2024] do_syscall_64+0x79/0x1a0
> > [Wed Mar 13 11:16:48 2024] entry_SYSCALL_64_after_hwframe+0x6e/0x76
> > [Wed Mar 13 11:16:48 2024] RIP: 0033:0x7f63ea33e927
> > [Wed Mar 13 11:16:48 2024] Code: 0b 00 f7 d8 64 89 02 48 c7 c0 ff ff
> > ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10
> > b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54
> > 24 18 48 89 74 24
> > [Wed Mar 13 11:16:48 2024] RSP: 002b:00007ffc0e874768 EFLAGS: 00000246
> > ORIG_RAX: 0000000000000001
> > [Wed Mar 13 11:16:48 2024] RAX: ffffffffffffffda RBX: 0000000000100000
> > RCX: 00007f63ea33e927
> > [Wed Mar 13 11:16:48 2024] RDX: 0000000000100000 RSI: 00007f63dcafe000
> > RDI: 0000000000000001
> > [Wed Mar 13 11:16:48 2024] RBP: 00007f63dcafe000 R08: 00007f63dcafe000
> > R09: 0000000000000000
> > [Wed Mar 13 11:16:48 2024] R10: 0000000000000022 R11: 0000000000000246
> > R12: 0000000000000000
> > [Wed Mar 13 11:16:48 2024] R13: 0000000000000000 R14: 0000000000000000
> > R15: 00007f63dcafe000
> > [Wed Mar 13 11:16:48 2024] </TASK>
> > [Wed Mar 13 11:16:48 2024] memory: usage 1048556kB, limit 1048576kB, failcnt 153
> > [Wed Mar 13 11:16:48 2024] memory+swap: usage 1048556kB, limit
>
> I see you were actually on cgroup v1 -- this might be a different
> problem than Chris' since he was on v2.
Right, we are still using cgroup1, so this might not be the same issue.
>
> For v1, the throttling is done by commit 81a70c21d9
> ("mm/cgroup/reclaim: fix dirty pages throttling on cgroup v1").
> IOW, the active/inactive LRU throttles in both v1 and v2 (done
> in different ways) whereas MGLRU doesn't in either case.
>
> > 9007199254740988kB, failcnt 0
> > [Wed Mar 13 11:16:48 2024] kmem: usage 200kB, limit
> > 9007199254740988kB, failcnt 0
> > [Wed Mar 13 11:16:48 2024] Memory cgroup stats for /mglru:
> > [Wed Mar 13 11:16:48 2024] cache 1072365568
> > [Wed Mar 13 11:16:48 2024] rss 1150976
> > [Wed Mar 13 11:16:48 2024] rss_huge 0
> > [Wed Mar 13 11:16:48 2024] shmem 0
> > [Wed Mar 13 11:16:48 2024] mapped_file 0
> > [Wed Mar 13 11:16:48 2024] dirty 1072365568
> > [Wed Mar 13 11:16:48 2024] writeback 0
> > [Wed Mar 13 11:16:48 2024] workingset_refault_anon 0
> > [Wed Mar 13 11:16:48 2024] workingset_refault_file 0
> > [Wed Mar 13 11:16:48 2024] swap 0
> > [Wed Mar 13 11:16:48 2024] swapcached 0
> > [Wed Mar 13 11:16:48 2024] pgpgin 2783
> > [Wed Mar 13 11:16:48 2024] pgpgout 1444
> > [Wed Mar 13 11:16:48 2024] pgfault 885
> > [Wed Mar 13 11:16:48 2024] pgmajfault 0
> > [Wed Mar 13 11:16:48 2024] inactive_anon 1146880
> > [Wed Mar 13 11:16:48 2024] active_anon 4096
> > [Wed Mar 13 11:16:48 2024] inactive_file 802357248
> > [Wed Mar 13 11:16:48 2024] active_file 270008320
> > [Wed Mar 13 11:16:48 2024] unevictable 0
> > [Wed Mar 13 11:16:48 2024] hierarchical_memory_limit 1073741824
> > [Wed Mar 13 11:16:48 2024] hierarchical_memsw_limit 9223372036854771712
> > [Wed Mar 13 11:16:48 2024] total_cache 1072365568
> > [Wed Mar 13 11:16:48 2024] total_rss 1150976
> > [Wed Mar 13 11:16:48 2024] total_rss_huge 0
> > [Wed Mar 13 11:16:48 2024] total_shmem 0
> > [Wed Mar 13 11:16:48 2024] total_mapped_file 0
> > [Wed Mar 13 11:16:48 2024] total_dirty 1072365568
> > [Wed Mar 13 11:16:48 2024] total_writeback 0
> > [Wed Mar 13 11:16:48 2024] total_workingset_refault_anon 0
> > [Wed Mar 13 11:16:48 2024] total_workingset_refault_file 0
> > [Wed Mar 13 11:16:48 2024] total_swap 0
> > [Wed Mar 13 11:16:48 2024] total_swapcached 0
> > [Wed Mar 13 11:16:48 2024] total_pgpgin 2783
> > [Wed Mar 13 11:16:48 2024] total_pgpgout 1444
> > [Wed Mar 13 11:16:48 2024] total_pgfault 885
> > [Wed Mar 13 11:16:48 2024] total_pgmajfault 0
> > [Wed Mar 13 11:16:48 2024] total_inactive_anon 1146880
> > [Wed Mar 13 11:16:48 2024] total_active_anon 4096
> > [Wed Mar 13 11:16:48 2024] total_inactive_file 802357248
> > [Wed Mar 13 11:16:48 2024] total_active_file 270008320
> > [Wed Mar 13 11:16:48 2024] total_unevictable 0
> > [Wed Mar 13 11:16:48 2024] Tasks state (memory values in pages):
> > [Wed Mar 13 11:16:48 2024] [ pid ] uid tgid total_vm rss
> > rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
> > [Wed Mar 13 11:16:48 2024] [ 6911] 0 6911 55506 640
> > 256 384 0 73728 0 0 dd
> > [Wed Mar 13 11:16:48 2024]
> > oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0-1,oom_memcg=/mglru,task_memcg=/mglru,task=dd,pid=6911,uid=0
> >
> > The key information extracted from the OOM info is as follows:
> >
> > [Wed Mar 13 11:16:48 2024] cache 1072365568
> > [Wed Mar 13 11:16:48 2024] dirty 1072365568
> >
> > This information reveals that all file pages are dirty pages.
>
> I'm surprised to see there was 0 pages under writeback:
> [Wed Mar 13 11:16:48 2024] total_writeback 0
> What's your dirty limit?
The background dirty threshold is 2G, and the dirty threshold is 4G.
sysctl -w vm.dirty_background_bytes=$((1024 * 1024 * 1024 * 2))
sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024 * 4))
>
> It's unfortunate that the mainline has no per-memcg dirty limit. (We
> do at Google.)
Per-memcg dirty limit is a useful feature. We also support it in our
local kernel, but we didn't enable it for this test case.
It is unclear why the memcg maintainers insist on rejecting the
per-memcg dirty limit :(
>
> > As of now, it appears that the most effective solution to address this
> > issue is to revert the commit 14aa8b2d5c2e. Regarding this commit
> > 14aa8b2d5c2e, its original intention was to eliminate potential SSD
> > wearout, although there's no concrete data available on how it might
> > impact SSD longevity. If the concern about SSD wearout is purely
> > theoretical, it might be reasonable to consider reverting this commit.
>
> The SSD wearout problem was real -- it wasn't really due to
> wakeup_flusher_threads() itself; rather, the original MGLRU code called
> the function improperly. It needs to be called under more restricted
> conditions so that it doesn't cause the SSD wearout problem again.
> However, IMO, wakeup_flusher_threads() is just another bandaid trying
> to work around a more fundamental problem. There is no guarantee that
> the flusher will target the dirty pages in the memcg under reclaim,
> right?
Right, it is a system-wide flusher.
>
> Do you mind trying the following first to see if we can get around
> the problem without calling wakeup_flusher_threads().
I have tried it, but it still triggers the OOM. Below is the information.
[ 71.713649] dd invoked oom-killer:
gfp_mask=0x101c4a(GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE|__GFP_WRITE),
order=3, oom_score_adj=0
[ 71.716317] CPU: 60 PID: 7218 Comm: dd Not tainted 6.8.0-rc6+ #26
[ 71.717677] Call Trace:
[ 71.717917] <TASK>
[ 71.718137] dump_stack_lvl+0x6e/0x90
[ 71.718485] dump_stack+0x10/0x20
[ 71.718799] dump_header+0x47/0x2d0
[ 71.719147] oom_kill_process+0x101/0x2e0
[ 71.719523] out_of_memory+0xfc/0x430
[ 71.719868] mem_cgroup_out_of_memory+0x13d/0x160
[ 71.720322] try_charge_memcg+0x7be/0x850
[ 71.720701] ? get_mem_cgroup_from_mm+0x5e/0x420
[ 71.721137] ? rcu_read_unlock+0x25/0x70
[ 71.721506] __mem_cgroup_charge+0x49/0x90
[ 71.721887] __filemap_add_folio+0x277/0x450
[ 71.722304] ? __pfx_workingset_update_node+0x10/0x10
[ 71.722773] filemap_add_folio+0x3c/0xa0
[ 71.723149] __filemap_get_folio+0x13d/0x2f0
[ 71.723551] iomap_get_folio+0x4c/0x60
[ 71.723911] iomap_write_begin+0x1bb/0x2e0
[ 71.724309] iomap_write_iter+0xff/0x290
[ 71.724683] iomap_file_buffered_write+0x91/0xf0
[ 71.725140] xfs_file_buffered_write+0x9f/0x2d0 [xfs]
[ 71.725793] ? vfs_write+0x261/0x530
[ 71.726148] ? debug_smp_processor_id+0x17/0x20
[ 71.726574] xfs_file_write_iter+0xe9/0x120 [xfs]
[ 71.727161] vfs_write+0x37d/0x530
[ 71.727501] ksys_write+0x6d/0xf0
[ 71.727821] __x64_sys_write+0x19/0x20
[ 71.728181] do_syscall_64+0x79/0x1a0
[ 71.728529] entry_SYSCALL_64_after_hwframe+0x6e/0x76
[ 71.729002] RIP: 0033:0x7fd77053e927
[ 71.729340] Code: 0b 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7
0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00
00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89
74 24
[ 71.730988] RSP: 002b:00007fff032b7218 EFLAGS: 00000246 ORIG_RAX:
0000000000000001
[ 71.731664] RAX: ffffffffffffffda RBX: 0000000000100000 RCX: 00007fd77053e927
[ 71.732308] RDX: 0000000000100000 RSI: 00007fd762cfe000 RDI: 0000000000000001
[ 71.732955] RBP: 00007fd762cfe000 R08: 00007fd762cfe000 R09: 0000000000000000
[ 71.733592] R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000000000
[ 71.734237] R13: 0000000000000000 R14: 0000000000000000 R15: 00007fd762cfe000
[ 71.735175] </TASK>
[ 71.736115] memory: usage 1048548kB, limit 1048576kB, failcnt 114
[ 71.736123] memory+swap: usage 1048548kB, limit 9007199254740988kB, failcnt 0
[ 71.736127] kmem: usage 184kB, limit 9007199254740988kB, failcnt 0
[ 71.736131] Memory cgroup stats for /mglru:
[ 71.736364] cache 1072300032
[ 71.736370] rss 1224704
[ 71.736373] rss_huge 0
[ 71.736376] shmem 0
[ 71.736380] mapped_file 0
[ 71.736383] dirty 1072300032
[ 71.736386] writeback 0
[ 71.736389] workingset_refault_anon 0
[ 71.736393] workingset_refault_file 0
[ 71.736396] swap 0
[ 71.736400] swapcached 0
[ 71.736403] pgpgin 2782
[ 71.736406] pgpgout 1427
[ 71.736410] pgfault 882
[ 71.736414] pgmajfault 0
[ 71.736417] inactive_anon 0
[ 71.736421] active_anon 1220608
[ 71.736424] inactive_file 0
[ 71.736428] active_file 1072300032
[ 71.736431] unevictable 0
[ 71.736435] hierarchical_memory_limit 1073741824
[ 71.736438] hierarchical_memsw_limit 9223372036854771712
[ 71.736442] total_cache 1072300032
[ 71.736445] total_rss 1224704
[ 71.736448] total_rss_huge 0
[ 71.736451] total_shmem 0
[ 71.736455] total_mapped_file 0
[ 71.736458] total_dirty 1072300032
[ 71.736462] total_writeback 0
[ 71.736465] total_workingset_refault_anon 0
[ 71.736469] total_workingset_refault_file 0
[ 71.736472] total_swap 0
[ 71.736475] total_swapcached 0
[ 71.736478] total_pgpgin 2782
[ 71.736482] total_pgpgout 1427
[ 71.736485] total_pgfault 882
[ 71.736488] total_pgmajfault 0
[ 71.736491] total_inactive_anon 0
[ 71.736494] total_active_anon 1220608
[ 71.736497] total_inactive_file 0
[ 71.736501] total_active_file 1072300032
[ 71.736504] total_unevictable 0
[ 71.736508] Tasks state (memory values in pages):
[ 71.736512] [ pid ] uid tgid total_vm rss rss_anon
rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[ 71.736522] [ 7215] 0 7215 55663 768 0
768 0 81920 0 0 test.sh
[ 71.736586] [ 7218] 0 7218 55506 640 256
384 0 69632 0 0 dd
[ 71.736596] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0-1,oom_memcg=/mglru,task_memcg=/mglru,task=test.sh,pid=7215,uid=0
[ 71.736766] Memory cgroup out of memory: Killed process 7215
(test.sh) total-vm:222652kB, anon-rss:0kB, file-rss:3072kB,
shmem-rss:0kB, UID:0 pgtables:80kB oom_score_adj:0
And the key information:
[ 71.736442] total_cache 1072300032
[ 71.736458] total_dirty 1072300032
[ 71.736462] total_writeback 0
>
> Thanks!
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 4255619a1a31..d3cfbd95996d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -225,7 +225,7 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
> return true;
> #endif
> - return false;
> + return lru_gen_enabled();
> }
> #else
> static bool cgroup_reclaim(struct scan_control *sc)
> @@ -4273,8 +4273,10 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> }
>
> /* waiting for writeback */
> - if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> - (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> + if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> + sc->nr.dirty += delta;
> + if (!folio_test_reclaim(folio))
> + sc->nr.congested += delta;
> gen = folio_inc_gen(lruvec, folio, true);
> list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> return true;
--
Regards
Yafang
^ permalink raw reply [relevance 5%]
* Re: MGLRU premature memcg OOM on slow writes
2024-03-13 3:33 4% ` Yafang Shao
@ 2024-03-14 22:23 0% ` Yu Zhao
2024-03-15 2:38 5% ` Yafang Shao
0 siblings, 1 reply; 200+ results
From: Yu Zhao @ 2024-03-14 22:23 UTC (permalink / raw)
To: Yafang Shao
Cc: Axel Rasmussen, Chris Down, cgroups, hannes, kernel-team,
linux-kernel, linux-mm
On Wed, Mar 13, 2024 at 11:33:21AM +0800, Yafang Shao wrote:
> On Wed, Mar 13, 2024 at 4:11 AM Yu Zhao <yuzhao@google.com> wrote:
> >
> > On Tue, Mar 12, 2024 at 02:07:04PM -0600, Yu Zhao wrote:
> > > On Tue, Mar 12, 2024 at 09:44:19AM -0700, Axel Rasmussen wrote:
> > > > On Mon, Mar 11, 2024 at 2:11 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > > > >
> > > > > On Sat, Mar 9, 2024 at 3:19 AM Axel Rasmussen <axelrasmussen@google.com> wrote:
> > > > > >
> > > > > > On Thu, Feb 29, 2024 at 4:30 PM Chris Down <chris@chrisdown.name> wrote:
> > > > > > >
> > > > > > > Axel Rasmussen writes:
> > > > > > > >A couple of dumb questions. In your test, do you have any of the following
> > > > > > > >configured / enabled?
> > > > > > > >
> > > > > > > >/proc/sys/vm/laptop_mode
> > > > > > > >memory.low
> > > > > > > >memory.min
> > > > > > >
> > > > > > > None of these are enabled. The issue is trivially reproducible by writing to
> > > > > > > any slow device with memory.max enabled, but from the code it looks like MGLRU
> > > > > > > is also susceptible to this on global reclaim (although it's less likely due to
> > > > > > > page diversity).
> > > > > > >
> > > > > > > >Besides that, it looks like the place non-MGLRU reclaim wakes up the
> > > > > > > >flushers is in shrink_inactive_list() (which calls wakeup_flusher_threads()).
> > > > > > > >Since MGLRU calls shrink_folio_list() directly (from evict_folios()), I agree it
> > > > > > > >looks like it simply will not do this.
> > > > > > > >
> > > > > > > >Yosry pointed out [1], where MGLRU used to call this but stopped doing that. It
> > > > > > > >makes sense to me at least that doing writeback every time we age is too
> > > > > > > >aggressive, but doing it in evict_folios() makes some sense to me, basically to
> > > > > > > >copy the behavior the non-MGLRU path (shrink_inactive_list()) has.
> > > > > > >
> > > > > > > Thanks! We may also need reclaim_throttle(), depending on how you implement it.
> > > > > > > Current non-MGLRU behaviour on slow storage is also highly suspect in terms of
> > > > > > > (lack of) throttling after moving away from VMSCAN_THROTTLE_WRITEBACK, but one
> > > > > > > thing at a time :-)
> > > > > >
> > > > > >
> > > > > > Hmm, so I have a patch which I think will help with this situation,
> > > > > > but I'm having some trouble reproducing the problem on 6.8-rc7 (so
> > > > > > then I can verify the patch fixes it).
> > > > >
> > > > > We encountered the same premature OOM issue caused by numerous dirty pages.
> > > > > The issue disappears after we revert the commit 14aa8b2d5c2e
> > > > > "mm/mglru: don't sync disk for each aging cycle"
> > > > >
> > > > > To aid in replicating the issue, we've developed a straightforward
> > > > > script, which consistently reproduces it, even on the latest kernel.
> > > > > You can find the script provided below:
> > > > >
> > > > > ```
> > > > > #!/bin/bash
> > > > >
> > > > > MEMCG="/sys/fs/cgroup/memory/mglru"
> > > > > ENABLE=$1
> > > > >
> > > > > # Avoid waking up the flusher
> > > > > sysctl -w vm.dirty_background_bytes=$((1024 * 1024 * 1024 *4))
> > > > > sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024 *4))
> > > > >
> > > > > if [ ! -d ${MEMCG} ]; then
> > > > > mkdir -p ${MEMCG}
> > > > > fi
> > > > >
> > > > > echo $$ > ${MEMCG}/cgroup.procs
> > > > > echo 1g > ${MEMCG}/memory.limit_in_bytes
> > > > >
> > > > > if [ $ENABLE -eq 0 ]; then
> > > > > echo 0 > /sys/kernel/mm/lru_gen/enabled
> > > > > else
> > > > > echo 0x7 > /sys/kernel/mm/lru_gen/enabled
> > > > > fi
> > > > >
> > > > > dd if=/dev/zero of=/data0/mglru.test bs=1M count=1023
> > > > > rm -rf /data0/mglru.test
> > > > > ```
> > > > >
> > > > > This issue disappears as well after we disable the mglru.
> > > > >
> > > > > We hope this script proves helpful in identifying and addressing the
> > > > > root cause. We eagerly await your insights and proposed fixes.
> > > >
> > > > Thanks Yafang, I was able to reproduce the issue using this script.
> > > >
> > > > Perhaps interestingly, I was not able to reproduce it with cgroupv2
> > > > memcgs. I know writeback semantics are quite a bit different there, so
> > > > perhaps that explains why.
> > > >
> > > > Unfortunately, it also reproduces even with the commit I had in mind
> > > > (basically stealing the "if (all isolated pages are unqueued dirty) {
> > > > wakeup_flusher_threads(); reclaim_throttle(); }" from
> > > > shrink_inactive_list, and adding it to MGLRU's evict_folios()). So
> > > > I'll need to spend some more time on this; I'm planning to send
> > > > something out for testing next week.
> > >
> > > Hi Chris,
> > >
> > > My apologies for not getting back to you sooner.
> > >
> > > And thanks everyone for all the input!
> > >
> > > My take is that Chris' premature OOM kills were NOT really due to
> > > the flusher not waking up or missing throttling.
> > >
> > > Yes, these two are among the differences between the active/inactive
> > > LRU and MGLRU, but their roles, IMO, are not as important as the LRU
> > > positions of dirty pages. The active/inactive LRU moves dirty pages
> > > all the way to the end of the line (reclaim happens at the front)
> > > whereas MGLRU moves them into the middle, during direct reclaim. The
> > > rationale for MGLRU was that this way those dirty pages would still
> > > be counted as "inactive" (or cold).
> > >
> > > This theory can be quickly verified by comparing how much
> > > nr_vmscan_immediate_reclaim grows, i.e.,
> > >
> > > Before the copy
> > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > > And then after the copy
> > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > >
> > > The growth should be trivial for MGLRU and nontrivial for the
> > > active/inactive LRU.
> > >
> > > If this is indeed the case, I'd appreciate very much if anyone could
> > > try the following (I'll try it myself too later next week).
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 4255619a1a31..020f5d98b9a1 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -4273,10 +4273,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > > }
> > >
> > > /* waiting for writeback */
> > > - if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> > > - (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > - gen = folio_inc_gen(lruvec, folio, true);
> > > - list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> > > + if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > + DEFINE_MAX_SEQ(lruvec);
> > > + int old_gen, new_gen = lru_gen_from_seq(max_seq);
> > > +
> > > + old_gen = folio_update_gen(folio, new_gen);
> > > + lru_gen_update_size(lruvec, folio, old_gen, new_gen);
> > > + list_move(&folio->lru, &lrugen->folios[new_gen][type][zone]);
> >
> > Sorry missing one line here:
> >
> > + folio_set_reclaim(folio);
> >
> > > return true;
> > > }
>
> Hi Yu,
>
> I have validated it using the script provided for Axel, but
> unfortunately, it still triggers an OOM error with your patch applied.
> Here are the results with nr_vmscan_immediate_reclaim:
Thanks for debunking it!
> - non-MGLRU
> $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> nr_vmscan_immediate_reclaim 47411776
>
> $ ./test.sh 0
> 1023+0 records in
> 1023+0 records out
> 1072693248 bytes (1.1 GB, 1023 MiB) copied, 0.538058 s, 2.0 GB/s
>
> $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> nr_vmscan_immediate_reclaim 47412544
>
> - MGLRU
> $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> nr_vmscan_immediate_reclaim 47412544
>
> $ ./test.sh 1
> Killed
>
> $ grep nr_vmscan_immediate_reclaim /proc/vmstat
> nr_vmscan_immediate_reclaim 115455600
The delta is ~260GB. I'm still thinking about how that could happen -- is this reliably reproducible?
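For reference, the ~260GB figure follows directly from the two counter readings above: nr_vmscan_immediate_reclaim is counted in pages, and assuming the usual 4KiB page size on this machine, a quick sketch of the arithmetic is:

```python
# Sanity check of the "~260GB" delta. nr_vmscan_immediate_reclaim is in
# pages; 4KiB pages are assumed (typical for x86-64).
PAGE_SIZE = 4096

before = 47412544   # reading before ./test.sh 1
after = 115455600   # reading after ./test.sh 1

delta_pages = after - before
delta_bytes = delta_pages * PAGE_SIZE

print(delta_pages)                 # 68043056 pages
print(round(delta_bytes / 2**30))  # ~260 GiB
```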
> The detailed OOM info as follows,
>
> [Wed Mar 13 11:16:48 2024] dd invoked oom-killer:
> gfp_mask=0x101c4a(GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE|__GFP_WRITE),
> order=3, oom_score_adj=0
> [Wed Mar 13 11:16:48 2024] CPU: 12 PID: 6911 Comm: dd Not tainted 6.8.0-rc6+ #24
> [Wed Mar 13 11:16:48 2024] Hardware name: Tencent Cloud CVM, BIOS
> seabios-1.9.1-qemu-project.org 04/01/2014
> [Wed Mar 13 11:16:48 2024] Call Trace:
> [Wed Mar 13 11:16:48 2024] <TASK>
> [Wed Mar 13 11:16:48 2024] dump_stack_lvl+0x6e/0x90
> [Wed Mar 13 11:16:48 2024] dump_stack+0x10/0x20
> [Wed Mar 13 11:16:48 2024] dump_header+0x47/0x2d0
> [Wed Mar 13 11:16:48 2024] oom_kill_process+0x101/0x2e0
> [Wed Mar 13 11:16:48 2024] out_of_memory+0xfc/0x430
> [Wed Mar 13 11:16:48 2024] mem_cgroup_out_of_memory+0x13d/0x160
> [Wed Mar 13 11:16:48 2024] try_charge_memcg+0x7be/0x850
> [Wed Mar 13 11:16:48 2024] ? get_mem_cgroup_from_mm+0x5e/0x420
> [Wed Mar 13 11:16:48 2024] ? rcu_read_unlock+0x25/0x70
> [Wed Mar 13 11:16:48 2024] __mem_cgroup_charge+0x49/0x90
> [Wed Mar 13 11:16:48 2024] __filemap_add_folio+0x277/0x450
> [Wed Mar 13 11:16:48 2024] ? __pfx_workingset_update_node+0x10/0x10
> [Wed Mar 13 11:16:48 2024] filemap_add_folio+0x3c/0xa0
> [Wed Mar 13 11:16:48 2024] __filemap_get_folio+0x13d/0x2f0
> [Wed Mar 13 11:16:48 2024] iomap_get_folio+0x4c/0x60
> [Wed Mar 13 11:16:48 2024] iomap_write_begin+0x1bb/0x2e0
> [Wed Mar 13 11:16:48 2024] iomap_write_iter+0xff/0x290
> [Wed Mar 13 11:16:48 2024] iomap_file_buffered_write+0x91/0xf0
> [Wed Mar 13 11:16:48 2024] xfs_file_buffered_write+0x9f/0x2d0 [xfs]
> [Wed Mar 13 11:16:48 2024] ? vfs_write+0x261/0x530
> [Wed Mar 13 11:16:48 2024] ? debug_smp_processor_id+0x17/0x20
> [Wed Mar 13 11:16:48 2024] xfs_file_write_iter+0xe9/0x120 [xfs]
> [Wed Mar 13 11:16:48 2024] vfs_write+0x37d/0x530
> [Wed Mar 13 11:16:48 2024] ksys_write+0x6d/0xf0
> [Wed Mar 13 11:16:48 2024] __x64_sys_write+0x19/0x20
> [Wed Mar 13 11:16:48 2024] do_syscall_64+0x79/0x1a0
> [Wed Mar 13 11:16:48 2024] entry_SYSCALL_64_after_hwframe+0x6e/0x76
> [Wed Mar 13 11:16:48 2024] RIP: 0033:0x7f63ea33e927
> [Wed Mar 13 11:16:48 2024] Code: 0b 00 f7 d8 64 89 02 48 c7 c0 ff ff
> ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10
> b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54
> 24 18 48 89 74 24
> [Wed Mar 13 11:16:48 2024] RSP: 002b:00007ffc0e874768 EFLAGS: 00000246
> ORIG_RAX: 0000000000000001
> [Wed Mar 13 11:16:48 2024] RAX: ffffffffffffffda RBX: 0000000000100000
> RCX: 00007f63ea33e927
> [Wed Mar 13 11:16:48 2024] RDX: 0000000000100000 RSI: 00007f63dcafe000
> RDI: 0000000000000001
> [Wed Mar 13 11:16:48 2024] RBP: 00007f63dcafe000 R08: 00007f63dcafe000
> R09: 0000000000000000
> [Wed Mar 13 11:16:48 2024] R10: 0000000000000022 R11: 0000000000000246
> R12: 0000000000000000
> [Wed Mar 13 11:16:48 2024] R13: 0000000000000000 R14: 0000000000000000
> R15: 00007f63dcafe000
> [Wed Mar 13 11:16:48 2024] </TASK>
> [Wed Mar 13 11:16:48 2024] memory: usage 1048556kB, limit 1048576kB, failcnt 153
> [Wed Mar 13 11:16:48 2024] memory+swap: usage 1048556kB, limit
I see you were actually on cgroup v1 -- this might be a different
problem from Chris's, since he was on v2.
For v1, the throttling is done by commit 81a70c21d9
("mm/cgroup/reclaim: fix dirty pages throttling on cgroup v1").
IOW, the active/inactive LRU throttles in both v1 and v2 (done
in different ways) whereas MGLRU doesn't in either case.
> 9007199254740988kB, failcnt 0
> [Wed Mar 13 11:16:48 2024] kmem: usage 200kB, limit
> 9007199254740988kB, failcnt 0
> [Wed Mar 13 11:16:48 2024] Memory cgroup stats for /mglru:
> [Wed Mar 13 11:16:48 2024] cache 1072365568
> [Wed Mar 13 11:16:48 2024] rss 1150976
> [Wed Mar 13 11:16:48 2024] rss_huge 0
> [Wed Mar 13 11:16:48 2024] shmem 0
> [Wed Mar 13 11:16:48 2024] mapped_file 0
> [Wed Mar 13 11:16:48 2024] dirty 1072365568
> [Wed Mar 13 11:16:48 2024] writeback 0
> [Wed Mar 13 11:16:48 2024] workingset_refault_anon 0
> [Wed Mar 13 11:16:48 2024] workingset_refault_file 0
> [Wed Mar 13 11:16:48 2024] swap 0
> [Wed Mar 13 11:16:48 2024] swapcached 0
> [Wed Mar 13 11:16:48 2024] pgpgin 2783
> [Wed Mar 13 11:16:48 2024] pgpgout 1444
> [Wed Mar 13 11:16:48 2024] pgfault 885
> [Wed Mar 13 11:16:48 2024] pgmajfault 0
> [Wed Mar 13 11:16:48 2024] inactive_anon 1146880
> [Wed Mar 13 11:16:48 2024] active_anon 4096
> [Wed Mar 13 11:16:48 2024] inactive_file 802357248
> [Wed Mar 13 11:16:48 2024] active_file 270008320
> [Wed Mar 13 11:16:48 2024] unevictable 0
> [Wed Mar 13 11:16:48 2024] hierarchical_memory_limit 1073741824
> [Wed Mar 13 11:16:48 2024] hierarchical_memsw_limit 9223372036854771712
> [Wed Mar 13 11:16:48 2024] total_cache 1072365568
> [Wed Mar 13 11:16:48 2024] total_rss 1150976
> [Wed Mar 13 11:16:48 2024] total_rss_huge 0
> [Wed Mar 13 11:16:48 2024] total_shmem 0
> [Wed Mar 13 11:16:48 2024] total_mapped_file 0
> [Wed Mar 13 11:16:48 2024] total_dirty 1072365568
> [Wed Mar 13 11:16:48 2024] total_writeback 0
> [Wed Mar 13 11:16:48 2024] total_workingset_refault_anon 0
> [Wed Mar 13 11:16:48 2024] total_workingset_refault_file 0
> [Wed Mar 13 11:16:48 2024] total_swap 0
> [Wed Mar 13 11:16:48 2024] total_swapcached 0
> [Wed Mar 13 11:16:48 2024] total_pgpgin 2783
> [Wed Mar 13 11:16:48 2024] total_pgpgout 1444
> [Wed Mar 13 11:16:48 2024] total_pgfault 885
> [Wed Mar 13 11:16:48 2024] total_pgmajfault 0
> [Wed Mar 13 11:16:48 2024] total_inactive_anon 1146880
> [Wed Mar 13 11:16:48 2024] total_active_anon 4096
> [Wed Mar 13 11:16:48 2024] total_inactive_file 802357248
> [Wed Mar 13 11:16:48 2024] total_active_file 270008320
> [Wed Mar 13 11:16:48 2024] total_unevictable 0
> [Wed Mar 13 11:16:48 2024] Tasks state (memory values in pages):
> [Wed Mar 13 11:16:48 2024] [ pid ] uid tgid total_vm rss
> rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
> [Wed Mar 13 11:16:48 2024] [ 6911] 0 6911 55506 640
> 256 384 0 73728 0 0 dd
> [Wed Mar 13 11:16:48 2024]
> oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0-1,oom_memcg=/mglru,task_memcg=/mglru,task=dd,pid=6911,uid=0
>
> The key information extracted from the OOM info is as follows:
>
> [Wed Mar 13 11:16:48 2024] cache 1072365568
> [Wed Mar 13 11:16:48 2024] dirty 1072365568
>
> This information reveals that all file pages are dirty pages.
I'm surprised to see there were 0 pages under writeback:
[Wed Mar 13 11:16:48 2024] total_writeback 0
What's your dirty limit?
It's unfortunate that the mainline has no per-memcg dirty limit. (We
do at Google.)
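As a rough sketch of why the flusher never gets a chance to run with the reproduction script earlier in the thread (values taken from that script; this assumes no per-memcg dirty limit, as in mainline): the global background threshold is set far above the memcg limit, so the memcg fills entirely with dirty pages before global writeback would ever wake up.

```python
# Values from the reproduction script posted earlier in the thread.
dirty_background_bytes = 4 * 1024**3   # vm.dirty_background_bytes
memcg_limit = 1 * 1024**3              # memory.limit_in_bytes

# The global flusher only wakes once dirty data exceeds
# dirty_background_bytes, but the memcg hits its limit long before
# that much dirty data can accumulate -- hence "dirty == cache" in
# the OOM report.
assert memcg_limit < dirty_background_bytes
print(dirty_background_bytes // memcg_limit)  # 4
```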
> As of now, it appears that the most effective solution to address this
> issue is to revert the commit 14aa8b2d5c2e. Regarding this commit
> 14aa8b2d5c2e, its original intention was to eliminate potential SSD
> wearout, although there's no concrete data available on how it might
> impact SSD longevity. If the concern about SSD wearout is purely
> theoretical, it might be reasonable to consider reverting this commit.
The SSD wearout problem was real -- it wasn't really due to
wakeup_flusher_threads() itself; rather, the original MGLRU code called
the function improperly. It needs to be called under more restricted
conditions so that it doesn't cause the SSD wearout problem again.
However, IMO, wakeup_flusher_threads() is just another bandaid trying
to work around a more fundamental problem. There is no guarantee that
the flusher will target the dirty pages in the memcg under reclaim,
right?
Do you mind trying the following first to see if we can get around
the problem without calling wakeup_flusher_threads()?
Thanks!
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4255619a1a31..d3cfbd95996d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -225,7 +225,7 @@ static bool writeback_throttling_sane(struct scan_control *sc)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
return true;
#endif
- return false;
+ return lru_gen_enabled();
}
#else
static bool cgroup_reclaim(struct scan_control *sc)
@@ -4273,8 +4273,10 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
}
/* waiting for writeback */
- if (folio_test_locked(folio) || folio_test_writeback(folio) ||
- (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
+ if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
+ sc->nr.dirty += delta;
+ if (!folio_test_reclaim(folio))
+ sc->nr.congested += delta;
gen = folio_inc_gen(lruvec, folio, true);
list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
return true;
^ permalink raw reply related [relevance 0%]
* [PATCH v3 03/11] filemap: allocate mapping_min_order folios in the page cache
2024-03-13 17:02 6% ` [PATCH v3 01/11] mm: Support order-1 folios in the page cache Pankaj Raghav (Samsung)
@ 2024-03-13 17:02 14% ` Pankaj Raghav (Samsung)
2024-03-15 13:21 0% ` Pankaj Raghav (Samsung)
1 sibling, 1 reply; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-03-13 17:02 UTC (permalink / raw)
To: willy, linux-xfs, linux-fsdevel
Cc: gost.dev, chandan.babu, hare, mcgrof, djwong, linux-mm,
linux-kernel, david, akpm, Pankaj Raghav
From: Luis Chamberlain <mcgrof@kernel.org>
filemap_create_folio() and do_read_cache_folio() always allocated
folios of order 0. __filemap_get_folio() tried to allocate higher-order
folios when fgp_flags had a higher-order hint set, but it would fall
back to an order-0 folio if the higher-order memory allocation failed.
Supporting mapping_min_order implies that we guarantee each folio in the
page cache has an order of at least mapping_min_order. When adding new
folios to the page cache we must also ensure the index used is aligned to
mapping_min_order, as the page cache requires the index to be aligned
to the order of the folio.
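The alignment rule described above can be sketched as follows (a rough illustration with a hypothetical helper name, mirroring what the patch's mapping_align_start_index() is described as doing):

```python
def align_index(index: int, min_order: int) -> int:
    """Round a page-cache index down to a multiple of 2**min_order,
    i.e. to the start of the folio that would cover it."""
    return (index >> min_order) << min_order

# With min_order = 2 (folios of at least 4 pages), index 7 falls in
# the folio starting at index 4; index 8 is already aligned.
print(align_index(7, 2))  # 4
print(align_index(8, 2))  # 8
print(align_index(5, 0))  # 5 (order-0: no change)
```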
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Co-developed-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
mm/filemap.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index a1cb3ea55fb6..57889f206829 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -849,6 +849,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
mapping_set_update(&xas, mapping);
if (!huge) {
@@ -1886,8 +1888,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
+ index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
gfp |= __GFP_WRITE;
@@ -1927,7 +1931,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2416,13 +2420,16 @@ static int filemap_update_page(struct kiocb *iocb,
}
static int filemap_create_folio(struct file *file,
- struct address_space *mapping, pgoff_t index,
+ struct address_space *mapping, loff_t pos,
struct folio_batch *fbatch)
{
struct folio *folio;
int error;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t index;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ min_order);
if (!folio)
return -ENOMEM;
@@ -2440,6 +2447,8 @@ static int filemap_create_folio(struct file *file,
* well to keep locking rules simple.
*/
filemap_invalidate_lock_shared(mapping);
+ /* index in PAGE units but aligned to min_order number of pages. */
+ index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
error = filemap_add_folio(mapping, folio, index,
mapping_gfp_constraint(mapping, GFP_KERNEL));
if (error == -EEXIST)
@@ -2500,8 +2509,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- err = filemap_create_folio(filp, mapping,
- iocb->ki_pos >> PAGE_SHIFT, fbatch);
+ err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
if (err == AOP_TRUNCATED_PAGE)
goto retry;
return err;
@@ -3662,9 +3670,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
+ index = mapping_align_start_index(mapping, index);
err = filemap_add_folio(mapping, folio, index, gfp);
if (unlikely(err)) {
folio_put(folio);
--
2.43.0
* [PATCH v3 01/11] mm: Support order-1 folios in the page cache
@ 2024-03-13 17:02 6% ` Pankaj Raghav (Samsung)
2024-03-13 17:02 14% ` [PATCH v3 03/11] filemap: allocate mapping_min_order " Pankaj Raghav (Samsung)
1 sibling, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-03-13 17:02 UTC (permalink / raw)
To: willy, linux-xfs, linux-fsdevel
Cc: gost.dev, chandan.babu, hare, mcgrof, djwong, linux-mm,
linux-kernel, david, akpm
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.
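A rough way to see the constraint in the paragraph above (assuming, as the kernel does, that _deferred_list is stored in the folio's second tail page, i.e. page index 2 within the folio):

```python
def has_deferred_list_space(order: int) -> bool:
    # _deferred_list lives in the second tail page (page index 2 of
    # the folio), which only exists when the folio spans more than
    # two pages -- i.e. order >= 2.
    return (1 << order) > 2

print([o for o in range(4) if has_deferred_list_space(o)])  # [2, 3]
```

This is why order-1 folios need the special-casing the patch adds: they are large folios, but have no room for the deferred-split bookkeeping.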
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/huge_mm.h | 7 +++++--
mm/filemap.c | 2 --
mm/huge_memory.c | 23 ++++++++++++++++++-----
mm/internal.h | 4 +---
mm/readahead.c | 3 ---
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
bool can_split_folio(struct folio *folio, int *pextra_pins);
int split_huge_page_to_list(struct page *page, struct list_head *list);
static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
return 0;
}
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+ return folio;
+}
#define transparent_hugepage_flags 0UL
diff --git a/mm/filemap.c b/mm/filemap.c
index 4a30de98a8c7..a1cb3ea55fb6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
- if (order == 1)
- order = 0;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
}
#endif
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
{
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
- INIT_LIST_HEAD(&folio->_deferred_list);
+ if (!folio || !folio_test_large(folio))
+ return folio;
+ if (folio_order(folio) > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
+
+ return folio;
}
static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
- if (!list_empty(&folio->_deferred_list)) {
+ if (folio_order(folio) > 1 &&
+ !list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
list_del(&folio->_deferred_list);
}
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
struct deferred_split *ds_queue;
unsigned long flags;
+ if (folio_order(folio) <= 1)
+ return;
+
/*
* At this point, there is no one trying to add the folio to
* deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
#endif
unsigned long flags;
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+ /*
+ * Order 1 folios have no space for a deferred list, but we also
+ * won't waste much memory by not adding them to the deferred list.
+ */
+ if (folio_order(folio) <= 1)
+ return;
/*
* The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_order(folio) > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ return folio_prep_large_rmappable(folio);
}
static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
/* Don't allocate pages past EOF */
while (index + (1UL << order) - 1 > limit)
order--;
- /* THP machinery does not support order-1 */
- if (order == 1)
- order = 0;
err = ra_alloc_folio(ractl, index, mark, order, gfp);
if (err)
break;
--
2.43.0
* Re: MGLRU premature memcg OOM on slow writes
@ 2024-03-13 3:33 4% ` Yafang Shao
2024-03-14 22:23 0% ` Yu Zhao
0 siblings, 1 reply; 200+ results
From: Yafang Shao @ 2024-03-13 3:33 UTC (permalink / raw)
To: Yu Zhao
Cc: Axel Rasmussen, Chris Down, cgroups, hannes, kernel-team,
linux-kernel, linux-mm
On Wed, Mar 13, 2024 at 4:11 AM Yu Zhao <yuzhao@google.com> wrote:
>
> On Tue, Mar 12, 2024 at 02:07:04PM -0600, Yu Zhao wrote:
> > On Tue, Mar 12, 2024 at 09:44:19AM -0700, Axel Rasmussen wrote:
> > > On Mon, Mar 11, 2024 at 2:11 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > > >
> > > > On Sat, Mar 9, 2024 at 3:19 AM Axel Rasmussen <axelrasmussen@google.com> wrote:
> > > > >
> > > > > On Thu, Feb 29, 2024 at 4:30 PM Chris Down <chris@chrisdown.name> wrote:
> > > > > >
> > > > > > Axel Rasmussen writes:
> > > > > > >A couple of dumb questions. In your test, do you have any of the following
> > > > > > >configured / enabled?
> > > > > > >
> > > > > > >/proc/sys/vm/laptop_mode
> > > > > > >memory.low
> > > > > > >memory.min
> > > > > >
> > > > > > None of these are enabled. The issue is trivially reproducible by writing to
> > > > > > any slow device with memory.max enabled, but from the code it looks like MGLRU
> > > > > > is also susceptible to this on global reclaim (although it's less likely due to
> > > > > > page diversity).
> > > > > >
> > > > > > >Besides that, it looks like the place non-MGLRU reclaim wakes up the
> > > > > > >flushers is in shrink_inactive_list() (which calls wakeup_flusher_threads()).
> > > > > > >Since MGLRU calls shrink_folio_list() directly (from evict_folios()), I agree it
> > > > > > >looks like it simply will not do this.
> > > > > > >
> > > > > > >Yosry pointed out [1], where MGLRU used to call this but stopped doing that. It
> > > > > > >makes sense to me at least that doing writeback every time we age is too
> > > > > > >aggressive, but doing it in evict_folios() makes some sense to me, basically to
> > > > > > >copy the behavior the non-MGLRU path (shrink_inactive_list()) has.
> > > > > >
> > > > > > Thanks! We may also need reclaim_throttle(), depending on how you implement it.
> > > > > > Current non-MGLRU behaviour on slow storage is also highly suspect in terms of
> > > > > > (lack of) throttling after moving away from VMSCAN_THROTTLE_WRITEBACK, but one
> > > > > > thing at a time :-)
> > > > >
> > > > >
> > > > > Hmm, so I have a patch which I think will help with this situation,
> > > > > but I'm having some trouble reproducing the problem on 6.8-rc7 (so
> > > > > then I can verify the patch fixes it).
> > > >
> > > > We encountered the same premature OOM issue caused by numerous dirty pages.
> > > > The issue disappears after we revert the commit 14aa8b2d5c2e
> > > > "mm/mglru: don't sync disk for each aging cycle"
> > > >
> > > > To aid in replicating the issue, we've developed a straightforward
> > > > script, which consistently reproduces it, even on the latest kernel.
> > > > You can find the script provided below:
> > > >
> > > > ```
> > > > #!/bin/bash
> > > >
> > > > MEMCG="/sys/fs/cgroup/memory/mglru"
> > > > ENABLE=$1
> > > >
> > > > # Avoid waking up the flusher
> > > > sysctl -w vm.dirty_background_bytes=$((1024 * 1024 * 1024 *4))
> > > > sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024 *4))
> > > >
> > > > if [ ! -d ${MEMCG} ]; then
> > > > mkdir -p ${MEMCG}
> > > > fi
> > > >
> > > > echo $$ > ${MEMCG}/cgroup.procs
> > > > echo 1g > ${MEMCG}/memory.limit_in_bytes
> > > >
> > > > if [ $ENABLE -eq 0 ]; then
> > > > echo 0 > /sys/kernel/mm/lru_gen/enabled
> > > > else
> > > > echo 0x7 > /sys/kernel/mm/lru_gen/enabled
> > > > fi
> > > >
> > > > dd if=/dev/zero of=/data0/mglru.test bs=1M count=1023
> > > > rm -rf /data0/mglru.test
> > > > ```
> > > >
> > > > This issue disappears as well after we disable the mglru.
> > > >
> > > > We hope this script proves helpful in identifying and addressing the
> > > > root cause. We eagerly await your insights and proposed fixes.
> > >
> > > Thanks Yafang, I was able to reproduce the issue using this script.
> > >
> > > Perhaps interestingly, I was not able to reproduce it with cgroupv2
> > > memcgs. I know writeback semantics are quite a bit different there, so
> > > perhaps that explains why.
> > >
> > > Unfortunately, it also reproduces even with the commit I had in mind
> > > (basically stealing the "if (all isolated pages are unqueued dirty) {
> > > wakeup_flusher_threads(); reclaim_throttle(); }" from
> > > shrink_inactive_list, and adding it to MGLRU's evict_folios()). So
> > > I'll need to spend some more time on this; I'm planning to send
> > > something out for testing next week.
> >
> > Hi Chris,
> >
> > My apologies for not getting back to you sooner.
> >
> > And thanks everyone for all the input!
> >
> > My take is that Chris' premature OOM kills were NOT really due to
> > the flusher not waking up or missing throttling.
> >
> > Yes, these two are among the differences between the active/inactive
> > LRU and MGLRU, but their roles, IMO, are not as important as the LRU
> > positions of dirty pages. The active/inactive LRU moves dirty pages
> > all the way to the end of the line (reclaim happens at the front)
> > whereas MGLRU moves them into the middle, during direct reclaim. The
> > rationale for MGLRU was that this way those dirty pages would still
> > be counted as "inactive" (or cold).
> >
> > This theory can be quickly verified by comparing how much
> > nr_vmscan_immediate_reclaim grows, i.e.,
> >
> > Before the copy
> > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > And then after the copy
> > grep nr_vmscan_immediate_reclaim /proc/vmstat
> >
> > The growth should be trivial for MGLRU and nontrivial for the
> > active/inactive LRU.
> >
> > If this is indeed the case, I'd appreciate very much if anyone could
> > try the following (I'll try it myself too later next week).
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 4255619a1a31..020f5d98b9a1 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4273,10 +4273,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > }
> >
> > /* waiting for writeback */
> > - if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> > - (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > - gen = folio_inc_gen(lruvec, folio, true);
> > - list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> > + if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > + DEFINE_MAX_SEQ(lruvec);
> > + int old_gen, new_gen = lru_gen_from_seq(max_seq);
> > +
> > + old_gen = folio_update_gen(folio, new_gen);
> > + lru_gen_update_size(lruvec, folio, old_gen, new_gen);
> > + list_move(&folio->lru, &lrugen->folios[new_gen][type][zone]);
>
> Sorry missing one line here:
>
> + folio_set_reclaim(folio);
>
> > return true;
> > }
Hi Yu,
I have validated it using the script provided for Axel, but
unfortunately, it still triggers an OOM error with your patch applied.
Here are the results with nr_vmscan_immediate_reclaim:
- non-MGLRU
$ grep nr_vmscan_immediate_reclaim /proc/vmstat
nr_vmscan_immediate_reclaim 47411776
$ ./test.sh 0
1023+0 records in
1023+0 records out
1072693248 bytes (1.1 GB, 1023 MiB) copied, 0.538058 s, 2.0 GB/s
$ grep nr_vmscan_immediate_reclaim /proc/vmstat
nr_vmscan_immediate_reclaim 47412544
- MGLRU
$ grep nr_vmscan_immediate_reclaim /proc/vmstat
nr_vmscan_immediate_reclaim 47412544
$ ./test.sh 1
Killed
$ grep nr_vmscan_immediate_reclaim /proc/vmstat
nr_vmscan_immediate_reclaim 115455600
The detailed OOM info as follows,
[Wed Mar 13 11:16:48 2024] dd invoked oom-killer:
gfp_mask=0x101c4a(GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE|__GFP_WRITE),
order=3, oom_score_adj=0
[Wed Mar 13 11:16:48 2024] CPU: 12 PID: 6911 Comm: dd Not tainted 6.8.0-rc6+ #24
[Wed Mar 13 11:16:48 2024] Hardware name: Tencent Cloud CVM, BIOS
seabios-1.9.1-qemu-project.org 04/01/2014
[Wed Mar 13 11:16:48 2024] Call Trace:
[Wed Mar 13 11:16:48 2024] <TASK>
[Wed Mar 13 11:16:48 2024] dump_stack_lvl+0x6e/0x90
[Wed Mar 13 11:16:48 2024] dump_stack+0x10/0x20
[Wed Mar 13 11:16:48 2024] dump_header+0x47/0x2d0
[Wed Mar 13 11:16:48 2024] oom_kill_process+0x101/0x2e0
[Wed Mar 13 11:16:48 2024] out_of_memory+0xfc/0x430
[Wed Mar 13 11:16:48 2024] mem_cgroup_out_of_memory+0x13d/0x160
[Wed Mar 13 11:16:48 2024] try_charge_memcg+0x7be/0x850
[Wed Mar 13 11:16:48 2024] ? get_mem_cgroup_from_mm+0x5e/0x420
[Wed Mar 13 11:16:48 2024] ? rcu_read_unlock+0x25/0x70
[Wed Mar 13 11:16:48 2024] __mem_cgroup_charge+0x49/0x90
[Wed Mar 13 11:16:48 2024] __filemap_add_folio+0x277/0x450
[Wed Mar 13 11:16:48 2024] ? __pfx_workingset_update_node+0x10/0x10
[Wed Mar 13 11:16:48 2024] filemap_add_folio+0x3c/0xa0
[Wed Mar 13 11:16:48 2024] __filemap_get_folio+0x13d/0x2f0
[Wed Mar 13 11:16:48 2024] iomap_get_folio+0x4c/0x60
[Wed Mar 13 11:16:48 2024] iomap_write_begin+0x1bb/0x2e0
[Wed Mar 13 11:16:48 2024] iomap_write_iter+0xff/0x290
[Wed Mar 13 11:16:48 2024] iomap_file_buffered_write+0x91/0xf0
[Wed Mar 13 11:16:48 2024] xfs_file_buffered_write+0x9f/0x2d0 [xfs]
[Wed Mar 13 11:16:48 2024] ? vfs_write+0x261/0x530
[Wed Mar 13 11:16:48 2024] ? debug_smp_processor_id+0x17/0x20
[Wed Mar 13 11:16:48 2024] xfs_file_write_iter+0xe9/0x120 [xfs]
[Wed Mar 13 11:16:48 2024] vfs_write+0x37d/0x530
[Wed Mar 13 11:16:48 2024] ksys_write+0x6d/0xf0
[Wed Mar 13 11:16:48 2024] __x64_sys_write+0x19/0x20
[Wed Mar 13 11:16:48 2024] do_syscall_64+0x79/0x1a0
[Wed Mar 13 11:16:48 2024] entry_SYSCALL_64_after_hwframe+0x6e/0x76
[Wed Mar 13 11:16:48 2024] RIP: 0033:0x7f63ea33e927
[Wed Mar 13 11:16:48 2024] Code: 0b 00 f7 d8 64 89 02 48 c7 c0 ff ff
ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10
b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54
24 18 48 89 74 24
[Wed Mar 13 11:16:48 2024] RSP: 002b:00007ffc0e874768 EFLAGS: 00000246
ORIG_RAX: 0000000000000001
[Wed Mar 13 11:16:48 2024] RAX: ffffffffffffffda RBX: 0000000000100000
RCX: 00007f63ea33e927
[Wed Mar 13 11:16:48 2024] RDX: 0000000000100000 RSI: 00007f63dcafe000
RDI: 0000000000000001
[Wed Mar 13 11:16:48 2024] RBP: 00007f63dcafe000 R08: 00007f63dcafe000
R09: 0000000000000000
[Wed Mar 13 11:16:48 2024] R10: 0000000000000022 R11: 0000000000000246
R12: 0000000000000000
[Wed Mar 13 11:16:48 2024] R13: 0000000000000000 R14: 0000000000000000
R15: 00007f63dcafe000
[Wed Mar 13 11:16:48 2024] </TASK>
[Wed Mar 13 11:16:48 2024] memory: usage 1048556kB, limit 1048576kB, failcnt 153
[Wed Mar 13 11:16:48 2024] memory+swap: usage 1048556kB, limit
9007199254740988kB, failcnt 0
[Wed Mar 13 11:16:48 2024] kmem: usage 200kB, limit
9007199254740988kB, failcnt 0
[Wed Mar 13 11:16:48 2024] Memory cgroup stats for /mglru:
[Wed Mar 13 11:16:48 2024] cache 1072365568
[Wed Mar 13 11:16:48 2024] rss 1150976
[Wed Mar 13 11:16:48 2024] rss_huge 0
[Wed Mar 13 11:16:48 2024] shmem 0
[Wed Mar 13 11:16:48 2024] mapped_file 0
[Wed Mar 13 11:16:48 2024] dirty 1072365568
[Wed Mar 13 11:16:48 2024] writeback 0
[Wed Mar 13 11:16:48 2024] workingset_refault_anon 0
[Wed Mar 13 11:16:48 2024] workingset_refault_file 0
[Wed Mar 13 11:16:48 2024] swap 0
[Wed Mar 13 11:16:48 2024] swapcached 0
[Wed Mar 13 11:16:48 2024] pgpgin 2783
[Wed Mar 13 11:16:48 2024] pgpgout 1444
[Wed Mar 13 11:16:48 2024] pgfault 885
[Wed Mar 13 11:16:48 2024] pgmajfault 0
[Wed Mar 13 11:16:48 2024] inactive_anon 1146880
[Wed Mar 13 11:16:48 2024] active_anon 4096
[Wed Mar 13 11:16:48 2024] inactive_file 802357248
[Wed Mar 13 11:16:48 2024] active_file 270008320
[Wed Mar 13 11:16:48 2024] unevictable 0
[Wed Mar 13 11:16:48 2024] hierarchical_memory_limit 1073741824
[Wed Mar 13 11:16:48 2024] hierarchical_memsw_limit 9223372036854771712
[Wed Mar 13 11:16:48 2024] total_cache 1072365568
[Wed Mar 13 11:16:48 2024] total_rss 1150976
[Wed Mar 13 11:16:48 2024] total_rss_huge 0
[Wed Mar 13 11:16:48 2024] total_shmem 0
[Wed Mar 13 11:16:48 2024] total_mapped_file 0
[Wed Mar 13 11:16:48 2024] total_dirty 1072365568
[Wed Mar 13 11:16:48 2024] total_writeback 0
[Wed Mar 13 11:16:48 2024] total_workingset_refault_anon 0
[Wed Mar 13 11:16:48 2024] total_workingset_refault_file 0
[Wed Mar 13 11:16:48 2024] total_swap 0
[Wed Mar 13 11:16:48 2024] total_swapcached 0
[Wed Mar 13 11:16:48 2024] total_pgpgin 2783
[Wed Mar 13 11:16:48 2024] total_pgpgout 1444
[Wed Mar 13 11:16:48 2024] total_pgfault 885
[Wed Mar 13 11:16:48 2024] total_pgmajfault 0
[Wed Mar 13 11:16:48 2024] total_inactive_anon 1146880
[Wed Mar 13 11:16:48 2024] total_active_anon 4096
[Wed Mar 13 11:16:48 2024] total_inactive_file 802357248
[Wed Mar 13 11:16:48 2024] total_active_file 270008320
[Wed Mar 13 11:16:48 2024] total_unevictable 0
[Wed Mar 13 11:16:48 2024] Tasks state (memory values in pages):
[Wed Mar 13 11:16:48 2024] [ pid ] uid tgid total_vm rss
rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[Wed Mar 13 11:16:48 2024] [ 6911] 0 6911 55506 640
256 384 0 73728 0 0 dd
[Wed Mar 13 11:16:48 2024]
oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0-1,oom_memcg=/mglru,task_memcg=/mglru,task=dd,pid=6911,uid=0
The key information extracted from the OOM info is as follows:
[Wed Mar 13 11:16:48 2024] cache 1072365568
[Wed Mar 13 11:16:48 2024] dirty 1072365568
Since cache equals dirty, essentially all of the memcg's file pages are dirty.
For now, the most effective fix for this issue appears to be reverting commit 14aa8b2d5c2e. The original intention of that commit was to eliminate potential SSD wearout, but there is no concrete data on how much it actually affects SSD longevity. If the wearout concern is purely theoretical, it seems reasonable to consider reverting it.
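As an aside, the "all file pages are dirty" conclusion can be checked mechanically against any memory.stat snapshot. A minimal sketch of such a check — the check_dirty_ratio helper and the 90% threshold are my own, not part of any kernel tooling:

```shell
# Flag a memcg whose page cache is (almost) entirely dirty, the pattern
# seen in the OOM report above (total_cache == total_dirty == 1072365568).
check_dirty_ratio() {
  awk '
    $1 == "total_cache" { cache = $2 }
    $1 == "total_dirty" { dirty = $2 }
    END {
      if (cache > 0 && dirty * 100 / cache >= 90)
        print "WARNING: " dirty * 100 / cache "% of the page cache is dirty"
      else
        print "dirty ratio OK"
    }'
}

# Feed it the two lines extracted from the OOM report:
printf 'total_cache 1072365568\ntotal_dirty 1072365568\n' | check_dirty_ratio
```

On a live system the input would come from the cgroup's memory.stat file (for cgroup v1, under /sys/fs/cgroup/memory/) rather than the printf.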
* nfsd hangs and nfsd_break_deleg_cb+0x170/0x190 warning
@ 2024-03-11 18:43 1% Rik Theys
From: Rik Theys @ 2024-03-11 18:43 UTC (permalink / raw)
To: Linux Nfs
[-- Attachment #1.1: Type: text/plain, Size: 10254 bytes --]
Hi,
For the past few weeks, our Rocky Linux 9 NFS server has periodically logged hung nfsd tasks. The initial effect was that some clients could no longer access the NFS server. This got progressively worse (probably as more nfsd threads became blocked) and we had to restart the server. Even restarting failed, as the NFS server service could no longer be stopped.
The first kernel we noticed this behavior on was kernel-5.14.0-362.18.1.el9_3.x86_64. We have since installed kernel-5.14.0-419.el9.x86_64 from CentOS Stream 9, but the same issue happened again on this newer kernel version:
[Mon Mar 11 14:10:08 2024] Not tainted 5.14.0-419.el9.x86_64 #1
[Mon Mar 11 14:10:08 2024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Mon Mar 11 14:10:08 2024] task:nfsd state:D stack:0 pid:8865 ppid:2 flags:0x00004000
[Mon Mar 11 14:10:08 2024] Call Trace:
[Mon Mar 11 14:10:08 2024] <TASK>
[Mon Mar 11 14:10:08 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:10:08 2024] schedule+0x2d/0x70
[Mon Mar 11 14:10:08 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:10:08 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:10:08 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:10:08 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:10:08 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:10:08 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:10:08 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:10:08 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:10:08 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:10:08 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:10:08 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:10:08 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:10:08 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:10:08 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:10:08 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:10:08 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:10:08 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:10:08 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:10:08 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:10:08 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:10:08 2024] kthread+0xdd/0x100
[Mon Mar 11 14:10:08 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:10:08 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:10:08 2024] </TASK>
[Mon Mar 11 14:10:08 2024] INFO: task nfsd:8866 blocked for more than 122 seconds.
[Mon Mar 11 14:10:08 2024] Not tainted 5.14.0-419.el9.x86_64 #1
[Mon Mar 11 14:10:08 2024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Mon Mar 11 14:10:08 2024] task:nfsd state:D stack:0 pid:8866 ppid:2 flags:0x00004000
[Mon Mar 11 14:10:08 2024] Call Trace:
[Mon Mar 11 14:10:08 2024] <TASK>
[Mon Mar 11 14:10:08 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:10:08 2024] schedule+0x2d/0x70
[Mon Mar 11 14:10:08 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:10:08 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:10:08 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:10:08 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:10:08 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:10:08 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:10:08 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:10:08 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:10:08 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:10:08 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:10:08 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:10:08 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:10:08 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:10:08 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:10:08 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:10:08 2024] kthread+0xdd/0x100
[Mon Mar 11 14:10:08 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:10:08 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:10:08 2024] </TASK>
The above is repeated a few times, and then this warning is also logged:
[Mon Mar 11 14:12:04 2024] ------------[ cut here ]------------
[Mon Mar 11 14:12:04 2024] WARNING: CPU: 39 PID: 8844 at fs/nfsd/nfs4state.c:4919 nfsd_break_deleg_cb+0x170/0x190 [nfsd]
[Mon Mar 11 14:12:05 2024] Modules linked in: nfsv4 dns_resolver nfs fscache netfs rpcsec_gss_krb5 rpcrdma rdma_cm iw_cm ib_cm ib_core binfmt_misc bonding tls rfkill nft_counter nft_ct nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nf_tables nfnetlink vfat fat dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c dm_service_time dm_multipath intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common isst_if_common skx_edac nfit libnvdimm ipmi_ssif x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass dcdbas rapl intel_cstate mgag200 i2c_algo_bit drm_shmem_helper dell_smbios drm_kms_helper dell_wmi_descriptor wmi_bmof intel_uncore syscopyarea pcspkr sysfillrect mei_me sysimgblt acpi_ipmi mei fb_sys_fops i2c_i801 ipmi_si intel_pch_thermal lpc_ich ipmi_devintf i2c_smbus ipmi_msghandler joydev acpi_power_meter nfsd auth_rpcgss nfs_acl drm lockd grace fuse sunrpc ext4 mbcache jbd2 sd_mod sg lpfc
[Mon Mar 11 14:12:05 2024] nvmet_fc nvmet nvme_fc nvme_fabrics crct10dif_pclmul ahci libahci crc32_pclmul nvme_core crc32c_intel ixgbe megaraid_sas libata nvme_common ghash_clmulni_intel t10_pi wdat_wdt scsi_transport_fc mdio wmi dca dm_mirror dm_region_hash dm_log dm_mod
[Mon Mar 11 14:12:05 2024] CPU: 39 PID: 8844 Comm: nfsd Not tainted 5.14.0-419.el9.x86_64 #1
[Mon Mar 11 14:12:05 2024] Hardware name: Dell Inc. PowerEdge R740/00WGD1, BIOS 2.20.1 09/13/2023
[Mon Mar 11 14:12:05 2024] RIP: 0010:nfsd_break_deleg_cb+0x170/0x190 [nfsd]
[Mon Mar 11 14:12:05 2024] Code: a6 95 c5 f3 e9 ff fe ff ff 48 89 df be 01 00 00 00 e8 34 b5 13 f4 48 8d bb 98 00 00 00 e8 c8 f9 00 00 84 c0 0f 85 2e ff ff ff <0f> 0b e9 27 ff ff ff be 02 00 00 00 48 89 df e8 0c b5 13 f4 e9 01
[Mon Mar 11 14:12:05 2024] RSP: 0018:ffff9929e0bb7b80 EFLAGS: 00010246
[Mon Mar 11 14:12:05 2024] RAX: 0000000000000000 RBX: ffff8ada51930900 RCX: 0000000000000024
[Mon Mar 11 14:12:05 2024] RDX: ffff8ada519309c8 RSI: ffff8ad582933c00 RDI: 0000000000002000
[Mon Mar 11 14:12:05 2024] RBP: ffff8ad46bf21574 R08: ffff9929e0bb7b48 R09: 0000000000000000
[Mon Mar 11 14:12:05 2024] R10: ffff8aec859a2948 R11: 0000000000000000 R12: ffff8ad6f497c360
[Mon Mar 11 14:12:05 2024] R13: ffff8ad46bf21560 R14: ffff8ae5942e0b10 R15: ffff8ad6f497c360
[Mon Mar 11 14:12:05 2024] FS: 0000000000000000(0000) GS:ffff8b031fcc0000(0000) knlGS:0000000000000000
[Mon Mar 11 14:12:05 2024] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Mar 11 14:12:05 2024] CR2: 00007fafe2060744 CR3: 00000018e58de006 CR4: 00000000007706e0
[Mon Mar 11 14:12:05 2024] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Mon Mar 11 14:12:05 2024] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Mon Mar 11 14:12:05 2024] PKRU: 55555554
[Mon Mar 11 14:12:05 2024] Call Trace:
[Mon Mar 11 14:12:05 2024] <TASK>
[Mon Mar 11 14:12:05 2024] ? show_trace_log_lvl+0x1c4/0x2df
[Mon Mar 11 14:12:05 2024] ? show_trace_log_lvl+0x1c4/0x2df
[Mon Mar 11 14:12:05 2024] ? __break_lease+0x16f/0x5f0
[Mon Mar 11 14:12:05 2024] ? nfsd_break_deleg_cb+0x170/0x190 [nfsd]
[Mon Mar 11 14:12:05 2024] ? __warn+0x81/0x110
[Mon Mar 11 14:12:05 2024] ? nfsd_break_deleg_cb+0x170/0x190 [nfsd]
[Mon Mar 11 14:12:05 2024] ? report_bug+0x10a/0x140
[Mon Mar 11 14:12:05 2024] ? handle_bug+0x3c/0x70
[Mon Mar 11 14:12:05 2024] ? exc_invalid_op+0x14/0x70
[Mon Mar 11 14:12:05 2024] ? asm_exc_invalid_op+0x16/0x20
[Mon Mar 11 14:12:05 2024] ? nfsd_break_deleg_cb+0x170/0x190 [nfsd]
[Mon Mar 11 14:12:05 2024] __break_lease+0x16f/0x5f0
[Mon Mar 11 14:12:05 2024] ? nfsd_file_lookup_locked+0x117/0x160 [nfsd]
[Mon Mar 11 14:12:05 2024] ? list_lru_del+0x101/0x150
[Mon Mar 11 14:12:05 2024] nfsd_file_do_acquire+0x790/0x830 [nfsd]
[Mon Mar 11 14:12:05 2024] nfs4_get_vfs_file+0x315/0x3a0 [nfsd]
[Mon Mar 11 14:12:05 2024] nfsd4_process_open2+0x430/0xa30 [nfsd]
[Mon Mar 11 14:12:05 2024] ? fh_verify+0x297/0x2f0 [nfsd]
[Mon Mar 11 14:12:05 2024] nfsd4_open+0x3ce/0x4b0 [nfsd]
[Mon Mar 11 14:12:05 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:12:05 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:12:05 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:12:05 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:12:05 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:12:05 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:12:05 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:12:05 2024] kthread+0xdd/0x100
[Mon Mar 11 14:12:05 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:12:05 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:12:05 2024] </TASK>
[Mon Mar 11 14:12:05 2024] ---[ end trace 7a039e17443dc651 ]---
Could this be the same issue as the one described here?
https://lore.kernel.org/linux-nfs/af0ec881-5ebf-4feb-98ae-3ed2a77f86f1@oracle.com/
As described in that thread, I've tried to obtain the requested information.
The attached workqueue_info.txt contains the dmesg output after running 'echo t > /proc/sysrq-trigger'; it is possibly truncated :-(. I'm also attaching rpc_tasks.txt, collected on the server, and nfs_threads.txt, collected on one of the clients that fails to mount the server when the issue occurs.
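For reference, the collection steps behind these attachments can be sketched roughly as below. The DRY_RUN wrapper is my own addition for illustration; the sunrpc debugfs path assumes debugfs is mounted, and everything needs root on a real system:

```shell
# Collect the nfsd debugging info requested in the referenced thread.
# With DRY_RUN=1 the commands are only printed, not executed.
collect_nfsd_debug() {
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else sh -c "$*"; fi; }
  run 'echo t > /proc/sysrq-trigger'    # dump all task stacks to the kernel log
  run 'dmesg > workqueue_info.txt'      # capture the stack dump
  run 'cat /sys/kernel/debug/sunrpc/rpc_clnt/*/tasks > rpc_tasks.txt'  # pending RPC tasks
}

DRY_RUN=1 collect_nfsd_debug
```

Note that the sysrq dump can overflow the kernel ring buffer on a box with many threads, which would explain the truncated attachment.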
Is it possible this is the issue that was fixed by the patches described
here?
https://lore.kernel.org/linux-nfs/2024022054-cause-suffering-eae8@gregkh/
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
[-- Attachment #1.2: Type: text/html, Size: 14668 bytes --]
[-- Attachment #2: workqueue_info.txt --]
[-- Type: text/plain, Size: 1280351 bytes --]
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:4327 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-88-8 state:S stack:0 pid:4330 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:4331 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:sssd_be state:S stack:0 pid:4348 ppid:4316 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f410254e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fffca4884f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055a3030565f0 RCX: 00007f410254e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007fffca48852c RDI: 0000000000000003
[Mon Mar 11 14:29:12 2024] RBP: 000055a303056460 R08: 0000000000078fa6 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000001190 R11: 0000000000000246 R12: 000055a303056740
[Mon Mar 11 14:29:12 2024] R13: 0000000000001190 R14: 000055a302bbf2f8 R15: 00007f410294b000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:NetworkManager state:S stack:0 pid:4349 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? __ep_eventpoll_poll.isra.0+0x143/0x170
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] ? poll_freewait+0x45/0xa0
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? __skb_try_recv_datagram+0xb4/0x190
[Mon Mar 11 14:29:12 2024] ? __skb_recv_datagram+0x85/0xc0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f8b675426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff32a22790 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f8b677c7071 RCX: 00007f8b675426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000008 RDI: 00005561c946f2d0
[Mon Mar 11 14:29:12 2024] RBP: 00005561c946f2d0 R08: 0000000000000000 R09: 00007fff32a22620
[Mon Mar 11 14:29:12 2024] R10: 00007fff32b28080 R11: 0000000000000293 R12: 0000000000000008
[Mon Mar 11 14:29:12 2024] R13: 0000000000000008 R14: 00007fff32a22800 R15: 00005561c939a670
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gmain state:S stack:0 pid:4355 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:12 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? path_lookupat+0xa2/0x1c0
[Mon Mar 11 14:29:12 2024] ? filename_lookup+0xcf/0x1d0
[Mon Mar 11 14:29:12 2024] ? audit_filter_rules.constprop.0+0x2c5/0xd30
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f8b675426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f8b661a7f80 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f8b677c7071 RCX: 00007f8b675426ff
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000f9d RSI: 0000000000000002 RDI: 00005561c939a460
[Mon Mar 11 14:29:12 2024] RBP: 00005561c939a460 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007fff32b28080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007f8b661a7ff0 R15: 00005561c939a7a0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gdbus state:S stack:0 pid:4359 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? unix_poll+0xf4/0x100
[Mon Mar 11 14:29:12 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? unix_stream_recvmsg+0x92/0xa0
[Mon Mar 11 14:29:12 2024] ? __pfx_unix_stream_read_actor+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f8b675426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f8b659a6f80 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f8b677c7071 RCX: 00007f8b675426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000003 RDI: 00007f8b58008c10
[Mon Mar 11 14:29:12 2024] RBP: 00007f8b58008c10 R08: 0000000000000000 R09: 00007f8b659a6e10
[Mon Mar 11 14:29:12 2024] R10: 00007fff32b28080 R11: 0000000000000293 R12: 0000000000000003
[Mon Mar 11 14:29:12 2024] R13: 0000000000000003 R14: 00007f8b659a6ff0 R15: 00005561c93b04b0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-83-8 state:S stack:0 pid:4350 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:4351 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-77-8 state:S stack:0 pid:4353 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:4354 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:sssd_nss state:S stack:0 pid:4356 ppid:4316 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f05d854e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc788f8588 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055bd5961e430 RCX: 00007f05d854e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffc788f85bc RDI: 0000000000000003
[Mon Mar 11 14:29:12 2024] RBP: 000055bd5961e2a0 R08: 00000000000048ac R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000fb3 R11: 0000000000000246 R12: 000055bd5961e580
[Mon Mar 11 14:29:12 2024] R13: 0000000000000fb3 R14: 00007ffc788f87e0 R15: 000055bd59632e10
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:sssd_pam state:S stack:0 pid:4357 ppid:4316 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc84134e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc7c7f4b48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055857c8fa430 RCX: 00007fc84134e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffc7c7f4b7c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 000055857c8fa2a0 R08: 00000000000f4239 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000002710 R11: 0000000000000246 R12: 000055857c8fa580
[Mon Mar 11 14:29:12 2024] R13: 0000000000002710 R14: 000055857c90c100 R15: 000055857b0180e3
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:sssd_autofs state:S stack:0 pid:4358 ppid:4316 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f7d58b4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcb26c3c28 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000557364468430 RCX: 00007f7d58b4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffcb26c3c5c RDI: 0000000000000003
[Mon Mar 11 14:29:12 2024] RBP: 00005573644682a0 R08: 00000000000f423b R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000002710 R11: 0000000000000246 R12: 0000557364468580
[Mon Mar 11 14:29:12 2024] R13: 0000000000002710 R14: 00007ffcb26c3de0 R15: 000055736447ca60
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-25-8 state:S stack:0 pid:4360 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:4361 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:systemd-logind state:S stack:0 pid:4364 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0f6c34e84e
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff450edf80 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000556409c24a00 RCX: 00007f0f6c34e84e
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000022 RSI: 0000556409c33d10 RDI: 0000000000000004
[Mon Mar 11 14:29:12 2024] RBP: 0000556409c24b90 R08: 0000000000000000 R09: 1e84d9f954ce397b
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000293 R12: 00000000000000b4
[Mon Mar 11 14:29:12 2024] R13: 0000556409c24a00 R14: 0000000000000022 R15: 0000000000000012
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:tls-strp state:I stack:0 pid:4391 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:bond0 state:I stack:0 pid:4392 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:cupsd state:S stack:0 pid:4399 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f739954e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffd5b08db38 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055ffd032cae6 RCX: 00007f739954e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000080000 RSI: 00007f7397f46010 RDI: 0000000000000004
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 000055ffd032cae6 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000000dbba0 R11: 0000000000000246 R12: 0000000000000384
[Mon Mar 11 14:29:12 2024] R13: 0000000000000000 R14: 0000000065ef04e6 R15: 0000000000000383
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:uwsgi state:S stack:0 pid:4405 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb64274e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2b12bbc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff2b12bc2c RCX: 00007fb64274e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007fff2b12bbd4 RDI: 0000000000000012
[Mon Mar 11 14:29:12 2024] RBP: 0000000000ad0040 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000000003e8 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00000000004e32b8 R14: 0000000000000000 R15: 000000000048ed5a
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:sshd state:S stack:0 pid:4412 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? tcp_poll+0x1ce/0x370
[Mon Mar 11 14:29:12 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __mod_memcg_state+0x63/0xb0
[Mon Mar 11 14:29:12 2024] ? memcg_account_kmem+0x1e/0x60
[Mon Mar 11 14:29:12 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:12 2024] ? do_wp_page+0x381/0x540
[Mon Mar 11 14:29:12 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:12 2024] ? xa_load+0x70/0xb0
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? fsnotify_grab_connector+0x49/0x80
[Mon Mar 11 14:29:12 2024] ? __pfx___d_free+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __call_rcu_common.constprop.0+0x117/0x2b0
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? d_walk+0x1be/0x2a0
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? __pfx_select_collect+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? free_unref_page_commit+0x7e/0x170
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3a07144f64
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffdbce1f6b0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000010 RCX: 00007f3a07144f64
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000564a951d0e30 RDI: 0000000000000008
[Mon Mar 11 14:29:12 2024] RBP: 0000564a951d0e30 R08: 0000000000000000 R09: 00007ffdbce1f6f0
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000564a936a6181
[Mon Mar 11 14:29:12 2024] R13: 00000000000002d8 R14: 0000000000000000 R15: 0000564a936da0b0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:tuned state:S stack:0 pid:4413 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f211669c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff397bb840 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055ee4b14a7d0 RCX: 00007f211669c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 000055ee4b14a7d0
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007fff397bb900 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007fff397bb880 R15: fffffffeffffffff
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:tuned state:S stack:0 pid:4666 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f211669c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f2115076f30 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f21100bfe30 RCX: 00007f211669c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007f21100bfe30
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f2115076ff0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007f2115076f70 R15: fffffffeffffffff
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:tuned state:S stack:0 pid:5104 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:12 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:12 2024] ? xattr_find_entry+0x3e/0x140 [ext4]
[Mon Mar 11 14:29:12 2024] ? mutex_lock+0xe/0x30
[Mon Mar 11 14:29:12 2024] ? kernfs_vfs_xattr_get+0x42/0x70
[Mon Mar 11 14:29:12 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:12 2024] ? get_vfs_caps_from_disk+0x70/0x210
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? terminate_walk+0x61/0xf0
[Mon Mar 11 14:29:12 2024] ? path_lookupat+0xa2/0x1c0
[Mon Mar 11 14:29:12 2024] ? filename_lookup+0xcf/0x1d0
[Mon Mar 11 14:29:12 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:12 2024] ? pick_next_task_rt+0xbd/0x190
[Mon Mar 11 14:29:12 2024] ? pick_next_task+0x4d4/0x950
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f21167426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f21148360c0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: ffffffffffffffff RCX: 00007f21167426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 00007f21151afa30
[Mon Mar 11 14:29:12 2024] RBP: 00007f21148509e8 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000014 R11: 0000000000000293 R12: 00007f21148375c0
[Mon Mar 11 14:29:12 2024] R13: 00007f2110013d40 R14: 00007f211509bd80 R15: 00007f2114846c00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:tuned state:S stack:0 pid:5263 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? ____sys_sendmsg+0x31c/0x340
[Mon Mar 11 14:29:12 2024] ? import_iovec+0x17/0x20
[Mon Mar 11 14:29:12 2024] ? copy_msghdr_from_user+0x6d/0xa0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f21167426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f210fffe110 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f21158da071 RCX: 00007f21167426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 00007f2104002f90
[Mon Mar 11 14:29:12 2024] RBP: 00007f2104002f90 R08: 0000000000000000 R09: 00007f210fffdfa0
[Mon Mar 11 14:29:12 2024] R10: 00007fff397de080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007f210fffe180 R15: 000055ee4b144ba0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:winbindd state:S stack:0 pid:4415 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f26a134e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffe9e3345d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00005586337521b0 RCX: 00007f26a134e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffe9e33460c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00005586337422e0 R08: 00000000000dbed4 R09: 000000000000007d
[Mon Mar 11 14:29:12 2024] R10: 000000000000076d R11: 0000000000000246 R12: 0000558633752240
[Mon Mar 11 14:29:12 2024] R13: 000000000000076d R14: 0000558632a15344 R15: 0000558633761ef0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:fail2ban-server state:S stack:0 pid:4416 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? node_is_toptier+0x3e/0x60
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:12 2024] ? core_sys_select+0x1f7/0x3b0
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcf503fcf0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffcf503fd90 RCX: 00007f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RDX: 00007ffcf503fe20 RSI: 00007ffcf503fda0 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00007ffcf503fda0 R08: 00007ffcf503fd00 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007ffcf503fea0 R11: 0000000000000246 R12: 0000000000000005
[Mon Mar 11 14:29:12 2024] R13: 00007ffcf503fd00 R14: 0000000000000000 R15: 00007ffcf503fe20
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:f2b/observer state:S stack:0 pid:4437 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3bf429c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3bf2d44cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f3be40014b0 RCX: 00007f3bf429c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007f3be40014b0
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f3bf2d44d90 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007f3bf2d44d10 R15: fffffffeffffffff
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:f2b/f.samba state:S stack:0 pid:4439 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? do_select+0x719/0x7c0
[Mon Mar 11 14:29:12 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:12 2024] ? __ext4_get_inode_loc+0x112/0x4a0 [ext4]
[Mon Mar 11 14:29:12 2024] ? check_xattrs+0xe9/0x360 [ext4]
[Mon Mar 11 14:29:12 2024] ? xattr_find_entry+0x3e/0x140 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_xattr_ibody_get+0x16e/0x1b0 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_xattr_get+0x87/0xd0 [ext4]
[Mon Mar 11 14:29:12 2024] ? __vfs_getxattr+0x50/0x70
[Mon Mar 11 14:29:12 2024] ? get_vfs_caps_from_disk+0x70/0x210
[Mon Mar 11 14:29:12 2024] ? __legitimize_path+0x27/0x60
[Mon Mar 11 14:29:12 2024] ? audit_copy_inode+0x99/0xd0
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? selinux_inode_getattr+0x99/0xc0
[Mon Mar 11 14:29:12 2024] ? _copy_to_user+0x1a/0x30
[Mon Mar 11 14:29:12 2024] ? cp_new_stat+0x150/0x180
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? clockevents_program_event+0x93/0x100
[Mon Mar 11 14:29:12 2024] ? hrtimer_interrupt+0x126/0x210
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? sched_clock_cpu+0x9/0xc0
[Mon Mar 11 14:29:12 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3bf1d02f40 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f3bf1d02fc0 RCX: 00007f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 00007f3bf1d02f50 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00007f3bf1d02f50 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:f2b/a.samba state:S stack:0 pid:4440 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? do_select+0x719/0x7c0
[Mon Mar 11 14:29:12 2024] ? task_numa_placement+0x3b7/0x4b0
[Mon Mar 11 14:29:12 2024] ? task_scan_max+0x12c/0x180
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? update_sg_lb_stats+0x7e/0x450
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3bf1501f40 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f3bf1501fc0 RCX: 00007f3bf4344e5d
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 00007f3bf1501f50 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00007f3bf1501f50 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:f2b/observer state:S stack:0 pid:2470448 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3bf429c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3bf2544290 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f3be8001120 RCX: 00007f3bf429c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007f3be8001120
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f3bf2544350 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007f3bf25442d0 R15: fffffffeffffffff
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4420 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde14e84e
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffd94499d10 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde14e84e
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000040 RSI: 00005584ab5fa210 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00007f5dde246060 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 000000000000e95f R11: 0000000000000293 R12: 00007f5dde246060
[Mon Mar 11 14:29:12 2024] R13: 00000000fffffffe R14: 000000000000000a R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4425 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde09c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f5ddd5fe770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde09c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005584ab5fb4a8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00005584ab5fb4a8 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4426 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde09c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f5ddcdfd770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde09c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005584ab5fb698
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00005584ab5fb698 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4427 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde09c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f5ddc5fc770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde09c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005584ab5fb888
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00005584ab5fb888 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4428 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde09c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f5ddbdfb770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde09c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005584ab5fba78
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00005584ab5fba78 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gssproxy state:S stack:0 pid:4429 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? file_update_time+0xb4/0xd0
[Mon Mar 11 14:29:12 2024] ? pipe_write+0x122/0x650
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f5dde09c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f5ddb5fa770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5dde09c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005584ab5fbc6c
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00005584ab5fbc6c R14: 0000000000000001 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rpc.gssd state:S stack:0 pid:4431 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7efc96f4e84e
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcc24423d0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007efc96f4e84e
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000020 RSI: 000055cc3a428990 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 000055cc3a4286f0 R08: 0000000000000000 R09: 00007efc96fb14e0
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000293 R12: 000055cc3a428990
[Mon Mar 11 14:29:12 2024] R13: 000055cc3a423730 R14: 000055cc3a423730 R15: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rpc.gssd state:S stack:0 pid:4432 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] do_nanosleep+0x67/0x190
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] hrtimer_nanosleep+0xbe/0x1a0
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] common_nsleep+0x40/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_clock_nanosleep+0xbc/0x130
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7efc96f13975
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007efc967fec60 EFLAGS: 00000293 ORIG_RAX: 00000000000000e6
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: ffffffffffffff60 RCX: 00007efc96f13975
[Mon Mar 11 14:29:12 2024] RDX: 00007efc967feca0 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007efc967feca0 R11: 0000000000000293 R12: 000055cc39724af0
[Mon Mar 11 14:29:12 2024] R13: 000055cc3971d328 R14: 000055cc39724b00 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:polkitd state:S stack:0 pid:4667 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? __fget_light+0x9f/0x130
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? pollwake+0x74/0xa0
[Mon Mar 11 14:29:12 2024] ? __pfx_default_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff7713426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc44dec9a0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ff773775071 RCX: 00007ff7713426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 000055597da4b720
[Mon Mar 11 14:29:12 2024] RBP: 000055597da4b720 R08: 0000000000000000 R09: 00007ffc44dec830
[Mon Mar 11 14:29:12 2024] R10: 00007ffc44df5080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007ffc44deca10 R15: 000055597d9517d0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gmain state:S stack:0 pid:5162 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff7713426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76fdfeb40 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ff773775071 RCX: 00007ff7713426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 00007ff768000b80
[Mon Mar 11 14:29:12 2024] RBP: 00007ff768000b80 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007ffc44df5080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007ff76fdfebb0 R15: 000055597d950410
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gdbus state:S stack:0 pid:5165 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:12 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:12 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:12 2024] ? sock_def_readable+0x3e/0xc0
[Mon Mar 11 14:29:12 2024] ? unix_stream_sendmsg+0x476/0x4b0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? ___sys_sendmsg+0x95/0xd0
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff7713426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76f5fdb40 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ff773775071 RCX: 00007ff7713426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 000055597d966df0
[Mon Mar 11 14:29:12 2024] RBP: 000055597d966df0 R08: 0000000000000000 R09: 00007ff76f5fd9d0
[Mon Mar 11 14:29:12 2024] R10: 00007ffc44df5080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007ff76f5fdbb0 R15: 000055597d964e20
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5175 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76edfca40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5176 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wait+0x17f/0x260
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? flush_tlb_func+0x1ba/0x1f0
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? sched_clock_cpu+0x9/0xc0
[Mon Mar 11 14:29:12 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_call_function_single+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76ebfda40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5177 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e9fea40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5178 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e7ffa40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5179 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? free_unref_page_commit+0x7e/0x170
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:12 2024] ? audit_filter_rules.constprop.0+0x148/0xd30
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e600a40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5180 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e401a40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5181 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e202a40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:JS Helper state:S stack:0 pid:5182 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_call_function+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff77129c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76e003a40 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff77129c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055597d93b868
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055597d93b868 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:polkitd state:S stack:0 pid:5197 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? do_poll.constprop.0+0x298/0x390
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7ff7713426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ff76de04b30 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ff773775071 RCX: 00007ff7713426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 000055597db471e0
[Mon Mar 11 14:29:12 2024] RBP: 000055597db471e0 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007ffc44df5080 R11: 0000000000000293 R12: 0000000000000001
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007ff76de04ba0 R15: 000055597dac05e0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:wb[KWAK] state:S stack:0 pid:4668 ppid:4415 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f26a134e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffe9e332b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00005586337521b0 RCX: 00007f26a134e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffe9e332b8c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00005586337422e0 R08: 000055863376e4b0 R09: 00007ffe9e332300
[Mon Mar 11 14:29:12 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055863375b3a0
[Mon Mar 11 14:29:12 2024] R13: 0000000000007530 R14: 0000558632a302f8 R15: 0000558632a4c020
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:collectd state:S stack:0 pid:4728 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] do_nanosleep+0x67/0x190
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] hrtimer_nanosleep+0xbe/0x1a0
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] common_nsleep+0x40/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_clock_nanosleep+0xbc/0x130
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f3353913975
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff67a02380 EFLAGS: 00000293 ORIG_RAX: 00000000000000e6
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff67a023c0 RCX: 00007f3353913975
[Mon Mar 11 14:29:12 2024] RDX: 00007fff67a023c0 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 197bc1d4d90f9051 R08: 0000000000000000 R09: 00007fff67a020b8
[Mon Mar 11 14:29:12 2024] R10: 00007fff67a023c0 R11: 0000000000000293 R12: 0000000280000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c574c0 R14: 197bc1d25914941f R15: 0000000000000003
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:writer#0 state:S stack:0 pid:5327 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f33536c5cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bc68
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bc68 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:writer#1 state:S stack:0 pid:5329 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3352ec4cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bc68
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bc68 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:writer#2 state:S stack:0 pid:5330 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wait_queue+0x82/0xd0
[Mon Mar 11 14:29:12 2024] ? futex_wait+0x17f/0x260
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f33526c3cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bc68
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bc68 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:writer#3 state:S stack:0 pid:5331 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3351ec2cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bc68
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bc68 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:writer#4 state:S stack:0 pid:5332 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f33516c1cd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bc68
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bc68 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:reader#0 state:S stack:0 pid:5333 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f3350ec0ca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f3350ec0dd0 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bce8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f3350ec0dd0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bce8 R14: 0000000000000000 R15: 0000560fc6c6bd00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:reader#1 state:S stack:0 pid:5334 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f33506bfca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f33506bfdd0 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bce8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f33506bfdd0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bce8 R14: 0000000000000000 R15: 0000560fc6c6bd00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:reader#2 state:S stack:0 pid:5335 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f334febeca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f334febedd0 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bce8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f334febedd0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bce8 R14: 0000000000000000 R15: 0000560fc6c6bd00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:reader#3 state:S stack:0 pid:5336 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f334f6bdca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f334f6bddd0 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bce8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f334f6bddd0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bce8 R14: 0000000000000000 R15: 0000560fc6c6bd00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:reader#4 state:S stack:0 pid:5337 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f335389c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f334eebcca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f334eebcdd0 RCX: 00007f335389c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000560fc6c6bce8
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007f334eebcdd0 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 0000560fc6c6bce8 R14: 0000000000000000 R15: 0000560fc6c6bd00
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nrpe state:S stack:0 pid:4740 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? tcp_poll+0x1ce/0x370
[Mon Mar 11 14:29:12 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:12 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:12 2024] ? do_wp_page+0x381/0x540
[Mon Mar 11 14:29:12 2024] ? __handle_mm_fault+0x32b/0x670
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] ? asm_exc_page_fault+0x22/0x30
[Mon Mar 11 14:29:12 2024] ? do_wp_page+0x381/0x540
[Mon Mar 11 14:29:12 2024] ? __handle_mm_fault+0x32b/0x670
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:12 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb70e344dc9
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcc3211280 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb70e344dc9
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000562bfde0a920 RDI: 0000000000000006
[Mon Mar 11 14:29:12 2024] RBP: 0000562bfde0a920 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000006
[Mon Mar 11 14:29:12 2024] R13: 0000000000000000 R14: 0000562bfcae0da0 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:wb[ESAT] state:S stack:0 pid:4744 ppid:4415 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f26a134e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffe9e332b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00005586337521b0 RCX: 00007f26a134e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffe9e332b8c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00005586337422e0 R08: 0000558633785180 R09: 0000000000004a9c
[Mon Mar 11 14:29:12 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055863375b3a0
[Mon Mar 11 14:29:12 2024] R13: 0000000000007530 R14: 0000558632a302f8 R15: 0000558632a4c020
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rsyslogd state:S stack:0 pid:4752 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? do_select+0x719/0x7c0
[Mon Mar 11 14:29:12 2024] ? do_select+0x719/0x7c0
[Mon Mar 11 14:29:12 2024] ? task_scan_max+0x12c/0x180
[Mon Mar 11 14:29:12 2024] ? update_task_scan_period+0xce/0x180
[Mon Mar 11 14:29:12 2024] ? node_is_toptier+0x3e/0x60
[Mon Mar 11 14:29:12 2024] ? task_numa_fault+0x6d/0x340
[Mon Mar 11 14:29:12 2024] ? do_numa_page+0x29a/0x460
[Mon Mar 11 14:29:12 2024] ? __handle_mm_fault+0x32b/0x670
[Mon Mar 11 14:29:12 2024] ? task_numa_fault+0x6d/0x340
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682b44fd2
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffde7a1ca20 EFLAGS: 00000293 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffde7a1cac0 RCX: 00007f0682b44fd2
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 00007ffde7a1cad0 R08: 00007ffde7a1ca50 R09: 00007ffde7a1ca60
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055c79f7f83a0 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:in:imjournal state:D stack:0 pid:4837 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_preempt_disabled+0x11/0x20
[Mon Mar 11 14:29:12 2024] rwsem_down_write_slowpath+0x23d/0x500
[Mon Mar 11 14:29:12 2024] down_write_killable+0x58/0x80
[Mon Mar 11 14:29:12 2024] do_mprotect_pkey+0xc5/0x400
[Mon Mar 11 14:29:12 2024] ? __seccomp_filter+0x45/0x480
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_mprotect+0x1b/0x30
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682a3eebb
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f0681c41898 EFLAGS: 00000206 ORIG_RAX: 000000000000000a
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f0674000020 RCX: 00007f0682a3eebb
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 00007f0674cb9000
[Mon Mar 11 14:29:12 2024] RBP: 0000000000001010 R08: 0000000000cb9000 R09: 0000000000cba000
[Mon Mar 11 14:29:12 2024] R10: 0000000000001030 R11: 0000000000000206 R12: 00000000000003e0
[Mon Mar 11 14:29:12 2024] R13: 0000000000001000 R14: 00007f0674cb8c20 R15: fffffffffffff000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:in:imfile state:S stack:0 pid:4838 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] wait_woken+0x50/0x60
[Mon Mar 11 14:29:12 2024] inotify_read+0x1e8/0x280
[Mon Mar 11 14:29:12 2024] ? __pfx_woken_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] vfs_read+0xa4/0x330
[Mon Mar 11 14:29:12 2024] ? __seccomp_filter+0x45/0x480
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __fget_light+0x9f/0x130
[Mon Mar 11 14:29:12 2024] ksys_read+0x5f/0xe0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682b3e8bc
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f067bffcb90 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0682b3e8bc
[Mon Mar 11 14:29:12 2024] RDX: 0000000000002000 RSI: 00007f067bffcc00 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00007f066c000f20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f067bffcc00
[Mon Mar 11 14:29:12 2024] R13: 000055c79f7f844c R14: 0000000000000000 R15: 00007f0682918025
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rs:main Q:Reg state:D stack:0 pid:4840 ppid:1 flags:0x00004002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? pick_next_task+0x4d4/0x950
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] stop_two_cpus+0x177/0x1b0
[Mon Mar 11 14:29:12 2024] ? __pfx_migrate_swap_stop+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_multi_cpu_stop+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? migrate_swap+0xb4/0x110
[Mon Mar 11 14:29:12 2024] ? __pfx_multi_cpu_stop+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? migrate_swap+0xb4/0x110
[Mon Mar 11 14:29:12 2024] migrate_swap+0xb4/0x110
[Mon Mar 11 14:29:12 2024] task_numa_migrate.isra.0+0x65a/0x950
[Mon Mar 11 14:29:12 2024] ? set_next_task_fair+0x2d/0xd0
[Mon Mar 11 14:29:12 2024] task_numa_fault+0x28d/0x340
[Mon Mar 11 14:29:12 2024] do_numa_page+0x29a/0x460
[Mon Mar 11 14:29:12 2024] __handle_mm_fault+0x32b/0x670
[Mon Mar 11 14:29:12 2024] handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:12 2024] do_user_addr_fault+0x1b4/0x6a0
[Mon Mar 11 14:29:12 2024] exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] asm_exc_page_fault+0x22/0x30
[Mon Mar 11 14:29:12 2024] RIP: 0010:__get_user_8+0xd/0x20
[Mon Mar 11 14:29:12 2024] Code: ca c3 cc cc cc cc 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 48 89 c2 48 c1 fa 3f 48 09 d0 0f 01 cb <48> 8b 10 31 c0 0f 01 ca c3 cc cc cc cc 66 0f 1f 44 00 00 90 90 90
[Mon Mar 11 14:29:12 2024] RSP: 0018:ffff9929e08dfe10 EFLAGS: 00050206
[Mon Mar 11 14:29:12 2024] RAX: 00007f0681842fe8 RBX: ffff8aeb9d648000 RCX: 0000000000000000
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: ffff9929e08dfe40 RDI: ffff8aeb9d648000
[Mon Mar 11 14:29:12 2024] RBP: ffff9929e08dfeb0 R08: ffff9929e08dfe94 R09: ffff8ad4d10c9540
[Mon Mar 11 14:29:12 2024] R10: 000000000000003e R11: 0000000000000000 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 00007f0682a9c39a R14: ffff9929e08dff58 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] rseq_ip_fixup+0x46/0x1a0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:12 2024] exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:12 2024] exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:12 2024] syscall_exit_to_user_mode+0x12/0x40
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682a9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f0681841b00 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f0682a9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055c79fd7bd20
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000001 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055c79fd7bd20 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rs:action-0-bui state:S stack:0 pid:4841 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? release_sock+0x40/0x90
[Mon Mar 11 14:29:12 2024] ? tcp_sendmsg+0x33/0x40
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682a9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f0681040b00 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0682a9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055c79fd68b74
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000001 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055c79fd68b74 R14: 0000000000000001 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rs:action-1-bui state:S stack:0 pid:4867 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __tcp_push_pending_frames+0x32/0xf0
[Mon Mar 11 14:29:12 2024] ? tcp_sendmsg_locked+0xaae/0xc20
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f0682a9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f067b3fcb00 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f0682a9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055c79fd723c0
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000001 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055c79fd723c0 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:smbd state:S stack:0 pid:4757 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc284a9da8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffc284a9ddc RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 000055f87dd2aa20 R08: 000000000006012f R09: 00007ffc284a9790
[Mon Mar 11 14:29:12 2024] R10: 00000000000bf80a R11: 0000000000000246 R12: 000055f87dd39850
[Mon Mar 11 14:29:12 2024] R13: 00000000000bf80a R14: 000055f87dd37590 R15: 00000000000001bd
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:smbd-notifyd state:S stack:0 pid:4819 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc284a9d58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffc284a9d8c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 000055f87dd2aa20 R08: 000000000006974d R09: 000055f87dda1da0
[Mon Mar 11 14:29:12 2024] R10: 00000000000001b0 R11: 0000000000000246 R12: 000055f87dd39850
[Mon Mar 11 14:29:12 2024] R13: 00000000000001b0 R14: 000055f87dd375d8 R15: 000055f87bfcdda2
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:smbd-cleanupd state:S stack:0 pid:4821 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc284a9d28 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffc284a9d5c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 000055f87dd2aa20 R08: 000055f87dd39b90 R09: 00007ffc284a90f0
[Mon Mar 11 14:29:12 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055f87dd39850
[Mon Mar 11 14:29:12 2024] R13: 0000000000007530 R14: 0000000000000000 R15: 0000000000001295
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:master state:S stack:0 pid:4982 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f6750b4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffd4c37d4b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffd4c37d4d0 RCX: 00007f6750b4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000064 RSI: 00007ffd4c37d4d0 RDI: 0000000000000010
[Mon Mar 11 14:29:12 2024] RBP: 00007f6750e237d0 R08: 00005654ff244850 R09: 000000000000001b
[Mon Mar 11 14:29:12 2024] R10: 0000000000004268 R11: 0000000000000246 R12: 00007f6750e23794
[Mon Mar 11 14:29:12 2024] R13: 00000000000001f4 R14: 00005654fdb16821 R15: 00005654fdb167cc
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:qmgr state:S stack:0 pid:5126 ppid:4982 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f638e14e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff8af04018 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff8af04030 RCX: 00007f638e14e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000064 RSI: 00007fff8af04030 RDI: 000000000000000a
[Mon Mar 11 14:29:12 2024] RBP: 00007f638e2677d0 R08: 00005582908f7350 R09: 0000000000000078
[Mon Mar 11 14:29:12 2024] R10: 00000000000493e0 R11: 0000000000000246 R12: 00007f638e267794
[Mon Mar 11 14:29:12 2024] R13: 00000000ffffffff R14: 00005582900937b0 R15: 00007f638e267794
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rpc.mountd state:S stack:0 pid:5127 ppid:1 flags:0x00004002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? udp_poll+0x18/0xb0
[Mon Mar 11 14:29:12 2024] ? cache_poll+0x72/0xa0 [sunrpc]
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? add_ptr_to_bulk_krc_lock+0x5b/0x200
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? cache_flush+0x2a/0x40 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? ip_map_parse+0x1e2/0x200 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? cache_write_procfs+0x52/0xb0 [sunrpc]
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f619d344dc9
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffdb13093d0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f619d344dc9
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 00007ffdb1309520 RDI: 0000000000000400
[Mon Mar 11 14:29:12 2024] RBP: 00007ffdb1309520 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000400
[Mon Mar 11 14:29:12 2024] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:uwsgi state:S stack:0 pid:5236 ppid:4405 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb64274e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2b12bb08 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff2b12bb58 RCX: 00007fb64274e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007fff2b12bb14 RDI: 0000000000000007
[Mon Mar 11 14:29:12 2024] RBP: 00007fb642563078 R08: 0000000000000001 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000004e0ee0
[Mon Mar 11 14:29:12 2024] R13: 0000000000000007 R14: 00007fb642563000 R15: 00007fff2b12bb9c
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:uwsgi state:S stack:0 pid:5237 ppid:4405 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb64274e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2b12bb08 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff2b12bb58 RCX: 00007fb64274e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007fff2b12bb14 RDI: 0000000000000007
[Mon Mar 11 14:29:12 2024] RBP: 00007fb64210e078 R08: 0000000000000002 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000004e0ee0
[Mon Mar 11 14:29:12 2024] R13: 0000000000000007 R14: 00007fb64210e000 R15: 00007fff2b12bb9c
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:uwsgi state:S stack:0 pid:5238 ppid:4405 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb64274e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2b12bb08 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fff2b12bb58 RCX: 00007fb64274e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007fff2b12bb14 RDI: 0000000000000007
[Mon Mar 11 14:29:12 2024] RBP: 00007fb64135c078 R08: 0000000000000003 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000004e0ee0
[Mon Mar 11 14:29:12 2024] R13: 0000000000000007 R14: 00007fb64135c000 R15: 00007fff2b12bb9c
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:uwsgi state:S stack:0 pid:5239 ppid:4405 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? add_ptr_to_bulk_krc_lock+0x164/0x200
[Mon Mar 11 14:29:12 2024] ? __update_load_avg_cfs_rq+0x289/0x2f0
[Mon Mar 11 14:29:12 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? sched_clock_cpu+0x9/0xc0
[Mon Mar 11 14:29:12 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:12 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fb64274e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2b12ba88 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fb642f08f20 RCX: 00007fb64274e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000040 RSI: 0000000000a44080 RDI: 0000000000000006
[Mon Mar 11 14:29:12 2024] RBP: 00000000004e0ee0 R08: 00000000ffffffff R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000adcc60
[Mon Mar 11 14:29:12 2024] R13: 0000000000a44080 R14: 0000000000a44080 R15: 0000000000a4408c
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5339 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? ext4_xattr_ibody_get+0x16e/0x1b0 [ext4]
[Mon Mar 11 14:29:12 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:12 2024] ? __filemap_get_folio+0x26e/0x330
[Mon Mar 11 14:29:12 2024] ? pagecache_get_page+0x3a/0x70
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? ext4_htree_store_dirent+0x36/0x100 [ext4]
[Mon Mar 11 14:29:12 2024] ? htree_dirblock_to_tree+0x27e/0x330 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_htree_fill_tree+0xca/0x3b0 [ext4]
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b3426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcfb738320 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000001b58 RCX: 00007fef6b3426ff
[Mon Mar 11 14:29:12 2024] RDX: 0000000000001b58 RSI: 0000000000000001 RDI: 0000000001f08210
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00007ffcfb729df0
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000001ef3cd0
[Mon Mar 11 14:29:12 2024] R13: 0000000065ef0748 R14: 0000000065ef0748 R15: 00007ffcfb73d4e8
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5342 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b3426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef691fec10 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fef6401d940 RCX: 00007fef6b3426ff
[Mon Mar 11 14:29:12 2024] RDX: 000000000000ea60 RSI: 0000000000000003 RDI: 00007fef64013720
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000080 R11: 0000000000000293 R12: 00007fef6401d960
[Mon Mar 11 14:29:12 2024] R13: 0000000000000000 R14: 00007fef64000b80 R15: 00007fef691fede0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5343 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:12 2024] ? ext4_htree_store_dirent+0x36/0x100 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_htree_store_dirent+0x36/0x100 [ext4]
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? ext4_htree_fill_tree+0xca/0x3b0 [ext4]
[Mon Mar 11 14:29:12 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:12 2024] ? verify_dirent_name+0x1c/0x40
[Mon Mar 11 14:29:12 2024] ? filldir64+0x3b/0x190
[Mon Mar 11 14:29:12 2024] ? current_time+0x2b/0xf0
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_call_function+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b3426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef63ffec90 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fef63ffed80 RCX: 00007fef6b3426ff
[Mon Mar 11 14:29:12 2024] RDX: 0000000000007530 RSI: 0000000000000001 RDI: 00007fef5c014860
[Mon Mar 11 14:29:12 2024] RBP: 0000000000ba0cd8 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000001000000 R11: 0000000000000293 R12: 00007fef5c000b80
[Mon Mar 11 14:29:12 2024] R13: 00007fef5c00c710 R14: 00007fef6b29f530 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5345 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] ? node_is_toptier+0x3e/0x60
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b3426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef61ffcca0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fef58003010 RCX: 00007fef6b3426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000000105b8 RSI: 0000000000000001 RDI: 00007fef58016cf0
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000001ebc708
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000293 R12: 000000000000000d
[Mon Mar 11 14:29:12 2024] R13: 0000000000ba0ce8 R14: 00007fef5800eba0 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5346 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b29c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef53ffeca0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fef4c009180 RCX: 00007fef6b29c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00007fef4c009180
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007fef53ffed40 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 00007fef53ffece0 R14: fffffffeffffffff R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5347 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? common_interrupt+0xf/0xa0
[Mon Mar 11 14:29:12 2024] ? asm_common_interrupt+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? common_interrupt+0xf/0xa0
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b3426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef52ffdcc0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fef44014d60 RCX: 00007fef6b3426ff
[Mon Mar 11 14:29:12 2024] RDX: 0000000000001388 RSI: 0000000000000001 RDI: 00007fef4401cf30
[Mon Mar 11 14:29:12 2024] RBP: 0000000001ef85b0 R08: 0000000000000000 R09: 00007fef52ffd750
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffcfb743d10
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007fef6b29f530 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nsrexecd state:S stack:0 pid:5349 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] do_nanosleep+0x67/0x190
[Mon Mar 11 14:29:12 2024] hrtimer_nanosleep+0xbe/0x1a0
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] common_nsleep+0x40/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_clock_nanosleep+0xbc/0x130
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fef6b313975
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fef4bff8bc0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e6
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fef6b313975
[Mon Mar 11 14:29:12 2024] RDX: 00007fef4bff8c10 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 00007fef4bff8c50 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007fef4bff8c00 R11: 0000000000000293 R12: 0000000001e7b7c0
[Mon Mar 11 14:29:12 2024] R13: 00007fef4bffec50 R14: 0000000000b8a714 R15: 0000000000b8a744
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:wb-idmap state:S stack:0 pid:5362 ppid:4415 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f26a134e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffe9e332b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00005586337521b0 RCX: 00007f26a134e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000001 RSI: 00007ffe9e332b8c RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00005586337422e0 R08: 000055863376b800 R09: 00000000000069df
[Mon Mar 11 14:29:12 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055863375b3a0
[Mon Mar 11 14:29:12 2024] R13: 0000000000007530 R14: 0000558632a302f8 R15: 0000558632a4c020
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:systemd state:S stack:0 pid:5558 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? ep_send_events+0x272/0x2c0
[Mon Mar 11 14:29:12 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:12 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc22df4e80a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffe065f5858 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000056476cd91670 RCX: 00007fc22df4e80a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000020 RSI: 000056476cff0240 RDI: 0000000000000004
[Mon Mar 11 14:29:12 2024] RBP: 000056476cd91800 R08: 0000000000000020 R09: b36493637b16ad2e
[Mon Mar 11 14:29:12 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000000000b4
[Mon Mar 11 14:29:12 2024] R13: 000056476cd91670 R14: 0000000000000020 R15: 0000000000000012
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:(sd-pam) state:S stack:0 pid:5572 ppid:5558 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? pollwake+0x74/0xa0
[Mon Mar 11 14:29:12 2024] ? dequeue_signal+0x68/0x200
[Mon Mar 11 14:29:12 2024] do_sigtimedwait+0x16f/0x210
[Mon Mar 11 14:29:12 2024] __x64_sys_rt_sigtimedwait+0x6e/0xe0
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f97a2a55a68
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffca488be10 EFLAGS: 00000246 ORIG_RAX: 0000000000000080
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffca488be40 RCX: 00007f97a2a55a68
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 00007ffca488be40 RDI: 00007ffca488bfd0
[Mon Mar 11 14:29:12 2024] RBP: 00007ffca488be40 R08: 0000000000000001 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000008 R11: 0000000000000246 R12: 00007ffca488bf28
[Mon Mar 11 14:29:12 2024] R13: 00007ffca488bfd0 R14: 0000000000000000 R15: 00007ffca488bf30
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:90 state:I stack:0 pid:6393 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kcopyd state:I stack:0 pid:6394 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dm-thin state:I stack:0 pid:6395 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:91 state:I stack:0 pid:6396 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dmeventd state:S stack:0 pid:6398 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? update_sg_lb_stats+0x7e/0x450
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:12 2024] ? core_sys_select+0x32f/0x3b0
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc940f44e5d
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffcdf586250 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffcdf5862f0 RCX: 00007fc940f44e5d
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 00007ffcdf586300 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00007ffcdf586300 R08: 00007ffcdf586260 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000005
[Mon Mar 11 14:29:12 2024] R13: 00007ffcdf586260 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dmeventd state:S stack:0 pid:6410 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] dm_wait_event+0x6d/0xa0 [dm_mod]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] dev_wait+0x67/0x100 [dm_mod]
[Mon Mar 11 14:29:12 2024] ctl_ioctl+0x19f/0x290 [dm_mod]
[Mon Mar 11 14:29:12 2024] dm_ctl_ioctl+0xa/0x20 [dm_mod]
[Mon Mar 11 14:29:12 2024] __x64_sys_ioctl+0x87/0xc0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc940e3ec6b
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fc940327998 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055fb2dfdf460 RCX: 00007fc940e3ec6b
[Mon Mar 11 14:29:12 2024] RDX: 00007fc93c254fb0 RSI: 00000000c138fd08 RDI: 0000000000000007
[Mon Mar 11 14:29:12 2024] RBP: 00007fc94120a276 R08: 0000000000000004 R09: 00007fc94120d010
[Mon Mar 11 14:29:12 2024] R10: 0000000000000007 R11: 0000000000000206 R12: 00007fc93c254fb0
[Mon Mar 11 14:29:12 2024] R13: 00007fc93c255060 R14: 00007fc94120a276 R15: 000055fb2f482bc0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dmeventd state:S stack:0 pid:6411 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc940e9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fc9402dabd0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007fc9402dad00 RCX: 00007fc940e9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055fb2dfe60ec
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 00007fc9402dad00 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: 000055fb2dfe60ec R14: 0000000000000000 R15: 000055fb2dfe6100
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dmeventd state:S stack:0 pid:8698 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] dm_wait_event+0x6d/0xa0 [dm_mod]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] dev_wait+0x67/0x100 [dm_mod]
[Mon Mar 11 14:29:12 2024] ctl_ioctl+0x19f/0x290 [dm_mod]
[Mon Mar 11 14:29:12 2024] dm_ctl_ioctl+0xa/0x20 [dm_mod]
[Mon Mar 11 14:29:12 2024] __x64_sys_ioctl+0x87/0xc0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc940e3ec6b
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fc9401ff998 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 000055fb2dfdf460 RCX: 00007fc940e3ec6b
[Mon Mar 11 14:29:12 2024] RDX: 00007fc934004cd0 RSI: 00000000c138fd08 RDI: 0000000000000007
[Mon Mar 11 14:29:12 2024] RBP: 00007fc94120a276 R08: 0000000000000004 R09: 00007fc94120d010
[Mon Mar 11 14:29:12 2024] R10: 0000000000000007 R11: 0000000000000206 R12: 00007fc934004cd0
[Mon Mar 11 14:29:12 2024] R13: 00007fc934004d80 R14: 00007fc94120a276 R15: 000055fb2f482ca0
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:92 state:I stack:0 pid:6412 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:93 state:I stack:0 pid:6413 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:94 state:I stack:0 pid:6417 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:95 state:I stack:0 pid:6422 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:96 state:I stack:0 pid:6427 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:97 state:I stack:0 pid:6433 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:98 state:I stack:0 pid:6437 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-93-8 state:S stack:0 pid:6467 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6468 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-94-8 state:S stack:0 pid:6473 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6474 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-92-8 state:S stack:0 pid:6475 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6476 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-97-8 state:S stack:0 pid:6477 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6478 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-98-8 state:S stack:0 pid:6487 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-95-8 state:S stack:0 pid:6488 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6489 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6490 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-96-8 state:S stack:0 pid:6491 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:6492 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:upsclient state:S stack:0 pid:7154 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:12 2024] ? schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? check_heap_object+0x34/0x150
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x1d4/0x630
[Mon Mar 11 14:29:12 2024] ? check_heap_object+0x34/0x150
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x47/0xd0
[Mon Mar 11 14:29:12 2024] ? __skb_datagram_iter+0x79/0x2e0
[Mon Mar 11 14:29:12 2024] ? __pfx_simple_copy_to_iter+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __tcp_cleanup_rbuf+0xa0/0xc0
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg_locked+0x272/0x910
[Mon Mar 11 14:29:12 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:12 2024] ? core_sys_select+0x1f7/0x3b0
[Mon Mar 11 14:29:12 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:12 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:12 2024] __x64_sys_pselect6+0x48/0x70
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f977ff44e5d
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffd00510e60 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007ffd00510f00 RCX: 00007f977ff44e5d
[Mon Mar 11 14:29:12 2024] RDX: 00007ffd00510f90 RSI: 00007ffd00510f10 RDI: 0000000000000005
[Mon Mar 11 14:29:12 2024] RBP: 00007ffd00510f10 R08: 00007ffd00510e70 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007ffd00511010 R11: 0000000000000246 R12: 0000000000000005
[Mon Mar 11 14:29:12 2024] R13: 00007ffd00510e70 R14: 0000000000000000 R15: 00007ffd00510f90
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:upsclient state:S stack:0 pid:7160 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f977fe9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f977ef4d2a0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f9778000d80 RCX: 00007f977fe9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00007f9778000d80
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: fffffffeffffffff R14: 00007f97800ce501 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:upsclient state:S stack:0 pid:7162 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:12 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:12 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:12 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:12 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f977fe9c39a
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f977e74c2a0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f9770000d80 RCX: 00007f977fe9c39a
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00007f9770000d80
[Mon Mar 11 14:29:12 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:12 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:12 2024] R13: fffffffeffffffff R14: 00007f97800ce501 R15: 0000000000000000
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:99 state:I stack:0 pid:8694 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kcopyd state:I stack:0 pid:8695 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dm-thin state:I stack:0 pid:8696 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:10 state:I stack:0 pid:8697 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:10 state:I stack:0 pid:8699 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:10 state:I stack:0 pid:8710 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:kdmflush/253:10 state:I stack:0 pid:8719 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-102-8 state:S stack:0 pid:8732 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:8733 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:jbd2/dm-101-8 state:S stack:0 pid:8746 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] kjournald2+0x221/0x280 [jbd2]
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_kjournald2+0x10/0x10 [jbd2]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ext4-rsv-conver state:I stack:0 pid:8747 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ib-comp-wq state:I stack:0 pid:8762 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ib-comp-unb-wq state:I stack:0 pid:8763 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ib_mcast state:I stack:0 pid:8764 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:ib_nl_sa_wq state:I stack:0 pid:8765 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:rdma_cm state:I stack:0 pid:8770 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:12 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8782 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8783 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8784 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8785 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8786 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8787 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8788 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8789 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8790 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8791 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8792 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8793 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8794 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8795 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8796 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8797 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8798 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8799 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8800 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8801 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8802 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8803 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8804 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8805 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8806 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8807 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8808 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8809 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8810 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8811 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8812 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8813 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8814 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] svc_get_next_xprt+0x10e/0x190 [sunrpc]
[Mon Mar 11 14:29:12 2024] svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:I stack:0 pid:8815 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8816 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:12 2024] bit_wait_io+0xd/0x60
[Mon Mar 11 14:29:12 2024] __wait_on_bit+0x48/0x150
[Mon Mar 11 14:29:12 2024] ? __pfx_bit_wait_io+0x10/0x10
[Mon Mar 11 14:29:12 2024] out_of_line_wait_on_bit+0x92/0xb0
[Mon Mar 11 14:29:12 2024] ? __pfx_wake_bit_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] ext4_read_bh+0x84/0x90 [ext4]
[Mon Mar 11 14:29:12 2024] ext4_bread+0x4d/0x70 [ext4]
[Mon Mar 11 14:29:12 2024] __ext4_read_dirblock.part.0+0x2f/0x310 [ext4]
[Mon Mar 11 14:29:12 2024] ext4_dx_find_entry+0x74/0x210 [ext4]
[Mon Mar 11 14:29:12 2024] __ext4_find_entry+0x3a6/0x440 [ext4]
[Mon Mar 11 14:29:12 2024] ? d_alloc+0x83/0xa0
[Mon Mar 11 14:29:12 2024] ext4_lookup.part.0+0x58/0x1c0 [ext4]
[Mon Mar 11 14:29:12 2024] __lookup_slow+0x81/0x130
[Mon Mar 11 14:29:12 2024] lookup_one_len_unlocked+0x90/0xb0
[Mon Mar 11 14:29:12 2024] ? security_inode_permission+0x2d/0x50
[Mon Mar 11 14:29:12 2024] nfsd_lookup_dentry+0x95/0x2a0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_lookup+0x8b/0x150 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_encode_operation+0xa3/0x2b0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8817 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:12 2024] folio_wait_bit+0xe9/0x200
[Mon Mar 11 14:29:12 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:12 2024] ? __pfx_wake_page_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] folio_wait_writeback+0x28/0x80
[Mon Mar 11 14:29:12 2024] truncate_inode_partial_folio+0x5a/0x130
[Mon Mar 11 14:29:12 2024] truncate_inode_pages_range+0x1ba/0x440
[Mon Mar 11 14:29:12 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_do_update_inode.isra.0+0x17a/0x3f0 [ext4]
[Mon Mar 11 14:29:12 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:12 2024] ? kmem_cache_free+0x15/0x360
[Mon Mar 11 14:29:12 2024] ? jbd2_journal_stop+0x152/0x2e0 [jbd2]
[Mon Mar 11 14:29:12 2024] ? down_read+0xe/0xa0
[Mon Mar 11 14:29:12 2024] ? unmap_mapping_range+0x7e/0x140
[Mon Mar 11 14:29:12 2024] truncate_pagecache+0x44/0x60
[Mon Mar 11 14:29:12 2024] ext4_setattr+0x3df/0x980 [ext4]
[Mon Mar 11 14:29:12 2024] notify_change+0x3c5/0x550
[Mon Mar 11 14:29:12 2024] ? nfsd_setuser+0x116/0x270 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __nfsd_setattr+0x5b/0xe0 [nfsd]
[Mon Mar 11 14:29:12 2024] __nfsd_setattr+0x5b/0xe0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_setattr+0x20f/0x480 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_setattr+0x16f/0x250 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8818 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x344/0x430
[Mon Mar 11 14:29:12 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x378/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8819 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:12 2024] folio_wait_bit+0xe9/0x200
[Mon Mar 11 14:29:12 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:12 2024] ? __pfx_wake_page_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] folio_wait_writeback+0x28/0x80
[Mon Mar 11 14:29:12 2024] truncate_inode_partial_folio+0x5a/0x130
[Mon Mar 11 14:29:12 2024] truncate_inode_pages_range+0x1ba/0x440
[Mon Mar 11 14:29:12 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:12 2024] ? ext4_do_update_inode.isra.0+0x17a/0x3f0 [ext4]
[Mon Mar 11 14:29:12 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:12 2024] ? kmem_cache_free+0x15/0x360
[Mon Mar 11 14:29:12 2024] ? jbd2_journal_stop+0x152/0x2e0 [jbd2]
[Mon Mar 11 14:29:12 2024] ? down_read+0xe/0xa0
[Mon Mar 11 14:29:12 2024] ? unmap_mapping_range+0x7e/0x140
[Mon Mar 11 14:29:12 2024] truncate_pagecache+0x44/0x60
[Mon Mar 11 14:29:12 2024] ext4_setattr+0x3df/0x980 [ext4]
[Mon Mar 11 14:29:12 2024] notify_change+0x3c5/0x550
[Mon Mar 11 14:29:12 2024] ? nfsd_setuser+0x116/0x270 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __nfsd_setattr+0x5b/0xe0 [nfsd]
[Mon Mar 11 14:29:12 2024] __nfsd_setattr+0x5b/0xe0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_setattr+0x20f/0x480 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_setattr+0x16f/0x250 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8820 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x378/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8821 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8822 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:R running task stack:0 pid:8823 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] ? free_unref_page+0x118/0x1e0
[Mon Mar 11 14:29:12 2024] ? __folio_put+0x27/0x60
[Mon Mar 11 14:29:12 2024] ? svc_tcp_recvfrom+0xed/0x4a0 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? svc_recv+0x57/0x150 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8824 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:12 2024] filemap_update_page+0x3b0/0x500
[Mon Mar 11 14:29:12 2024] ? __pfx_wake_page_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] filemap_get_pages+0x210/0x350
[Mon Mar 11 14:29:12 2024] ? update_sg_lb_stats+0x7e/0x450
[Mon Mar 11 14:29:12 2024] filemap_read+0xb9/0x310
[Mon Mar 11 14:29:12 2024] ? __fsnotify_parent+0xff/0x300
[Mon Mar 11 14:29:12 2024] ? __fsnotify_parent+0x10f/0x300
[Mon Mar 11 14:29:12 2024] generic_file_splice_read+0xd7/0x1b0
[Mon Mar 11 14:29:12 2024] splice_direct_to_actor+0xb0/0x210
[Mon Mar 11 14:29:12 2024] ? selinux_inode_permission+0x10e/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_direct_splice_actor+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_splice_read+0x67/0x100 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_encode_splice_read+0x58/0x100 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_encode_read+0x107/0x160 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_encode_operation+0xa3/0x2b0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x1d0/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8825 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x378/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8826 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8827 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8828 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8829 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8830 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8831 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8832 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x378/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8833 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? percpu_counter_add_batch+0x67/0x70
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8834 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8835 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8836 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? __skb_datagram_iter+0x79/0x2e0
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg_locked+0x272/0x910
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_clientid+0xe2/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8837 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8838 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8839 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8840 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8841 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8842 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8843 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] ? fsnotify+0x2e1/0x3a0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_create_session+0x8c1/0xb30 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8844 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8845 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8846 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8847 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8848 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8849 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8850 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8851 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? allocate_slab+0x24e/0x490
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_create_session+0x8c1/0xb30 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8852 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8853 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8854 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8855 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8856 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8857 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8858 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8859 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8860 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8861 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8862 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? percpu_counter_add_batch+0x67/0x70
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8863 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8864 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8865 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8866 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8867 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8868 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8869 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8870 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x378/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8871 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] ? fsnotify+0x2e1/0x3a0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_create_session+0x8c1/0xb30 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8872 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8873 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8874 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_recvmsg+0x196/0x210
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8875 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x101/0x430
[Mon Mar 11 14:29:12 2024] ? remove_entity_load_avg+0x2e/0x70
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8876 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_cld_remove+0x54/0x1d0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:12 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_exchange_id+0x75f/0x770 [nfsd]
[Mon Mar 11 14:29:12 2024] ? nfsd4_decode_opaque+0x3a/0x90 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:nfsd state:D stack:0 pid:8877 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] ? crypto_sha1_update+0x56/0x1c0
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:12 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:12 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:12 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:12 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:12 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:12 2024] nfsd4_destroy_session+0x1a4/0x240 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd4_proc_compound+0x44b/0x700 [nfsd]
[Mon Mar 11 14:29:12 2024] nfsd_dispatch+0x94/0x1c0 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process_common+0x2ec/0x660 [sunrpc]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd_dispatch+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] ? __pfx_nfsd+0x10/0x10 [nfsd]
[Mon Mar 11 14:29:12 2024] svc_process+0x12d/0x170 [sunrpc]
[Mon Mar 11 14:29:12 2024] nfsd+0x84/0xb0 [nfsd]
[Mon Mar 11 14:29:12 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:12 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:12 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:cups-browsed state:S stack:0 pid:21421 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:12 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? ip_finish_output2+0x1a5/0x440
[Mon Mar 11 14:29:12 2024] ? __ip_queue_xmit+0x184/0x430
[Mon Mar 11 14:29:12 2024] ? tcp_update_skb_after_send+0x69/0xd0
[Mon Mar 11 14:29:12 2024] ? lock_timer_base+0x61/0x80
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? tcp_write_xmit+0x6f5/0xaa0
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? mem_cgroup_charge_skmem+0xbb/0xf0
[Mon Mar 11 14:29:12 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:12 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:12 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:12 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:12 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:12 2024] ? __call_rcu_common.constprop.0+0x117/0x2b0
[Mon Mar 11 14:29:12 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:12 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_call_function_single+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f4034b426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007ffc78a741f0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f4034e53071 RCX: 00007f4034b426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000001b76c5 RSI: 0000000000000002 RDI: 00007f402400c650
[Mon Mar 11 14:29:12 2024] RBP: 00007f402400c650 R08: 0000000000000000 R09: 00007ffc78a74080
[Mon Mar 11 14:29:12 2024] R10: 00007ffc78abe080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007ffc78a74260 R15: 000056181391da70
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gmain state:S stack:0 pid:21425 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:12 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:12 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? get_page_from_freelist+0x387/0x530
[Mon Mar 11 14:29:12 2024] ? sysvec_call_function_single+0xb/0x90
[Mon Mar 11 14:29:12 2024] ? asm_sysvec_call_function_single+0x16/0x20
[Mon Mar 11 14:29:12 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f4034b426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f403364b980 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f4034e53071 RCX: 00007f4034b426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00005618138df790
[Mon Mar 11 14:29:12 2024] RBP: 00005618138df790 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 00007ffc78abe080 R11: 0000000000000293 R12: 0000000000000001
[Mon Mar 11 14:29:12 2024] R13: 0000000000000001 R14: 00007f403364b9f0 R15: 0000561813913e70
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:gdbus state:S stack:0 pid:21426 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? remove_wait_queue+0x20/0x60
[Mon Mar 11 14:29:12 2024] ? poll_freewait+0x45/0xa0
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? sock_recvmsg+0x99/0xa0
[Mon Mar 11 14:29:12 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f4034b426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007f4032e4a980 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00007f4034e53071 RCX: 00007f4034b426ff
[Mon Mar 11 14:29:12 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 00005618138e90d0
[Mon Mar 11 14:29:12 2024] RBP: 00005618138e90d0 R08: 0000000000000000 R09: 00007f4032e4a810
[Mon Mar 11 14:29:12 2024] R10: 00007ffc78abe080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:12 2024] R13: 0000000000000002 R14: 00007f4032e4a9f0 R15: 0000561813925c70
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:dbus state:S stack:0 pid:21786 ppid:4399 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] pipe_read+0x38b/0x4c0
[Mon Mar 11 14:29:12 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:12 2024] vfs_read+0x2f9/0x330
[Mon Mar 11 14:29:12 2024] ksys_read+0xab/0xe0
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:12 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:12 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7fc04133e882
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff2c40bcf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Mon Mar 11 14:29:12 2024] RAX: ffffffffffffffda RBX: 00005646b2641790 RCX: 00007fc04133e882
[Mon Mar 11 14:29:12 2024] RDX: 0000000000000008 RSI: 00005646b2636d71 RDI: 0000000000000000
[Mon Mar 11 14:29:12 2024] RBP: 00007fff2c40c19c R08: 00005646b2634620 R09: 0000000000000000
[Mon Mar 11 14:29:12 2024] R10: 0000000000000003 R11: 0000000000000246 R12: 00007fc041525ad0
[Mon Mar 11 14:29:12 2024] R13: 00005646b263f4f0 R14: 00005646b2536143 R15: 00005646b2636d71
[Mon Mar 11 14:29:12 2024] </TASK>
[Mon Mar 11 14:29:12 2024] task:colord state:S stack:0 pid:21791 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:12 2024] Call Trace:
[Mon Mar 11 14:29:12 2024] <TASK>
[Mon Mar 11 14:29:12 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:12 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:12 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:12 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:12 2024] ? datagram_poll+0xdb/0x110
[Mon Mar 11 14:29:12 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:12 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:12 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:12 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:12 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:12 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:12 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:12 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:12 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:12 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:12 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:12 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:12 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:12 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:12 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:12 2024] ? sched_clock_cpu+0x9/0xc0
[Mon Mar 11 14:29:12 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:12 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:12 2024] RIP: 0033:0x7f41a23426ff
[Mon Mar 11 14:29:12 2024] RSP: 002b:00007fff02d31390 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f41a2830071 RCX: 00007f41a23426ff
[Mon Mar 11 14:29:15 2024] RDX: 00000000ffffffff RSI: 0000000000000004 RDI: 000055e840bce740
[Mon Mar 11 14:29:15 2024] RBP: 000055e840bce740 R08: 0000000000000000 R09: 00007fff02d31220
[Mon Mar 11 14:29:15 2024] R10: 00007fff02d83080 R11: 0000000000000293 R12: 0000000000000004
[Mon Mar 11 14:29:15 2024] R13: 0000000000000004 R14: 00007fff02d31400 R15: 000055e840b13180
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:pool-spawner state:S stack:0 pid:21804 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:15 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? enqueue_task+0x47/0x110
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:15 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f41a223ee5d
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f41a19febf8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f41a223ee5d
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000002 RSI: 0000000000000080 RDI: 000055e840ba9500
[Mon Mar 11 14:29:15 2024] RBP: 000055e840ba94f0 R08: 00007f41a19feae0 R09: 0000000000000001
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 000055e840ba9500
[Mon Mar 11 14:29:15 2024] R13: 000055e840ba9508 R14: 000055e840ba94f8 R15: 00007f41a2841065
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:gmain state:S stack:0 pid:21805 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? get_page_from_freelist+0x387/0x530
[Mon Mar 11 14:29:15 2024] ? __mod_memcg_state+0x63/0xb0
[Mon Mar 11 14:29:15 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:15 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:15 2024] ? eventfd_read+0xe2/0x2b0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:15 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:15 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:15 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f41a23426ff
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f41991fdbc0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f41a2830071 RCX: 00007f41a23426ff
[Mon Mar 11 14:29:15 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 000055e840bb8740
[Mon Mar 11 14:29:15 2024] RBP: 000055e840bb8740 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007fff02d83080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:15 2024] R13: 0000000000000002 R14: 00007f41991fdc30 R15: 000055e840bb8620
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:gdbus state:S stack:0 pid:21808 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] ? unix_poll+0xf4/0x100
[Mon Mar 11 14:29:15 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0xea/0x1d0
[Mon Mar 11 14:29:15 2024] ? _copy_from_iter+0x144/0x590
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? ____sys_sendmsg+0x31c/0x340
[Mon Mar 11 14:29:15 2024] ? import_iovec+0x17/0x20
[Mon Mar 11 14:29:15 2024] ? copy_msghdr_from_user+0x6d/0xa0
[Mon Mar 11 14:29:15 2024] ? ___sys_sendmsg+0x95/0xd0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:15 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:15 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f41a23426ff
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f41a09fcbc0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f41a2830071 RCX: 00007f41a23426ff
[Mon Mar 11 14:29:15 2024] RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 00007f418c0107d0
[Mon Mar 11 14:29:15 2024] RBP: 00007f418c0107d0 R08: 0000000000000000 R09: 00007f41a09fca50
[Mon Mar 11 14:29:15 2024] R10: 00007fff02d83080 R11: 0000000000000293 R12: 0000000000000002
[Mon Mar 11 14:29:15 2024] R13: 0000000000000002 R14: 00007f41a09fcc30 R15: 00007f418c00d9c0
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:86031 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000bbef R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000bbef R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88092 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:15 2024] ? dequeue_signal+0x68/0x200
[Mon Mar 11 14:29:15 2024] do_sigtimedwait+0x16f/0x210
[Mon Mar 11 14:29:15 2024] __x64_sys_rt_sigtimedwait+0x6e/0xe0
[Mon Mar 11 14:29:15 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af255aca
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffd9217b0c0 EFLAGS: 00000293 ORIG_RAX: 0000000000000080
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007ffd9217b0f0 RCX: 00007f66af255aca
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 00007ffd9217b0f0 RDI: 00007ffd9217b230
[Mon Mar 11 14:29:15 2024] RBP: 00007ffd9217b0f0 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000000008 R11: 0000000000000293 R12: 00007ffd9217b1f8
[Mon Mar 11 14:29:15 2024] R13: 000055b9efc16220 R14: 00007f66af825110 R15: 00007f66af825b10
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88101 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? context_to_sid+0x95/0x110
[Mon Mar 11 14:29:15 2024] ? sidtab_context_to_sid+0x37/0x430
[Mon Mar 11 14:29:15 2024] ? mls_level_isvalid+0x3e/0x70
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ima_file_check+0x53/0x80
[Mon Mar 11 14:29:15 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:15 2024] ? terminate_walk+0x61/0xf0
[Mon Mar 11 14:29:15 2024] ? path_openat+0xc1/0x280
[Mon Mar 11 14:29:15 2024] ? do_filp_open+0xb2/0x160
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66ae90fa10 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66ae910528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66ae90fa68
[Mon Mar 11 14:29:15 2024] RBP: 00007f66ae90fa70 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66ae90fa70 R11: 0000000000000293 R12: 00007f66ae90fa68
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9efc09f20 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88102 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:15 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:15 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af29c39a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66ae10eb60 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66ae10ec70 RCX: 00007f66af29c39a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007f66af587c6c
[Mon Mar 11 14:29:15 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:15 2024] R10: 00007f66ae10ec70 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:15 2024] R13: 00007f66af587c6c R14: 0000000000000000 R15: 00007f66af587c00
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88103 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] futex_wait_queue+0x70/0xd0
[Mon Mar 11 14:29:15 2024] futex_wait+0x175/0x260
[Mon Mar 11 14:29:15 2024] ? enqueue_task_fair+0x88/0x3d0
[Mon Mar 11 14:29:15 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:15 2024] do_futex+0x12d/0x1d0
[Mon Mar 11 14:29:15 2024] __x64_sys_futex+0x73/0x1d0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af29c39a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66ad90dba0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f66af29c39a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055b9efc171c8
[Mon Mar 11 14:29:15 2024] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:15 2024] R13: 000055b9efc171c8 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88118 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66ad10c8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66ad10d528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66ad10c8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66ad10c900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66ad10c900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0c671c0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88121 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? asm_exc_page_fault+0x22/0x30
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? find_idlest_group+0x2b2/0x530
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66ac9018a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66ac902528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66ac9018f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66ac901900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66ac901900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0c713a0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88122 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a7ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a7fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a7ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a7ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a7ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0c72290 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88123 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:15 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a77fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a77fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a77fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a77fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a77fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0c72620 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88124 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? set_next_entity+0xda/0x150
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x1dc/0x500
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a6ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a6ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a6ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a6ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a6ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0c9f920 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88125 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a67fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a67fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a67fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a67fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a67fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0ca5cb0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88126 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? chacha_block_generic+0x6f/0xb0
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a5ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a5ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a5ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a5ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a5ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cac040 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88127 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a57f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a57fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a57f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a57f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a57f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cb23d0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88128 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66a4ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66a4ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66a4ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66a4ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66a4ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cb8760 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88129 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6683ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6683fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6683ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6683ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6683ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cbeaf0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88130 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? do_sys_poll+0x21a/0x250
[Mon Mar 11 14:29:15 2024] ? chacha_block_generic+0x6f/0xb0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66837fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66837fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66837fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66837fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66837fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cc4e80 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88131 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6682ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6682ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6682ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6682ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6682ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0ccb210 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88132 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? sock_def_readable+0x3e/0xc0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? unix_dgram_sendmsg+0x5d3/0x9c0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66827fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66827fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66827fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66827fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66827fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cd15a0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88133 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? rmqueue_pcplist+0xda/0x210
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6681ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6681ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6681ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6681ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6681ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cd7930 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88134 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? __getblk_gfp+0x28/0xd0
[Mon Mar 11 14:29:15 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66817f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66817fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66817f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66817f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66817f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cddcc0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88135 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? kmem_cache_alloc+0x17d/0x340
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? find_idlest_group+0x2b2/0x530
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6680ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6680ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6680ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6680ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6680ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0ce4050 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88136 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? sock_def_readable+0x3e/0xc0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6663ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6663fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6663ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6663ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6663ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cea3e0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88137 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:15 2024] ? do_wp_page+0x381/0x540
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? current_time+0x2b/0xf0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66637fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66637fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66637fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66637fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66637fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cf0790 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88138 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? current_time+0x2b/0xf0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? _copy_to_iter+0x7e/0x630
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6662ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6662ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6662ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6662ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6662ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cf6b00 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88139 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:15 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66627fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66627fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66627fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66627fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66627fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0cfce90 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88140 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? sock_def_readable+0x3e/0xc0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6661ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6661ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6661ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6661ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6661ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d03220 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88141 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? asm_exc_page_fault+0x22/0x30
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x7c/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66617f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66617fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66617f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66617f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66617f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d095b0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88142 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? ext4_inode_csum+0x199/0x210 [ext4]
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6660ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6660ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6660ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6660ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6660ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d0f940 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88143 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? do_sys_poll+0x1d8/0x250
[Mon Mar 11 14:29:15 2024] ? kmem_cache_alloc+0x17d/0x340
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x7c/0x180
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6643ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6643fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6643ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6643ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6643ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d15cd0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88144 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66437fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66437fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66437fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66437fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66437fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d1c060 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88145 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? unix_dgram_sendmsg+0x5d3/0x9c0
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6642ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6642ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6642ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6642ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6642ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d223f0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88146 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] ? memcg_alloc_slab_cgroups+0x39/0xa0
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? avc_has_perm_noaudit+0x94/0x110
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66427fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66427fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66427fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66427fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66427fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d28780 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88147 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6641ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6641ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6641ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6641ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6641ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d2eb10 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88148 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66417f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66417fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66417f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66417f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66417f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d34ea0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88149 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:15 2024] ? xa_load+0x70/0xb0
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6640ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6640ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6640ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6640ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6640ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d3b230 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88150 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? find_idlest_group+0x2b2/0x530
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x7c/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6623ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6623fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6623ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6623ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6623ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d415c0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88151 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:15 2024] ? memcg_alloc_slab_cgroups+0x39/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66237fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66237fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66237fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66237fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66237fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d47950 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88152 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6622ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6622ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6622ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6622ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6622ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d4dce0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88153 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x151/0x180
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66227fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66227fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66227fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66227fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66227fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d54070 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88154 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? xas_load+0x9/0xa0
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6621ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6621ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6621ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6621ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6621ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d5a400 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88155 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:15 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66217f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66217fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66217f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66217f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66217f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d60790 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88156 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? chacha_block_generic+0x6f/0xb0
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6620ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6620ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6620ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6620ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6620ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d66b20 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88157 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? page_counter_uncharge+0x35/0x80
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? futex_wake+0x7c/0x180
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6603ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6603fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6603ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6603ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6603ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d6ceb0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88158 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66037fd8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66037fe528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66037fd8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66037fd900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66037fd900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d73240 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88159 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6602ffc8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6602ffd528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6602ffc8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6602ffc900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6602ffc900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d795d0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88160 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? xas_alloc+0x4b/0xd0
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66027fb8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66027fc528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66027fb8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66027fb900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66027fb900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d7f980 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88161 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? asm_exc_page_fault+0x22/0x30
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? kmem_cache_alloc+0x17d/0x340
[Mon Mar 11 14:29:15 2024] ? update_sg_wakeup_stats+0x78/0x3b0
[Mon Mar 11 14:29:15 2024] ? newidle_balance+0x2e5/0x400
[Mon Mar 11 14:29:15 2024] ? update_load_avg+0x7e/0x740
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:15 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:15 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:15 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:15 2024] ? __schedule+0x223/0x550
[Mon Mar 11 14:29:15 2024] ? timerqueue_del+0x2a/0x50
[Mon Mar 11 14:29:15 2024] ? __remove_hrtimer+0x39/0x90
[Mon Mar 11 14:29:15 2024] ? hrtimer_try_to_cancel.part.0+0x50/0xf0
[Mon Mar 11 14:29:15 2024] ? hrtimer_cancel+0x1d/0x40
[Mon Mar 11 14:29:15 2024] ? futex_wait+0x23e/0x260
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:15 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6601ffa8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6601ffb528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6601ffa8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6601ffa900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6601ffa900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d85cf0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88162 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? __find_get_block+0x1fb/0x370
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:15 2024] ? mod_objcg_state+0x195/0x300
[Mon Mar 11 14:29:15 2024] ? kmem_cache_alloc_lru+0x12f/0x2b0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? ep_poll_callback+0x25d/0x2a0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f66017f98a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f66017fa528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f66017f98f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f66017f9900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f66017f9900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d8c080 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88163 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? rmqueue+0x7d3/0xd40
[Mon Mar 11 14:29:15 2024] ? __ext4_handle_dirty_metadata+0x58/0x180 [ext4]
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f6600ff88a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f6600ff9528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f6600ff88f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f6600ff8900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f6600ff8900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d92410 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:automount state:S stack:0 pid:88164 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:15 2024] ? autoremove_wake_function+0x30/0x60
[Mon Mar 11 14:29:15 2024] ? __wake_up_common+0x75/0xa0
[Mon Mar 11 14:29:15 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __wake_up_sync_key+0x39/0x50
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? sock_def_readable+0x3e/0xc0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? unix_dgram_sendmsg+0x5d3/0x9c0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:15 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:15 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:15 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:15 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:15 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f66af3427fe
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007f65e3ffe8a0 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007f65e3fff528 RCX: 00007f66af3427fe
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007f65e3ffe8f8
[Mon Mar 11 14:29:15 2024] RBP: 00007f65e3ffe900 R08: 0000000000000008 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00007f65e3ffe900 R11: 0000000000000293 R12: 00007ffd9217b000
[Mon Mar 11 14:29:15 2024] R13: 0000000000000000 R14: 000055b9f0d987a0 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:atd state:S stack:0 pid:88110 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] __do_sys_pause+0x30/0x60
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f17b21184a7
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffeef973918 EFLAGS: 00000246 ORIG_RAX: 0000000000000022
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055ae3b68c0a0 RCX: 00007f17b21184a7
[Mon Mar 11 14:29:15 2024] RDX: 000055ae3b68c0a0 RSI: 0000000000000001 RDI: 0000000000000000
[Mon Mar 11 14:29:15 2024] RBP: 000055ae3b68937c R08: 0000000000000000 R09: 0000000000000078
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffeef973930
[Mon Mar 11 14:29:15 2024] R13: 0000000000000003 R14: 000055ae3bf1e7b0 R15: 000055ae3b68c0cc
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:agetty state:S stack:0 pid:88117 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:15 2024] ? up+0x12/0x60
[Mon Mar 11 14:29:15 2024] ? console_unlock+0xc8/0x320
[Mon Mar 11 14:29:15 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:15 2024] wait_woken+0x50/0x60
[Mon Mar 11 14:29:15 2024] n_tty_read+0x512/0x660
[Mon Mar 11 14:29:15 2024] ? __pfx_woken_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] tty_read+0x12c/0x220
[Mon Mar 11 14:29:15 2024] ? file_has_perm+0xc6/0xd0
[Mon Mar 11 14:29:15 2024] vfs_read+0x1e6/0x330
[Mon Mar 11 14:29:15 2024] ksys_read+0x5f/0xe0
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fb8efd3e882
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffec6777f08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007fb8eff5f6c0 RCX: 00007fb8efd3e882
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffec6777f7f RDI: 0000000000000000
[Mon Mar 11 14:29:15 2024] RBP: 000000000000006c R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffec6777f7f
[Mon Mar 11 14:29:15 2024] R13: 00007ffec6777fb0 R14: 0000000000000015 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:crond state:S stack:0 pid:88166 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] do_nanosleep+0x67/0x190
[Mon Mar 11 14:29:15 2024] hrtimer_nanosleep+0xbe/0x1a0
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] common_nsleep+0x40/0x50
[Mon Mar 11 14:29:15 2024] __x64_sys_clock_nanosleep+0xbc/0x130
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f546971393a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffcf602bda8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e6
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: fffffffffffffe98 RCX: 00007f546971393a
[Mon Mar 11 14:29:15 2024] RDX: 00007ffcf602bdc0 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:15 2024] RBP: 000000000000000a R08: 0000000000000000 R09: 0000000065ef1568
[Mon Mar 11 14:29:15 2024] R10: 00007ffcf602bdc0 R11: 0000000000000246 R12: 000000000000003c
[Mon Mar 11 14:29:15 2024] R13: 0000000001b2eb06 R14: 000000000000003b R15: 000055e5b1d47a58
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:tmux: server state:S stack:0 pid:91743 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:15 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:15 2024] ? zap_pmd_range.isra.0+0x10f/0x230
[Mon Mar 11 14:29:15 2024] ? zap_pte_range+0x3ec/0xac0
[Mon Mar 11 14:29:15 2024] ? unmap_page_range+0x2b9/0x4c0
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:15 2024] ? __blk_flush_plug+0xf1/0x150
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? ktime_get_ts64+0x49/0xf0
[Mon Mar 11 14:29:15 2024] __x64_sys_poll+0xa6/0x140
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7feaa0f426c7
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc39a31b08 EFLAGS: 00000246 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00005631b1e68c20 RCX: 00007feaa0f426c7
[Mon Mar 11 14:29:15 2024] RDX: 000000000036ee80 RSI: 0000000000000003 RDI: 00005631b1e68a30
[Mon Mar 11 14:29:15 2024] RBP: 0000000000000003 R08: 00007ffc39a31b80 R09: 00007feaa1234d20
[Mon Mar 11 14:29:15 2024] R10: 00007ffc39be9080 R11: 0000000000000246 R12: 00005631b1e68070
[Mon Mar 11 14:29:15 2024] R13: 00005631b1e68a30 R14: 000000000036ee80 R15: 0000000000000003
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:bash state:S stack:0 pid:91744 ppid:91743 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] do_wait+0x15b/0x2f0
[Mon Mar 11 14:29:15 2024] kernel_wait4+0xa6/0x140
[Mon Mar 11 14:29:15 2024] ? __pfx_child_wait_callback+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fafb8b182ca
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffcaac0e548 EFLAGS: 00000246 ORIG_RAX: 000000000000003d
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 0000000000016676 RCX: 00007fafb8b182ca
[Mon Mar 11 14:29:15 2024] RDX: 000000000000000a RSI: 00007ffcaac0e570 RDI: 00000000ffffffff
[Mon Mar 11 14:29:15 2024] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000a
[Mon Mar 11 14:29:15 2024] R13: 00007ffcaac0e5d0 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:nfsclients.sh state:S stack:0 pid:91766 ppid:91744 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] do_wait+0x15b/0x2f0
[Mon Mar 11 14:29:15 2024] kernel_wait4+0xa6/0x140
[Mon Mar 11 14:29:15 2024] ? __pfx_child_wait_callback+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f7e9d9182ca
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc688a3c08 EFLAGS: 00000246 ORIG_RAX: 000000000000003d
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000000000025d049 RCX: 00007f7e9d9182ca
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000000 RSI: 00007ffc688a3c30 RDI: 00000000ffffffff
[Mon Mar 11 14:29:15 2024] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:15 2024] R13: 00007ffc688a3c90 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.86.26.7 state:S stack:0 pid:93091 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd39970
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:tlsmgr state:S stack:0 pid:158515 ppid:4982 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f5a7794e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffe2b1ad418 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00007ffe2b1ad430 RCX: 00007f5a7794e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000064 RSI: 00007ffe2b1ad430 RDI: 000000000000000c
[Mon Mar 11 14:29:15 2024] RBP: 00007f5a781797d0 R08: 000055b209bc0750 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000017ed00 R11: 0000000000000246 R12: 00007f5a78179794
[Mon Mar 11 14:29:15 2024] R13: 00000000ffffffff R14: 000055b2085fcbd0 R15: 00007f5a78179794
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:nfsiod state:I stack:0 pid:179473 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] rescuer_thread+0x2ca/0x390
[Mon Mar 11 14:29:15 2024] ? __pfx_rescuer_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:sssd_kcm state:S stack:0 pid:179479 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7f811714e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007fff32ced548 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 00005599d32a6430 RCX: 00007f811714e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007fff32ced57c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 00005599d32a62a0 R08: 00000000000f423b R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000002710 R11: 0000000000000246 R12: 00005599d32a6580
[Mon Mar 11 14:29:15 2024] R13: 0000000000002710 R14: 00005599d32b6bd0 R15: 0000000000000001
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:316974 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000e1d4 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000e1d4 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:317495 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000005f8c R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 0000000000005f8c R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.86.26.7 state:S stack:0 pid:348298 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4072 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.27.1 state:S stack:0 pid:789171 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f407c R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.88.18.1 state:S stack:0 pid:3000371 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:3294561 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000007018 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 0000000000007018 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.20.2 state:S stack:0 pid:3797717 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dd R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.30.2 state:S stack:0 pid:185208 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 00000000000067a2 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 00000000000067a2 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:662850 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[134.58.56. state:S stack:0 pid:960684 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d8 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.19.5 state:S stack:0 pid:978979 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.27.1 state:S stack:0 pid:1045161 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41df R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.28.1 state:S stack:0 pid:1049691 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000009906 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 0000000000009906 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.31.1 state:S stack:0 pid:1059267 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 000000000000f29b R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 0000000000004a77 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 0000000000004a77 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:1292148 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000e17e R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000e17e R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.27.4 state:S stack:0 pid:1367201 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4064 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.91.30.5 state:S stack:0 pid:1834355 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000580b R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000580b R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/46:1 state:I stack:0 pid:2050654 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/18:1 state:I stack:0 pid:2072443 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/34:0 state:I stack:0 pid:2078485 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/27:2 state:I stack:0 pid:2144617 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/40:0 state:I stack:0 pid:2193065 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.27.3 state:S stack:0 pid:2193786 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? switch_fpu_return+0x4c/0xd0
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:15 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000e8e4a R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ceda R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000ceda R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.27.3 state:S stack:0 pid:2193788 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 000000000000658b R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000d30a R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000d30a R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.40.238. state:S stack:0 pid:2211016 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:15 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/5:2 state:I stack:0 pid:2242731 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:kworker/39:2 state:I stack:0 pid:2251290 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:15 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:15 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:15 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:15 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:15 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2279961 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:15 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:15 2024] RBP: 000055f87dd2aa20 R08: 00000000000f420c R09: 0000000000000000
[Mon Mar 11 14:29:15 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:15 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:15 2024] </TASK>
[Mon Mar 11 14:29:15 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2279969 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:15 2024] Call Trace:
[Mon Mar 11 14:29:15 2024] <TASK>
[Mon Mar 11 14:29:15 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:15 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:15 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:15 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:15 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:15 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:15 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:15 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:15 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:15 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:15 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:15 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:15 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:15 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:15 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:15 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:15 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:15 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41df R09: 0000000000000077
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2279987 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dd R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.5 state:S stack:0 pid:2292584 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2297566 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000049811 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000002456 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000002456 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2304037 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4064 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[134.58.56. state:S stack:0 pid:2310377 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.3 state:S stack:0 pid:2318500 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41c2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2322257 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2322491 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.88.4 state:S stack:0 pid:2322831 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000006d32 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000006d32 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2323430 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41da R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2323597 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dd R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.19.1 state:S stack:0 pid:2324043 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000aa95a R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ba53 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ba53 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.19.1 state:S stack:0 pid:2324056 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4072 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.8 state:S stack:0 pid:2325987 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 000000000002d516 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000002f9a R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000002f9a R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.8 state:S stack:0 pid:2326005 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000003864 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000003864 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2329756 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41be R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/6:1 state:I stack:0 pid:2330800 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2330921 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.5 state:S stack:0 pid:2331098 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_call_function+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41bc R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.4 state:S stack:0 pid:2334575 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41e2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2338596 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4121 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.228. state:S stack:0 pid:2338708 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00000000000003e7 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 00000000000003e7 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.22.1 state:S stack:0 pid:2339568 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2339621 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d5 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:samba-dcerpcd state:S stack:0 pid:2339807 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fe4e334e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc108b62d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000056040c137190 RCX: 00007fe4e334e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc108b630c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000056040c128640 R08: 00000000000e4eb8 R09: 000056040c1452d0
[Mon Mar 11 14:29:16 2024] R10: 00000000000003aa R11: 0000000000000246 R12: 000056040c137220
[Mon Mar 11 14:29:16 2024] R13: 00000000000003aa R14: 000056040c1423f0 R15: 000056040c10b950
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2339816 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? unix_stream_read_generic+0x28b/0x6c0
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? unix_stream_recvmsg+0x92/0xa0
[Mon Mar 11 14:29:16 2024] ? __pfx_unix_stream_read_actor+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f6df214e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffe24c77e08 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00005565e4644850 RCX: 00007f6df214e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000008 RSI: 00005565e4585680 RDI: 0000000000000019
[Mon Mar 11 14:29:16 2024] RBP: 7fffffffffffffff R08: 00005565e4585680 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000028
[Mon Mar 11 14:29:16 2024] R13: 0000000000000008 R14: 0000000000000004 R15: 00005565e46449e0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:samba-bgqd state:S stack:0 pid:2339818 ppid:2339816 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f6aff94e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fff4f039db8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00005588fecf7900 RCX: 00007f6aff94e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007fff4f039dec RDI: 0000000000000004
[Mon Mar 11 14:29:16 2024] RBP: 00005588fece1710 R08: 00000000000768d2 R09: 00007fff4f039550
[Mon Mar 11 14:29:16 2024] R10: 00000000000a8936 R11: 0000000000000246 R12: 00005588fecf7990
[Mon Mar 11 14:29:16 2024] R13: 00000000000a8936 R14: 00005588fed05cf0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2339825 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? __sys_recvfrom+0xa8/0x120
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb696b4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fff0751fc48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00005619fa7f5c60 RCX: 00007fb696b4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000008 RSI: 00005619fa8465b0 RDI: 0000000000000019
[Mon Mar 11 14:29:16 2024] RBP: 7fffffffffffffff R08: 00005619fa8465b0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000028
[Mon Mar 11 14:29:16 2024] R13: 0000000000000008 R14: 0000000000000004 R15: 00005619fa7f5df0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2340410 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4017 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.230. state:S stack:0 pid:2340528 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ce30 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ce30 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.3 state:S stack:0 pid:2341031 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41e0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2341353 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4155 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2341452 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d7 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.157. state:S stack:0 pid:2341830 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000009086 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 0000000000009086 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.142. state:S stack:0 pid:2342339 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.3 state:S stack:0 pid:2342764 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2343759 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000e91b R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000e91b R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2343951 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/35:0 state:I stack:0 pid:2344315 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/25:1 state:I stack:0 pid:2344356 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/32:0 state:I stack:0 pid:2344368 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events_power_efficient)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? neigh_managed_work+0x9a/0xb0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2344554 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2344564 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dd R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2346134 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.5 state:S stack:0 pid:2346299 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.8 state:S stack:0 pid:2346897 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41cc R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2347077 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2347080 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.2 state:S stack:0 pid:2347082 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41eb R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.88.9 state:S stack:0 pid:2347580 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4096 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2347638 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000b25e R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000b25e R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_winreg state:S stack:0 pid:2347737 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fafead4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffeb54e23c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000563370d7e770 RCX: 00007fafead4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffeb54e23fc RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 0000563370d6f2e0 R08: 0000563370dc2220 R09: 000000000000e0de
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 0000563370d7e800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007fafeb771db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2347868 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000001e79 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000001e79 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_winreg state:S stack:0 pid:2347933 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f00b774e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fff41efac18 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055faa0996770 RCX: 00007f00b774e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007fff41efac4c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055faa09872e0 R08: 000055faa09dc950 R09: 0000000000009dc2
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055faa0996800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007f00b8133db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.22.1 state:S stack:0 pid:2349497 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 000000000000c5d2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000c383 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000c383 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.232. state:S stack:0 pid:2351731 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:16 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ad58 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ad58 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.31.1 state:S stack:0 pid:2352076 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.157. state:S stack:0 pid:2352279 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.8 state:S stack:0 pid:2352500 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000e222 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000e222 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.7 state:S stack:0 pid:2352778 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4039 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.8 state:S stack:0 pid:2353738 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.229. state:S stack:0 pid:2355747 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.140. state:S stack:0 pid:2356004 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.2 state:S stack:0 pid:2362257 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4062 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.8 state:S stack:0 pid:2363822 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2364172 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2364174 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41c2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/36:1 state:I stack:0 pid:2364257 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/38:2 state:I stack:0 pid:2364258 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/30:0 state:D stack:0 pid:2364267 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: events delayed_fput
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_wbt_cleanup_cb+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:16 2024] rq_qos_wait+0xbb/0x130
[Mon Mar 11 14:29:16 2024] ? __pfx_rq_qos_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_wbt_inflight_cb+0x10/0x10
[Mon Mar 11 14:29:16 2024] wbt_wait+0x9c/0x100
[Mon Mar 11 14:29:16 2024] __rq_qos_throttle+0x20/0x40
[Mon Mar 11 14:29:16 2024] blk_mq_submit_bio+0x183/0x580
[Mon Mar 11 14:29:16 2024] __submit_bio_noacct+0x7e/0x1e0
[Mon Mar 11 14:29:16 2024] ext4_bio_write_page+0x198/0x490 [ext4]
[Mon Mar 11 14:29:16 2024] ? folio_clear_dirty_for_io+0x13d/0x1b0
[Mon Mar 11 14:29:16 2024] mpage_submit_page+0x5a/0x70 [ext4]
[Mon Mar 11 14:29:16 2024] mpage_map_and_submit_buffers+0x146/0x240 [ext4]
[Mon Mar 11 14:29:16 2024] mpage_map_and_submit_extent+0x56/0x300 [ext4]
[Mon Mar 11 14:29:16 2024] ext4_do_writepages+0x5f0/0x780 [ext4]
[Mon Mar 11 14:29:16 2024] ext4_writepages+0xac/0x150 [ext4]
[Mon Mar 11 14:29:16 2024] do_writepages+0xcc/0x1d0
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? raw_spin_rq_lock_nested+0x19/0x80
[Mon Mar 11 14:29:16 2024] filemap_fdatawrite_wbc+0x66/0x90
[Mon Mar 11 14:29:16 2024] __filemap_fdatawrite_range+0x54/0x80
[Mon Mar 11 14:29:16 2024] ext4_release_file+0x70/0xb0 [ext4]
[Mon Mar 11 14:29:16 2024] __fput+0x91/0x250
[Mon Mar 11 14:29:16 2024] delayed_fput+0x1f/0x30
[Mon Mar 11 14:29:16 2024] process_one_work+0x1e2/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] worker_thread+0x50/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.4 state:S stack:0 pid:2364892 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41bc R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.4 state:S stack:0 pid:2364895 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41fe R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.131. state:S stack:0 pid:2365796 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f406c R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.86.26.7 state:S stack:0 pid:2366328 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_call_function_single+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d9 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2367684 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d6 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.46.198. state:S stack:0 pid:2368564 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.2 state:S stack:0 pid:2370451 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4072 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.6 state:S stack:0 pid:2371619 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00000000000043b3 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 00000000000043b3 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2371675 ppid:4412 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:16 2024] ? unix_poll+0xf4/0x100
[Mon Mar 11 14:29:16 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:16 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:16 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:16 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:16 2024] ? __alloc_skb+0x8e/0x1d0
[Mon Mar 11 14:29:16 2024] ? _copy_to_iter+0x1d4/0x630
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x47/0xd0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __skb_datagram_iter+0x79/0x2e0
[Mon Mar 11 14:29:16 2024] ? __pfx_simple_copy_to_iter+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:16 2024] ? kmalloc_trace+0x25/0xa0
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_deferred_put_nlk_sk+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:16 2024] ? __dentry_kill+0x13a/0x180
[Mon Mar 11 14:29:16 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:16 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:16 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:16 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fdd8e1426c7
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc4bb859f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ffc4bb85a10 RCX: 00007fdd8e1426c7
[Mon Mar 11 14:29:16 2024] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007ffc4bb85a10
[Mon Mar 11 14:29:16 2024] RBP: 000055bad15460c0 R08: 0000000000000001 R09: 00007ffc4bb7ed28
[Mon Mar 11 14:29:16 2024] R10: 0000000000000040 R11: 0000000000000246 R12: 000055bad15460c0
[Mon Mar 11 14:29:16 2024] R13: 000055bad20ed680 R14: 000055bad20ec910 R15: 00000000ffffffff
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2371682 ppid:2371675 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? n_tty_poll+0x1cb/0x1f0
[Mon Mar 11 14:29:16 2024] ? tty_poll+0x71/0xa0
[Mon Mar 11 14:29:16 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:16 2024] ? __bond_start_xmit+0x93/0x430 [bonding]
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __mod_timer+0x286/0x3d0
[Mon Mar 11 14:29:16 2024] ? sk_reset_timer+0x14/0x60
[Mon Mar 11 14:29:16 2024] ? tcp_schedule_loss_probe.part.0+0x12c/0x1a0
[Mon Mar 11 14:29:16 2024] ? tcp_write_xmit+0x6f5/0xaa0
[Mon Mar 11 14:29:16 2024] ? __tcp_push_pending_frames+0x32/0xf0
[Mon Mar 11 14:29:16 2024] ? tcp_sendmsg_locked+0xaae/0xc20
[Mon Mar 11 14:29:16 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:16 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:16 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fdd8e144f64
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc4bb856b0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fdd8e144f64
[Mon Mar 11 14:29:16 2024] RDX: 000055bad2111b80 RSI: 000055bad213f450 RDI: 000000000000000f
[Mon Mar 11 14:29:16 2024] RBP: 000055bad213f450 R08: 0000000000000000 R09: 00007ffc4bb856f0
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000008
[Mon Mar 11 14:29:16 2024] R13: 000055bad20ef430 R14: 000055bad2111b80 R15: 000055bad20ed680
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:bash state:S stack:0 pid:2371683 ppid:2371682 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? n_tty_poll+0x1cb/0x1f0
[Mon Mar 11 14:29:16 2024] ? tty_poll+0x71/0xa0
[Mon Mar 11 14:29:16 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:16 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:16 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:16 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:16 2024] ? n_tty_write+0x267/0x3c0
[Mon Mar 11 14:29:16 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:16 2024] ? do_tty_write+0x1a4/0x250
[Mon Mar 11 14:29:16 2024] ? __pfx_n_tty_write+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? file_has_perm+0xc6/0xd0
[Mon Mar 11 14:29:16 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:16 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:16 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7ff463344f64
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffdf9661b90 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ff4633faaa0 RCX: 00007ff463344f64
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000000 RSI: 00007ffdf9661c20 RDI: 0000000000000001
[Mon Mar 11 14:29:16 2024] RBP: 00007ffdf9661c20 R08: 0000000000000000 R09: 00007ffdf9661bd0
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:16 2024] R13: 0000000000000000 R14: 0000000000000000 R15: 000055745bd3f580
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.8 state:S stack:0 pid:2372510 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4073 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/14:1 state:I stack:0 pid:2374246 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.6 state:S stack:0 pid:2376116 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000a242 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000a242 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2376851 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4078 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.147. state:S stack:0 pid:2377087 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2377277 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4209 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.88.1 state:S stack:0 pid:2377288 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41e2 R09: 0000000000000077
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.138. state:S stack:0 pid:2377602 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d6 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.140. state:S stack:0 pid:2379578 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? unix_stream_read_generic+0x28b/0x6c0
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a8d30 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87ddfa890 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000008 RSI: 000055f87de08650 RDI: 0000000000000022
[Mon Mar 11 14:29:16 2024] RBP: 7fffffffffffffff R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00000000ffffffff R11: 0000000000000293 R12: 0000000000000028
[Mon Mar 11 14:29:16 2024] R13: 0000000000000008 R14: 0000000000000004 R15: 000055f87ddfaa20
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.140. state:S stack:0 pid:2381553 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f402f R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.142. state:S stack:0 pid:2381864 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d8 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.133. state:S stack:0 pid:2384413 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/45:0 state:I stack:0 pid:2385965 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:NFSv4 callback state:I stack:0 pid:2387784 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] nfs41_callback_svc+0x186/0x190 [nfsv4]
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_nfs41_callback_svc+0x10/0x10 [nfsv4]
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.5 state:S stack:0 pid:2388610 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/7:0 state:I stack:0 pid:2391347 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/17:1 state:I stack:0 pid:2391348 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/13:0 state:I stack:0 pid:2391582 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.2 state:S stack:0 pid:2393076 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2395603 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41f2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.143. state:S stack:0 pid:2396242 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4110 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.31.1 state:S stack:0 pid:2396394 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41bc R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.8 state:S stack:0 pid:2397415 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:16 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000020aef R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000cba6 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000cba6 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/16:2 state:I stack:0 pid:2397744 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2398554 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.144. state:S stack:0 pid:2399030 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dc R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/1:1 state:I stack:0 pid:2399485 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events_power_efficient)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? neigh_managed_work+0x9a/0xb0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/3:2 state:I stack:0 pid:2399492 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2400150 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41df R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.137. state:S stack:0 pid:2400165 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.6 state:S stack:0 pid:2400787 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4047 R09: 0000000000000077
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/26:2 state:I stack:0 pid:2401639 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/10:0 state:I stack:0 pid:2401926 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/19:1 state:I stack:0 pid:2402658 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.135. state:S stack:0 pid:2403402 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000a4475 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000008f41 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000008f41 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.234. state:S stack:0 pid:2404568 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41df R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.88.16.7 state:S stack:0 pid:2407419 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/41:1 state:I stack:0 pid:2408233 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/28:0 state:I stack:0 pid:2409584 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/37:1 state:I stack:0 pid:2411099 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.39.226. state:S stack:0 pid:2411168 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000d3a9 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000d3a9 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.4 state:S stack:0 pid:2412672 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f3f59 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/43:2 state:I stack:0 pid:2414694 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2415574 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/20:2 state:I stack:0 pid:2415685 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/9:1 state:I stack:0 pid:2418267 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.6 state:S stack:0 pid:2419992 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2419997 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41cf R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.153. state:S stack:0 pid:2423410 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/2:2 state:I stack:0 pid:2424238 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.7 state:S stack:0 pid:2425212 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41ce R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/21:0 state:I stack:0 pid:2425747 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/42:0 state:I stack:0 pid:2427884 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.7 state:S stack:0 pid:2428733 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/8:2 state:I stack:0 pid:2431582 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.9 state:S stack:0 pid:2432065 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41da R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.18.9 state:S stack:0 pid:2432071 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41c1 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.31.1 state:S stack:0 pid:2432547 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4074 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/15:1 state:I stack:0 pid:2432551 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/0:0 state:I stack:0 pid:2433691 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (lpfc_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.31.4 state:S stack:0 pid:2433848 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41dc R09: 0000000000000077
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.231. state:S stack:0 pid:2433966 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.19.4 state:S stack:0 pid:2434016 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000af6ab R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000008f6f R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000008f6f R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/33:1 state:I stack:0 pid:2434371 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/31:2 state:I stack:0 pid:2436168 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/29:0 state:I stack:0 pid:2436183 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2436431 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41df R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/11:1 state:R running task stack:0 pid:2440048 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2440119 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? mntput_no_expire+0x4a/0x250
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000077
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.2 state:S stack:0 pid:2440138 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.46.163. state:S stack:0 pid:2441678 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __do_softirq+0x16a/0x2ac
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f3fe4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2441978 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:4 state:I stack:0 pid:2442472 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (dm-thin)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/12:0 state:I stack:0 pid:2442877 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (kdmflush/253:33)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.88.4 state:S stack:0 pid:2443657 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41cf R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.138. state:S stack:0 pid:2445114 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d2 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:pickup state:S stack:0 pid:2449277 ppid:4982 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f32d434e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffff46c1188 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ffff46c11a0 RCX: 00007f32d434e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000064 RSI: 00007ffff46c11a0 RDI: 000000000000000a
[Mon Mar 11 14:29:16 2024] RBP: 00007f32d45a57d0 R08: 0000563eee563940 R09: 0000000000000078
[Mon Mar 11 14:29:16 2024] R10: 00000000000186a0 R11: 0000000000000246 R12: 00007f32d45a5794
[Mon Mar 11 14:29:16 2024] R13: 00000000ffffffff R14: 0000563eec931080 R15: 00007f32d45a5794
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:3 state:D stack:0 pid:2451130 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: nfsd4_callbacks nfsd4_run_cb_work [nfsd]
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_timeout+0x88/0x160
[Mon Mar 11 14:29:16 2024] ? __pfx_process_timeout+0x10/0x10
[Mon Mar 11 14:29:16 2024] rpc_shutdown_client+0xb3/0x150 [sunrpc]
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] nfsd4_process_cb_update+0x3e/0x260 [nfsd]
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? raw_spin_rq_lock_nested+0x19/0x80
[Mon Mar 11 14:29:16 2024] ? newidle_balance+0x26e/0x400
[Mon Mar 11 14:29:16 2024] ? pick_next_task_fair+0x41/0x500
[Mon Mar 11 14:29:16 2024] ? put_prev_task_fair+0x1e/0x40
[Mon Mar 11 14:29:16 2024] ? pick_next_task+0x861/0x950
[Mon Mar 11 14:29:16 2024] ? __update_idle_core+0x23/0xc0
[Mon Mar 11 14:29:16 2024] ? __switch_to_asm+0x3a/0x80
[Mon Mar 11 14:29:16 2024] ? finish_task_switch.isra.0+0x8c/0x2a0
[Mon Mar 11 14:29:16 2024] nfsd4_run_cb_work+0x9f/0x150 [nfsd]
[Mon Mar 11 14:29:16 2024] process_one_work+0x1e2/0x3b0
[Mon Mar 11 14:29:16 2024] worker_thread+0x50/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:1 state:I stack:0 pid:2451949 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events_unbound)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/4:2 state:I stack:0 pid:2452213 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2452452 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41da R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/47:2 state:I stack:0 pid:2452570 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.8 state:S stack:0 pid:2453051 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/24:2 state:I stack:0 pid:2453510 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.140. state:S stack:0 pid:2453653 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.233. state:D stack:0 pid:2455152 ppid:4757 flags:0x00004002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:16 2024] bit_wait_io+0xd/0x60
[Mon Mar 11 14:29:16 2024] __wait_on_bit+0x48/0x150
[Mon Mar 11 14:29:16 2024] ? __pfx_bit_wait_io+0x10/0x10
[Mon Mar 11 14:29:16 2024] out_of_line_wait_on_bit+0x92/0xb0
[Mon Mar 11 14:29:16 2024] ? __pfx_wake_bit_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] ext4_read_bh+0x84/0x90 [ext4]
[Mon Mar 11 14:29:16 2024] __ext4_sb_bread_gfp.isra.0+0x5f/0x80 [ext4]
[Mon Mar 11 14:29:16 2024] ext4_xattr_block_get+0x5f/0x1d0 [ext4]
[Mon Mar 11 14:29:16 2024] ext4_xattr_get+0xb1/0xd0 [ext4]
[Mon Mar 11 14:29:16 2024] __vfs_getxattr+0x50/0x70
[Mon Mar 11 14:29:16 2024] get_vfs_caps_from_disk+0x70/0x210
[Mon Mar 11 14:29:16 2024] ? walk_component+0x70/0x1d0
[Mon Mar 11 14:29:16 2024] audit_copy_inode+0x99/0xd0
[Mon Mar 11 14:29:16 2024] filename_lookup+0x17b/0x1d0
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x47/0xd0
[Mon Mar 11 14:29:16 2024] ? _copy_to_user+0x1a/0x30
[Mon Mar 11 14:29:16 2024] ? do_getxattr+0xd3/0x160
[Mon Mar 11 14:29:16 2024] ? path_get+0x11/0x30
[Mon Mar 11 14:29:16 2024] vfs_statx+0x8d/0x170
[Mon Mar 11 14:29:16 2024] vfs_fstatat+0x54/0x70
[Mon Mar 11 14:29:16 2024] __do_sys_newfstatat+0x26/0x60
[Mon Mar 11 14:29:16 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:16 2024] ? __audit_syscall_entry+0xef/0x140
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d3debe
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9048 EFLAGS: 00000246 ORIG_RAX: 0000000000000106
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fedb7d3debe
[Mon Mar 11 14:29:16 2024] RDX: 00007ffc284a9050 RSI: 000055f87dd97493 RDI: 0000000000000023
[Mon Mar 11 14:29:16 2024] RBP: 00007ffc284a9150 R08: 0000000000000000 R09: 0000000000000001
[Mon Mar 11 14:29:16 2024] R10: 0000000000000100 R11: 0000000000000246 R12: 00007ffc284a9150
[Mon Mar 11 14:29:16 2024] R13: 00007ffc284a9050 R14: 00007ffc284a9390 R15: 000055f87dd97480
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.146. state:S stack:0 pid:2456493 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4107 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2457400 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41c4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2457742 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4067 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.9 state:S stack:0 pid:2458024 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000b8d4d R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000e586 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000e586 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/22:2 state:I stack:0 pid:2458065 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.8 state:S stack:0 pid:2459178 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.136. state:S stack:0 pid:2459316 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41ad R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.28.1 state:S stack:0 pid:2461553 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000006126 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 0000000000006126 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:3 state:I stack:0 pid:2461839 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:79)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.20.1 state:S stack:0 pid:2462719 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:3 state:I stack:0 pid:2462779 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:77)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:7 state:I stack:0 pid:2463319 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events_unbound)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:10 state:I stack:0 pid:2463433 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (writeback)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.33.88.9 state:S stack:0 pid:2464105 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000926f R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000926f R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:12 state:I stack:0 pid:2464235 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:0)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.46.159. state:S stack:0 pid:2465617 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41c4 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:1 state:I stack:0 pid:2466032 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (ext4-rsv-conversion)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:5 state:I stack:0 pid:2466033 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (bond0)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.30.8 state:S stack:0 pid:2468009 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000293 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:17 state:D stack:0 pid:2468089 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: nfsd4 laundromat_main [nfsd]
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_timeout+0x11f/0x160
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] __wait_for_common+0x90/0x1d0
[Mon Mar 11 14:29:16 2024] ? __pfx_schedule_timeout+0x10/0x10
[Mon Mar 11 14:29:16 2024] __flush_workqueue+0x13a/0x3f0
[Mon Mar 11 14:29:16 2024] nfsd4_shutdown_callback+0x49/0x120 [nfsd]
[Mon Mar 11 14:29:16 2024] ? nfsd4_return_all_client_layouts+0xc4/0xf0 [nfsd]
[Mon Mar 11 14:29:16 2024] ? nfsd4_shutdown_copy+0x68/0xc0 [nfsd]
[Mon Mar 11 14:29:16 2024] __destroy_client+0x1f3/0x290 [nfsd]
[Mon Mar 11 14:29:16 2024] nfs4_process_client_reaplist+0xa1/0x110 [nfsd]
[Mon Mar 11 14:29:16 2024] nfs4_laundromat+0x126/0x6e0 [nfsd]
[Mon Mar 11 14:29:16 2024] laundromat_main+0x16/0x40 [nfsd]
[Mon Mar 11 14:29:16 2024] process_one_work+0x1e2/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] worker_thread+0x50/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/23:0 state:I stack:0 pid:2469472 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.19.5 state:S stack:0 pid:2470233 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000d7b4 R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000d7b4 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:2 state:I stack:0 pid:2470473 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events_unbound)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/32:1 state:I stack:0 pid:2470837 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:2 state:I stack:0 pid:2470993 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (bond0)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/15:2 state:I stack:0 pid:2471252 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/43:1 state:I stack:0 pid:2471690 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:0 state:I stack:0 pid:2471807 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:0)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/39:0 state:I stack:0 pid:2471904 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/44:2 state:I stack:0 pid:2472062 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.27.1 state:S stack:0 pid:2472086 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4037 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/46:2 state:I stack:0 pid:2472151 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/36:0 state:I stack:0 pid:2472450 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.140. state:S stack:0 pid:2472699 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41d9 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u96:6 state:I stack:0 pid:2472989 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (dm-thin)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/12:2 state:I stack:0 pid:2473001 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (lpfc_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:0 state:I stack:0 pid:2473005 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rpciod)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/2:1 state:I stack:0 pid:2473316 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/45:2 state:I stack:0 pid:2473358 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/25:2 state:I stack:0 pid:2473359 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/30:1 state:I stack:0 pid:2473379 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:1 state:I stack:0 pid:2473403 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:79)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/27:0 state:I stack:0 pid:2473569 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.87.29.1 state:S stack:0 pid:2473572 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f4107 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/4:0 state:I stack:0 pid:2473634 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/15:0 state:I stack:0 pid:2473641 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __rcu_report_exp_rnp+0x77/0xc0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/22:1 state:I stack:0 pid:2474143 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/42:2 state:I stack:0 pid:2474171 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/18:2 state:I stack:0 pid:2474270 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/8:1 state:I stack:0 pid:2474342 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/47:1 state:I stack:0 pid:2474397 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/26:1 state:I stack:0 pid:2474460 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/44:0 state:I stack:0 pid:2474523 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/23:1 state:I stack:0 pid:2474524 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/28:2 state:I stack:0 pid:2474525 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/37:0 state:I stack:0 pid:2474527 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/11:0 state:I stack:0 pid:2474533 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/31:0 state:I stack:0 pid:2474534 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/33:2 state:I stack:0 pid:2474557 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/9:2 state:I stack:0 pid:2474561 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2474602 ppid:4412 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:16 2024] ? unix_poll+0xf4/0x100
[Mon Mar 11 14:29:16 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:16 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:16 2024] ? remove_wait_queue+0x20/0x60
[Mon Mar 11 14:29:16 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:16 2024] ? skb_queue_tail+0x1b/0x50
[Mon Mar 11 14:29:16 2024] ? sock_def_readable+0x10/0xc0
[Mon Mar 11 14:29:16 2024] ? _copy_to_iter+0x1d4/0x630
[Mon Mar 11 14:29:16 2024] ? _copy_to_iter+0x1d4/0x630
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __skb_datagram_iter+0x79/0x2e0
[Mon Mar 11 14:29:16 2024] ? __pfx_simple_copy_to_iter+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __kmem_cache_alloc_node+0x1c7/0x2d0
[Mon Mar 11 14:29:16 2024] ? __audit_sockaddr+0x5f/0x80
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_deferred_put_nlk_sk+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:16 2024] ? __rseq_handle_notify_resume+0x26/0xb0
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:16 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb3535426c7
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffce5622d88 EFLAGS: 00000246 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ffce5622da0 RCX: 00007fb3535426c7
[Mon Mar 11 14:29:16 2024] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007ffce5622da0
[Mon Mar 11 14:29:16 2024] RBP: 000055c449fb00c0 R08: 0000000000000001 R09: 00007ffce561c0b8
[Mon Mar 11 14:29:16 2024] R10: 0000000000000040 R11: 0000000000000246 R12: 000055c449fb00c0
[Mon Mar 11 14:29:16 2024] R13: 000055c44a8fd2b0 R14: 000055c44a8e6650 R15: 00000000ffffffff
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2474605 ppid:2474602 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? dequeue_skb+0x7d/0x500
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? n_tty_poll+0x1cb/0x1f0
[Mon Mar 11 14:29:16 2024] ? tty_poll+0x71/0xa0
[Mon Mar 11 14:29:16 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __mod_timer+0x286/0x3d0
[Mon Mar 11 14:29:16 2024] ? sk_reset_timer+0x14/0x60
[Mon Mar 11 14:29:16 2024] ? tcp_schedule_loss_probe.part.0+0x12c/0x1a0
[Mon Mar 11 14:29:16 2024] ? tcp_write_xmit+0x6f5/0xaa0
[Mon Mar 11 14:29:16 2024] ? __tcp_push_pending_frames+0x32/0xf0
[Mon Mar 11 14:29:16 2024] ? tcp_sendmsg_locked+0xaae/0xc20
[Mon Mar 11 14:29:16 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:16 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:16 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb353544f64
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffce5622a40 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb353544f64
[Mon Mar 11 14:29:16 2024] RDX: 000055c44a929570 RSI: 000055c44a94b810 RDI: 000000000000000f
[Mon Mar 11 14:29:16 2024] RBP: 000055c44a94b810 R08: 0000000000000000 R09: 00007ffce5622a80
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000008
[Mon Mar 11 14:29:16 2024] R13: 000055c44a8fefe0 R14: 000055c44a929570 R15: 000055c44a8fd2b0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:bash state:S stack:0 pid:2474608 ppid:2474605 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] do_wait+0x15b/0x2f0
[Mon Mar 11 14:29:16 2024] kernel_wait4+0xa6/0x140
[Mon Mar 11 14:29:16 2024] ? __pfx_child_wait_callback+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f8d3c3182ca
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcf6dd9a98 EFLAGS: 00000246 ORIG_RAX: 000000000000003d
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000000000025c49f RCX: 00007f8d3c3182ca
[Mon Mar 11 14:29:16 2024] RDX: 000000000000000a RSI: 00007ffcf6dd9ac0 RDI: 00000000ffffffff
[Mon Mar 11 14:29:16 2024] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000a
[Mon Mar 11 14:29:16 2024] R13: 00007ffcf6dd9b20 R14: 0000000000000000 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/14:2 state:R running task stack:0 pid:2474614 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/20:0 state:I stack:0 pid:2474728 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/21:2 state:I stack:0 pid:2474756 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __rcu_report_exp_rnp+0x77/0xc0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/17:0 state:I stack:0 pid:2474774 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/7:1 state:I stack:0 pid:2474775 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (lpfc_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/29:2 state:I stack:0 pid:2474780 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/3:1 state:I stack:0 pid:2474822 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/24:1 state:I stack:0 pid:2474838 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/1:2 state:I stack:0 pid:2474995 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/5:0 state:I stack:0 pid:2475005 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:journalctl state:S stack:0 pid:2475167 ppid:2474608 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? n_tty_poll+0x1cb/0x1f0
[Mon Mar 11 14:29:16 2024] ? tty_poll+0x71/0xa0
[Mon Mar 11 14:29:16 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:16 2024] ? select_idle_sibling+0x28/0x430
[Mon Mar 11 14:29:16 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:16 2024] ? wake_affine+0x62/0x1f0
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? __smp_call_single_queue+0x93/0x120
[Mon Mar 11 14:29:16 2024] ? ttwu_queue_wakelist+0xf2/0x110
[Mon Mar 11 14:29:16 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? n_tty_write+0x267/0x3c0
[Mon Mar 11 14:29:16 2024] ? __wake_up+0x40/0x60
[Mon Mar 11 14:29:16 2024] ? do_tty_write+0x1a4/0x250
[Mon Mar 11 14:29:16 2024] ? __pfx_n_tty_write+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? file_has_perm+0xc6/0xd0
[Mon Mar 11 14:29:16 2024] ? file_tty_write.constprop.0+0x98/0xc0
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f027814279f
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fffd52e26b0 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f027814279f
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 00007fffd52e2910
[Mon Mar 11 14:29:16 2024] RBP: 00007fffd52e2910 R08: 0000000000000008 R09: 00007fffd52e2670
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000003eee
[Mon Mar 11 14:29:16 2024] R13: 0000000000000000 R14: 00007fffd52e27d0 R15: 000055ae95ac4ad0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/41:0 state:I stack:0 pid:2475205 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/16:1 state:I stack:0 pid:2475250 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:4 state:I stack:0 pid:2475262 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (flush-253:77)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/19:0 state:I stack:0 pid:2475271 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/0:2 state:I stack:0 pid:2475316 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/34:2 state:I stack:0 pid:2475317 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/10:2 state:I stack:0 pid:2475318 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/40:1 state:I stack:0 pid:2475325 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_par_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/6:0 state:I stack:0 pid:2475365 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/46:0 state:I stack:0 pid:2475460 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/35:1 state:I stack:0 pid:2475663 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/38:1 state:I stack:0 pid:2475692 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/13:2 state:I stack:0 pid:2475727 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/32:2 state:I stack:0 pid:2475859 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/30:2 state:I stack:0 pid:2475882 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/45:1 state:I stack:0 pid:2476033 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/36:2 state:I stack:0 pid:2476104 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/27:1 state:I stack:0 pid:2476181 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/12:1 state:I stack:0 pid:2476314 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/25:0 state:I stack:0 pid:2476365 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2476431 ppid:4412 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? add_wait_queue+0x65/0xa0
[Mon Mar 11 14:29:16 2024] ? unix_poll+0xf4/0x100
[Mon Mar 11 14:29:16 2024] ? sock_poll+0x4c/0xe0
[Mon Mar 11 14:29:16 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:16 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:16 2024] ? skb_queue_tail+0x1b/0x50
[Mon Mar 11 14:29:16 2024] ? _copy_to_iter+0x1d4/0x630
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x47/0xd0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __audit_sockaddr+0x5f/0x80
[Mon Mar 11 14:29:16 2024] ? kmalloc_trace+0x25/0xa0
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:16 2024] ? _copy_to_user+0x1a/0x30
[Mon Mar 11 14:29:16 2024] ? move_addr_to_user+0x4b/0xe0
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_deferred_put_nlk_sk+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rcu_nocb_try_bypass+0x5e/0x460
[Mon Mar 11 14:29:16 2024] ? __pfx_inode_free_by_rcu+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? rseq_get_rseq_cs+0x1d/0x240
[Mon Mar 11 14:29:16 2024] ? __dentry_kill+0x13a/0x180
[Mon Mar 11 14:29:16 2024] ? rseq_ip_fixup+0x6e/0x1a0
[Mon Mar 11 14:29:16 2024] ? __call_rcu_common.constprop.0+0x117/0x2b0
[Mon Mar 11 14:29:16 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:16 2024] __x64_sys_poll+0x39/0x140
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f62089426c7
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc2b4f3f88 EFLAGS: 00000246 ORIG_RAX: 0000000000000007
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ffc2b4f3fa0 RCX: 00007f62089426c7
[Mon Mar 11 14:29:16 2024] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007ffc2b4f3fa0
[Mon Mar 11 14:29:16 2024] RBP: 000055abfba6f0c0 R08: 0000000000000001 R09: 00007ffc2b4ed2b8
[Mon Mar 11 14:29:16 2024] R10: 0000000000000040 R11: 0000000000000246 R12: 000055abfba6f0c0
[Mon Mar 11 14:29:16 2024] R13: 000055abfbe944d0 R14: 000055abfbe8aad0 R15: 00000000ffffffff
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sshd state:S stack:0 pid:2476434 ppid:2476431 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x10f/0x120
[Mon Mar 11 14:29:16 2024] ? n_tty_poll+0x1cb/0x1f0
[Mon Mar 11 14:29:16 2024] ? tty_poll+0x71/0xa0
[Mon Mar 11 14:29:16 2024] do_select+0x69e/0x7c0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? drain_stock+0x5e/0x90
[Mon Mar 11 14:29:16 2024] ? refill_stock+0x7e/0x90
[Mon Mar 11 14:29:16 2024] ? cgroup_rstat_updated+0x42/0xd0
[Mon Mar 11 14:29:16 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:16 2024] ? drain_obj_stock+0xb6/0x2a0
[Mon Mar 11 14:29:16 2024] ? __mod_memcg_state+0x63/0xb0
[Mon Mar 11 14:29:16 2024] ? memcg_slab_post_alloc_hook+0x181/0x250
[Mon Mar 11 14:29:16 2024] ? try_to_wake_up+0x64/0x5d0
[Mon Mar 11 14:29:16 2024] ? kmem_cache_alloc+0x17d/0x340
[Mon Mar 11 14:29:16 2024] ? complete_signal+0x107/0x300
[Mon Mar 11 14:29:16 2024] core_sys_select+0x1a0/0x3b0
[Mon Mar 11 14:29:16 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:16 2024] do_pselect.constprop.0+0xca/0x170
[Mon Mar 11 14:29:16 2024] __x64_sys_pselect6+0x39/0x70
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f6208944f64
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc2b4f3c40 EFLAGS: 00000246 ORIG_RAX: 000000000000010e
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6208944f64
[Mon Mar 11 14:29:16 2024] RDX: 000055abfbeb2c00 RSI: 000055abfbee1840 RDI: 000000000000000f
[Mon Mar 11 14:29:16 2024] RBP: 000055abfbee1840 R08: 0000000000000000 R09: 00007ffc2b4f3c80
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000008
[Mon Mar 11 14:29:16 2024] R13: 000055abfbe962a0 R14: 000055abfbeb2c00 R15: 000055abfbe944d0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:bash state:R running task stack:0 pid:2476436 ppid:2476434 flags:0x0000400e
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] sched_show_task.cold+0xca/0xe4
[Mon Mar 11 14:29:16 2024] show_state_filter+0x7f/0xe0
[Mon Mar 11 14:29:16 2024] sysrq_handle_showstate+0xc/0x20
[Mon Mar 11 14:29:16 2024] __handle_sysrq.cold+0x40/0x11f
[Mon Mar 11 14:29:16 2024] write_sysrq_trigger+0x24/0x40
[Mon Mar 11 14:29:16 2024] proc_reg_write+0x53/0xa0
[Mon Mar 11 14:29:16 2024] vfs_write+0xe4/0x410
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? auditd_test_task+0x3c/0x50
[Mon Mar 11 14:29:16 2024] ksys_write+0x5f/0xe0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f2ac893e927
[Mon Mar 11 14:29:16 2024] Code: 0b 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fff07fc0038 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f2ac893e927
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000002 RSI: 000055fdc2dc7490 RDI: 0000000000000001
[Mon Mar 11 14:29:16 2024] RBP: 000055fdc2dc7490 R08: 0000000000000000 R09: 00007f2ac89b14e0
[Mon Mar 11 14:29:16 2024] R10: 00007f2ac89b13e0 R11: 0000000000000246 R12: 0000000000000002
[Mon Mar 11 14:29:16 2024] R13: 00007f2ac89fb780 R14: 0000000000000002 R15: 00007f2ac89f69e0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u98:2 state:D stack:0 pid:2476491 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: writeback wb_workfn (flush-253:96)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_wbt_cleanup_cb+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] io_schedule+0x42/0x70
[Mon Mar 11 14:29:16 2024] rq_qos_wait+0xbb/0x130
[Mon Mar 11 14:29:16 2024] ? __pfx_rq_qos_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_wbt_inflight_cb+0x10/0x10
[Mon Mar 11 14:29:16 2024] wbt_wait+0x9c/0x100
[Mon Mar 11 14:29:16 2024] __rq_qos_throttle+0x20/0x40
[Mon Mar 11 14:29:16 2024] blk_mq_submit_bio+0x183/0x580
[Mon Mar 11 14:29:16 2024] __submit_bio_noacct+0x7e/0x1e0
[Mon Mar 11 14:29:16 2024] submit_bh_wbc+0x115/0x140
[Mon Mar 11 14:29:16 2024] __block_write_full_page+0x217/0x500
[Mon Mar 11 14:29:16 2024] ? __pfx_end_buffer_async_write+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? __pfx_blkdev_get_block+0x10/0x10
[Mon Mar 11 14:29:16 2024] __writepage+0x17/0x70
[Mon Mar 11 14:29:16 2024] write_cache_pages+0x179/0x4c0
[Mon Mar 11 14:29:16 2024] ? __pfx___writepage+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_writepages+0x129/0x1d0
[Mon Mar 11 14:29:16 2024] ? find_busiest_group+0x11d/0x240
[Mon Mar 11 14:29:16 2024] __writeback_single_inode+0x41/0x270
[Mon Mar 11 14:29:16 2024] writeback_sb_inodes+0x209/0x4a0
[Mon Mar 11 14:29:16 2024] __writeback_inodes_wb+0x4c/0xe0
[Mon Mar 11 14:29:16 2024] wb_writeback+0x1d7/0x2d0
[Mon Mar 11 14:29:16 2024] wb_do_writeback+0x1d1/0x2b0
[Mon Mar 11 14:29:16 2024] wb_workfn+0x5e/0x290
[Mon Mar 11 14:29:16 2024] ? try_to_wake_up+0x3e2/0x5d0
[Mon Mar 11 14:29:16 2024] process_one_work+0x1e2/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] worker_thread+0x50/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.46.231. state:S stack:0 pid:2476746 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ad8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 00000000000f41da R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000ea60 R11: 0000000000000246 R12: 000055f87dd61190
[Mon Mar 11 14:29:16 2024] R13: 000000000000ea60 R14: 0000000000000026 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/22:0 state:I stack:0 pid:2476753 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/47:0 state:I stack:0 pid:2476852 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/44:1 state:I stack:0 pid:2476853 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/37:2 state:I stack:0 pid:2476855 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_classic state:S stack:0 pid:2477044 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? ktime_get+0x35/0xa0
[Mon Mar 11 14:29:16 2024] ? clockevents_program_event+0x93/0x100
[Mon Mar 11 14:29:16 2024] ? hrtimer_interrupt+0x126/0x210
[Mon Mar 11 14:29:16 2024] ? sched_clock+0xc/0x30
[Mon Mar 11 14:29:16 2024] ? sched_clock_cpu+0x9/0xc0
[Mon Mar 11 14:29:16 2024] ? irqtime_account_irq+0x3c/0xb0
[Mon Mar 11 14:29:16 2024] ? __irq_exit_rcu+0x46/0xc0
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f4ea274e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffdaf1b6a28 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000562818753770 RCX: 00007f4ea274e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffdaf1b6a5c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 00005628187442e0 R08: 00000000000f4200 R09: 0000000000006cd4
[Mon Mar 11 14:29:16 2024] R10: 00000000000003e8 R11: 0000000000000246 R12: 0000562818753800
[Mon Mar 11 14:29:16 2024] R13: 00000000000003e8 R14: 00007f4ea3c6fdb0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/8:0 state:I stack:0 pid:2477090 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/4:1 state:I stack:0 pid:2477133 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (rcu_gp)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/2:0 state:I stack:0 pid:2477161 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/23:2 state:I stack:0 pid:2477164 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/18:0 state:I stack:0 pid:2477203 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/11:2 state:I stack:0 pid:2477378 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2477388 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_loop+0xd0/0x130
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xb6/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? sysvec_apic_timer_interrupt+0x3c/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f558274e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcbcc91568 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000056172c69c770 RCX: 00007f558274e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffcbcc9159c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000056172c68d2e0 R08: 000056172c754ef0 R09: 0000000000006eec
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000056172c69c800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007f5583850db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2477392 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fc5d954e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc70429658 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00005635431b0770 RCX: 00007fc5d954e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc7042968c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 00005635431a12e0 R08: 00005635431b11d0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 00005635431b0800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007fc5da59adb0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2477396 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7efc1514e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffee9480748 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000561499450770 RCX: 00007efc1514e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffee948077c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 00005614994412e0 R08: 00005614994511d0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 0000561499450800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007efc16237db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2477400 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f37e8d4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcc5cf46a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f45dcdf770 RCX: 00007f37e8d4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffcc5cf46dc RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055f45dcd02e0 R08: 000055f45dce01d0 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055f45dcdf800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007f37e9d1fdb0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_winreg state:S stack:0 pid:2477414 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7efff154e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007fff50d7d268 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055b21beab770 RCX: 00007efff154e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007fff50d7d29c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055b21be9c2e0 R08: 000055b21bee97a0 R09: 000000000000145b
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055b21beab800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007efff2015db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/26:0 state:I stack:0 pid:2477469 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/28:1 state:I stack:0 pid:2477534 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/42:1 state:I stack:0 pid:2477540 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (mm_percpu_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/14:0 state:I stack:0 pid:2477542 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/33:0 state:I stack:0 pid:2477675 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/3:0 state:I stack:0 pid:2477683 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (lpfc_wq)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/u97:5 state:I stack:0 pid:2477730 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (writeback)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/41:2 state:I stack:0 pid:2477809 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sleep state:S stack:0 pid:2478153 ppid:91766 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] do_nanosleep+0x67/0x190
[Mon Mar 11 14:29:16 2024] hrtimer_nanosleep+0xbe/0x1a0
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] common_nsleep+0x40/0x50
[Mon Mar 11 14:29:16 2024] __x64_sys_clock_nanosleep+0xbc/0x130
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? handle_mm_fault+0xcd/0x290
[Mon Mar 11 14:29:16 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f4c8f71393a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffea37e54f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e6
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007f4c8f8926c0 RCX: 00007f4c8f71393a
[Mon Mar 11 14:29:16 2024] RDX: 00007ffea37e5550 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Mar 11 14:29:16 2024] RBP: 000000000000003c R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 00007ffea37e5540 R11: 0000000000000246 R12: 00007ffea37e5540
[Mon Mar 11 14:29:16 2024] R13: 00007ffea37e5550 R14: 00007ffea37e56c8 R15: 000055722ebd3040
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:smbd[10.40.128. state:S stack:0 pid:2478178 ppid:4757 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc284a9ab0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055f87dd397c0 RCX: 00007fedb7d4e84e
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc284a9b0c RDI: 0000000000000005
[Mon Mar 11 14:29:16 2024] RBP: 000055f87dd2aa20 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 000000000000cada R11: 0000000000000293 R12: 000055f87dd58980
[Mon Mar 11 14:29:16 2024] R13: 000000000000cada R14: 0000000000000027 R15: 000055f87dd5b760
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/39:1 state:I stack:0 pid:2478206 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:kworker/40:2 state:I stack:0 pid:2478207 ppid:2 flags:0x00004000
[Mon Mar 11 14:29:16 2024] Workqueue: 0x0 (events)
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] worker_thread+0xbb/0x3a0
[Mon Mar 11 14:29:16 2024] ? __pfx_worker_thread+0x10/0x10
[Mon Mar 11 14:29:16 2024] kthread+0xdd/0x100
[Mon Mar 11 14:29:16 2024] ? __pfx_kthread+0x10/0x10
[Mon Mar 11 14:29:16 2024] ret_from_fork+0x29/0x50
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2478220 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? common_interrupt+0x43/0xa0
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f4471b4e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffe40c13e78 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055e6e8026770 RCX: 00007f4471b4e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffe40c13eac RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055e6e80172e0 R08: 000055e6e8083a50 R09: 000000000000119c
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055e6e8026800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007f4472ce0db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2478222 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fd76154e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcedfd9bc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000556208bf4770 RCX: 00007fd76154e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffcedfd9bfc RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 0000556208be52e0 R08: 0000556208bf51d0 R09: 00007ffcedfd97c0
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 0000556208bf4800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007fd76267adb0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2478223 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f827294e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffdf1f6dce8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000056414398a770 RCX: 00007f827294e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffdf1f6dd1c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000056414397b2e0 R08: 000056414398b1d0 R09: 00007ffdf1f6d8e0
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000056414398a800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007f8273a41db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:rpcd_spoolss state:S stack:0 pid:2478224 ppid:2339807 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] ep_poll+0x348/0x3b0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_epoll_wait+0xb1/0xd0
[Mon Mar 11 14:29:16 2024] __x64_sys_epoll_wait+0x60/0x100
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? exit_to_user_mode_prepare+0xec/0x100
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fd74f14e80a
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc29aada38 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 000055658895b770 RCX: 00007fd74f14e80a
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000001 RSI: 00007ffc29aada6c RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055658894c2e0 R08: 000055658895c1d0 R09: 00007ffc29aad630
[Mon Mar 11 14:29:16 2024] R10: 0000000000007530 R11: 0000000000000246 R12: 000055658895b800
[Mon Mar 11 14:29:16 2024] R13: 0000000000007530 R14: 00007fd750266db0 R15: 0000000000000000
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:nrpe state:S stack:0 pid:2478244 ppid:1 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] do_wait+0x15b/0x2f0
[Mon Mar 11 14:29:16 2024] kernel_wait4+0xa6/0x140
[Mon Mar 11 14:29:16 2024] ? __pfx_child_wait_callback+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? audit_reset_context.part.0.constprop.0+0x273/0x2e0
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb70e3182ca
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcc320ed78 EFLAGS: 00000246 ORIG_RAX: 000000000000003d
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb70e3182ca
[Mon Mar 11 14:29:16 2024] RDX: 0000000000000000 RSI: 00007ffcc320edc4 RDI: 000000000025d0a5
[Mon Mar 11 14:29:16 2024] RBP: 0000562bfde0c5e0 R08: 000000000025d0a5 R09: 000000000025d0a5
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffcc320edd0
[Mon Mar 11 14:29:16 2024] R13: 0000000000000000 R14: 0000000000000006 R15: 0000562bfde1bac0
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:nrpe state:S stack:0 pid:2478245 ppid:2478244 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] pipe_read+0x38b/0x4c0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] vfs_read+0x2f9/0x330
[Mon Mar 11 14:29:16 2024] ksys_read+0xab/0xe0
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb70e33e882
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffcc320eca8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000562bfde21250 RCX: 00007fb70e33e882
[Mon Mar 11 14:29:16 2024] RDX: 0000000000001000 RSI: 0000562bfde1bee0 RDI: 0000000000000004
[Mon Mar 11 14:29:16 2024] RBP: 00007fb70e3f6c20 R08: 0000000000000004 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000000200 R11: 0000000000000246 R12: 00000000000007ff
[Mon Mar 11 14:29:16 2024] R13: 0000000000000d68 R14: 00007fb70e3f69e0 R15: 0000000000000d68
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:check_fan_statu state:S stack:0 pid:2478246 ppid:2478245 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] pipe_read+0x38b/0x4c0
[Mon Mar 11 14:29:16 2024] ? __pfx_autoremove_wake_function+0x10/0x10
[Mon Mar 11 14:29:16 2024] vfs_read+0x2f9/0x330
[Mon Mar 11 14:29:16 2024] ksys_read+0xab/0xe0
[Mon Mar 11 14:29:16 2024] ? syscall_trace_enter.constprop.0+0x126/0x1a0
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? do_user_addr_fault+0x1d6/0x6a0
[Mon Mar 11 14:29:16 2024] ? exc_page_fault+0x62/0x150
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7f2e4753e882
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffead72a018 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 0000000000002000 RCX: 00007f2e4753e882
[Mon Mar 11 14:29:16 2024] RDX: 0000000000002000 RSI: 000055887f375a60 RDI: 0000000000000003
[Mon Mar 11 14:29:16 2024] RBP: 000055887f375a60 R08: 0000000000000000 R09: 0000000000000000
[Mon Mar 11 14:29:16 2024] R10: 0000000000000100 R11: 0000000000000246 R12: 0000000000000000
[Mon Mar 11 14:29:16 2024] R13: 000055887f2992a0 R14: 0000000000000003 R15: 000055887f3a5b00
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] task:sudo state:S stack:0 pid:2478247 ppid:2478246 flags:0x00000002
[Mon Mar 11 14:29:16 2024] Call Trace:
[Mon Mar 11 14:29:16 2024] <TASK>
[Mon Mar 11 14:29:16 2024] __schedule+0x21b/0x550
[Mon Mar 11 14:29:16 2024] schedule+0x2d/0x70
[Mon Mar 11 14:29:16 2024] schedule_hrtimeout_range_clock+0x95/0x120
[Mon Mar 11 14:29:16 2024] ? __pfx_hrtimer_wakeup+0x10/0x10
[Mon Mar 11 14:29:16 2024] do_poll.constprop.0+0x248/0x390
[Mon Mar 11 14:29:16 2024] do_sys_poll+0x1c8/0x250
[Mon Mar 11 14:29:16 2024] ? __mod_memcg_lruvec_state+0x84/0xd0
[Mon Mar 11 14:29:16 2024] ? mod_objcg_state+0x1fe/0x300
[Mon Mar 11 14:29:16 2024] ? scm_recv.constprop.0+0x43/0x1a0
[Mon Mar 11 14:29:16 2024] ? scm_recv.constprop.0+0x43/0x1a0
[Mon Mar 11 14:29:16 2024] ? avc_has_perm+0x8f/0x1b0
[Mon Mar 11 14:29:16 2024] ? __pfx_pollwake+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? unix_stream_recvmsg+0x92/0xa0
[Mon Mar 11 14:29:16 2024] ? __pfx_unix_stream_read_actor+0x10/0x10
[Mon Mar 11 14:29:16 2024] ? sock_recvmsg+0x99/0xa0
[Mon Mar 11 14:29:16 2024] ? __check_object_size.part.0+0x35/0xd0
[Mon Mar 11 14:29:16 2024] ? ____sys_recvmsg+0x87/0x1b0
[Mon Mar 11 14:29:16 2024] ? __import_iovec+0x46/0x150
[Mon Mar 11 14:29:16 2024] ? import_iovec+0x17/0x20
[Mon Mar 11 14:29:16 2024] ? copy_msghdr_from_user+0x6d/0xa0
[Mon Mar 11 14:29:16 2024] ? ___sys_recvmsg+0x88/0xd0
[Mon Mar 11 14:29:16 2024] ? __audit_filter_op+0xa5/0xf0
[Mon Mar 11 14:29:16 2024] ? _copy_from_user+0x27/0x60
[Mon Mar 11 14:29:16 2024] __x64_sys_ppoll+0xbc/0x150
[Mon Mar 11 14:29:16 2024] do_syscall_64+0x59/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_work+0x103/0x130
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? syscall_exit_to_user_mode+0x22/0x40
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] ? do_syscall_64+0x69/0x90
[Mon Mar 11 14:29:16 2024] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[Mon Mar 11 14:29:16 2024] RIP: 0033:0x7fb58794279f
[Mon Mar 11 14:29:16 2024] RSP: 002b:00007ffc79971dc0 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[Mon Mar 11 14:29:16 2024] RAX: ffffffffffffffda RBX: 00007ffc79971e60 RCX: 00007fb58794279f
[Mon Mar 11 14:29:16 2024] RDX: 00007ffc79971de0 RSI: 0000000000000001 RDI: 00007ffc79971e60
[Mon Mar 11 14:29:16 2024] RBP: 0000000000000001 R08: 0000000000000008 R09: 00007ffc79970d78
[Mon Mar 11 14:29:16 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
[Mon Mar 11 14:29:16 2024] R13: 00007ffc79971de0 R14: 0000012aabc0581a R15: 0000000000000001
[Mon Mar 11 14:29:16 2024] </TASK>
[Mon Mar 11 14:29:16 2024] Sched Debug Version: v0.11, 5.14.0-419.el9.x86_64 #1
[Mon Mar 11 14:29:16 2024] ktime : 1282664683.579695
[Mon Mar 11 14:29:16 2024] sched_clk : 1282630518.614694
[Mon Mar 11 14:29:16 2024] cpu_clk : 1282630436.154729
[Mon Mar 11 14:29:16 2024] jiffies : 5577331856
[Mon Mar 11 14:29:16 2024] sched_clock_stable() : 1
[Mon Mar 11 14:29:16 2024] sysctl_sched
[Mon Mar 11 14:29:16 2024] .sysctl_sched_latency : 24.000000
[Mon Mar 11 14:29:16 2024] .sysctl_sched_min_granularity : 3.000000
[Mon Mar 11 14:29:16 2024] .sysctl_sched_idle_min_granularity : 0.750000
[Mon Mar 11 14:29:16 2024] .sysctl_sched_wakeup_granularity : 4.000000
[Mon Mar 11 14:29:16 2024] .sysctl_sched_child_runs_first : 0
[Mon Mar 11 14:29:16 2024] .sysctl_sched_features : 125720123
[Mon Mar 11 14:29:16 2024] .sysctl_sched_tunable_scaling : 1 (logarithmic)
[Mon Mar 11 14:29:16 2024] cpu#0, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 2296394021
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : 51887
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331860
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630435.244990
[Mon Mar 11 14:29:16 2024] .clock_task : 1253286234.451401
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] rt_rq[0]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[0]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] I rcu_gp 3 7.021156 2 100 0.000000 0.003112 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I rcu_par_gp 4 8.521748 2 100 0.000000 0.001816 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I slub_flushwq 5 10.522486 2 100 0.000000 0.001692 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I netns 6 12.523243 2 100 0.000000 0.001795 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/0:0H 8 3203.959207 4 100 0.000000 0.042887 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I mm_percpu_wq 12 21.277050 2 100 0.000000 0.001855 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I rcu_tasks_kthre 13 3162.976921 9 120 0.000000 0.949708 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I rcu_tasks_rude_ 14 25.279343 2 120 0.000000 0.002068 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/0 16 20417584.604598 31070353 120 0.000000 364032.682502 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S migration/0 19 0.000000 373527 0 0.000000 738.700624 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/0 20 0.000000 3 49 0.000000 0.004950 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S cpuhp/0 22 22468.084833 25 120 0.000000 2.588689 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S khungtaskd 361 20416782.155692 10440 120 0.000000 2653.563724 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/0:1H 800 20417584.597643 2972197 100 0.000000 61296.732473 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/88-lpfc:0 935 0.000000 6383586 49 0.000000 127926.322832 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/171-lpfc:0 990 0.000000 6396763 49 0.000000 128916.085338 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:0 2697 8171.305813 2 100 0.000000 0.025829 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S cupsd 4399 7.881855 4138 120 0.000000 2607.818928 0.000000 0.000000 0 0 /autogroup-87
[Mon Mar 11 14:29:16 2024] I kdmflush/253:91 6396 26185.289233 2 100 0.000000 0.023215 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:95 6422 26241.860538 2 100 0.000000 0.022122 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S jbd2/dm-94-8 6473 20361729.729941 574 120 0.000000 66.244708 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 6492 26311.227438 2 100 0.000000 0.021092 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8850 20388822.869354 8289195 120 0.000000 357900.690801 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S automount 88129 11.464030 1 120 0.000000 0.512613 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88152 37.342455 1 120 0.000000 0.975275 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88163 50.261558 1 120 0.000000 0.919110 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S smbd[10.87.27.1 2343759 2214650.205580 236494 120 0.000000 95542.192550 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.8 2397415 2214625.547948 12317 120 0.000000 13032.835012 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/0:0 2433691 20417637.659863 4257088 120 0.000000 27949.242499 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/0:2 2475316 20414092.023162 32 120 0.000000 0.224977 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S check_fan_statu 2478246 276534.262092 10 120 0.000000 24.088894 0.000000 0.000000 0 0 /autogroup-109
[Mon Mar 11 14:29:16 2024] cpu#1, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 1
[Mon Mar 11 14:29:16 2024] .nr_switches : 807444859
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : 23435
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331904
[Mon Mar 11 14:29:16 2024] .curr->pid : 17
[Mon Mar 11 14:29:16 2024] .clock : 1282630436.239387
[Mon Mar 11 14:29:16 2024] .clock_task : 1279670360.339756
[Mon Mar 11 14:29:16 2024] .avg_idle : 904862
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] cfs_rq[1]:/autogroup-113
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 8115658.175504
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : -12301989.984208
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 0
[Mon Mar 11 14:29:16 2024] .runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .util_avg : 0
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 73
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] .se->exec_start : 1279670338.088628
[Mon Mar 11 14:29:16 2024] .se->vruntime : 26186993.311434
[Mon Mar 11 14:29:16 2024] .se->sum_exec_runtime : 10639176.007842
[Mon Mar 11 14:29:16 2024] .se->load.weight : 914143
[Mon Mar 11 14:29:16 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:16 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:16 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] cfs_rq[1]:/
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 26187002.773003
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : 5769354.613291
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 1
[Mon Mar 11 14:29:16 2024] .h_nr_running : 1
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 1048576
[Mon Mar 11 14:29:16 2024] .load_avg : 94
[Mon Mar 11 14:29:16 2024] .runnable_avg : 93
[Mon Mar 11 14:29:16 2024] .util_avg : 93
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 9
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] rt_rq[1]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.173428
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[1]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] >R pr/tty0 17 26187002.773003 6172000 120 0.000000 55552.090725 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S cpuhp/1 23 26928.561263 26 120 0.000000 1.297977 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/1 24 -2.996642 3 49 0.000000 0.009771 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S migration/1 25 0.000000 323819 0 0.000000 1720.423424 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/1 26 26186873.342690 11067261 120 0.000000 108044.304272 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/1:0H 28 402.193425 4 100 0.000000 0.026642 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/1:1H 378 26186873.129661 3241923 100 0.000000 71103.741959 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/u101:0 463 443.724114 2 100 0.000000 0.064463 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I nvmet-zbd-wq 894 1693.887859 2 100 0.000000 0.008278 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I nvmet-buffered- 895 1705.893374 2 100 0.000000 0.006049 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I nvmet-wq 896 1717.898662 2 100 0.000000 0.005741 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S scsi_eh_5 906 1745.293639 25 120 0.000000 3.381642 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I scsi_tmf_6 909 1753.919270 2 100 0.000000 0.007559 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/89-lpfc:1 936 0.000000 5293866 49 0.000000 126916.418104 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I lpfc_wq 988 1765.985777 2 100 0.000000 0.069727 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/172-lpfc:1 991 0.000000 5300318 49 0.000000 127800.522092 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:24 3594 27759.349464 2 100 0.000000 0.016555 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:28 3623 27822.825312 2 100 0.000000 0.016761 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:70 3986 28214.746330 2 100 0.000000 0.008648 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 4247 28865.751109 2 100 0.000000 0.010033 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S dbus-broker 4303 4980.347132 3735806 120 0.000000 177080.383037 0.000000 0.000000 1 0 /autogroup-65
[Mon Mar 11 14:29:16 2024] S rpc.rquotad 4319 3714.291187 4249 120 0.000000 25209.231292 0.000000 0.000000 1 0 /autogroup-80
[Mon Mar 11 14:29:16 2024] S sssd_be 4348 281228.994060 1313607 120 0.000000 930669.978920 0.000000 0.000000 1 0 /autogroup-78
[Mon Mar 11 14:29:16 2024] S f2b/observer 4437 19412.555733 22153 120 0.000000 2173.269196 0.000000 0.000000 1 4437 /autogroup-96
[Mon Mar 11 14:29:16 2024] S rpc.gssd 4431 1027.778699 132195 120 0.000000 16979.021952 0.000000 0.000000 1 0 /autogroup-100
[Mon Mar 11 14:29:16 2024] S JS Helper 5178 11.302628 12 120 0.000000 0.351211 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:16 2024] S wb[ESAT] 4744 532837.348257 5600132 120 0.000000 2375724.443295 0.000000 0.000000 1 0 /autogroup-95
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 6474 31232.973357 2 100 0.000000 0.065043 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I ib_nl_sa_wq 8765 35034.846633 2 100 0.000000 0.066047 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S automount 88118 5326.058588 115 120 0.000000 10.781388 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88124 5811.144173 98 120 0.000000 12.312219 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88135 5485.804293 39 120 0.000000 5.965224 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88156 5030.712050 15 120 0.000000 2.524423 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S atd 88110 48.570911 367 120 0.000000 108.183500 0.000000 0.000000 1 0 /autogroup-224
[Mon Mar 11 14:29:16 2024] S smbd[10.91.30.5 316974 8115278.994138 1283604 120 0.000000 1605373.289340 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.27.1 2347868 8115546.966585 3940904 120 0.000000 569321.239319 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.2 2362257 8115255.817322 789 120 0.000000 492.355096 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.28.1 2367684 8114281.525028 402 120 0.000000 135.984713 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/1:1 2399485 26186949.892506 152162 120 0.000000 63911.939232 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/1:2 2474995 26170899.951685 4 120 0.000000 0.010487 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] cpu#2, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 3090233980
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -204094
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331861
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630436.244759
[Mon Mar 11 14:29:16 2024] .clock_task : 1256281662.020018
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] rt_rq[2]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[2]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] I rcu_tasks_trace 15 191.591316 7 120 0.000000 1.228390 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S cpuhp/2 30 25785.486246 25 120 0.000000 1.377357 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/2 31 -6.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S migration/2 32 76.554899 374546 0 0.000000 1716.502442 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/2 33 41439826.273721 30996709 120 0.000000 347325.255191 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/2:0H 35 308.812723 4 100 0.000000 0.030917 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/u100:0 462 128.393642 2 100 0.000000 0.061361 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/2:1H 571 41439798.996811 4193050 100 0.000000 103229.132660 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S scsi_eh_8 916 1921.950948 25 120 0.000000 2.154584 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/90-lpfc:2 937 0.000000 12174502 49 0.000000 237076.318537 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/173-lpfc:2 992 0.000000 12201226 49 0.000000 237719.656129 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S scsi_eh_16 1014 2254.017550 2 120 0.000000 0.008601 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 2717 3374.585719 2 100 0.000000 0.006945 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:2 3217 22117.128323 2 100 0.000000 0.007700 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:3 3218 22128.897705 2 100 0.000000 0.008865 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 4253 27593.089036 2 100 0.000000 0.061857 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 4331 27648.261936 2 100 0.000000 0.066374 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S gmain 4355 3281.528039 320919 120 0.000000 27062.449859 0.000000 0.000000 0 4355 /autogroup-82
[Mon Mar 11 14:29:16 2024] D rs:main Q:Reg 4840 2062.828757 2229489 120 0.000000 93711.695350 0.000000 0.000000 0 4837 /autogroup-112
[Mon Mar 11 14:29:16 2024] S smbd 4757 2727645.254704 1012106 120 0.000000 1020301.486781 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] D nfsd 8795 41439836.435463 85261 120 0.000000 9703.829192 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8802 41439827.346838 110528 120 0.000000 11529.146586 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8808 41439834.679721 143362 120 0.000000 14661.554591 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S automount 88158 11.907868 1 120 0.000000 0.956451 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S smbd[10.87.30.4 2334575 2727645.282735 527 120 0.000000 168.578035 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.40.137. 2400165 2727554.305010 74673 120 0.000000 12342.589900 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/2:2 2424238 41439826.357798 643767 120 0.000000 13462.215571 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/2:1 2473316 41432871.673072 9 120 0.000000 0.074297 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/2:0 2477161 41434261.267573 3 120 0.000000 0.063449 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/u97:5 2477730 41438381.243921 9797 120 0.000000 114.740676 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] cpu#3, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 1174028061
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -5162
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331859
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630436.898544
[Mon Mar 11 14:29:16 2024] .clock_task : 1280169077.244563
[Mon Mar 11 14:29:16 2024] .avg_idle : 734944
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] cfs_rq[3]:/autogroup-113
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 8959496.244470
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : -11458151.915242
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 68
[Mon Mar 11 14:29:16 2024] .runnable_avg : 68
[Mon Mar 11 14:29:16 2024] .util_avg : 68
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 68
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 73
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] .se->exec_start : 1280169077.242552
[Mon Mar 11 14:29:16 2024] .se->vruntime : 31376681.441971
[Mon Mar 11 14:29:16 2024] .se->sum_exec_runtime : 10844857.316671
[Mon Mar 11 14:29:16 2024] .se->load.weight : 976755
[Mon Mar 11 14:29:16 2024] .se->avg.load_avg : 63
[Mon Mar 11 14:29:16 2024] .se->avg.util_avg : 67
[Mon Mar 11 14:29:16 2024] .se->avg.runnable_avg : 67
[Mon Mar 11 14:29:16 2024] cfs_rq[3]:/
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 31376681.441971
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : 10959033.282259
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 64
[Mon Mar 11 14:29:16 2024] .runnable_avg : 68
[Mon Mar 11 14:29:16 2024] .util_avg : 68
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] rt_rq[3]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[3]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] S cpuhp/3 36 23563.131278 25 120 0.000000 0.877111 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/3 37 -6.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S migration/3 38 108.554913 330113 0 0.000000 1818.512264 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/3 39 31376644.361624 12352259 120 0.000000 128122.475375 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/3:0H 41 222.855626 4 100 0.000000 0.049993 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/3:1H 774 31376644.275358 3358888 100 0.000000 73020.212897 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/91-lpfc:3 938 0.000000 5727349 49 0.000000 129632.349893 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/174-lpfc:3 993 0.000000 5737101 49 0.000000 130917.486645 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:23 3587 24829.743353 2 100 0.000000 0.010353 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:74 4102 25515.587727 2 100 0.000000 0.018098 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:85 4161 25602.653500 2 100 0.000000 0.018730 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 4309 25836.120153 2 100 0.000000 0.013754 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S gdbus 4359 1957.218446 48401 120 0.000000 16123.263267 0.000000 0.000000 1 4359 /autogroup-82
[Mon Mar 11 14:29:16 2024] S tuned 5104 345.552364 51 120 0.000000 126.060503 0.000000 0.000000 1 0 /autogroup-93
[Mon Mar 11 14:29:16 2024] S winbindd 4415 445384.357935 9168157 120 0.000000 2562461.686949 0.000000 0.000000 1 0 /autogroup-95
[Mon Mar 11 14:29:16 2024] I kcopyd 6394 27488.051576 2 100 0.000000 0.065770 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:96 6427 27515.609358 2 100 0.000000 0.020021 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 6476 27537.464132 2 100 0.000000 0.064131 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S pool-spawner 21804 14.045188 3 120 0.000000 0.579274 0.000000 0.000000 1 0 /autogroup-204
[Mon Mar 11 14:29:16 2024] S automount 88161 3162.684439 20 120 0.000000 3.225959 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S tmux: server 91743 1.658587 792 120 0.000000 126.457696 0.000000 0.000000 1 0 /autogroup-291
[Mon Mar 11 14:29:16 2024] S smbd[10.91.30.5 317495 8959491.175142 1305728 120 0.000000 1683541.041424 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.91.30.5 1292148 8959481.947024 2301055 120 0.000000 2057467.722051 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.40.238. 2211016 8958702.249213 1819 120 0.000000 848.317512 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.9 2279969 8959034.574789 941 120 0.000000 274.050833 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.28.1 2322257 8958677.490533 230593 120 0.000000 53481.407463 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.30.6 2376116 8959496.244470 1041340 120 0.000000 314530.985059 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.40.140. 2379578 8959491.055968 231804 120 0.000000 109159.750924 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I NFSv4 callback 2387784 30627143.826700 2 120 0.000000 0.065421 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S smbd[10.87.30.2 2393076 8958677.147265 306 120 0.000000 102.086484 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/3:2 2399492 31372271.815654 124996 120 0.000000 12495.976990 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S smbd[10.87.27.6 2400787 8958817.704219 334 120 0.000000 117.720586 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S sshd 2474602 39.992787 29 120 0.000000 68.415199 0.000000 0.000000 1 0 /autogroup-23273
[Mon Mar 11 14:29:16 2024] I kworker/3:1 2474822 31376661.430807 8593 120 0.000000 1394.753629 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/3:0 2477683 31372867.190878 228 120 0.000000 2.310259 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] cpu#4, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 3120418900
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -148839
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331860
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630437.242753
[Mon Mar 11 14:29:16 2024] .clock_task : 1255462453.137023
[Mon Mar 11 14:29:16 2024] .avg_idle : 778545
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] rt_rq[4]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.668072
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[4]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] S cpuhp/4 42 26980.275535 25 120 0.000000 1.425394 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/4 43 -9.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S migration/4 44 148.554901 385185 0 0.000000 1940.748008 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/4 45 41862412.801888 30271470 120 0.000000 341652.049409 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/4:0H 47 277.551054 4 100 0.000000 0.033847 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/24-pciehp 398 0.000000 3 49 0.000000 0.026089 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/4:1H 610 41862382.170150 4156540 100 0.000000 102020.392773 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I scsi_tmf_9 919 2752.901188 2 100 0.000000 0.088164 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/92-lpfc:4 939 0.000000 11772394 49 0.000000 229099.392793 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/175-lpfc:4 994 0.000000 11800966 49 0.000000 229372.196113 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I scsi_tmf_16 1015 2775.172950 2 100 0.000000 0.017693 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S multipathd 3356 0.000000 5399061 0 0.000000 1183666.226787 0.000000 0.000000 0 0 /autogroup-45
[Mon Mar 11 14:29:16 2024] I kdmflush/253:45 3779 28119.086445 2 100 0.000000 0.010071 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:80 4154 28188.682925 2 100 0.000000 0.079162 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:98 6437 29853.097809 2 100 0.000000 0.011114 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kcopyd 8695 30939.783039 2 100 0.000000 0.067660 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I dm-thin 8696 30951.795800 2 100 0.000000 0.014825 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8791 41862425.319621 81235 120 0.000000 9172.129488 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8843 41796142.132805 4948485 120 0.000000 144691.340628 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S automount 88123 11.878998 1 120 0.000000 0.927581 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88136 24.711510 1 120 0.000000 0.832519 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S smbd[10.87.18.2 2347080 2709130.511401 12987 120 0.000000 3746.791301 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.40.232. 2351731 2709229.186605 14796 120 0.000000 7543.722573 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.40.135. 2403402 2708971.809793 287 120 0.000000 97.036061 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.46.163. 2441678 2708911.686753 911 120 0.000000 156.069362 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/4:2 2452213 41862413.326802 237244 120 0.000000 5050.575391 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/u97:2 2470473 41862412.807654 186924 120 0.000000 10523.843178 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/4:0 2473634 41857072.777179 7 120 0.000000 0.033662 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/4:1 2477133 41860198.475875 35 120 0.000000 0.283880 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S rpcd_spoolss 2477396 31910.264427 62 120 0.000000 90.942701 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:16 2024] S nrpe 2478244 208179.039870 3 120 0.000000 15.346552 0.000000 0.000000 0 0 /autogroup-109
[Mon Mar 11 14:29:16 2024] cpu#5, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 1139975259
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -4251
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331859
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630437.242683
[Mon Mar 11 14:29:16 2024] .clock_task : 1280534890.217122
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] rt_rq[5]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[5]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] S cpuhp/5 48 23840.805171 25 120 0.000000 1.280069 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/5 49 -8.994369 3 49 0.000000 0.012037 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S migration/5 50 199.554905 329972 0 0.000000 1864.897757 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/5 51 26058068.522290 12057924 120 0.000000 124246.931181 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/5:0H 53 486.560136 4 100 0.000000 0.048330 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S kswapd1 381 26052246.390816 16339445 120 0.000000 2287795.277913 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/5:1H 615 26058122.471557 3249837 100 0.000000 71370.026925 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/93-lpfc:5 940 0.000000 5610497 49 0.000000 125280.602707 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/176-lpfc:5 995 0.000000 5616253 49 0.000000 125663.929586 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:52 3838 25119.079266 2 100 0.000000 0.008433 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irqbalance 4311 71484.448529 152715 120 0.000000 1573417.962703 0.000000 0.000000 1 0 /autogroup-75
[Mon Mar 11 14:29:16 2024] S collectd 4728 369511.632364 128435 120 0.000000 44485.078293 0.000000 0.000000 1 4728 /autogroup-105
[Mon Mar 11 14:29:16 2024] S reader#0 5333 369523.382421 2793417 120 0.000000 724497.962908 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:16 2024] I ext4-rsv-conver 6478 27816.058401 2 100 0.000000 0.062162 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I rdma_cm 8770 29806.725348 2 100 0.000000 0.063150 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8876 25969186.179753 11159235552 120 0.000000 289576859.332050 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S crond 88166 6986.579535 27986 120 0.000000 22611.125469 0.000000 0.000000 1 0 /autogroup-225
[Mon Mar 11 14:29:16 2024] I kworker/5:2 2242731 26058122.483201 197886 120 0.000000 9667.622454 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S smbd[10.87.18.2 2329756 4881602.958776 1265 120 0.000000 546.789685 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.5 2388610 4881139.695164 975 120 0.000000 416.662184 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.1 2473572 4881674.998961 48 120 0.000000 42.769145 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/5:0 2475005 26043571.627354 4 120 0.000000 0.068560 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S sshd 2476431 82.921151 28 120 0.000000 85.269690 0.000000 0.000000 1 0 /autogroup-23279
[Mon Mar 11 14:29:16 2024] cpu#6, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 3062242901
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -106944
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331860
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630437.244431
[Mon Mar 11 14:29:16 2024] .clock_task : 1258947745.276602
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] cfs_rq[6]:/
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 41504835.555867
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : 21087187.396155
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 0
[Mon Mar 11 14:29:16 2024] .runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .util_avg : 0
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] rt_rq[6]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[6]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] S cpuhp/6 54 23519.575129 25 120 0.000000 1.241424 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/6 55 -9.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S migration/6 56 250.554887 374824 0 0.000000 1810.588735 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/6 57 41504823.786919 32888344 120 0.000000 355399.772702 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/6:0H 59 150.090689 4 100 0.000000 0.073018 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I mld 405 62.449205 2 100 0.000000 0.066417 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/6:1H 613 41504823.556371 4039032 100 0.000000 97076.317301 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I scsi_tmf_1 898 2354.323831 2 100 0.000000 0.012448 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/94-lpfc:6 941 0.000000 13430824 49 0.000000 257079.391528 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S irq/177-lpfc:6 996 0.000000 13450622 49 0.000000 257802.504851 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I scsi_wq_16 1016 2602.303152 2 100 0.000000 0.062782 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:4 3425 24453.777229 2 100 0.000000 0.062827 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:83 4158 24702.121807 2 100 0.000000 0.019086 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S jbd2/dm-87-8 4324 41357138.761618 199311 120 0.000000 9861.838368 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S smbd-cleanupd 4821 2349962.387306 460709 120 0.000000 69235.366569 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] D nfsd 8801 41504830.774732 105987 120 0.000000 11312.552665 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8827 41498207.481865 4224456 120 0.000000 210282.411687 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8873 41410770.973687 2421393893 120 0.000000 57569030.172983 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S automount 88122 11.902441 1 120 0.000000 0.951024 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88133 24.817010 1 120 0.000000 0.914576 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88144 37.757844 1 120 0.000000 0.940841 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] I kworker/6:1 2330800 41504823.765693 1115575 120 0.000000 24590.300719 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] S smbd[10.40.140. 2381553 2349898.617444 908 120 0.000000 462.246333 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.1 2436431 2349991.737764 801 120 0.000000 495.571531 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/u97:4 2475262 41504824.588632 95756 120 0.000000 1368.427000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] I kworker/6:0 2475365 41500429.775511 55 120 0.000000 0.301423 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:16 2024] cpu#7, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 1112296375
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -2901
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331861
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630437.242950
[Mon Mar 11 14:29:16 2024] .clock_task : 1280675585.100171
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] cfs_rq[7]:/autogroup-96
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 21539.185882
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : -20396108.973830
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 0
[Mon Mar 11 14:29:16 2024] .runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .util_avg : 0
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] .se->exec_start : 1280675389.308501
[Mon Mar 11 14:29:16 2024] .se->vruntime : 24021038.076396
[Mon Mar 11 14:29:16 2024] .se->sum_exec_runtime : 22219.907153
[Mon Mar 11 14:29:16 2024] .se->load.weight : 2
[Mon Mar 11 14:29:16 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:16 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:16 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] cfs_rq[7]:/
[Mon Mar 11 14:29:16 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:16 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .min_vruntime : 24021049.637632
[Mon Mar 11 14:29:16 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:16 2024] .spread : 0.000000
[Mon Mar 11 14:29:16 2024] .spread0 : 3603401.477920
[Mon Mar 11 14:29:16 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:16 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:16 2024] .load : 0
[Mon Mar 11 14:29:16 2024] .load_avg : 0
[Mon Mar 11 14:29:16 2024] .runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .util_avg : 0
[Mon Mar 11 14:29:16 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:16 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:16 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:16 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:16 2024] .throttled : 0
[Mon Mar 11 14:29:16 2024] .throttle_count : 0
[Mon Mar 11 14:29:16 2024] rt_rq[7]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:16 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .rt_throttled : 0
[Mon Mar 11 14:29:16 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:16 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:16 2024] dl_rq[7]:
[Mon Mar 11 14:29:16 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:16 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:16 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:16 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:16 2024] runnable tasks:
[Mon Mar 11 14:29:16 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:16 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:16 2024] S cpuhp/7 60 25600.518470 25 120 0.000000 0.759082 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S idle_inject/7 61 -9.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S migration/7 62 301.554888 329845 0 0.000000 1858.327994 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S ksoftirqd/7 63 24021037.658927 11704933 120 0.000000 120270.586416 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/7:0H 65 73.578695 4 100 0.000000 0.046980 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S khugepaged 367 24020232.060326 340935 139 0.000000 76892.395465 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/7:1H 619 24020664.170955 3126434 100 0.000000 69349.082053 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I scsi_tmf_0 870 1175.527247 2 100 0.000000 0.006271 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/95-lpfc:7 942 0.000000 5447227 49 0.000000 122896.733492 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S irq/178-lpfc:7 997 0.000000 5457646 49 0.000000 123405.516396 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:10 3461 28161.873680 2 100 0.000000 0.012247 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:34 3679 28651.780026 2 100 0.000000 0.013257 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:63 3935 29301.128057 2 100 0.000000 0.012104 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:65 3946 29313.138543 2 100 0.000000 0.011999 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kdmflush/253:67 3969 29459.933995 2 100 0.000000 0.010088 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S systemd-logind 4364 7093.815773 776962 120 0.000000 217926.070558 0.000000 0.000000 1 4364 /autogroup-84
[Mon Mar 11 14:29:16 2024] S f2b/a.samba 4440 21539.185882 2563857 120 0.000000 198600.281732 0.000000 0.000000 1 4440 /autogroup-96
[Mon Mar 11 14:29:16 2024] I kdmflush/253:10 8710 33174.450924 2 100 0.000000 0.011034 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] D nfsd 8862 23953455.845450 35363239 120 0.000000 1419385.003862 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] S automount 88141 3786.846992 651 120 0.000000 64.560537 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S automount 88147 3787.796525 77 120 0.000000 10.927760 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:16 2024] S smbd[10.87.28.1 2323430 3210966.245174 555 120 0.000000 167.225211 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.18.2 2338596 3211057.972836 1259 120 0.000000 277.296081 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] S smbd[10.87.29.9 2364174 3211041.499312 1136 120 0.000000 403.241845 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:16 2024] I kworker/7:0 2391347 24021037.655783 137663 120 0.000000 9623.996685 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/u96:4 2442472 24021037.799546 304814 120 0.000000 63891.940913 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] I kworker/7:1 2474775 24000060.421482 209 120 0.000000 3.712917 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:16 2024] cpu#8, 2600.000 MHz
[Mon Mar 11 14:29:16 2024] .nr_running : 0
[Mon Mar 11 14:29:16 2024] .nr_switches : 3080471161
[Mon Mar 11 14:29:16 2024] .nr_uninterruptible : -30317
[Mon Mar 11 14:29:16 2024] .next_balance : 5577.331860
[Mon Mar 11 14:29:16 2024] .curr->pid : 0
[Mon Mar 11 14:29:16 2024] .clock : 1282630438.242448
[Mon Mar 11 14:29:16 2024] .clock_task : 1256901903.258196
[Mon Mar 11 14:29:16 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:16 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:16 2024] rt_rq[8]:
[Mon Mar 11 14:29:16 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[8]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/8 66 22813.348156 25 120 0.000000 1.336489 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/8 67 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/8 68 357.554869 387485 0 0.000000 1970.780585 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/8 69 41993741.123056 30769426 120 0.000000 346907.840181 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/8:0H 71 36.241745 4 100 0.000000 0.020097 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/8:1H 799 41993740.947888 4052722 100 0.000000 95313.659806 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/96-lpfc:8 943 0.000000 11960101 49 0.000000 228860.096783 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/179-lpfc:8 998 0.000000 11993726 49 0.000000 230112.493484 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:29 3632 24087.446820 2 100 0.000000 0.014320 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4131 24288.073500 2 100 0.000000 0.069462 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:84 4159 24300.086548 2 100 0.000000 0.016590 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-82-8 4242 41972109.154059 3931 120 0.000000 178.158919 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-84-8 4244 41993741.112736 1597998 120 0.000000 212185.515871 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4278 24611.945622 2 100 0.000000 0.017580 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-25-8 4360 41993741.699910 1066605 120 0.000000 293716.195293 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8817 41993649.655561 767920 120 0.000000 49042.709876 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8864 41907025.586779 54018370 120 0.000000 2034308.717942 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8874 41899093.774416 4328278877 120 0.000000 101222700.873244 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S cups-browsed 21421 1279.981811 47834 120 0.000000 27899.941482 0.000000 0.000000 0 21421 /autogroup-154
[Mon Mar 11 14:29:19 2024] S automount 88103 171406.346032 677612 120 0.000000 38289.352546 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88138 169291.699417 661 120 0.000000 44.329067 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.86.26.7 93091 2320327.111029 43106 120 0.000000 12089.788779 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.8 2326005 2320338.779057 1552 120 0.000000 341.377713 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.33.22.1 2339568 2320268.034009 107927 120 0.000000 40172.081903 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.157. 2352279 2320235.131961 11030 120 0.000000 2004.617139 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.144. 2399030 2320327.093264 305 120 0.000000 98.948081 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2415574 2320267.683136 520 120 0.000000 168.165490 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/8:2 2431582 41993741.442363 342847 120 0.000000 12146.515928 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.8 2453051 2320268.057994 44409 120 0.000000 8253.224522 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.140. 2472699 2320267.676972 42 120 0.000000 37.048968 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/8:1 2474342 41986420.118242 9 120 0.000000 0.128683 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.46.231. 2476746 2320327.103551 32 120 0.000000 29.047644 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/8:0 2477090 41986432.114724 3 120 0.000000 0.062005 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#9, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 1090744483
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -3629
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633448.162660
[Mon Mar 11 14:29:19 2024] .clock_task : 1280766998.957531
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[9]:/autogroup-96
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 21979.774007
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -20395686.734108
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1280766963.128372
[Mon Mar 11 14:29:19 2024] .se->vruntime : 22917043.656286
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 23075.337183
[Mon Mar 11 14:29:19 2024] .se->load.weight : 2
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] cfs_rq[9]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 22917055.569646
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 2499389.061531
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[9]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[9]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/9 72 29047.055904 25 120 0.000000 0.706423 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/9 73 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/9 74 425.554870 328399 0 0.000000 1905.062763 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/9 75 22917042.504240 11796005 120 0.000000 118900.184131 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/9:0H 77 17.594484 4 100 0.000000 0.031421 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/9:1H 798 22917036.281674 3075585 100 0.000000 67882.014782 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ata_sff 867 571.058372 2 100 0.000000 0.006700 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/97-lpfc:9 944 0.000000 5536116 49 0.000000 127272.511024 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_15 959 1878.947891 2 120 0.000000 0.009502 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_15 960 1890.957250 2 100 0.000000 0.012156 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/180-lpfc:9 999 0.000000 5553780 49 0.000000 128033.745593 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:17 3532 30033.921325 2 100 0.000000 0.009476 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:39 3726 30381.573887 2 100 0.000000 0.013348 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:50 3826 30487.698585 2 100 0.000000 0.014397 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:58 3884 30653.471853 2 100 0.000000 0.012816 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:66 3952 30939.385028 2 100 0.000000 0.007816 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:71 3997 31061.969971 2 100 0.000000 0.008695 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S sssd_nss 4356 221629.645707 38248803 120 0.000000 4923177.836521 0.000000 0.000000 1 4356 /autogroup-78
[Mon Mar 11 14:29:19 2024] S fail2ban-server 4416 21979.774007 641345 120 0.000000 45596.034563 0.000000 0.000000 1 4416 /autogroup-96
[Mon Mar 11 14:29:19 2024] S writer#0 5327 408668.739206 55283017 120 0.000000 1457346.615143 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] D in:imfile 4838 26394.652682 264187 120 0.000000 34718.483624 0.000000 0.000000 1 4838 /autogroup-112
[Mon Mar 11 14:29:19 2024] S dmeventd 6398 3330.084132 1281468 120 0.000000 40871.730928 0.000000 0.000000 1 0 /autogroup-125
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 6490 32974.312305 2 100 0.000000 0.064333 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8877 22820276.982805 11701419907 120 0.000000 429441809.897510 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88157 1642.947080 3 120 0.000000 1.122149 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S bash 91744 28.774970 30 120 0.000000 37.281288 0.000000 0.000000 1 0 /autogroup-292
[Mon Mar 11 14:29:19 2024] S smbd[10.91.30.5 1834355 2237471.759777 617868 120 0.000000 425214.117932 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.6 2371619 2237299.651627 11689 120 0.000000 5215.294441 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/9:1 2418267 22917043.584166 112155 120 0.000000 4847.929284 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/9:2 2474561 22906344.871472 10 120 0.000000 0.029112 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#10, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 3077610603
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 9252
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633448.162887
[Mon Mar 11 14:29:19 2024] .clock_task : 1257146076.810369
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[10]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 42255559.680698
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 21837893.172583
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[10]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[10]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/10 78 24108.683097 25 120 0.000000 1.152375 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/10 79 -11.995539 3 49 0.000000 0.009791 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/10 80 493.554845 385461 0 0.000000 1959.101367 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/10 81 42255547.942246 29617116 120 0.000000 343513.297688 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/10:0H 83 262.155786 4 100 0.000000 0.042404 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksmd 366 61.731636 2 125 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kstrp 417 143.241105 2 100 0.000000 0.059071 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/10:1H 631 42255547.580610 3803773 100 0.000000 88277.991540 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/98-lpfc:10 945 0.000000 11076341 49 0.000000 214020.198408 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/181-lpfc:10 1000 0.000000 11099261 49 0.000000 215041.190507 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kmpath_handlerd 3347 25133.557815 2 100 0.000000 0.069959 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S chronyd 4323 1275.762076 51060 120 0.000000 10054.334465 0.000000 0.000000 0 0 /autogroup-81
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4354 27144.495240 2 100 0.000000 0.049604 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 6468 28116.080474 2 100 0.000000 0.013171 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8856 42173632.065713 13632169 120 0.000000 603724.437899 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88148 54.631573 1 120 0.000000 0.856774 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88160 67.523148 5 120 0.000000 0.891582 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.5 2292584 2190088.310647 163982 120 0.000000 6384.805423 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.140. 2356004 2189870.240777 12790 120 0.000000 3567.702978 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/10:0 2401926 42255547.630933 525694 120 0.000000 17289.892005 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u96:1 2466032 42245270.741957 72826 120 0.000000 23712.342502 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.8 2468009 2189815.902585 12594 120 0.000000 4359.969999 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/u97:4 2475262 42255550.330981 95797 120 0.000000 1370.146091 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/10:2 2475318 42249467.038340 5 120 0.000000 0.085163 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#11, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 3
[Mon Mar 11 14:29:19 2024] .nr_switches : 983131110
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 3744
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334927
[Mon Mar 11 14:29:19 2024] .curr->pid : 86
[Mon Mar 11 14:29:19 2024] .clock : 1282633449.082238
[Mon Mar 11 14:29:19 2024] .clock_task : 1280919089.410077
[Mon Mar 11 14:29:19 2024] .avg_idle : 102186
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[11]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 20562562.119539
[Mon Mar 11 14:29:19 2024] .min_vruntime : 20562574.119539
[Mon Mar 11 14:29:19 2024] .max_vruntime : 20562562.119539
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 144907.611424
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .h_nr_running : 1
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 1048576
[Mon Mar 11 14:29:19 2024] .load_avg : 1024
[Mon Mar 11 14:29:19 2024] .runnable_avg : 1024
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 9
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[11]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 1
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[11]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S kthreadd 2 20562543.414729 107941 120 0.000000 15887.833398 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S cpuhp/11 84 22207.434922 25 120 0.000000 1.022700 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/11 85 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] >R migration/11 86 561.554834 327264 0 0.000000 1883.315363 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/11 87 20562550.833718 11080392 120 0.000000 109299.051192 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/11:0H 89 29.689160 4 100 0.000000 0.033328 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/11:1H 802 20562542.792120 2925885 100 0.000000 64456.062898 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/99-lpfc:11 946 0.000000 5284428 49 0.000000 120758.038103 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] R irq/182-lpfc:11 1001 0.000000 5293562 49 0.000000 121415.771605 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:12 3481 22915.468222 2 100 0.000000 0.016748 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:37 3706 23441.812951 2 100 0.000000 0.015555 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:49 3817 23486.558578 2 100 0.000000 0.011377 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:57 3883 23537.168495 2 100 0.000000 0.012880 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-89-8 4218 20424617.823835 653 120 0.000000 27.585155 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S audit_prune_tre 4291 24531.883373 2 120 0.000000 0.014097 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:93 6413 27512.589667 2 100 0.000000 0.063343 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S upsclient 7162 11.197316 1 120 0.000000 0.245900 0.000000 0.000000 1 0 /autogroup-126
[Mon Mar 11 14:29:19 2024] D nfsd 8849 20507918.442387 10267739 120 0.000000 469683.450533 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88137 3593.618790 1456 120 0.000000 119.295303 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[134.58.56. 2310377 1686860.839989 609 120 0.000000 183.039133 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.157. 2341830 1686939.465307 4773 120 0.000000 1892.961002 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.143. 2396242 1686944.454449 297 120 0.000000 107.117138 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] R kworker/11:1 2440048 20562562.119539 67677 120 0.000000 2908.935182 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/11:0 2474533 20556132.404902 7 120 0.000000 0.031680 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/11:2 2477378 20556144.397274 2 120 0.000000 0.009717 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#12, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 3338589998
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 11188
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633449.161841
[Mon Mar 11 14:29:19 2024] .clock_task : 1256035048.123324
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[12]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 46210263.511947
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 25792597.003832
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[12]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[12]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/12 90 25731.680618 25 120 0.000000 0.983012 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/12 91 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/12 92 629.555545 374149 0 0.000000 1826.965259 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/12 93 46210251.580863 29657772 120 0.000000 358635.023831 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/12:0H 95 1.915455 4 100 0.000000 0.051643 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I md 373 25.354222 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S kswapd0 380 46208966.307891 40235627 120 0.000000 6355971.113823 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/12:1H 747 46210251.512380 3697677 100 0.000000 83453.417349 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/100-lpfc:12 947 0.000000 10590023 49 0.000000 203345.694616 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/183-lpfc:12 1002 0.000000 10613815 49 0.000000 204134.348373 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S upsclient 7160 11.218298 1 120 0.000000 0.266883 0.000000 0.000000 0 0 /autogroup-126
[Mon Mar 11 14:29:19 2024] D nfsd 8788 46210252.315939 77325 120 0.000000 8906.084762 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8805 46210255.242512 119022 120 0.000000 13064.632830 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88102 169945.320216 207379 120 0.000000 11302.113336 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88132 11.893369 2 120 0.000000 0.941952 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] I nfsiod 179473 542438.112080 2 100 0.000000 0.010287 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2304037 2066706.191575 669 120 0.000000 189.555346 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/12:0 2442877 46210251.571222 190481 120 0.000000 10075.320554 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D kworker/u96:3 2451130 46210251.545939 129668 120 0.000000 29904.483577 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/12:2 2473001 46197648.762509 587 120 0.000000 8.733456 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/12:1 2476314 46208506.263384 27 120 0.000000 0.226203 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2477400 22505.277956 58 120 0.000000 76.820899 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S rpcd_winreg 2477414 22506.245189 124 120 0.000000 72.379947 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#13, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 1036523359
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -5000
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633449.161350
[Mon Mar 11 14:29:19 2024] .clock_task : 1280928849.652503
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[13]:/autogroup-90
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 5454.450532
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -20412212.057583
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1280928745.080027
[Mon Mar 11 14:29:19 2024] .se->vruntime : 20987299.723570
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 5296.369357
[Mon Mar 11 14:29:19 2024] .se->load.weight : 2
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] cfs_rq[13]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 20987311.625055
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 569645.116940
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[13]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[13]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/13 96 26515.315734 25 120 0.000000 0.688334 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/13 97 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/13 98 697.555545 326310 0 0.000000 1881.477268 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/13 99 20987289.672704 11269408 120 0.000000 112124.135999 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/13:0H 101 90.426995 4 100 0.000000 0.026951 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S kdevtmpfs 354 516771.568484 710 120 0.000000 18.397420 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S kcompactd1 365 20986812.257170 16436853 120 0.000000 16724122.224337 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/13:1H 834 20987176.604530 2997375 100 0.000000 65602.061173 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/101-lpfc:13 948 0.000000 5357741 49 0.000000 119902.659859 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/184-lpfc:13 1003 0.000000 5364473 49 0.000000 120366.136674 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:35 3685 27334.473742 2 100 0.000000 0.008046 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:55 3864 27991.061987 2 100 0.000000 0.006029 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4272 28712.301627 2 100 0.000000 0.016582 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S uwsgi 4405 5454.450532 1281793 120 0.000000 84916.178768 0.000000 0.000000 1 0 /autogroup-90
[Mon Mar 11 14:29:19 2024] S wb[KWAK] 4668 197262.716916 44244 120 0.000000 2305.033634 0.000000 0.000000 1 0 /autogroup-95
[Mon Mar 11 14:29:19 2024] S reader#1 5334 425719.681523 2839491 120 0.000000 733106.796630 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S rpc.mountd 5127 99005.393330 29154322 120 0.000000 2787512.543888 0.000000 0.000000 1 0 /autogroup-116
[Mon Mar 11 14:29:19 2024] S nsrexecd 5343 9856.559984 56447 120 0.000000 11562.303259 0.000000 0.000000 1 5343 /autogroup-117
[Mon Mar 11 14:29:19 2024] D nfsd 8842 20919524.034021 3343927 120 0.000000 102718.189630 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8854 20923697.512266 11372684 120 0.000000 513772.396025 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S samba-dcerpcd 2339807 283330.761782 3017914 120 0.000000 473688.121149 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.46.198. 2368564 1470206.698971 428 120 0.000000 144.529590 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/13:0 2391582 20987299.648567 149490 120 0.000000 10192.771420 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.138. 2445114 1470255.148564 124 120 0.000000 60.487331 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/13:2 2475727 20979233.554358 5 120 0.000000 0.068236 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S sudo 2478247 231296.670850 7 120 0.000000 27.938848 0.000000 0.000000 1 0 /autogroup-109
[Mon Mar 11 14:29:19 2024] cpu#14, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 5
[Mon Mar 11 14:29:19 2024] .nr_switches : 2948085043
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 29866
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334904
[Mon Mar 11 14:29:19 2024] .curr->pid : 2476436
[Mon Mar 11 14:29:19 2024] .clock : 1282633449.160128
[Mon Mar 11 14:29:19 2024] .clock_task : 1259194347.476126
[Mon Mar 11 14:29:19 2024] .avg_idle : 837639
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[14]:/autogroup-23280
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 9788.922277
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -20407877.585838
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .h_nr_running : 1
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 1048576
[Mon Mar 11 14:29:19 2024] .load_avg : 1024
[Mon Mar 11 14:29:19 2024] .runnable_avg : 1024
[Mon Mar 11 14:29:19 2024] .util_avg : 1024
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 1024
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 1024
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1259194347.474548
[Mon Mar 11 14:29:19 2024] .se->vruntime : 41124696.732550
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 9777.970860
[Mon Mar 11 14:29:19 2024] .se->load.weight : 1048576
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 1024
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 1024
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 1024
[Mon Mar 11 14:29:19 2024] cfs_rq[14]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 41114924.866591
[Mon Mar 11 14:29:19 2024] .min_vruntime : 41114936.866591
[Mon Mar 11 14:29:19 2024] .max_vruntime : 41114924.879998
[Mon Mar 11 14:29:19 2024] .spread : 0.013407
[Mon Mar 11 14:29:19 2024] .spread0 : 20697270.358476
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 4
[Mon Mar 11 14:29:19 2024] .h_nr_running : 4
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 94036992
[Mon Mar 11 14:29:19 2024] .load_avg : 91190
[Mon Mar 11 14:29:19 2024] .runnable_avg : 4098
[Mon Mar 11 14:29:19 2024] .util_avg : 1024
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 34
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[14]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[14]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/14 102 22719.181663 25 120 0.000000 1.249714 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/14 103 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] R migration/14 104 765.555520 376699 0 0.000000 1916.601274 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] R ksoftirqd/14 105 41114924.866591 25737578 120 0.000000 309956.742814 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/14:0H 107 56.763093 4 100 0.000000 0.136773 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] R kworker/14:1H 752 41114924.866591 3470597 100 0.000000 78490.546961 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/102-lpfc:14 949 0.000000 9526323 49 0.000000 186895.913108 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/185-lpfc:14 1004 0.000000 9549881 49 0.000000 187545.281753 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:30 3648 24153.462119 2 100 0.000000 0.012569 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S nsrexecd 5349 5354.579903 1307 120 0.000000 378.281657 0.000000 0.000000 0 5349 /autogroup-117
[Mon Mar 11 14:29:19 2024] D nfsd 8830 41098507.541959 3085393 120 0.000000 165356.212779 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8866 41029179.219205 90894108 120 0.000000 3155404.032331 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88128 11.889018 1 120 0.000000 0.937601 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88142 24.776843 1 120 0.000000 0.887832 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88153 37.663462 1 120 0.000000 0.886626 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88162 50.497713 1 120 0.000000 0.834258 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.3 2342764 2392344.215358 15522 120 0.000000 3813.478027 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.2 2347082 2392345.221014 681 120 0.000000 211.301623 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/14:1 2374246 41111361.615456 465506 120 0.000000 16576.849713 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] R kworker/14:2 2474614 41114924.879998 18507 120 0.000000 545.498992 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] >R bash 2476436 9788.922277 119 120 0.000000 9851.190462 0.000000 0.000000 0 0 /autogroup-23280
[Mon Mar 11 14:29:19 2024] I kworker/14:0 2477542 41113314.286995 3 120 0.000000 0.066917 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#15, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 1005229434
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -3188
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334872
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633449.160962
[Mon Mar 11 14:29:19 2024] .clock_task : 1280982883.172370
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[15]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[15]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/15 108 22016.621560 25 120 0.000000 0.986302 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/15 109 -11.994113 3 49 0.000000 0.012580 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/15 110 833.555516 325620 0 0.000000 1875.252549 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/15 111 20127088.769605 11165480 120 0.000000 111030.425195 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/15:0H 113 379.039483 4 100 0.000000 0.028546 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/15:1H 860 20126968.675867 2914917 100 0.000000 64936.831180 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/103-lpfc:15 950 0.000000 5321549 49 0.000000 119777.169310 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/186-lpfc:15 1005 0.000000 5333301 49 0.000000 120350.773208 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:9 3457 22713.167289 2 100 0.000000 0.017612 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:32 3660 22994.875960 2 100 0.000000 0.009589 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:64 3940 23521.177608 2 100 0.000000 0.010222 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:69 3975 23623.818868 2 100 0.000000 0.010667 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:75 4118 23677.152489 2 100 0.000000 0.018529 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:76 4132 23763.797476 2 100 0.000000 0.022192 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:78 4135 23775.812473 2 100 0.000000 0.017592 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-80-8 4259 20127091.336665 1264823 120 0.000000 191045.514599 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I tls-strp 4391 24323.090938 2 100 0.000000 0.024125 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S reader#2 5335 424932.858954 2847979 120 0.000000 726093.279887 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S reader#4 5337 424944.736826 561134 120 0.000000 598804.485160 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S rs:action-0-bui 4841 43053.521425 1927079 120 0.000000 118600.890131 0.000000 0.000000 1 4837 /autogroup-112
[Mon Mar 11 14:29:19 2024] S nsrexecd 5345 8951.175593 19130 120 0.000000 687.157688 0.000000 0.000000 1 0 /autogroup-117
[Mon Mar 11 14:29:19 2024] I kdmflush/253:90 6393 26005.424303 2 100 0.000000 0.016044 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8868 20056073.178896 183235980 120 0.000000 5701430.764254 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88150 2611.413623 75 120 0.000000 10.976391 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2339816 252403.714872 10457808 120 0.000000 2805930.098163 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.8 2363822 1249553.896512 371552 120 0.000000 81201.860000 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.4 2412672 1249554.528361 224 120 0.000000 77.994581 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/15:1 2432551 20127090.624557 85626 120 0.000000 5562.173355 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u96:2 2470993 20124091.050429 59898 120 0.000000 18123.536914 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/15:2 2471252 20091034.410305 5 120 0.000000 0.034046 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/15:0 2473641 20115589.531683 17 120 0.000000 0.114230 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#16, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2705289591
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 28417
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633450.161643
[Mon Mar 11 14:29:19 2024] .clock_task : 1262596047.551255
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[16]:/autogroup-113
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 2197473.318280
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -18220193.189835
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 102
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1262595914.330907
[Mon Mar 11 14:29:19 2024] .se->vruntime : 38224184.868966
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 1951453.474657
[Mon Mar 11 14:29:19 2024] .se->load.weight : 2
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] cfs_rq[16]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 38224184.868966
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 17806518.360851
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[16]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[16]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/16 114 24295.744054 25 120 0.000000 1.032223 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/16 115 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/16 116 901.555492 365900 0 0.000000 1917.531004 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/16 117 38224167.386205 24915738 120 0.000000 293710.902055 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/16:0H 119 172.856368 4 100 0.000000 0.045496 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/16:1H 762 38224167.268598 3333979 100 0.000000 73903.836713 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/112-lpfc:16 951 0.000000 9859731 49 0.000000 192667.887444 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/187-lpfc:16 1006 0.000000 9874120 49 0.000000 192984.835533 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:82 4157 25916.349937 2 100 0.000000 0.066049 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4208 26413.245510 2 100 0.000000 0.061676 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-86-8 4246 38223138.250801 154711 120 0.000000 11717.183685 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S JS Helper 5182 23.472142 5 120 0.000000 0.486658 0.000000 0.000000 0 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] D nfsd 8807 38224175.746992 133432 120 0.000000 13706.050891 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88125 11.862926 1 120 0.000000 0.911509 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88151 24.752428 1 120 0.000000 0.889509 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.91.30.5 3294561 2197473.318280 1271825 120 0.000000 1577525.997307 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.91.30.5 662850 2197042.215530 6803 120 0.000000 1846.429649 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.142. 2342339 2197099.276461 2288 120 0.000000 815.417760 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.229. 2355747 2197407.517671 91686 120 0.000000 4474.950891 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.133. 2384413 2197355.953824 360 120 0.000000 117.548543 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/16:2 2397744 38224172.768503 358373 120 0.000000 14281.570565 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.153. 2423410 2197204.000123 4598 120 0.000000 2057.641245 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2452452 2197461.427192 124 120 0.000000 58.217814 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/u97:0 2473005 38224167.279710 75915 120 0.000000 2203.181420 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/16:1 2475250 38218369.616277 7 120 0.000000 0.073999 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S sshd 2476434 11.303906 522 120 0.000000 85.795163 0.000000 0.000000 0 0 /autogroup-23279
[Mon Mar 11 14:29:19 2024] S nrpe 2478245 206330.570850 2 120 0.000000 0.829470 0.000000 0.000000 0 0 /autogroup-109
[Mon Mar 11 14:29:19 2024] cpu#17, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 964910989
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -2169
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633450.160629
[Mon Mar 11 14:29:19 2024] .clock_task : 1281032327.768478
[Mon Mar 11 14:29:19 2024] .avg_idle : 554673
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[17]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.178580
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[17]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/17 120 25941.651935 25 120 0.000000 1.166897 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/17 121 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/17 122 969.555489 324874 0 0.000000 1888.896263 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/17 123 19431575.263295 11514961 120 0.000000 111385.611222 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/17:0H 125 24.660770 4 100 0.000000 0.014031 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/17:1H 856 19431575.224780 3489009 100 0.000000 78027.237958 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/113-lpfc:17 952 0.000000 5591148 49 0.000000 130003.261092 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/188-lpfc:17 1007 0.000000 5602516 49 0.000000 130871.522555 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S multipathd 3353 0.000000 87 0 0.000000 4.046831 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] S multipathd 3357 0.000000 13 0 0.000000 8.041793 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] I kdmflush/253:7 3445 26574.657879 2 100 0.000000 0.019338 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:8 3453 26574.664939 2 100 0.000000 0.020543 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:15 3509 26746.732035 2 100 0.000000 0.008958 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:31 3657 27037.815929 2 100 0.000000 0.017258 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:36 3700 27097.935838 2 100 0.000000 0.010510 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:38 3714 27097.939852 2 100 0.000000 0.013735 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:41 3738 27111.960020 2 100 0.000000 0.017637 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:42 3752 27162.219368 2 100 0.000000 0.028321 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:43 3762 27175.281779 2 100 0.000000 0.012739 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:62 3928 27472.029779 2 100 0.000000 0.012568 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:68 3973 27472.430219 2 100 0.000000 0.016183 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:77 4134 27585.867067 2 100 0.000000 0.018980 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:88 4169 27597.882802 2 100 0.000000 0.019246 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4220 27714.742733 2 100 0.000000 0.018409 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4221 27726.755093 2 100 0.000000 0.015219 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S tuned 5263 325.642727 8 120 0.000000 3.884221 0.000000 0.000000 1 0 /autogroup-93
[Mon Mar 11 14:29:19 2024] S rpc.gssd 4432 1369.486182 42756 120 0.000000 1184.327586 0.000000 0.000000 1 0 /autogroup-100
[Mon Mar 11 14:29:19 2024] S JS Helper 5176 11.190193 9 120 0.000000 0.238776 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] I kdmflush/253:94 6417 30182.914014 2 100 0.000000 0.019072 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:10 8697 31492.309009 2 100 0.000000 0.065396 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 8747 31504.372007 2 100 0.000000 0.066252 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ib-comp-unb-wq 8763 31516.433269 2 100 0.000000 0.064634 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8796 19431574.200343 89864 120 0.000000 10105.801789 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.19.1 2324056 1158357.407324 525 120 0.000000 145.674088 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2339825 247435.964896 1517035 120 0.000000 396329.321166 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.33.88.9 2347580 1158392.953252 461 120 0.000000 152.838733 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.31.1 2352076 1158392.712303 1764 120 0.000000 903.633187 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/17:1 2391348 19431575.274720 165612 120 0.000000 9291.833357 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.88.16.7 2407419 1158392.652293 3145 120 0.000000 2538.127809 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.9 2432071 1158392.710280 174 120 0.000000 62.798772 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.46.159. 2465617 1158393.007578 64 120 0.000000 36.620453 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/17:0 2474774 19410071.831729 3 120 0.000000 0.063514 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#18, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2434048599
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 37600
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334874
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633450.161181
[Mon Mar 11 14:29:19 2024] .clock_task : 1265217668.233471
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[18]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[18]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/18 126 28898.836945 25 120 0.000000 0.871054 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/18 127 -11.995276 3 49 0.000000 0.270707 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/18 128 1037.816154 365466 0 0.000000 1891.855065 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/18 129 34240895.372719 22823678 120 0.000000 263730.895545 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/18:0H 131 47.497810 4 100 0.000000 0.063433 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I inet_frag_wq 355 59.367178 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/18:1H 754 34240765.437724 3174907 100 0.000000 70296.065673 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/114-lpfc:18 953 0.000000 9362298 49 0.000000 183411.599762 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/189-lpfc:18 1008 0.000000 9377696 49 0.000000 184275.156559 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:14 3502 29470.742691 2 100 0.000000 0.014193 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:53 3839 30342.884213 2 100 0.000000 0.007693 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:81 4155 30954.360271 2 100 0.000000 0.066985 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4361 31615.105410 2 100 0.000000 0.064344 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S gssproxy 4425 11.057469 1 120 0.000000 0.106054 0.000000 0.000000 0 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S rsyslogd 4752 1226.531223 2223 120 0.000000 192.539333 0.000000 0.000000 0 4752 /autogroup-112
[Mon Mar 11 14:29:19 2024] S jbd2/dm-92-8 6475 34106951.884440 18557 120 0.000000 2405.892759 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8834 34230189.304574 4819229 120 0.000000 262121.772772 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88126 166384.444556 11 120 0.000000 1.574640 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S sssd_kcm 179479 1875.901308 255137 120 0.000000 50715.484538 0.000000 0.000000 0 0 /autogroup-707
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 789171 2219101.590427 21804 120 0.000000 5621.762420 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.4 1367201 2219146.691139 4113 120 0.000000 1085.094801 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/18:1 2072443 34240921.800259 1049137 120 0.000000 31341.343265 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.228. 2338708 2219299.761504 27951 120 0.000000 10241.970906 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.33.88.1 2377288 2219251.750338 358 120 0.000000 114.269577 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/18:2 2474270 34236501.832518 8 120 0.000000 0.092269 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/18:0 2477203 34236513.814739 2 120 0.000000 0.062411 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#19, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 993035978
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -2982
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334872
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633450.161098
[Mon Mar 11 14:29:19 2024] .clock_task : 1281059918.581047
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[19]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[19]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/19 132 21715.636766 25 120 0.000000 0.845631 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/19 133 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/19 134 1105.816150 324822 0 0.000000 1896.862061 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/19 135 19607795.729144 11829573 120 0.000000 114680.925698 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/19:0H 137 102.031486 4 100 0.000000 0.018501 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/19:1H 813 19606020.360141 3491399 100 0.000000 78246.847787 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S lpfc_worker_0 934 19608286.899265 284753 100 0.000000 5576.249969 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/115-lpfc:19 954 0.000000 5710627 49 0.000000 131958.422164 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/190-lpfc:19 1009 0.000000 5727911 49 0.000000 133043.353601 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:47 3799 23287.908181 2 100 0.000000 0.007126 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:60 3907 23530.099872 2 100 0.000000 0.015141 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-76-8 4252 19605506.735404 141298 120 0.000000 7487.841466 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S nfsdcld 4302 2653.021662 641126 120 0.000000 117340.045585 0.000000 0.000000 1 0 /autogroup-69
[Mon Mar 11 14:29:19 2024] S gmain 4320 11.139325 1 120 0.000000 0.187910 0.000000 0.000000 1 0 /autogroup-75
[Mon Mar 11 14:29:19 2024] S polkitd 4667 3411.197544 317560 120 0.000000 38594.323126 0.000000 0.000000 1 4667 /autogroup-104
[Mon Mar 11 14:29:19 2024] S dmeventd 6411 2393.343824 128300 120 0.000000 13541.927602 0.000000 0.000000 1 0 /autogroup-125
[Mon Mar 11 14:29:19 2024] S jbd2/dm-102-8 8732 19486067.157920 2021 120 0.000000 868.781011 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2279961 1011902.994770 828 120 0.000000 235.659148 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S rpcd_winreg 2347737 225923.629574 7485002 120 0.000000 1825461.507361 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2400150 1011903.054564 265 120 0.000000 87.155161 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/19:1 2402658 19608286.911586 139868 120 0.000000 9668.678759 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/19:0 2475271 19595419.771327 5 120 0.000000 0.026001 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#20, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2852629227
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 25671
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633450.161536
[Mon Mar 11 14:29:19 2024] .clock_task : 1257094457.580888
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[20]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[20]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/20 138 25632.867284 25 120 0.000000 1.425623 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/20 139 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/20 140 1173.816857 390003 0 0.000000 2032.939139 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/20 141 40444902.519184 26626240 120 0.000000 318014.453433 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/20:0H 143 23.820108 4 100 0.000000 0.022756 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/20:1H 781 40444797.388260 3229792 100 0.000000 69745.633514 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_4 904 1735.685815 25 120 0.000000 6.093795 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_11 923 1712.093830 2 100 0.000000 0.381054 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/116-lpfc:20 955 0.000000 9318267 49 0.000000 181639.257968 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/191-lpfc:20 1010 0.000000 9325964 49 0.000000 182449.194570 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4351 27971.424278 2 100 0.000000 0.065866 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S upsclient 7154 6115.836221 1303786 120 0.000000 138592.310149 0.000000 0.000000 0 0 /autogroup-126
[Mon Mar 11 14:29:19 2024] D nfsd 8804 40444905.590664 118475 120 0.000000 12210.491394 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8841 40377251.850810 2216516 120 0.000000 79303.374470 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88092 10.614597 98 120 0.000000 30.665238 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S agetty 88117 0.819970 18 120 0.000000 60.743827 0.000000 0.000000 0 0 /autogroup-228
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.8 2352500 2360986.791597 211258 120 0.000000 41698.277658 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/20:2 2415685 40444902.893030 238245 120 0.000000 15568.767948 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2461553 2361021.694063 1189 120 0.000000 418.895454 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/20:0 2474728 40438956.586822 12 120 0.000000 0.057860 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#21, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 973798643
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -1736
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633451.162417
[Mon Mar 11 14:29:19 2024] .clock_task : 1281088670.122286
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[21]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.062506
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[21]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] D systemd 1 144507.589599 31252056 120 0.000000 3898869.479139 0.000000 0.000000 1 1 /autogroup-1
[Mon Mar 11 14:29:19 2024] S cpuhp/21 144 25698.411482 25 120 0.000000 0.694450 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/21 145 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/21 146 1241.816861 324561 0 0.000000 1924.545200 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/21 147 19220270.131493 11478007 120 0.000000 112369.656912 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/21:0H 149 92.900758 4 100 0.000000 0.028353 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/21:1H 829 19220269.864386 3237280 100 0.000000 71690.939907 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/117-lpfc:21 956 0.000000 5541480 49 0.000000 128874.523612 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/192-lpfc:21 1011 0.000000 5553172 49 0.000000 129798.137598 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:5 3431 26358.748805 2 100 0.000000 0.011092 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:6 3437 26370.763969 2 100 0.000000 0.017879 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:11 3476 26382.778699 2 100 0.000000 0.017673 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:13 3499 26394.790383 2 100 0.000000 0.014494 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:22 3574 26419.509153 2 100 0.000000 0.013064 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S sssd_pam 4357 158564.306573 260101 120 0.000000 15215.612361 0.000000 0.000000 1 0 /autogroup-78
[Mon Mar 11 14:29:19 2024] S sshd 4412 2764.722592 25472 120 0.000000 15730.520853 0.000000 0.000000 1 0 /autogroup-92
[Mon Mar 11 14:29:19 2024] S wb-idmap 5362 105007.202701 1179982 120 0.000000 316610.932947 0.000000 0.000000 1 0 /autogroup-95
[Mon Mar 11 14:29:19 2024] I kdmflush/253:92 6412 28925.135872 2 100 0.000000 0.063711 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.19.1 2324043 993397.331435 9307 120 0.000000 3831.526501 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.2 2341452 993352.281372 696 120 0.000000 234.585944 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.7 2352778 993352.670934 457 120 0.000000 143.696125 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2377277 993445.463755 463 120 0.000000 177.910535 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/21:0 2425747 19220269.892242 104836 120 0.000000 5100.510376 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.31.1 2432547 993352.640353 169 120 0.000000 70.783218 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] D kworker/u98:17 2468089 19149447.567297 28079 120 0.000000 558.428481 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u98:0 2471807 19208677.733933 99309 120 0.000000 4540.946140 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/21:2 2474756 19208593.509733 11 120 0.000000 0.141297 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#22, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2828003316
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 31570
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633451.163209
[Mon Mar 11 14:29:19 2024] .clock_task : 1258829139.958911
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[22]:/autogroup-113
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 2563344.661088
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -17854321.847027
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 6
[Mon Mar 11 14:29:19 2024] .runnable_avg : 6
[Mon Mar 11 14:29:19 2024] .util_avg : 6
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 6
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 102
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1258829130.225422
[Mon Mar 11 14:29:19 2024] .se->vruntime : 40243007.765829
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 2317918.909840
[Mon Mar 11 14:29:19 2024] .se->load.weight : 51909
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 6
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 6
[Mon Mar 11 14:29:19 2024] cfs_rq[22]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 40243007.765829
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 19825341.257714
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 8
[Mon Mar 11 14:29:19 2024] .runnable_avg : 6
[Mon Mar 11 14:29:19 2024] .util_avg : 6
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[22]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.244233
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[22]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/22 150 21402.380149 25 120 0.000000 1.221941 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/22 151 -11.995205 3 49 0.000000 0.011427 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/22 152 0.000000 383587 0 0.000000 1967.072244 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/22 153 40242994.971666 25427905 120 0.000000 306278.206932 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/22:0H 155 26.115073 4 100 0.000000 0.030861 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/22:1H 766 40242994.961953 3057485 100 0.000000 64809.087501 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_1 897 2174.245846 25 120 0.000000 5.054257 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/118-lpfc:22 957 0.000000 9010652 49 0.000000 175390.608767 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S irq/193-lpfc:22 1012 0.000000 9019551 49 0.000000 176127.281438 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-78-8 4219 40212107.891589 5126 120 0.000000 179.238307 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8784 40242950.005258 69603 120 0.000000 8387.456362 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8860 40144536.158368 24059818 120 0.000000 1013552.127209 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88131 17.460556 1 120 0.000000 0.898839 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88145 30.372599 1 120 0.000000 0.912050 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2279987 2563247.133670 1378 120 0.000000 586.551502 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2376851 2563199.166218 386 120 0.000000 101.924994 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.233. 2455152 2563344.661088 256894 120 0.000000 51451.562581 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/22:2 2458065 40242982.937937 81387 120 0.000000 5647.071224 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/22:1 2474143 40233379.342697 23 120 0.000000 0.081104 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S bash 2474608 1.140863 174 120 0.000000 88.433570 0.000000 0.000000 0 0 /autogroup-23274
[Mon Mar 11 14:29:19 2024] I kworker/22:0 2476753 40241331.317160 5 120 0.000000 0.069463 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S sleep 2478153 85500.673951 1 120 0.000000 1.066047 0.000000 0.000000 0 0 /autogroup-292
[Mon Mar 11 14:29:19 2024] cpu#23, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 952435926
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 785
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334875
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633451.160900
[Mon Mar 11 14:29:19 2024] .clock_task : 1281142140.549718
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[23]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[23]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/23 156 25666.725145 25 120 0.000000 1.041153 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/23 157 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/23 158 1377.822149 323858 0 0.000000 1902.997882 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/23 159 18608895.385798 12287619 120 0.000000 115302.907533 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/23:0H 161 112.099119 4 100 0.000000 0.027491 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/23:1H 858 18608895.379447 3903217 100 0.000000 87479.525904 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_0 869 361.861815 2 120 0.000000 0.005336 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/119-lpfc:23 958 0.000000 6013015 49 0.000000 132947.985111 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S irq/194-lpfc:23 1013 0.000000 6028473 49 0.000000 133837.073157 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S multipathd 3352 24.687224 19 120 0.000000 1.447322 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] I kdmflush/253:16 3520 27086.146499 2 100 0.000000 0.015280 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:19 3544 27160.529421 2 100 0.000000 0.016152 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:20 3554 27172.541157 2 100 0.000000 0.013776 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:33 3667 27186.703773 2 100 0.000000 0.009435 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:73 4086 28148.232554 2 100 0.000000 0.016868 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S auditd 4254 944.938064 251145 116 0.000000 42840.017228 0.000000 0.000000 1 0 /autogroup-62
[Mon Mar 11 14:29:19 2024] I bond0 4392 28372.006056 2 100 0.000000 0.023458 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S JS Helper 5175 11.162914 9 120 0.000000 0.211497 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S jbd2/dm-97-8 6477 18495169.132305 2604 120 0.000000 124.399513 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 6489 29630.607598 2 100 0.000000 0.063328 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ib_mcast 8764 29960.631165 2 100 0.000000 0.071093 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8806 18609016.993217 129194 120 0.000000 13400.081893 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8832 18597529.564306 5346074 120 0.000000 274914.850237 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2346134 919880.063625 3814 120 0.000000 1177.145176 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/23:0 2469472 18609154.536183 21772 120 0.000000 1320.907831 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/23:1 2474524 18598648.636634 9 120 0.000000 0.054034 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S sshd 2474605 344.020612 7161 120 0.000000 458.950673 0.000000 0.000000 1 0 /autogroup-23273
[Mon Mar 11 14:29:19 2024] I kworker/23:2 2477164 18598660.621122 2 120 0.000000 0.010650 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#24, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 1945978603
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 7002
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334875
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633451.161485
[Mon Mar 11 14:29:19 2024] .clock_task : 1260960834.948137
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[24]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 31682885.436619
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 11265218.928504
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[24]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[24]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/24 162 23393.664740 25 120 0.000000 1.542294 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/24 163 -12.000000 3 49 0.000000 0.003321 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/24 164 1445.822873 376915 0 0.000000 2100.214786 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/24 165 31682504.570533 6107745 120 0.000000 129659.113170 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/24:0H 167 1118.798300 4 100 0.000000 0.047430 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S kcompactd0 364 31682198.784436 39248453 120 0.000000 43301648.709535 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kblockd 370 53.253502 2 100 0.000000 0.001368 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/24:1H 768 31682872.632778 1213017 100 0.000000 35678.223987 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_2 901 2713.897643 2 100 0.000000 0.012790 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_10 920 2722.756590 27 120 0.000000 4.679658 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcbind 4250 279.561892 56341 120 0.000000 9059.901200 0.000000 0.000000 0 0 /autogroup-61
[Mon Mar 11 14:29:19 2024] S nsrexecd 5346 3774.551353 643778 120 0.000000 30115.929347 0.000000 0.000000 0 5346 /autogroup-117
[Mon Mar 11 14:29:19 2024] S jbd2/dm-93-8 6467 31523220.237211 518 120 0.000000 27.247637 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8833 31604688.052593 686363 120 0.000000 47271.239267 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.3 2341031 2240546.214685 564 120 0.000000 181.558215 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2344554 2240544.812087 781 120 0.000000 200.331122 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.9 2432065 2240546.215642 178 120 0.000000 75.915833 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.31.4 2433848 2240505.044033 154 120 0.000000 60.890954 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/24:2 2453510 31682873.369693 32225 120 0.000000 2143.587188 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/24:1 2474838 31672624.576239 5 120 0.000000 0.068126 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2478224 21971.978621 50 120 0.000000 81.756013 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#25, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 934302642
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -22436
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334873
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633451.162025
[Mon Mar 11 14:29:19 2024] .clock_task : 1281056889.770396
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[25]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[25]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/25 168 24885.855059 25 120 0.000000 0.768376 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/25 169 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/25 170 1513.822841 323311 0 0.000000 1902.890051 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/25 171 20632998.120255 1885551 120 0.000000 58547.705586 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/25:0H 173 1587.428642 4 100 0.000000 0.045985 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_8 917 1538.904813 2 100 0.000000 0.012754 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_10 921 1550.914652 2 100 0.000000 0.011461 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/25:1H 962 20632471.806627 996076 100 0.000000 35301.941463 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I dm_bufio_cache 3811 25802.615710 2 100 0.000000 0.006567 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4245 26119.690907 2 100 0.000000 0.015519 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S rpc.idmapd 4301 2950.453399 948878 120 0.000000 123631.196827 0.000000 0.000000 1 0 /autogroup-68
[Mon Mar 11 14:29:19 2024] S rs:action-1-bui 4867 35046.877416 263332 120 0.000000 21148.632049 0.000000 0.000000 1 4838 /autogroup-112
[Mon Mar 11 14:29:19 2024] S nsrexecd 5342 3799.218789 25728 120 0.000000 1553.951151 0.000000 0.000000 1 5342 /autogroup-117
[Mon Mar 11 14:29:19 2024] S dmeventd 8698 2319.804457 151145 120 0.000000 21099.248873 0.000000 0.000000 1 8698 /autogroup-125
[Mon Mar 11 14:29:19 2024] D nfsd 8799 20633199.012571 95725 120 0.000000 10695.949003 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S gdbus 21808 67.110399 2245 120 0.000000 253.886466 0.000000 0.000000 1 0 /autogroup-204
[Mon Mar 11 14:29:19 2024] S smbd[10.91.30.5 86031 2261157.035605 4401802 120 0.000000 3865085.547867 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/25:1 2344356 20633369.242049 136226 120 0.000000 5756.346566 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2472086 2260850.549800 45 120 0.000000 39.249706 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/25:2 2473359 20624769.626038 6 120 0.000000 0.044961 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S journalctl 2475167 380.504202 273 120 0.000000 426.435553 0.000000 0.000000 1 0 /autogroup-23274
[Mon Mar 11 14:29:19 2024] I kworker/25:0 2476365 20624781.604785 2 120 0.000000 0.014551 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#26, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2457302330
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 26766
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.161932
[Mon Mar 11 14:29:19 2024] .clock_task : 1262053718.432981
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[26]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[26]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/26 174 23606.902582 25 120 0.000000 0.760992 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/26 175 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/26 176 1581.822817 363506 0 0.000000 1914.526254 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/26 177 36791242.577690 7382776 120 0.000000 180914.912083 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/26:0H 179 14.881818 4 100 0.000000 0.028332 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/26:1H 770 36791236.458131 1649062 100 0.000000 53845.018222 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_6 908 2137.437479 25 120 0.000000 6.102952 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-81-8 4308 36776987.299253 30974 120 0.000000 4963.794561 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S sssd 4316 76246.349498 42954 120 0.000000 2311.007455 0.000000 0.000000 0 0 /autogroup-78
[Mon Mar 11 14:29:19 2024] D nfsd 8845 36723414.897139 8901131 120 0.000000 282223.184362 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8851 36718568.345560 8652720 120 0.000000 372915.329855 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S colord 21791 142.073761 1066 120 0.000000 373.718129 0.000000 0.000000 0 0 /autogroup-204
[Mon Mar 11 14:29:19 2024] S automount 88101 11.213237 1 120 0.000000 0.261820 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88134 37.019646 1 120 0.000000 0.912702 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88146 49.929411 1 120 0.000000 0.909772 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[134.58.56. 960684 2068301.895701 5384 120 0.000000 1460.852050 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2322491 2068259.870093 5379 120 0.000000 2308.866908 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/26:2 2401639 36791243.818499 257596 120 0.000000 11027.920229 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u97:3 2462779 36789246.926220 379069 120 0.000000 15750.222948 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/26:1 2474460 36786323.377963 9 120 0.000000 0.049250 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/26:0 2477469 36786622.575759 3 120 0.000000 0.061638 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#27, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 1002480564
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -10955
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.161230
[Mon Mar 11 14:29:19 2024] .clock_task : 1281082968.810087
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[27]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[27]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/27 180 26958.097613 25 120 0.000000 0.856703 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/27 181 -11.994604 3 49 0.000000 0.011110 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/27 182 1649.822816 323698 0 0.000000 1851.914656 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/27 183 20569460.594109 2122703 120 0.000000 68419.959615 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/27:0H 185 58.938747 4 100 0.000000 0.021159 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/27:1H 828 20569850.225369 909599 100 0.000000 31243.623932 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nvme_fc_wq 889 2067.194452 2 100 0.000000 0.008398 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I rpciod 2794 6757.087854 2 100 0.000000 0.013709 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I xprtiod 2795 6769.097604 2 100 0.000000 0.011074 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S systemd-udevd 2816 98460.616950 760799 120 0.000000 1248069.917916 0.000000 0.000000 1 0 /autogroup-39
[Mon Mar 11 14:29:19 2024] I kdmflush/253:56 3874 27886.389931 2 100 0.000000 0.008810 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S auditd 4255 731.981661 39721 116 0.000000 1325.433456 0.000000 0.000000 1 0 /autogroup-62
[Mon Mar 11 14:29:19 2024] D nfsd 8786 20569859.045812 74089 120 0.000000 8697.077926 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8853 20500589.645296 10007686 120 0.000000 442380.593217 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/27:2 2144617 20569859.954817 210170 120 0.000000 10137.066621 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2343951 1580634.286320 1118 120 0.000000 250.973953 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S rpcd_winreg 2347933 208314.687886 592979 120 0.000000 150673.165789 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.40.138. 2377602 1580633.988057 341 120 0.000000 118.763343 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.7 2425212 1580633.990081 354 120 0.000000 152.083427 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.8 2459178 1580639.767451 15144 120 0.000000 2988.473061 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/27:0 2473569 20558278.057202 5 120 0.000000 0.028884 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/27:1 2476181 20558693.970875 4 120 0.000000 0.076907 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#28, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2333117815
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 36485
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.161449
[Mon Mar 11 14:29:19 2024] .clock_task : 1266800310.542179
[Mon Mar 11 14:29:19 2024] .avg_idle : 983160
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[28]:/autogroup-113
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 2287115.375726
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -18130551.132389
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 3
[Mon Mar 11 14:29:19 2024] .runnable_avg : 3
[Mon Mar 11 14:29:19 2024] .util_avg : 3
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 3
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 102
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1266800238.818173
[Mon Mar 11 14:29:19 2024] .se->vruntime : 35351482.597983
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 2168728.352752
[Mon Mar 11 14:29:19 2024] .se->load.weight : 122164
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 3
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 3
[Mon Mar 11 14:29:19 2024] cfs_rq[28]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 35351491.742307
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 14933825.234192
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 6
[Mon Mar 11 14:29:19 2024] .runnable_avg : 3
[Mon Mar 11 14:29:19 2024] .util_avg : 3
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[28]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.741596
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[28]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/28 186 24467.327419 25 120 0.000000 0.931473 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/28 187 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/28 188 1717.822791 359161 0 0.000000 1918.506629 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/28 189 35351472.574412 6496166 120 0.000000 166193.264532 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/28:0H 191 100.572124 4 100 0.000000 0.021484 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kthrotld 388 60.224546 2 100 0.000000 0.015372 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/28:1H 797 35351479.743890 1473752 100 0.000000 48966.040525 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kmpathd 3346 25437.931795 2 100 0.000000 0.063045 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-85-8 4240 35218792.671883 1269 120 0.000000 98.149423 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8783 35351485.974431 68107 120 0.000000 8304.030129 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8800 35351491.742307 100784 120 0.000000 11085.091832 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8811 35351488.913213 188046 120 0.000000 18087.651476 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8819 35351098.469205 1475231 120 0.000000 80038.218149 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8821 35346689.963250 1355027 120 0.000000 77645.905253 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8872 35264053.263806 1371523107 120 0.000000 33490756.107842 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S gmain 21805 11.207902 2 120 0.000000 0.256487 0.000000 0.000000 0 0 /autogroup-204
[Mon Mar 11 14:29:19 2024] S automount 88149 11.861608 1 120 0.000000 0.910191 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.3 2193788 2287000.001916 1035 120 0.000000 279.775300 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.8 2346897 2286985.682236 620 120 0.000000 191.621440 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2347638 2287106.006442 13988 120 0.000000 5904.163319 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.86.26.7 2366328 2287088.692190 368 120 0.000000 105.971223 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/28:0 2409584 35351479.876784 217220 120 0.000000 11690.359957 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.2 2440138 2286999.860277 130 120 0.000000 53.316142 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/28:2 2474525 35348048.447079 8 120 0.000000 0.052003 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/28:1 2477534 35348532.167907 4 120 0.000000 0.069155 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#29, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 973699126
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -8284
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334875
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.160657
[Mon Mar 11 14:29:19 2024] .clock_task : 1281130905.050549
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[29]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[29]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/29 192 27336.035836 25 120 0.000000 0.640881 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/29 193 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/29 194 1785.822790 323910 0 0.000000 1933.488315 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/29 195 19780704.777553 2053919 120 0.000000 66844.935710 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/29:0H 197 644.947209 4 100 0.000000 0.014916 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/29:1H 877 19781749.635358 833808 100 0.000000 28560.002417 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nvme-reset-wq 880 706.132822 2 100 0.000000 0.006370 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_3 902 2343.549794 25 120 0.000000 3.806096 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_3 903 1651.005638 2 100 0.000000 0.014333 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_4 905 1675.023417 2 100 0.000000 0.011040 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_5 907 1687.033632 2 100 0.000000 0.012448 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:18 3534 27900.784849 2 100 0.000000 0.014036 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:27 3612 27948.608761 2 100 0.000000 0.014517 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4265 28332.975251 2 100 0.000000 0.016037 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S avahi-daemon 4321 11.468531 2 120 0.000000 0.517116 0.000000 0.000000 1 0 /autogroup-70
[Mon Mar 11 14:29:19 2024] S sssd_autofs 4358 137116.608098 264924 120 0.000000 18254.029467 0.000000 0.000000 1 0 /autogroup-78
[Mon Mar 11 14:29:19 2024] I dm-thin 6395 28875.814440 2 100 0.000000 0.063019 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S dmeventd 6410 2145.772838 144146 102 0.000000 22017.259595 0.000000 0.000000 1 6410 /autogroup-125
[Mon Mar 11 14:29:19 2024] D nfsd 8793 19781761.546732 81524 120 0.000000 9286.620505 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8836 19766206.005765 4192253 120 0.000000 241849.341168 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S gdbus 21426 25.154925 161 120 0.000000 37.257953 0.000000 0.000000 1 21426 /autogroup-154
[Mon Mar 11 14:29:19 2024] S automount 88139 24.640968 1 120 0.000000 0.787324 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.7 2428733 1325662.905937 387 120 0.000000 177.665898 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/29:0 2436183 19781749.645167 61082 120 0.000000 3797.869688 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.140. 2453653 1325650.602636 96 120 0.000000 51.466211 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/29:2 2474780 19775520.754109 3 120 0.000000 0.018825 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#30, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 3769795270
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 39193
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334875
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.161040
[Mon Mar 11 14:29:19 2024] .clock_task : 1252314208.909423
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[30]:/autogroup-113
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 1827811.301253
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -18589855.206862
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 102
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1252314107.747249
[Mon Mar 11 14:29:19 2024] .se->vruntime : 54713088.078382
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 1667204.892016
[Mon Mar 11 14:29:19 2024] .se->load.weight : 2
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 0
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] cfs_rq[30]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 54713099.649567
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 34295433.141452
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[30]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[30]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/30 198 27707.242305 25 120 0.000000 0.883592 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/30 199 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/30 200 1853.822765 375246 0 0.000000 1914.133523 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/30 201 54713067.203589 12832494 120 0.000000 296928.222491 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/30:0H 203 40.610924 4 100 0.000000 0.046357 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/30:1H 773 54713087.649879 2047597 100 0.000000 57970.744443 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_11 922 2906.284561 25 120 0.000000 3.021279 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I scsi_wq_15 961 2916.915135 2 100 0.000000 0.006923 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ipmi-msghandler 3119 9177.907899 2 100 0.000000 0.007451 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:26 3610 30094.425256 2 100 0.000000 0.015141 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:48 3806 30441.338818 2 100 0.000000 0.009726 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ipmievd 4318 1279.475180 257116 120 0.000000 11973.922805 0.000000 0.000000 0 0 /autogroup-74
[Mon Mar 11 14:29:19 2024] D nfsd 8785 54713087.068428 72158 120 0.000000 8555.827205 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8803 54713084.529861 117561 120 0.000000 12174.146490 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8814 54713090.411096 326877 120 0.000000 26562.260384 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8815 54712918.239335 388488 120 0.000000 30889.130054 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8818 54712552.489522 1013118 120 0.000000 58538.292722 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8825 54696392.769622 2500570 120 0.000000 130849.129639 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8828 54600869.267487 249094 120 0.000000 19620.432999 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8831 54654008.177722 1588536 120 0.000000 95579.359611 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8861 54591761.355245 29213313 120 0.000000 1208432.697269 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8875 54569428.184409 7082610996 120 0.000000 175537379.985343 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.31.1 1059267 1827799.697405 5257 120 0.000000 1524.895754 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] D kworker/30:0 2364267 54712989.727112 345513 120 0.000000 14561.467284 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u97:1 2451949 54713099.649567 723816 120 0.000000 15800.529215 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/30:1 2473379 54688989.156641 10 120 0.000000 0.059539 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/30:2 2475882 54713087.662033 17 120 0.000000 0.125553 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#31, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 934247363
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -5035
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633452.162406
[Mon Mar 11 14:29:19 2024] .clock_task : 1281176641.859853
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[31]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 18913167.665467
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -1504498.842648
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[31]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[31]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/31 204 23183.738052 25 120 0.000000 1.984444 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/31 205 -11.994447 3 49 0.000000 0.233381 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/31 207 1918.044378 324029 0 0.000000 1921.749222 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/31 208 18912382.515114 1955950 120 0.000000 63903.878297 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/31:0H 210 67.506584 4 100 0.000000 0.034039 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/31:1H 803 18912464.158320 782302 100 0.000000 27164.626072 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:87 4166 23293.393835 2 100 0.000000 0.017439 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S f2b/observer 2470448 23278.646362 1 120 0.000000 0.168234 0.000000 0.000000 1 0 /autogroup-96
[Mon Mar 11 14:29:19 2024] S gdbus 5165 1933.257011 311267 120 0.000000 45341.992410 0.000000 0.000000 1 4667 /autogroup-104
[Mon Mar 11 14:29:19 2024] S JS Helper 5180 11.205943 10 120 0.000000 0.254526 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S uwsgi 5236 3425.517111 226 120 0.000000 185.220898 0.000000 0.000000 1 0 /autogroup-90
[Mon Mar 11 14:29:19 2024] D nfsd 8792 18913162.419960 84339 120 0.000000 9440.605656 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8813 18913162.628026 261677 120 0.000000 23211.393110 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/31:2 2436168 18913154.991206 61335 120 0.000000 3433.530478 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2441978 1216497.375004 220 120 0.000000 92.346038 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2457400 1216472.702692 111 120 0.000000 60.372911 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/u96:5 2466033 18913167.665467 85607 120 0.000000 32767.313757 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/31:0 2474534 18890962.520235 3 120 0.000000 0.029093 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#32, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 3058688882
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 15636
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.161490
[Mon Mar 11 14:29:19 2024] .clock_task : 1260420965.487018
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[32]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[32]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/32 211 24602.849659 25 120 0.000000 0.671807 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/32 212 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/32 213 1986.044354 380976 0 0.000000 2045.251086 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/32 214 45066090.409469 10342211 120 0.000000 234618.394814 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/32:0H 216 39.602572 4 100 0.000000 0.013003 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I writeback 363 13.400695 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I tpm_dev_wq 372 13.400695 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/32:1H 406 45066114.141864 1761136 100 0.000000 52233.979953 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I poll_megasas0_s 931 2038.309746 2 100 0.000000 0.027107 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:51 3837 25593.905418 2 100 0.000000 0.018228 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:89 4172 25787.071043 2 100 0.000000 0.016299 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-83-8 4350 45066114.349151 303135 120 0.000000 18348.000104 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8782 45066119.378656 67460 120 0.000000 8251.485627 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8810 45066126.141449 164146 120 0.000000 16587.517234 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8847 44984526.578282 9036579 120 0.000000 357315.796423 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S automount 88127 23.939798 1 120 0.000000 0.882379 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88159 62.611349 1 120 0.000000 0.850809 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] I kworker/32:0 2344368 45066114.208484 378163 120 0.000000 18597.235107 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2344564 2011560.787408 477 120 0.000000 148.359065 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.18.2 2347077 2011637.083704 909 120 0.000000 307.074769 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2395603 2011560.760869 310 120 0.000000 98.868814 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/32:1 2470837 45050189.265983 21 120 0.000000 0.076223 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/32:2 2475859 45059554.686693 21 120 0.000000 0.199284 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#33, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 911324112
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -3649
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.160511
[Mon Mar 11 14:29:19 2024] .clock_task : 1281205568.591957
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[33]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[33]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/33 217 21359.466636 25 120 0.000000 0.676036 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/33 218 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/33 219 2054.049488 323947 0 0.000000 1940.884327 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/33 220 18522501.468156 1881932 120 0.000000 61610.801756 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/33:0H 222 690.274881 4 100 0.000000 0.012629 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S kauditd 356 18522609.129157 349285 120 0.000000 4925.355065 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/33:1H 874 18522609.519504 805238 100 0.000000 27183.307839 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S writer#3 5331 425961.043751 55300298 120 0.000000 1456522.760828 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S jbd2/dm-98-8 6487 18413323.890250 539 120 0.000000 28.676716 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:10 8719 23285.211605 2 100 0.000000 0.007915 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8844 18469903.898437 7148783 120 0.000000 214334.022179 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8852 18470784.612419 9758338 120 0.000000 439552.047941 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8858 18463784.126770 17912269 120 0.000000 783911.070444 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.2 2370451 1105561.267752 379 120 0.000000 125.485170 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.6 2419992 1105560.908876 405 120 0.000000 107.696804 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/33:1 2434371 18522620.201079 70707 120 0.000000 4725.174632 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u98:10 2463433 18522630.962612 224485 120 0.000000 4782.863496 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/33:2 2474557 18519125.341171 10 120 0.000000 0.054772 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/33:0 2477675 18519137.328252 2 120 0.000000 0.012966 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#34, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2857202473
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 26823
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334878
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.163247
[Mon Mar 11 14:29:19 2024] .clock_task : 1262053899.391721
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[34]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[34]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/34 223 22765.629809 25 120 0.000000 0.814599 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/34 224 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/34 225 2122.049463 368590 0 0.000000 1978.759386 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/34 226 42202131.591786 9172509 120 0.000000 219827.863105 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/34:0H 228 76.825262 4 100 0.000000 0.031952 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I acpi_thermal_pm 400 76.534164 2 100 0.000000 0.017320 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/34:1H 745 42202235.358631 1655644 100 0.000000 49251.726479 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:44 3770 23966.769205 2 100 0.000000 0.017563 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-79-8 4270 42202235.717902 999072 120 0.000000 52365.683002 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-67-8 4276 42202235.528436 523000 120 0.000000 22523.730329 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S NetworkManager 4349 2686.665908 486205 120 0.000000 65653.587121 0.000000 0.000000 0 4359 /autogroup-82
[Mon Mar 11 14:29:19 2024] S uwsgi 5238 664.290160 391 120 0.000000 492.952488 0.000000 0.000000 0 0 /autogroup-90
[Mon Mar 11 14:29:19 2024] I kworker/34:0 2078485 42202235.384938 931800 120 0.000000 30858.925314 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.5 2331098 2229876.546800 517 120 0.000000 145.288568 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S samba-bgqd 2339818 19243.478080 10803 120 0.000000 16447.852718 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S smbd[10.33.22.1 2349497 2229666.987357 486 120 0.000000 154.500876 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.31.1 2396394 2229558.831884 305 120 0.000000 102.112379 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.234. 2404568 2229912.682394 3599 120 0.000000 1319.976481 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/34:2 2475317 42197051.969339 18 120 0.000000 0.087815 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#35, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 874229258
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -2501
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334875
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.158114
[Mon Mar 11 14:29:19 2024] .clock_task : 1281231960.582846
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[35]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[35]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/35 229 21160.012116 25 120 0.000000 0.792289 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/35 230 -12.000000 3 49 0.000000 0.001287 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/35 231 0.000000 323614 0 0.000000 1919.440054 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/35 232 17769333.195118 1765668 120 0.000000 58048.002825 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/35:0H 234 46.667683 4 100 0.000000 0.014217 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/35:1H 620 17769643.424640 773910 100 0.000000 26580.944892 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nvme-delete-wq 881 921.768551 2 100 0.000000 0.007748 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_12 925 1771.749158 2 100 0.000000 0.012430 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_14 929 1795.767151 2 100 0.000000 0.010589 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:79 4153 21813.120382 2 100 0.000000 0.019452 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S gssproxy 4420 1727.019608 38893 120 0.000000 2837.125562 0.000000 0.000000 1 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S gssproxy 4426 11.009374 2 120 0.000000 0.057959 0.000000 0.000000 1 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S gssproxy 4427 23.045582 1 120 0.000000 0.036217 0.000000 0.000000 1 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S gssproxy 4428 35.079440 1 120 0.000000 0.033867 0.000000 0.000000 1 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S JS Helper 5181 -5.349679 11 120 0.000000 0.362890 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] D nfsd 8824 17769624.287672 3701991 120 0.000000 191960.796469 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8865 17694268.332316 68392850 120 0.000000 2480606.148617 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.20.2 3797717 1018863.671099 18954 120 0.000000 7691.558275 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/35:0 2344315 17769643.464374 138933 120 0.000000 6886.987572 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.131. 2365796 1018880.141597 391 120 0.000000 128.296416 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2458024 1018863.915275 76 120 0.000000 40.091002 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/35:1 2475663 17767524.203193 8 120 0.000000 0.032187 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2477388 202938.028752 1626 120 0.000000 520.201286 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#36, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2694265997
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 26995
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.161253
[Mon Mar 11 14:29:19 2024] .clock_task : 1264113913.150948
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[36]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[36]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/36 235 26429.055755 25 120 0.000000 0.727715 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/36 236 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/36 237 2258.054608 362728 0 0.000000 1944.725306 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/36 238 40162136.741591 8034022 120 0.000000 200149.660796 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/36:0H 240 318.331682 4 100 0.000000 0.032690 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kaluad 402 78.978709 2 100 0.000000 0.059950 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/36:1H 801 40162641.173472 1527143 100 0.000000 45688.673396 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:59 3893 28079.646107 2 100 0.000000 0.010937 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-33-8 4130 40162643.394077 1765898 120 0.000000 140931.592836 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd-notifyd 4819 2321249.010629 3385071 120 0.000000 181688.331842 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S systemd 5558 7121.970550 16618 120 0.000000 97074.074352 0.000000 0.000000 0 0 /autogroup-121
[Mon Mar 11 14:29:19 2024] D nfsd 8794 40162642.132851 85773 120 0.000000 9585.592461 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8822 40116193.065430 308578 120 0.000000 28704.577400 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/36:1 2364257 40162641.219761 284801 120 0.000000 18733.324188 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/36:0 2472450 40150655.534616 8 120 0.000000 0.041589 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u96:6 2472989 40162136.799824 49140 120 0.000000 15610.415321 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/36:2 2476104 40156521.875964 4 120 0.000000 0.069811 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#37, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 863794615
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -2274
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633453.161099
[Mon Mar 11 14:29:19 2024] .clock_task : 1281269055.831703
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[37]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[37]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/37 241 29446.555105 25 120 0.000000 0.857971 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/37 242 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/37 243 2326.054608 324178 0 0.000000 1949.245636 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/37 244 17661531.607683 1748896 120 0.000000 57209.549970 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/37:0H 246 1776.459651 4 100 0.000000 0.029970 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nvme-auth-wq 882 967.825675 2 100 0.000000 0.008149 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_12 924 1866.356119 26 120 0.000000 2.526508 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/37:1H 930 17661889.419846 767198 100 0.000000 26604.845528 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I lpfc_wq 932 1835.035017 2 100 0.000000 0.010705 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:21 3570 30568.382664 2 100 0.000000 0.009171 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:86 4162 30627.871120 2 100 0.000000 0.017731 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4327 31101.213258 2 100 0.000000 0.017887 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S f2b/f.samba 4439 23163.522760 2566672 120 0.000000 331096.791605 0.000000 0.000000 1 4439 /autogroup-96
[Mon Mar 11 14:29:19 2024] S gssproxy 4429 1396.806687 43322 120 0.000000 39309.365681 0.000000 0.000000 1 0 /autogroup-98
[Mon Mar 11 14:29:19 2024] S JS Helper 5179 -12.733809 12 120 0.000000 0.348841 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S reader#3 5336 425603.275060 495971 120 0.000000 409995.791717 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] D nfsd 8790 17661822.765779 79087 120 0.000000 9030.227870 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8855 17591330.602846 12034855 120 0.000000 531625.374692 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S dbus 21786 70.151293 112 120 0.000000 83.086232 0.000000 0.000000 1 0 /autogroup-87
[Mon Mar 11 14:29:19 2024] S automount 88154 11.945083 1 120 0.000000 0.993666 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88155 24.717274 1 120 0.000000 0.772198 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.3 2193786 990875.619912 1032 120 0.000000 283.353459 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.8 2325987 990875.861352 687 120 0.000000 234.835815 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.5 2346299 990851.613523 480 120 0.000000 151.753083 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/37:1 2411099 17661810.851206 109814 120 0.000000 7504.494947 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.146. 2456493 990875.742714 100 120 0.000000 56.896699 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/37:0 2474527 17657073.411967 10 120 0.000000 0.066711 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/37:2 2476855 17660704.148651 5 120 0.000000 0.025531 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_classic 2477044 226134.995644 110 120 0.000000 71.523731 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2477392 226133.047902 57 120 0.000000 70.037449 0.000000 0.000000 1 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#38, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2411982442
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 34366
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633454.157240
[Mon Mar 11 14:29:19 2024] .clock_task : 1266729442.249762
[Mon Mar 11 14:29:19 2024] .avg_idle : 840996
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[38]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[38]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/38 247 29367.584529 25 120 0.000000 1.102400 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/38 248 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/38 249 2394.054584 360036 0 0.000000 1952.378965 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/38 250 35897296.199335 6740495 120 0.000000 177167.053115 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/38:0H 252 275.785696 4 100 0.000000 0.018007 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S watchdogd 376 0.000000 2 49 0.000000 0.001131 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/38:1H 782 35897285.454785 1390661 100 0.000000 43730.675767 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_9 918 3099.896301 25 120 0.000000 1.918330 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_13 927 3096.260663 2 100 0.000000 0.049846 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:54 3851 30477.037714 2 100 0.000000 0.016385 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S uwsgi 5237 400.212737 295 120 0.000000 329.201492 0.000000 0.000000 0 0 /autogroup-90
[Mon Mar 11 14:29:19 2024] S (sd-pam) 5572 11.653767 1 120 0.000000 0.702352 0.000000 0.000000 0 0 /autogroup-121
[Mon Mar 11 14:29:19 2024] D nfsd 8816 35897297.456099 543826 120 0.000000 37992.093535 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8826 35864240.192841 578969 120 0.000000 45585.767320 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8839 35833966.301340 1150071 120 0.000000 57664.031683 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8846 35859524.401393 11806699 120 0.000000 436337.699240 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 1049691 2391400.329546 13982 120 0.000000 3883.138043 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/38:2 2364258 35897336.059223 275122 120 0.000000 14109.874838 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S sshd 2371682 0.039535 53 120 0.000000 11.867012 0.000000 0.000000 0 0 /autogroup-23021
[Mon Mar 11 14:29:19 2024] S smbd[10.87.19.4 2434016 2391457.809595 842 120 0.000000 301.613891 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.33.88.9 2464105 2391469.663007 1640 120 0.000000 1354.336849 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/38:1 2475692 35893456.662804 6 120 0.000000 0.073610 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.40.128. 2478178 2391483.986499 625 120 0.000000 54.845725 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] cpu#39, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 845617600
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -1266
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633454.161323
[Mon Mar 11 14:29:19 2024] .clock_task : 1281284910.810937
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[39]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 17343149.665584
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -3074516.842531
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 2
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[39]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[39]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/39 253 25042.299856 25 120 0.000000 0.639678 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/39 254 -11.995353 3 49 0.000000 0.010391 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/39 255 2462.054589 324307 0 0.000000 1941.529853 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/39 256 17342017.237773 1703732 120 0.000000 56103.106457 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/39:0H 258 136.379023 4 100 0.000000 0.028311 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/39:1H 765 17343137.665903 750770 100 0.000000 26398.818303 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S lpfc_worker_1 989 17343137.665723 284763 100 0.000000 5426.790937 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S multipathd 3354 0.000000 12 0 0.000000 11.094414 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] S avahi-daemon 4304 428.281844 93423 120 0.000000 15830.156555 0.000000 0.000000 1 0 /autogroup-70
[Mon Mar 11 14:29:19 2024] S nsrexecd 5339 7672.943725 475447 120 0.000000 34733.376497 0.000000 0.000000 1 5339 /autogroup-117
[Mon Mar 11 14:29:19 2024] S jbd2/dm-95-8 6488 17248764.126110 671 120 0.000000 208.657763 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:99 8694 26600.750779 2 100 0.000000 0.063299 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ib-comp-wq 8762 26636.949487 2 100 0.000000 0.065486 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8798 17343149.665584 93175 120 0.000000 10395.675057 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8823 17343148.085731 2973818 120 0.000000 147797.809136 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8859 17303619.863031 20778866 120 0.000000 898391.855561 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88121 2653.399424 13 120 0.000000 2.245948 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S tlsmgr 158515 1539.398420 4346 120 0.000000 953.044802 0.000000 0.000000 1 0 /autogroup-115
[Mon Mar 11 14:29:19 2024] I kworker/39:2 2251290 17343137.733346 174864 120 0.000000 12516.844085 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.8 2372510 949322.698281 384 120 0.000000 121.010069 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/39:0 2471904 17342219.118298 5 120 0.000000 0.550950 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/39:1 2478206 17342231.085874 2 120 0.000000 0.060302 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#40, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2980246779
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 24150
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334878
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633454.161538
[Mon Mar 11 14:29:19 2024] .clock_task : 1257562675.387490
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[40]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[40]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/40 259 22774.414769 25 120 0.000000 1.054486 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/40 260 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/40 261 2530.056051 394260 0 0.000000 2106.751273 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/40 262 44019337.022821 10993228 120 0.000000 238958.281123 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/40:0H 264 70.589518 4 100 0.000000 0.084035 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kintegrityd 369 51.034898 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I md_bitmap 374 51.036105 2 100 0.000000 0.001207 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/40:1H 804 44019336.969523 1671018 100 0.000000 47751.898571 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_2 900 2889.559709 25 120 0.000000 4.466813 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_14 928 2890.311214 25 120 0.000000 4.418888 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-77-8 4353 44019340.410796 1682917 120 0.000000 464110.420386 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:97 6433 26014.861775 2 100 0.000000 0.014046 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-101-8 8746 43860152.185757 13470 120 0.000000 3385.568781 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8797 44019348.969030 92422 120 0.000000 10285.420367 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8835 43950074.644499 1033655 120 0.000000 61019.596108 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8840 43941588.794471 1931440 120 0.000000 84935.159947 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8857 43932394.829460 15366373 120 0.000000 676417.528097 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8869 43929480.450118 283550159 120 0.000000 8285285.827066 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.86.26.7 348298 2282803.750072 36042 120 0.000000 10191.207203 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 1045161 2282504.598515 5109 120 0.000000 1412.250436 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/40:0 2193065 44019337.054720 584792 120 0.000000 21168.813802 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2330921 2282464.301928 2283 120 0.000000 793.827340 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.1 2339621 2282447.267549 1073 120 0.000000 513.012288 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.231. 2433966 2282521.660267 5322 120 0.000000 2847.313292 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.33.88.4 2443657 2282726.707646 122 120 0.000000 49.163112 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/40:1 2475325 44018722.803022 16 120 0.000000 0.147345 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/40:2 2478207 44019024.950793 3 120 0.000000 0.017096 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2478220 18636.984234 143 120 0.000000 96.879171 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#41, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 769033688
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -2153
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334876
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633454.160581
[Mon Mar 11 14:29:19 2024] .clock_task : 1281380260.022644
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[41]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[41]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/41 265 22916.688052 25 120 0.000000 0.850472 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/41 266 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/41 267 2598.056055 325773 0 0.000000 2098.472928 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/41 268 15809663.340352 1653137 120 0.000000 54863.923639 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/41:0H 270 153.424794 4 100 0.000000 0.054856 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/41:1H 760 15809849.989686 942541 100 0.000000 32671.621713 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ixgbe 886 2410.191218 2 100 0.000000 0.010429 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nfit 3237 22516.800348 2 100 0.000000 0.009425 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S multipathd 3355 0.000000 42958 0 0.000000 1095.175132 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] I kdmflush/253:25 3598 23654.529860 2 100 0.000000 0.010061 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4243 23864.506328 2 100 0.000000 0.034406 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S writer#1 5329 277180.368642 55225428 120 0.000000 1457581.827934 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 8733 25341.513119 2 100 0.000000 0.065460 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S nfsclients.sh 91766 289400.280209 212204 120 0.000000 316873.551136 0.000000 0.000000 1 0 /autogroup-292
[Mon Mar 11 14:29:19 2024] S bash 2371683 95.410554 40 120 0.000000 39.418602 0.000000 0.000000 1 0 /autogroup-23022
[Mon Mar 11 14:29:19 2024] S smbd[10.40.147. 2377087 853732.743590 61578 120 0.000000 22680.039593 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/41:1 2408233 15809851.021005 177638 120 0.000000 6159.061412 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2419997 853787.191465 215 120 0.000000 77.941484 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.136. 2459316 853749.947129 87 120 0.000000 47.726812 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/41:0 2475205 15808064.909054 4 120 0.000000 0.031815 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D kworker/u98:2 2476491 15809851.840965 40438 120 0.000000 1287.117784 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/41:2 2477809 15808088.892049 2 120 0.000000 0.002833 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#42, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2996328903
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 23982
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334879
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633454.163040
[Mon Mar 11 14:29:19 2024] .clock_task : 1259162399.048103
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[42]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[42]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/42 271 23770.590731 25 120 0.000000 1.112376 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/42 272 -11.996193 3 49 0.000000 0.008634 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/42 273 2666.056029 376584 0 0.000000 1978.490589 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/42 274 44313378.415395 10284550 120 0.000000 234271.575233 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/42:0H 276 405.090127 4 100 0.000000 0.017486 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I cryptd 368 33.956059 2 100 0.000000 0.001297 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I edac-poller 375 45.957394 2 100 0.000000 0.001341 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kmpath_rdacd 401 70.023013 2 100 0.000000 0.060965 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/42:1H 805 44313535.668704 1664246 100 0.000000 47711.767500 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_7 914 1981.205596 25 120 0.000000 3.018351 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I scsi_tmf_7 915 1978.339045 2 100 0.000000 0.061972 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:1 3216 19623.395132 2 100 0.000000 0.008752 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ext4-rsv-conver 4241 24415.520577 2 100 0.000000 0.069758 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D in:imjournal 4837 1018.808474 1549844 120 0.000000 494163.301716 0.000000 0.000000 0 4837 /autogroup-112
[Mon Mar 11 14:29:19 2024] S nsrexecd 5347 4562.820660 256334 120 0.000000 8065.031771 0.000000 0.000000 0 5347 /autogroup-117
[Mon Mar 11 14:29:19 2024] D nfsd 8812 44312943.970944 205781 120 0.000000 20048.371105 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8870 44214210.515615 460088797 120 0.000000 12545690.671023 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8871 44212554.801884 778478749 120 0.000000 20032936.803347 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S gmain 21425 11.129420 1 120 0.000000 0.178004 0.000000 0.000000 0 0 /autogroup-154
[Mon Mar 11 14:29:19 2024] S automount 88140 11.871090 1 120 0.000000 0.919673 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S automount 88164 24.730710 1 120 0.000000 0.859627 0.000000 0.000000 0 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.40.230. 2340528 2874680.406835 37051 120 0.000000 13523.823190 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.39.226. 2411168 2874650.986950 24300 120 0.000000 2759.730043 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/42:0 2427884 44313535.694543 133518 120 0.000000 9601.413689 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/42:2 2474171 44307997.269374 9 120 0.000000 0.095482 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/42:1 2477540 44311426.797706 7 120 0.000000 0.073412 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2478222 18709.324821 69 120 0.000000 69.818768 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#43, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .nr_switches : 852520640
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -1295
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334931
[Mon Mar 11 14:29:19 2024] .curr->pid : 2792
[Mon Mar 11 14:29:19 2024] .clock : 1282633455.160727
[Mon Mar 11 14:29:19 2024] .clock_task : 1281307282.033299
[Mon Mar 11 14:29:19 2024] .avg_idle : 924089
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[43]:/autogroup-29
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 5369.166860
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -20412297.341255
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .h_nr_running : 1
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 1048576
[Mon Mar 11 14:29:19 2024] .load_avg : 32
[Mon Mar 11 14:29:19 2024] .runnable_avg : 32
[Mon Mar 11 14:29:19 2024] .util_avg : 32
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 32
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 32
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] .se->exec_start : 1281307282.033299
[Mon Mar 11 14:29:19 2024] .se->vruntime : 17451283.075921
[Mon Mar 11 14:29:19 2024] .se->sum_exec_runtime : 5347.659787
[Mon Mar 11 14:29:19 2024] .se->load.weight : 1048576
[Mon Mar 11 14:29:19 2024] .se->avg.load_avg : 43
[Mon Mar 11 14:29:19 2024] .se->avg.util_avg : 32
[Mon Mar 11 14:29:19 2024] .se->avg.runnable_avg : 32
[Mon Mar 11 14:29:19 2024] cfs_rq[43]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 17451283.075921
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -2966383.432194
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .h_nr_running : 1
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 1048576
[Mon Mar 11 14:29:19 2024] .load_avg : 43
[Mon Mar 11 14:29:19 2024] .runnable_avg : 32
[Mon Mar 11 14:29:19 2024] .util_avg : 32
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 143
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[43]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[43]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/43 277 27597.030351 25 120 0.000000 0.898476 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/43 278 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/43 279 2734.056023 323730 0 0.000000 1942.974615 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/43 280 17450792.576998 1723775 120 0.000000 56842.083760 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/43:0H 282 36.451975 4 100 0.000000 0.154937 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/43:1H 728 17450633.834340 886926 100 0.000000 30777.899285 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] >R systemd-journal 2792 5369.166860 7467532 120 0.000000 617171.090881 0.000000 0.000000 1 2792 /autogroup-29
[Mon Mar 11 14:29:19 2024] S dbus-broker-lau 4294 5406.993099 402 120 0.000000 78.073023 0.000000 0.000000 1 0 /autogroup-65
[Mon Mar 11 14:29:19 2024] S jbd2/dm-88-8 4330 17345548.757446 8756 120 0.000000 1451.682965 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S JS Helper 5177 11.109554 11 120 0.000000 0.158137 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S master 4982 943.815980 96965 120 0.000000 13883.092393 0.000000 0.000000 1 0 /autogroup-115
[Mon Mar 11 14:29:19 2024] D nfsd 8838 17414572.885648 2472995 120 0.000000 129045.613813 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.88.18.1 3000371 869127.471874 1362984 120 0.000000 87769.378646 0.000000 0.000000 1 3000371 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2341353 869127.560035 655 120 0.000000 234.201396 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S sshd 2371675 78.317121 24 120 0.000000 84.581467 0.000000 0.000000 1 0 /autogroup-23021
[Mon Mar 11 14:29:19 2024] I kworker/43:2 2414694 17451263.645045 114830 120 0.000000 6441.889371 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.20.1 2462719 869038.676470 638 120 0.000000 221.309743 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/u98:12 2464235 17451120.593081 234066 120 0.000000 6753.054198 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/43:1 2471690 17426068.129165 9 120 0.000000 0.052791 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#44, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2816030255
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 29153
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334878
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633455.161070
[Mon Mar 11 14:29:19 2024] .clock_task : 1262384626.708904
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] rt_rq[44]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[44]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/44 283 23515.869837 25 120 0.000000 0.828902 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/44 284 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/44 285 2802.055998 367433 0 0.000000 1996.863011 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/44 286 42157696.977957 9077806 120 0.000000 217146.834413 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/44:0H 288 280.322767 4 100 0.000000 0.017773 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S oom_reaper 362 52.198947 2 120 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I blkcg_punt_bio 371 52.198947 2 100 0.000000 0.000000 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I ipv6_addrconf 407 88.735838 2 100 0.000000 0.047464 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u99:0 461 181.140492 2 100 0.000000 0.011316 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/44:1H 641 42157696.977051 1535421 100 0.000000 44286.464054 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S scsi_eh_13 926 2745.566335 25 120 0.000000 3.029101 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:40 3731 24269.855143 2 100 0.000000 0.015202 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/sda2-8 4207 41215155.025274 121 120 0.000000 3.186544 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D jbd2/dm-96-8 6491 42157696.983533 766787 120 0.000000 187933.894428 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8787 42157702.140609 74621 120 0.000000 8693.896547 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8820 42151736.333957 876842 120 0.000000 59665.234451 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8863 42062268.037734 42914126 120 0.000000 1666453.479846 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.2 185208 2791369.985906 33693182 120 0.000000 2608026.833978 0.000000 0.000000 0 185208 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.9 2364172 2791326.948755 33732 120 0.000000 6168.537741 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.4 2364895 2791264.056726 416 120 0.000000 133.751437 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2398554 2791354.438690 754 120 0.000000 413.025715 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.19.5 2470233 2791356.283812 4581 120 0.000000 1117.537752 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/44:2 2472062 42150722.988707 46 120 0.000000 0.306183 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/44:0 2474523 42157697.011401 7945 120 0.000000 273.450846 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/44:1 2476853 42154640.702730 4 120 0.000000 0.019172 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S rpcd_spoolss 2478223 16746.774963 53 120 0.000000 75.279481 0.000000 0.000000 0 0 /autogroup-22820
[Mon Mar 11 14:29:19 2024] cpu#45, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 834697063
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -150
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334880
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633455.162037
[Mon Mar 11 14:29:19 2024] .clock_task : 1281324452.811200
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[45]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 16908831.887344
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -3508834.620771
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 1
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[45]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[45]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/45 289 22697.043172 25 120 0.000000 0.696489 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/45 290 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/45 291 2870.055998 323700 0 0.000000 1897.906226 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/45 292 16908428.714213 1687308 120 0.000000 56094.407051 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/45:0H 294 44.345188 4 100 0.000000 0.042061 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/45:1H 764 16908819.887511 824074 100 0.000000 29250.996644 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I nvme-wq 879 688.114634 2 100 0.000000 0.008472 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:46 3782 23634.542679 2 100 0.000000 0.012263 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:61 3919 23764.680184 2 100 0.000000 0.012188 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kdmflush/253:72 4007 24047.673019 2 100 0.000000 0.010746 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S gmain 5162 11.034125 2 120 0.000000 0.082710 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S polkitd 5197 23.250636 4 120 0.000000 0.216518 0.000000 0.000000 1 0 /autogroup-104
[Mon Mar 11 14:29:19 2024] S writer#2 5330 482320.770660 55147407 120 0.000000 1458194.181012 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S uwsgi 5239 4819.060713 643 120 0.000000 1283.521483 0.000000 0.000000 1 0 /autogroup-90
[Mon Mar 11 14:29:19 2024] D nfsd 8789 16908790.199547 78181 120 0.000000 8981.738658 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8848 16845783.144976 8259682 120 0.000000 342138.812648 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8867 16830172.894740 125301912 120 0.000000 4135727.762149 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88130 1599.204683 23 120 0.000000 3.557280 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.19.5 978979 848284.387452 5378 120 0.000000 1452.966166 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2297566 848296.192180 9526 120 0.000000 11178.141828 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.33.88.4 2322831 848280.165421 12619 120 0.000000 3106.191462 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.30.4 2364892 848235.060798 775 120 0.000000 286.880544 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.40.142. 2381864 848279.466223 510 120 0.000000 197.729459 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/45:0 2385965 16908819.681044 207637 120 0.000000 7487.489092 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.1 2440119 848279.477097 177 120 0.000000 75.190257 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/45:2 2473358 16896354.894791 5 120 0.000000 0.043559 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/u98:1 2473403 16898740.925324 69077 120 0.000000 4976.446729 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/45:1 2476033 16905248.134675 8 120 0.000000 0.054582 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] cpu#46, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .nr_switches : 2717295473
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : 37982
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334877
[Mon Mar 11 14:29:19 2024] .curr->pid : 0
[Mon Mar 11 14:29:19 2024] .clock : 1282633455.161381
[Mon Mar 11 14:29:19 2024] .clock_task : 1263601685.171145
[Mon Mar 11 14:29:19 2024] .avg_idle : 1000000
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[46]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 40443164.714160
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : 20025498.206045
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 0
[Mon Mar 11 14:29:19 2024] .h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 0
[Mon Mar 11 14:29:19 2024] .load_avg : 0
[Mon Mar 11 14:29:19 2024] .runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .util_avg : 0
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 0
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[46]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.000000
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[46]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] S cpuhp/46 295 23217.842829 25 120 0.000000 0.748679 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/46 296 -11.996217 3 49 0.000000 0.010193 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S migration/46 297 0.000000 364531 0 0.000000 1954.429126 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/46 298 40442964.446823 8047609 120 0.000000 201628.273289 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/46:0H 300 365.751900 4 100 0.000000 0.050581 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I zswap-shrink 460 239.007746 2 100 0.000000 0.013230 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/46:1H 721 40443152.714483 1518121 100 0.000000 44149.622712 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S nrpe 4740 168196.970426 114215 120 0.000000 25818.707718 0.000000 0.000000 0 0 /autogroup-109
[Mon Mar 11 14:29:19 2024] D nfsd 8809 40443140.629874 157349 120 0.000000 15910.209360 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8837 40391590.232658 2132154 120 0.000000 114007.722388 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/46:1 2050654 40443138.236066 808697 120 0.000000 25467.045857 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.27.3 2318500 2482567.304916 969 120 0.000000 349.279908 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2323597 2482627.945363 569 120 0.000000 180.530759 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2340410 2482584.037614 476 120 0.000000 146.005053 0.000000 0.000000 0 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/46:2 2472151 40426263.236097 12 120 0.000000 0.046309 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] I kworker/46:0 2475460 40438475.372011 7 120 0.000000 0.086767 0.000000 0.000000 0 0 /
[Mon Mar 11 14:29:19 2024] cpu#47, 2600.000 MHz
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .nr_switches : 838026468
[Mon Mar 11 14:29:19 2024] .nr_uninterruptible : -632
[Mon Mar 11 14:29:19 2024] .next_balance : 5577.334932
[Mon Mar 11 14:29:19 2024] .curr->pid : 17
[Mon Mar 11 14:29:19 2024] .clock : 1282633455.160009
[Mon Mar 11 14:29:19 2024] .clock_task : 1281281464.096361
[Mon Mar 11 14:29:19 2024] .avg_idle : 692134
[Mon Mar 11 14:29:19 2024] .max_idle_balance_cost : 500000
[Mon Mar 11 14:29:19 2024] cfs_rq[47]:/
[Mon Mar 11 14:29:19 2024] .exec_clock : 0.000000
[Mon Mar 11 14:29:19 2024] .MIN_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .min_vruntime : 17562030.542460
[Mon Mar 11 14:29:19 2024] .max_vruntime : 0.000001
[Mon Mar 11 14:29:19 2024] .spread : 0.000000
[Mon Mar 11 14:29:19 2024] .spread0 : -2855635.965655
[Mon Mar 11 14:29:19 2024] .nr_spread_over : 0
[Mon Mar 11 14:29:19 2024] .nr_running : 1
[Mon Mar 11 14:29:19 2024] .h_nr_running : 1
[Mon Mar 11 14:29:19 2024] .idle_nr_running : 0
[Mon Mar 11 14:29:19 2024] .idle_h_nr_running : 0
[Mon Mar 11 14:29:19 2024] .load : 1048576
[Mon Mar 11 14:29:19 2024] .load_avg : 41
[Mon Mar 11 14:29:19 2024] .runnable_avg : 41
[Mon Mar 11 14:29:19 2024] .util_avg : 41
[Mon Mar 11 14:29:19 2024] .util_est_enqueued : 27
[Mon Mar 11 14:29:19 2024] .removed.load_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.util_avg : 0
[Mon Mar 11 14:29:19 2024] .removed.runnable_avg : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg_contrib : 0
[Mon Mar 11 14:29:19 2024] .tg_load_avg : 0
[Mon Mar 11 14:29:19 2024] .throttled : 0
[Mon Mar 11 14:29:19 2024] .throttle_count : 0
[Mon Mar 11 14:29:19 2024] rt_rq[47]:
[Mon Mar 11 14:29:19 2024] .rt_nr_running : 0
[Mon Mar 11 14:29:19 2024] .rt_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .rt_throttled : 0
[Mon Mar 11 14:29:19 2024] .rt_time : 0.024833
[Mon Mar 11 14:29:19 2024] .rt_runtime : 950.000000
[Mon Mar 11 14:29:19 2024] dl_rq[47]:
[Mon Mar 11 14:29:19 2024] .dl_nr_running : 0
[Mon Mar 11 14:29:19 2024] .dl_nr_migratory : 0
[Mon Mar 11 14:29:19 2024] .dl_bw->bw : 996147
[Mon Mar 11 14:29:19 2024] .dl_bw->total_bw : 0
[Mon Mar 11 14:29:19 2024] runnable tasks:
[Mon Mar 11 14:29:19 2024] S task PID tree-key switches prio wait-time sum-exec sum-sleep
[Mon Mar 11 14:29:19 2024] -------------------------------------------------------------------------------------------------------------
[Mon Mar 11 14:29:19 2024] >R pr/tty0 17 17562025.446738 6172008 120 0.000000 55561.107622 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S cpuhp/47 301 24226.751130 25 120 0.000000 0.799915 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S idle_inject/47 302 -12.000000 3 49 0.000000 0.000000 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S migration/47 303 3006.062858 323084 0 0.000000 2026.298100 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S ksoftirqd/47 304 17561685.512870 1652852 120 0.000000 54349.339133 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/47:0H 306 152.594739 4 100 0.000000 0.029461 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/47:1H 749 17562018.542556 1107336 100 0.000000 38223.315965 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S jbd2/dm-0-8 2716 17562018.765148 1369629 120 0.000000 59603.923972 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S multipathd 3350 0.000000 2212278 0 0.000000 46065.475878 0.000000 0.000000 1 0 /autogroup-45
[Mon Mar 11 14:29:19 2024] S tuned 4413 14011.121626 1302257 120 0.000000 128009.012523 0.000000 0.000000 1 4413 /autogroup-93
[Mon Mar 11 14:29:19 2024] S tuned 4666 14009.725811 1304993 120 0.000000 123279.129785 0.000000 0.000000 1 4666 /autogroup-93
[Mon Mar 11 14:29:19 2024] S writer#4 5332 577861.987132 55260970 120 0.000000 1455707.168755 0.000000 0.000000 1 5327 /autogroup-105
[Mon Mar 11 14:29:19 2024] S qmgr 5126 1961.675670 10678 120 0.000000 3052.668226 0.000000 0.000000 1 0 /autogroup-115
[Mon Mar 11 14:29:19 2024] I kdmflush/253:10 8699 25697.008861 2 100 0.000000 0.070436 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] D nfsd 8829 17516468.836694 347892 120 0.000000 28554.780021 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S automount 88143 2104.600765 27 120 0.000000 2.853638 0.000000 0.000000 1 0 /autogroup-222
[Mon Mar 11 14:29:19 2024] S smbd[10.87.29.8 2353738 867963.285101 3377 120 0.000000 846.773417 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] S pickup 2449277 1961.409240 73 120 0.000000 37.307330 0.000000 0.000000 1 0 /autogroup-115
[Mon Mar 11 14:29:19 2024] I kworker/47:2 2452570 17562018.593685 49418 120 0.000000 3118.221796 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] S smbd[10.87.28.1 2457742 867939.476315 118 120 0.000000 68.375658 0.000000 0.000000 1 0 /autogroup-113
[Mon Mar 11 14:29:19 2024] I kworker/u98:3 2461839 17554369.712742 171580 120 0.000000 5403.343331 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/47:1 2474397 17556065.443165 9 120 0.000000 0.066801 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] I kworker/47:0 2476852 17556096.226761 3 120 0.000000 0.019199 0.000000 0.000000 1 0 /
[Mon Mar 11 14:29:19 2024] Showing busy workqueues and worker pools:
[Mon Mar 11 14:29:19 2024] workqueue events: flags=0x0
[Mon Mar 11 14:29:19 2024] pwq 60: cpus=30 node=0 flags=0x0 nice=0 active=2/256 refcnt=3
[Mon Mar 11 14:29:19 2024] in-flight: 2364267:delayed_fput delayed_fput
[Mon Mar 11 14:29:19 2024] pwq 28: cpus=14 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: drm_fb_helper_damage_work [drm_kms_helper]
[Mon Mar 11 14:29:19 2024] pwq 22: cpus=11 node=1 flags=0x0 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: kfree_rcu_monitor
[Mon Mar 11 14:29:19 2024] workqueue events_power_efficient: flags=0x80
[Mon Mar 11 14:29:19 2024] pwq 28: cpus=14 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: fb_flashcursor
[Mon Mar 11 14:29:19 2024] workqueue mm_percpu_wq: flags=0x8
[Mon Mar 11 14:29:19 2024] pwq 28: cpus=14 node=0 flags=0x0 nice=0 active=2/256 refcnt=3
[Mon Mar 11 14:29:19 2024] pending: vmstat_update, lru_add_drain_per_cpu
[Mon Mar 11 14:29:19 2024] pwq 22: cpus=11 node=1 flags=0x0 nice=0 active=2/256 refcnt=4
[Mon Mar 11 14:29:19 2024] pending: vmstat_update, lru_add_drain_per_cpu BAR(367)
[Mon Mar 11 14:29:19 2024] workqueue writeback: flags=0x4a
[Mon Mar 11 14:29:19 2024] pwq 98: cpus=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47 node=1 flags=0x4 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] in-flight: 2476491:wb_workfn
[Mon Mar 11 14:29:19 2024] workqueue kblockd: flags=0x18
[Mon Mar 11 14:29:19 2024] pwq 29: cpus=14 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: blk_mq_timeout_work
[Mon Mar 11 14:29:19 2024] workqueue lpfc_wq: flags=0x8
[Mon Mar 11 14:29:19 2024] pwq 28: cpus=14 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: lpfc_sli4_hba_process_cq [lpfc]
[Mon Mar 11 14:29:19 2024] workqueue lpfc_wq: flags=0x8
[Mon Mar 11 14:29:19 2024] pwq 28: cpus=14 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] pending: lpfc_sli4_hba_process_cq [lpfc]
[Mon Mar 11 14:29:19 2024] workqueue nfsd4: flags=0x2
[Mon Mar 11 14:29:19 2024] pwq 98: cpus=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47 node=1 flags=0x4 nice=0 active=1/256 refcnt=2
[Mon Mar 11 14:29:19 2024] in-flight: 2468089:laundromat_main [nfsd]
[Mon Mar 11 14:29:19 2024] workqueue nfsd4_callbacks: flags=0xa0002
[Mon Mar 11 14:29:19 2024] pwq 96: cpus=0-47 flags=0x4 nice=0 active=1/1 refcnt=326
[Mon Mar 11 14:29:19 2024] in-flight: 2451130:nfsd4_run_cb_work [nfsd]
[Mon Mar 11 14:29:19 2024] inactive: nfsd4_run_cb_work [nfsd] (same inactive callback work item repeated roughly 320 more times; identical entries elided for readability)
[Mon Mar 11 14:29:19 2024] pool 60: cpus=30 node=0 flags=0x0 nice=0 hung=1s workers=3 idle: 2475882 2473379
[Mon Mar 11 14:29:19 2024] pool 96: cpus=0-47 flags=0x4 nice=0 hung=0s workers=6 idle: 2466033 2442472 2472989 2470993 2466032
[Mon Mar 11 14:29:19 2024] pool 98: cpus=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47 node=1 flags=0x4 nice=0 hung=0s workers=8 idle: 2463319 2463433 2464235 2473403 2471807 2461839
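Dumps like the one above end with per-pool summary lines whose `hung=` field shows how long a worker pool has gone without making progress. A minimal sketch of pulling the hung pools out of such a dump (the `dump` variable holds a trimmed copy of the format above; the variable name and the awk program are illustrative assumptions, not part of any kernel tooling):

```shell
# Sample text in the 'pool NN: ... hung=Ns ...' format seen in the dump above.
dump='pool 60: cpus=30 node=0 flags=0x0 nice=0 hung=1s workers=3 idle: 2475882 2473379
pool 96: cpus=0-47 flags=0x4 nice=0 hung=0s workers=6 idle: 2466033'

# Print the pool id of every pool whose hung time is non-zero.
hung_pools=$(printf '%s\n' "$dump" | awk '
  /^pool / {
    hung = ""
    for (i = 1; i <= NF; i++)
      if ($i ~ /^hung=/)
        hung = $i
    if (hung != "" && hung != "hung=0s") {
      sub(/:$/, "", $2)   # "60:" -> "60"
      print $2
    }
  }')
echo "$hung_pools"
```

Against the dump above this would flag pool 60 (hung=1s) and skip pools 96 and 98 (hung=0s).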
[-- Attachment #3: rpc_tasks.txt --]
[-- Type: text/plain, Size: 111 bytes --]
18423 2281 0 0x18 0x0 1354 nfsd4_cb_ops [nfsd] nfs4_cbv1 CB_RECALL_ANY a:call_start [sunrpc] q:delayq
[-- Attachment #4: nfs_threads.txt --]
[-- Type: text/plain, Size: 919 bytes --]
/proc/2002/stack:
[<0>] nfs_wait_client_init_complete.part.12+0x3b/0x90 [nfs]
[<0>] nfs41_discover_server_trunking+0x61/0xa0 [nfsv4]
[<0>] nfs4_discover_server_trunking+0x72/0x240 [nfsv4]
[<0>] nfs4_init_client+0xbd/0x150 [nfsv4]
[<0>] nfs4_set_client+0xed/0x150 [nfsv4]
[<0>] nfs4_create_server+0x127/0x2c0 [nfsv4]
[<0>] nfs4_try_get_tree+0x33/0xb0 [nfsv4]
[<0>] vfs_get_tree+0x25/0xc0
[<0>] do_mount+0x2e9/0x950
[<0>] ksys_mount+0xbe/0xe0
[<0>] __x64_sys_mount+0x21/0x30
[<0>] do_syscall_64+0x5b/0x1b0
[<0>] entry_SYSCALL_64_after_hwframe+0x61/0xc6
/proc/2022/stack:
[<0>] nfs41_callback_svc+0x18f/0x1a0 [nfsv4]
[<0>] kthread+0x134/0x150
[<0>] ret_from_fork+0x1f/0x40
/proc/2023/stack:
[<0>] msleep+0x28/0x40
[<0>] nfs4_handle_reclaim_lease_error+0x7e/0x140 [nfsv4]
[<0>] nfs4_state_manager+0x487/0x860 [nfsv4]
[<0>] nfs4_run_state_manager+0x20/0x40 [nfsv4]
[<0>] kthread+0x134/0x150
[<0>] ret_from_fork+0x1f/0x40
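An attachment like nfs_threads.txt above can be produced by reading /proc/PID/stack for every task whose comm matches a pattern. A hedged sketch (requires root to see non-empty stacks on most systems; the `collect_stacks` name and the `nfs` pattern are assumptions for illustration):

```shell
# Dump the kernel stack of every task whose comm contains the given pattern,
# in the "/proc/PID/stack:" format of the attachment above.
collect_stacks() {
    pattern="$1"
    for pid_dir in /proc/[0-9]*; do
        comm=$(cat "$pid_dir/comm" 2>/dev/null) || continue
        case "$comm" in
        *"$pattern"*)
            echo "$pid_dir/stack:"
            # Unprivileged reads may return nothing; suppress the error.
            cat "$pid_dir/stack" 2>/dev/null
            ;;
        esac
    done
}

collect_stacks nfs
```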
^ permalink raw reply [relevance 1%]
* [djwong-xfs:twf-hoist] [xfs] eacb32cc55: aim7.jobs-per-min -66.2% regression
@ 2024-03-11 15:03 3% kernel test robot
0 siblings, 0 replies; 200+ results
From: kernel test robot @ 2024-03-11 15:03 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: oe-lkp, lkp, oliver.sang
Hello,
we noticed that this commit introduces the following config diff relative to its
parent after building:
@@ -6357,6 +6357,7 @@ CONFIG_XFS_SUPPORT_ASCII_CI=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
+CONFIG_XFS_TIME_STATS=y
CONFIG_XFS_DRAIN_INTENTS=y
CONFIG_XFS_LIVE_HOOKS=y
CONFIG_XFS_MEMORY_BUFS=y
@@ -7055,6 +7056,7 @@ CONFIG_GENERIC_NET_UTILS=y
CONFIG_CORDIC=m
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
+CONFIG_MEAN_AND_VARIANCE=m
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
@@ -7189,6 +7191,7 @@ CONFIG_SBITMAP=y
CONFIG_ASN1_ENCODER=y
CONFIG_FIRMWARE_TABLE=y
+CONFIG_TIME_STATS=m
#
# Kernel hacking
We are not sure whether the performance change below is expected from this commit.
kernel test robot noticed a -66.2% regression of aim7.jobs-per-min on:
commit: eacb32cc553342496b6bcd4412731ceb81eaca02 ("xfs: present wait time statistics")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git twf-hoist
testcase: aim7
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:
disk: 1BRD_48G
fs: xfs
test: disk_cp
load: 3000
cpufreq_governor: performance
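The percentages in reports like this (the -66.2% headline and the %change column further below) follow the usual lkp convention of (new - old) / old * 100. A small sketch of that arithmetic (the `pct_change` helper name is an assumption; the sample values are the uptime.boot row from the comparison table below):

```shell
# %change as computed in lkp comparison tables: (new - old) / old * 100,
# printed with an explicit sign and one decimal place.
pct_change() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.1f%%\n", (b - a) / a * 100 }'
}

pct_change 108.39 219.75   # the uptime.boot row: prints +102.7%
```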
In addition, the commit also has a significant impact on the following tests:
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min -78.5% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1BRD_48G |
| | fs=xfs |
| | load=3000 |
| | test=disk_rr |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min -88.7% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1BRD_48G |
| | fs=xfs |
| | load=3000 |
| | test=disk_rw |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_xfs_MRDM_18_bufferedio.works/sec -39.0% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters | cpufreq_governor=performance |
| | directio=bufferedio |
| | disk=1SSD |
| | fstype=xfs |
| | media=ssd |
| | test=MRDM |
| | thread_nr=18 |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_xfs_MRDL_4_directio.works/sec -77.8% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters | cpufreq_governor=performance |
| | directio=directio |
| | disk=1SSD |
| | fstype=xfs |
| | media=ssd |
| | test=MRDL |
| | thread_nr=4 |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_xfs_MRDM_18_directio.works/sec -39.9% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters | cpufreq_governor=performance |
| | directio=directio |
| | disk=1SSD |
| | fstype=xfs |
| | media=ssd |
| | test=MRDM |
| | thread_nr=18 |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_xfs_DWOL_54_bufferedio.works/sec -98.1% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters | cpufreq_governor=performance |
| | directio=bufferedio |
| | disk=1SSD |
| | fstype=xfs |
| | media=ssd |
| | test=DWOL |
| | thread_nr=54 |
+------------------+------------------------------------------------------------------------------------------------+
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202403112240.76647647-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240311/202403112240.76647647-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/xfs/x86_64-rhel-8.3/3000/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_cp/aim7
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
108.39 +102.7% 219.75 ± 2% uptime.boot
5.551e+09 +18.4% 6.573e+09 cpuidle..time
4358360 +31.3% 5722587 cpuidle..usage
74.40 -58.3% 30.99 ± 2% iostat.cpu.idle
24.60 ± 5% +179.0% 68.63 iostat.cpu.system
1.00 ± 2% -62.0% 0.38 ± 3% iostat.cpu.user
1.17 ± 76% +38128.6% 446.00 ± 44% perf-c2c.DRAM.local
33.17 ± 26% +75689.4% 25136 ± 2% perf-c2c.DRAM.remote
23.50 ± 27% +1.3e+05% 29864 perf-c2c.HITM.local
9.17 ± 45% +2.1e+05% 19232 perf-c2c.HITM.remote
32.67 ± 29% +1.5e+05% 49097 perf-c2c.HITM.total
73.70 -43.4 30.29 ± 3% mpstat.cpu.all.idle%
0.24 -0.1 0.18 mpstat.cpu.all.irq%
0.07 -0.0 0.03 ± 2% mpstat.cpu.all.soft%
24.97 ± 5% +44.2 69.13 mpstat.cpu.all.sys%
1.02 -0.6 0.38 ± 2% mpstat.cpu.all.usr%
40.06 ± 5% +98.0% 79.34 mpstat.max_utilization_pct
74.40 -58.4% 30.97 ± 2% vmstat.cpu.id
24.59 ± 5% +179.2% 68.64 vmstat.cpu.sy
675.07 ± 34% -72.8% 183.65 ± 25% vmstat.io.bo
33.37 ± 3% +169.1% 89.79 vmstat.procs.r
38074 -63.3% 13985 ± 3% vmstat.system.cs
146690 ± 2% +15.8% 169857 vmstat.system.in
319672 ± 2% -66.2% 107944 ± 2% aim7.jobs-per-min
56.52 ± 2% +195.6% 167.06 ± 2% aim7.time.elapsed_time
56.52 ± 2% +195.6% 167.06 ± 2% aim7.time.elapsed_time.max
53433 ± 3% +234.5% 178712 ± 4% aim7.time.involuntary_context_switches
173447 +58.9% 275545 aim7.time.minor_page_faults
1774 ± 8% +736.0% 14835 ± 3% aim7.time.system_time
436983 ± 2% -8.4% 400397 aim7.time.voluntary_context_switches
161434 ± 6% +334.7% 701682 meminfo.Active
160535 ± 6% +336.4% 700635 meminfo.Active(anon)
898.81 +16.4% 1046 meminfo.Active(file)
53545 ± 7% +173.4% 146396 ± 2% meminfo.AnonHugePages
3325452 +17.1% 3893986 meminfo.Cached
3289242 +17.5% 3863384 meminfo.Committed_AS
8390656 ± 9% +18.5% 9942016 ± 8% meminfo.DirectMap2M
10412 ± 13% +274.9% 39039 ± 3% meminfo.Dirty
10126 ± 12% +273.9% 37864 ± 3% meminfo.Inactive(file)
79835 ± 4% -18.0% 65431 ± 3% meminfo.Mapped
195727 ± 6% +275.7% 735414 meminfo.Shmem
4146 ± 43% +3331.3% 142286 ± 35% numa-meminfo.node0.Active
3247 ± 55% +4253.5% 141399 ± 36% numa-meminfo.node0.Active(anon)
5111 ± 5% +284.1% 19633 ± 3% numa-meminfo.node0.Dirty
126016 ± 78% +185.8% 360206 ± 21% numa-meminfo.node0.FilePages
5231 ± 4% +266.7% 19183 ± 3% numa-meminfo.node0.Inactive(file)
18385 ± 8% -26.3% 13550 ± 9% numa-meminfo.node0.Mapped
7383 ± 26% +1917.7% 148983 ± 35% numa-meminfo.node0.Shmem
157308 ± 6% +255.7% 559478 ± 9% numa-meminfo.node1.Active
157308 ± 6% +255.6% 559319 ± 9% numa-meminfo.node1.Active(anon)
333038 ± 7% +57.3% 524000 ± 28% numa-meminfo.node1.AnonPages
385725 ± 9% +48.5% 572940 ± 25% numa-meminfo.node1.AnonPages.max
5039 ± 6% +282.9% 19298 ± 3% numa-meminfo.node1.Dirty
368785 ± 6% +54.5% 569811 ± 26% numa-meminfo.node1.Inactive
364110 ± 6% +51.4% 551238 ± 27% numa-meminfo.node1.Inactive(anon)
4674 ± 5% +297.3% 18573 ± 3% numa-meminfo.node1.Inactive(file)
61846 ± 3% -15.5% 52229 ± 4% numa-meminfo.node1.Mapped
188500 ± 6% +211.2% 586633 ± 9% numa-meminfo.node1.Shmem
813.07 ± 55% +4245.0% 35327 ± 36% numa-vmstat.node0.nr_active_anon
1425 ± 14% +243.2% 4892 ± 2% numa-vmstat.node0.nr_dirty
31655 ± 78% +184.4% 90040 ± 21% numa-vmstat.node0.nr_file_pages
1433 ± 12% +233.9% 4786 numa-vmstat.node0.nr_inactive_file
4677 ± 8% -24.7% 3523 ± 8% numa-vmstat.node0.nr_mapped
1846 ± 26% +1916.9% 37245 ± 35% numa-vmstat.node0.nr_shmem
813.07 ± 55% +4245.0% 35327 ± 36% numa-vmstat.node0.nr_zone_active_anon
1431 ± 12% +234.4% 4787 numa-vmstat.node0.nr_zone_inactive_file
1425 ± 14% +243.2% 4893 ± 2% numa-vmstat.node0.nr_zone_write_pending
39337 ± 6% +255.2% 139725 ± 9% numa-vmstat.node1.nr_active_anon
83253 ± 7% +57.4% 131028 ± 28% numa-vmstat.node1.nr_anon_pages
1232 ± 12% +292.6% 4838 ± 2% numa-vmstat.node1.nr_dirty
91058 ± 6% +51.4% 137886 ± 27% numa-vmstat.node1.nr_inactive_anon
1151 ± 13% +304.0% 4652 ± 2% numa-vmstat.node1.nr_inactive_file
15815 ± 3% -15.8% 13314 ± 5% numa-vmstat.node1.nr_mapped
47170 ± 6% +210.8% 146602 ± 9% numa-vmstat.node1.nr_shmem
39337 ± 6% +255.2% 139725 ± 9% numa-vmstat.node1.nr_zone_active_anon
91057 ± 6% +51.4% 137886 ± 27% numa-vmstat.node1.nr_zone_inactive_anon
1152 ± 13% +303.9% 4653 ± 2% numa-vmstat.node1.nr_zone_inactive_file
1236 ± 11% +291.2% 4838 ± 2% numa-vmstat.node1.nr_zone_write_pending
40142 ± 6% +336.3% 175139 proc-vmstat.nr_active_anon
2513 ± 13% +285.4% 9689 ± 2% proc-vmstat.nr_dirty
831402 +17.1% 973470 proc-vmstat.nr_file_pages
2441 ± 12% +285.8% 9418 ± 2% proc-vmstat.nr_inactive_file
69493 +2.0% 70915 proc-vmstat.nr_kernel_stack
20344 ± 4% -17.7% 16735 ± 4% proc-vmstat.nr_mapped
48995 ± 6% +275.3% 183884 proc-vmstat.nr_shmem
36752 +3.8% 38146 proc-vmstat.nr_slab_reclaimable
92049 +1.9% 93797 proc-vmstat.nr_slab_unreclaimable
40142 ± 6% +336.3% 175139 proc-vmstat.nr_zone_active_anon
2441 ± 12% +285.8% 9418 ± 2% proc-vmstat.nr_zone_inactive_file
2514 ± 13% +285.4% 9689 ± 2% proc-vmstat.nr_zone_write_pending
19089 ± 46% +335.3% 83099 ± 7% proc-vmstat.numa_hint_faults
6990 ± 46% +253.6% 24719 ± 16% proc-vmstat.numa_hint_faults_local
134151 +1.3% 135903 proc-vmstat.numa_other
20210 ± 51% +155.8% 51698 ± 11% proc-vmstat.numa_pages_migrated
82751 ± 4% +80.9% 149676 ± 2% proc-vmstat.pgactivate
541420 ± 2% +74.6% 945466 proc-vmstat.pgfault
20210 ± 51% +155.8% 51698 ± 11% proc-vmstat.pgmigrate_success
19901 ± 6% +286.8% 76986 ± 4% proc-vmstat.pgreuse
1613 +6.0% 1710 proc-vmstat.unevictable_pgs_culled
1.42 +25.8% 1.79 ± 2% perf-stat.i.MPKI
7.299e+09 ± 2% -6.4% 6.83e+09 perf-stat.i.branch-instructions
1.49 -0.8 0.64 ± 3% perf-stat.i.branch-miss-rate%
44541839 -48.5% 22959002 ± 3% perf-stat.i.branch-misses
18.21 +9.8 28.04 perf-stat.i.cache-miss-rate%
61025158 -5.9% 57408775 ± 3% perf-stat.i.cache-misses
3.149e+08 ± 2% -37.2% 1.978e+08 perf-stat.i.cache-references
38871 -63.9% 14030 ± 3% perf-stat.i.context-switches
2.05 ± 7% +251.1% 7.19 perf-stat.i.cpi
8.576e+10 ± 5% +169.9% 2.314e+11 perf-stat.i.cpu-cycles
1308 ± 3% +8.5% 1419 perf-stat.i.cpu-migrations
1645 ± 4% +140.5% 3958 ± 4% perf-stat.i.cycles-between-cache-misses
3.692e+10 ± 2% -17.1% 3.06e+10 perf-stat.i.instructions
0.63 ± 4% -65.0% 0.22 ± 2% perf-stat.i.ipc
12.95 ± 42% -69.0% 4.01 ± 61% perf-stat.i.major-faults
8505 -37.7% 5295 perf-stat.i.minor-faults
8518 -37.8% 5299 perf-stat.i.page-faults
1.66 +13.3% 1.88 ± 3% perf-stat.overall.MPKI
0.60 -0.3 0.33 ± 2% perf-stat.overall.branch-miss-rate%
19.38 +9.6 28.99 perf-stat.overall.cache-miss-rate%
2.33 ± 7% +224.6% 7.56 perf-stat.overall.cpi
1405 ± 6% +186.9% 4033 ± 4% perf-stat.overall.cycles-between-cache-misses
0.43 ± 7% -69.3% 0.13 perf-stat.overall.ipc
7.207e+09 -5.5% 6.808e+09 perf-stat.ps.branch-instructions
43291321 -47.5% 22711270 ± 3% perf-stat.ps.branch-misses
3.116e+08 -36.6% 1.975e+08 ± 2% perf-stat.ps.cache-references
38344 -63.6% 13959 ± 3% perf-stat.ps.context-switches
125758 +1.2% 127229 perf-stat.ps.cpu-clock
8.483e+10 ± 5% +171.9% 2.306e+11 perf-stat.ps.cpu-cycles
1294 ± 3% +9.2% 1413 perf-stat.ps.cpu-migrations
3.646e+10 -16.3% 3.05e+10 perf-stat.ps.instructions
12.63 ± 41% -69.9% 3.80 ± 61% perf-stat.ps.major-faults
8238 ± 2% -36.3% 5244 perf-stat.ps.minor-faults
8251 -36.4% 5248 perf-stat.ps.page-faults
125759 +1.2% 127229 perf-stat.ps.task-clock
2.098e+12 +144.4% 5.127e+12 ± 2% perf-stat.total.instructions
60724 ±213% +5951.6% 3674801 ± 2% sched_debug.cfs_rq:/.avg_vruntime.avg
114098 ±127% +3234.6% 3804798 ± 2% sched_debug.cfs_rq:/.avg_vruntime.max
49147 ±223% +6666.0% 3325321 ± 2% sched_debug.cfs_rq:/.avg_vruntime.min
9287 ± 82% +402.9% 46708 ± 7% sched_debug.cfs_rq:/.avg_vruntime.stddev
0.10 ± 21% +483.2% 0.56 ± 5% sched_debug.cfs_rq:/.h_nr_running.avg
0.29 ± 12% +35.8% 0.40 ± 6% sched_debug.cfs_rq:/.h_nr_running.stddev
60724 ±213% +5951.6% 3674801 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
114098 ±127% +3234.6% 3804798 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
49147 ±223% +6666.0% 3325321 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
9287 ± 82% +402.9% 46708 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
0.10 ± 21% +474.1% 0.55 ± 5% sched_debug.cfs_rq:/.nr_running.avg
0.29 ± 12% +31.1% 0.38 ± 6% sched_debug.cfs_rq:/.nr_running.stddev
538.50 ± 3% -66.6% 179.83 ± 7% sched_debug.cfs_rq:/.removed.runnable_avg.max
95.13 ± 27% -67.9% 30.54 ± 23% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
538.50 ± 3% -66.6% 179.72 ± 6% sched_debug.cfs_rq:/.removed.util_avg.max
95.13 ± 27% -67.9% 30.53 ± 23% sched_debug.cfs_rq:/.removed.util_avg.stddev
241.32 ± 21% +138.3% 575.17 ± 3% sched_debug.cfs_rq:/.runnable_avg.avg
240.10 ± 21% +134.5% 562.97 ± 2% sched_debug.cfs_rq:/.util_avg.avg
14.27 ± 15% +2240.2% 333.84 ± 6% sched_debug.cfs_rq:/.util_est.avg
535.00 ± 8% +84.8% 988.89 ± 13% sched_debug.cfs_rq:/.util_est.max
80.41 ± 10% +183.4% 227.90 ± 7% sched_debug.cfs_rq:/.util_est.stddev
47908 ±201% +769.1% 416347 ± 15% sched_debug.cpu.avg_idle.min
199653 ± 14% -43.5% 112758 ± 8% sched_debug.cpu.avg_idle.stddev
56473 ± 18% +99.1% 112426 sched_debug.cpu.clock.avg
56479 ± 18% +99.1% 112467 sched_debug.cpu.clock.max
56465 ± 18% +99.0% 112381 sched_debug.cpu.clock.min
3.29 ± 5% +640.5% 24.36 ± 3% sched_debug.cpu.clock.stddev
56312 ± 18% +99.2% 112158 sched_debug.cpu.clock_task.avg
56453 ± 18% +99.0% 112333 sched_debug.cpu.clock_task.max
47566 ± 21% +117.0% 103239 sched_debug.cpu.clock_task.min
276.10 ± 21% +944.3% 2883 ± 7% sched_debug.cpu.curr->pid.avg
3643 ± 27% +99.7% 7277 sched_debug.cpu.curr->pid.max
880.86 ± 6% +119.7% 1935 ± 6% sched_debug.cpu.curr->pid.stddev
0.00 ± 13% +109.3% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.10 ± 24% +481.3% 0.55 ± 5% sched_debug.cpu.nr_running.avg
0.29 ± 14% +37.6% 0.40 ± 6% sched_debug.cpu.nr_running.stddev
14194 ± 65% +137.2% 33663 ± 25% sched_debug.cpu.nr_switches.max
1930 ± 35% +72.4% 3328 ± 19% sched_debug.cpu.nr_switches.stddev
0.01 ± 58% +1.7e+05% 15.16 sched_debug.cpu.nr_uninterruptible.avg
26.92 ± 15% +153.5% 68.22 ± 21% sched_debug.cpu.nr_uninterruptible.max
6.58 ± 26% +169.3% 17.73 ± 20% sched_debug.cpu.nr_uninterruptible.stddev
56468 ± 18% +99.0% 112382 sched_debug.cpu_clk
55236 ± 18% +101.2% 111149 sched_debug.ktime
0.00 ± 23% -56.7% 0.00 ± 97% sched_debug.rt_rq:.rt_time.avg
0.21 ± 23% -56.7% 0.09 ± 97% sched_debug.rt_rq:.rt_time.max
0.02 ± 23% -56.7% 0.01 ± 97% sched_debug.rt_rq:.rt_time.stddev
57389 ± 18% +97.5% 113316 sched_debug.sched_clk
65.84 ± 2% -64.5 1.34 ± 7% perf-profile.calltrace.cycles-pp.read
64.92 ± 2% -63.7 1.24 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
64.80 ± 2% -63.6 1.23 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.30 ± 2% -63.1 1.19 ± 8% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
63.51 ± 2% -62.4 1.14 ± 8% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
62.07 ± 2% -61.0 1.04 ± 8% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
61.90 ± 2% -60.9 1.01 ± 9% perf-profile.calltrace.cycles-pp.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64
29.82 ± 3% -29.2 0.66 ± 12% perf-profile.calltrace.cycles-pp.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
25.30 -25.3 0.00 perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
25.21 -25.2 0.00 perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
23.01 ± 3% -23.0 0.00 perf-profile.calltrace.cycles-pp.touch_atime.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
22.94 ± 3% -22.9 0.00 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
13.72 ± 3% -12.5 1.24 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
7.44 ± 3% -6.8 0.64 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
6.63 ± 10% -6.6 0.00 perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
6.55 ± 10% -6.5 0.00 perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
3.97 ± 9% -2.6 1.36 ± 4% perf-profile.calltrace.cycles-pp.unlink
3.96 ± 9% -2.6 1.36 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.96 ± 9% -2.6 1.36 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
3.91 ± 9% -2.6 1.35 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.90 ± 9% -2.6 1.34 ± 4% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.58 ± 10% -2.4 1.22 ± 4% perf-profile.calltrace.cycles-pp.down_write.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.58 ± 10% -2.4 1.22 ± 4% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write.do_unlinkat.__x64_sys_unlink.do_syscall_64
3.48 ± 10% -2.3 1.19 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.do_unlinkat.__x64_sys_unlink
2.97 ± 10% -2.0 0.94 ± 6% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.do_unlinkat
1.59 ± 10% -0.8 0.83 perf-profile.calltrace.cycles-pp.creat64
1.58 ± 10% -0.8 0.82 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.58 ± 10% -0.8 0.82 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
1.55 ± 10% -0.7 0.82 perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.55 ± 10% -0.7 0.82 perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
1.54 ± 10% -0.7 0.82 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.54 ± 10% -0.7 0.82 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
1.52 ± 10% -0.7 0.82 perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
1.05 ± 11% -0.5 0.52 ± 2% perf-profile.calltrace.cycles-pp.down_write.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
1.05 ± 11% -0.5 0.52 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write.open_last_lookups.path_openat.do_filp_open
0.48 ± 45% +1.2 1.68 ± 9% perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.48 ± 45% +1.2 1.70 ± 9% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
0.39 ± 71% +1.3 1.68 ± 9% perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified
0.00 +1.6 1.57 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve
0.00 +1.6 1.59 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc
0.00 +1.6 1.62 ± 10% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time
2.23 ± 2% +3.5 5.74 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
0.00 +3.7 3.72 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified
0.00 +3.7 3.74 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +3.8 3.77 ± 6% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
1.56 ± 4% +4.1 5.67 ± 2% perf-profile.calltrace.cycles-pp.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write
1.23 ± 8% +4.4 5.62 ± 2% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write
23.86 ± 3% +71.8 95.66 perf-profile.calltrace.cycles-pp.write
17.39 ± 4% +72.0 89.43 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
23.06 ± 3% +72.5 95.54 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
22.96 ± 3% +72.6 95.53 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
22.63 ± 3% +72.9 95.48 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
22.28 ± 3% +73.2 95.44 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
21.19 ± 3% +74.1 95.30 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.27 ± 5% +84.9 88.14 perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
2.26 ± 6% +85.8 88.03 perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.00 +86.4 86.42 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.00 +86.7 86.66 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter
0.49 ± 45% +87.0 87.48 perf-profile.calltrace.cycles-pp.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +87.2 87.16 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
66.11 ± 2% -64.7 1.38 ± 7% perf-profile.children.cycles-pp.read
64.34 ± 2% -63.1 1.20 ± 8% perf-profile.children.cycles-pp.ksys_read
63.54 ± 2% -62.4 1.15 ± 8% perf-profile.children.cycles-pp.vfs_read
62.08 ± 2% -61.0 1.04 ± 8% perf-profile.children.cycles-pp.xfs_file_read_iter
61.93 ± 2% -60.9 1.01 ± 8% perf-profile.children.cycles-pp.xfs_file_buffered_read
29.86 ± 3% -29.2 0.67 ± 12% perf-profile.children.cycles-pp.filemap_read
26.60 -26.2 0.44 ± 4% perf-profile.children.cycles-pp.xfs_ilock
25.28 -25.0 0.27 ± 6% perf-profile.children.cycles-pp.down_read
23.02 ± 3% -22.8 0.19 ± 6% perf-profile.children.cycles-pp.touch_atime
22.99 ± 3% -22.8 0.19 ± 6% perf-profile.children.cycles-pp.atime_needs_update
13.77 ± 3% -12.5 1.24 ± 3% perf-profile.children.cycles-pp.iomap_write_iter
7.59 ± 8% -7.4 0.23 ± 3% perf-profile.children.cycles-pp.xfs_iunlock
7.51 ± 3% -6.9 0.64 ± 3% perf-profile.children.cycles-pp.iomap_write_begin
6.59 ± 10% -6.5 0.05 perf-profile.children.cycles-pp.up_read
4.71 ± 11% -4.5 0.25 ± 26% perf-profile.children.cycles-pp.filemap_get_pages
4.34 ± 2% -4.0 0.38 ± 4% perf-profile.children.cycles-pp.__filemap_get_folio
5.74 ± 7% -3.9 1.82 ± 3% perf-profile.children.cycles-pp.down_write
3.57 ± 2% -3.3 0.28 ± 4% perf-profile.children.cycles-pp.iomap_write_end
4.63 ± 10% -2.9 1.75 ± 3% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
3.98 ± 9% -2.6 1.36 ± 4% perf-profile.children.cycles-pp.unlink
4.20 ± 10% -2.6 1.59 ± 4% perf-profile.children.cycles-pp.rwsem_optimistic_spin
3.91 ± 9% -2.6 1.35 ± 4% perf-profile.children.cycles-pp.__x64_sys_unlink
3.90 ± 9% -2.6 1.35 ± 4% perf-profile.children.cycles-pp.do_unlinkat
3.59 ± 11% -2.3 1.27 ± 4% perf-profile.children.cycles-pp.osq_lock
2.47 ± 6% -2.3 0.22 ± 3% perf-profile.children.cycles-pp.__iomap_write_begin
2.46 ± 6% -2.2 0.23 ± 28% perf-profile.children.cycles-pp.filemap_get_read_batch
2.25 ± 2% -2.1 0.19 ± 4% perf-profile.children.cycles-pp.filemap_add_folio
1.81 ± 6% -1.7 0.14 ± 4% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.82 ± 9% -1.6 0.24 ± 6% perf-profile.children.cycles-pp.cpu_startup_entry
1.82 ± 9% -1.6 0.24 ± 6% perf-profile.children.cycles-pp.do_idle
1.82 ± 9% -1.6 0.24 ± 6% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
1.80 ± 9% -1.6 0.24 ± 5% perf-profile.children.cycles-pp.start_secondary
1.74 ± 9% -1.5 0.24 ± 6% perf-profile.children.cycles-pp.cpuidle_idle_call
1.68 ± 6% -1.5 0.18 ± 3% perf-profile.children.cycles-pp.__close
1.67 ± 6% -1.5 0.18 ± 3% perf-profile.children.cycles-pp.__x64_sys_close
1.66 ± 6% -1.5 0.18 ± 3% perf-profile.children.cycles-pp.__fput
1.65 ± 6% -1.5 0.18 ± 3% perf-profile.children.cycles-pp.dput
1.64 ± 6% -1.5 0.18 ± 3% perf-profile.children.cycles-pp.__dentry_kill
1.61 ± 6% -1.4 0.17 ± 5% perf-profile.children.cycles-pp.evict
1.60 ± 6% -1.4 0.17 ± 3% perf-profile.children.cycles-pp.truncate_inode_pages_range
1.83 ± 7% -1.4 0.41 ± 3% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.64 ± 9% -1.4 0.22 ± 5% perf-profile.children.cycles-pp.cpuidle_enter
1.63 ± 9% -1.4 0.22 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
1.52 ± 2% -1.4 0.14 ± 4% perf-profile.children.cycles-pp.__filemap_add_folio
1.60 ± 9% -1.4 0.22 ± 6% perf-profile.children.cycles-pp.acpi_idle_enter
1.59 ± 9% -1.4 0.21 ± 5% perf-profile.children.cycles-pp.acpi_safe_halt
1.30 ± 4% -1.2 0.10 ± 4% perf-profile.children.cycles-pp.filemap_dirty_folio
1.32 ± 8% -1.2 0.15 ± 4% perf-profile.children.cycles-pp.zero_user_segments
1.29 ± 8% -1.1 0.15 ± 3% perf-profile.children.cycles-pp.memset_orig
1.22 ± 7% -0.9 0.28 ± 3% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.04 ± 9% -0.9 0.12 ± 4% perf-profile.children.cycles-pp.copy_page_to_iter
1.07 ± 3% -0.9 0.16 ± 8% perf-profile.children.cycles-pp.__xfs_trans_commit
0.96 ± 3% -0.9 0.11 ± 5% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.96 ± 3% -0.8 0.12 ± 7% perf-profile.children.cycles-pp.xlog_cil_commit
0.94 ± 9% -0.8 0.11 ± 4% perf-profile.children.cycles-pp._copy_to_iter
0.89 ± 7% -0.8 0.11 ± 3% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
1.61 ± 10% -0.8 0.83 perf-profile.children.cycles-pp.do_sys_openat2
1.60 ± 10% -0.8 0.83 perf-profile.children.cycles-pp.creat64
1.58 ± 10% -0.8 0.82 perf-profile.children.cycles-pp.do_filp_open
1.58 ± 10% -0.8 0.82 perf-profile.children.cycles-pp.path_openat
1.55 ± 10% -0.7 0.82 perf-profile.children.cycles-pp.__x64_sys_creat
0.80 -0.7 0.07 ± 7% perf-profile.children.cycles-pp.folio_alloc
1.52 ± 11% -0.7 0.82 perf-profile.children.cycles-pp.open_last_lookups
0.80 ± 5% -0.7 0.13 ± 2% perf-profile.children.cycles-pp.up_write
0.71 -0.6 0.06 ± 7% perf-profile.children.cycles-pp.alloc_pages_mpol
0.69 ± 5% -0.6 0.06 ± 9% perf-profile.children.cycles-pp.folio_add_lru
0.71 ± 10% -0.6 0.10 ± 5% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.82 ± 7% -0.6 0.22 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.65 -0.6 0.06 ± 8% perf-profile.children.cycles-pp.__alloc_pages
0.66 ± 3% -0.6 0.07 ± 7% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
0.80 ± 7% -0.6 0.21 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
0.67 ± 4% -0.6 0.08 ± 8% perf-profile.children.cycles-pp.rw_verify_area
0.64 ± 9% -0.6 0.07 ± 5% perf-profile.children.cycles-pp.__fdget_pos
0.62 ± 6% -0.6 0.06 perf-profile.children.cycles-pp.__folio_mark_dirty
0.62 ± 6% -0.6 0.07 ± 7% perf-profile.children.cycles-pp.filemap_get_entry
0.61 ± 5% -0.6 0.06 ± 6% perf-profile.children.cycles-pp.release_pages
0.60 ± 8% -0.5 0.08 perf-profile.children.cycles-pp.xas_load
0.56 ± 5% -0.5 0.05 ± 7% perf-profile.children.cycles-pp.__folio_batch_release
0.54 ± 6% -0.5 0.04 ± 44% perf-profile.children.cycles-pp.folio_batch_move_lru
0.53 ± 7% -0.5 0.07 perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.52 ± 9% -0.4 0.07 ± 5% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.50 ± 6% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.security_file_permission
0.48 ± 7% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.fault_in_readable
0.47 ± 9% -0.4 0.06 ± 9% perf-profile.children.cycles-pp.iomap_iter_advance
0.47 ± 9% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.__cond_resched
0.66 ± 10% -0.4 0.26 perf-profile.children.cycles-pp.kthread
0.66 ± 10% -0.4 0.26 perf-profile.children.cycles-pp.ret_from_fork
0.66 ± 10% -0.4 0.26 perf-profile.children.cycles-pp.ret_from_fork_asm
0.64 ± 11% -0.4 0.25 perf-profile.children.cycles-pp.worker_thread
0.82 ± 8% -0.4 0.44 perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.54 ± 6% -0.4 0.16 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.40 ± 8% -0.4 0.02 ± 99% perf-profile.children.cycles-pp.truncate_cleanup_folio
0.42 ± 6% -0.4 0.05 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.44 ± 8% -0.4 0.07 ± 5% perf-profile.children.cycles-pp.__schedule
0.60 ± 11% -0.4 0.24 perf-profile.children.cycles-pp.process_one_work
0.39 ± 6% -0.4 0.02 ± 99% perf-profile.children.cycles-pp.delete_from_page_cache_batch
0.50 ± 6% -0.4 0.15 perf-profile.children.cycles-pp.tick_nohz_highres_handler
0.37 ± 6% -0.3 0.02 ± 99% perf-profile.children.cycles-pp.apparmor_file_permission
0.57 ± 10% -0.3 0.23 ± 2% perf-profile.children.cycles-pp.xfs_inodegc_worker
0.56 ± 10% -0.3 0.23 ± 2% perf-profile.children.cycles-pp.xfs_inactive
0.37 ± 4% -0.3 0.05 perf-profile.children.cycles-pp.__mem_cgroup_charge
0.39 ± 9% -0.3 0.07 ± 5% perf-profile.children.cycles-pp.schedule
0.44 ± 7% -0.3 0.13 ± 3% perf-profile.children.cycles-pp.update_process_times
0.44 ± 6% -0.3 0.14 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.35 ± 8% -0.3 0.07 ± 7% perf-profile.children.cycles-pp.load_balance
0.33 ± 8% -0.3 0.06 perf-profile.children.cycles-pp.pick_next_task_fair
0.32 ± 8% -0.3 0.06 ± 6% perf-profile.children.cycles-pp.newidle_balance
0.32 ± 9% -0.2 0.07 ± 5% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.30 ± 9% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.30 ± 9% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.update_sd_lb_stats
0.28 ± 9% -0.2 0.05 perf-profile.children.cycles-pp.update_sg_lb_stats
0.32 ± 5% -0.2 0.11 perf-profile.children.cycles-pp.scheduler_tick
0.27 ± 9% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.xfs_buf_read_map
0.35 ± 11% -0.2 0.14 ± 3% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.26 ± 9% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.xfs_buf_get_map
0.23 ± 10% -0.2 0.04 ± 44% perf-profile.children.cycles-pp.xfs_buf_lookup
0.23 ± 9% -0.2 0.06 ± 9% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.26 ± 7% -0.2 0.10 perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.43 ± 8% -0.1 0.28 perf-profile.children.cycles-pp.lookup_open
0.26 ± 7% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.vfs_unlink
0.34 ± 8% -0.1 0.20 perf-profile.children.cycles-pp.xfs_generic_create
0.22 ± 11% -0.1 0.08 perf-profile.children.cycles-pp.xfs_inactive_truncate
0.25 ± 8% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.xfs_remove
0.25 ± 8% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.xfs_vn_unlink
0.32 ± 9% -0.1 0.19 ± 2% perf-profile.children.cycles-pp.xfs_create
0.17 ± 5% -0.1 0.06 perf-profile.children.cycles-pp.task_tick_fair
0.07 ± 7% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.main
0.07 ± 7% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.run_builtin
0.06 ± 11% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_vn_lookup
0.06 ± 8% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_lookup
0.06 ± 9% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xfs_dir_lookup
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.1 0.07 perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.xfs_iget
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.xfs_iget_cache_hit
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.xfs_icreate
0.00 +0.1 0.08 ± 4% perf-profile.children.cycles-pp.local_clock
0.00 +0.1 0.08 ± 4% perf-profile.children.cycles-pp.xfs_lock_two_inodes
0.00 +0.1 0.08 perf-profile.children.cycles-pp.mean_and_variance_weighted_update
0.00 +0.1 0.08 perf-profile.children.cycles-pp.xfs_trans_alloc_dir
0.00 +0.3 0.30 ± 2% perf-profile.children.cycles-pp.time_stats_update_one
0.66 ± 7% +1.1 1.73 ± 9% perf-profile.children.cycles-pp.xfs_trans_reserve
0.67 ± 7% +1.1 1.75 ± 9% perf-profile.children.cycles-pp.xfs_trans_alloc
0.64 ± 8% +1.1 1.73 ± 9% perf-profile.children.cycles-pp.xfs_log_reserve
2.29 ± 2% +3.5 5.75 ± 2% perf-profile.children.cycles-pp.xfs_file_write_checks
95.39 +3.8 99.21 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
95.22 +4.0 99.19 perf-profile.children.cycles-pp.do_syscall_64
1.59 ± 4% +4.1 5.67 ± 2% perf-profile.children.cycles-pp.kiocb_modified
1.23 ± 8% +4.4 5.62 ± 2% perf-profile.children.cycles-pp.xfs_vn_update_time
24.16 ± 3% +71.6 95.72 perf-profile.children.cycles-pp.write
17.42 ± 4% +72.0 89.43 perf-profile.children.cycles-pp.iomap_file_buffered_write
22.70 ± 3% +72.8 95.52 perf-profile.children.cycles-pp.ksys_write
22.35 ± 3% +73.1 95.48 perf-profile.children.cycles-pp.vfs_write
21.23 ± 3% +74.1 95.30 perf-profile.children.cycles-pp.xfs_file_buffered_write
3.32 ± 5% +84.8 88.15 perf-profile.children.cycles-pp.iomap_iter
2.34 ± 6% +85.7 88.05 perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.61 ± 7% +86.9 87.51 perf-profile.children.cycles-pp.xfs_ilock_for_iomap
0.26 ± 7% +91.9 92.17 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.51 ± 3% +92.0 92.47 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +93.0 93.01 perf-profile.children.cycles-pp.__time_stats_update
25.09 -24.8 0.26 ± 6% perf-profile.self.cycles-pp.down_read
22.67 ± 3% -22.5 0.16 ± 6% perf-profile.self.cycles-pp.atime_needs_update
6.55 ± 10% -6.5 0.05 perf-profile.self.cycles-pp.up_read
3.55 ± 11% -2.3 1.27 ± 4% perf-profile.self.cycles-pp.osq_lock
2.22 ± 7% -2.0 0.20 ± 32% perf-profile.self.cycles-pp.filemap_get_read_batch
1.78 ± 6% -1.6 0.14 ± 2% perf-profile.self.cycles-pp.iomap_set_range_uptodate
1.28 ± 8% -1.1 0.15 ± 3% perf-profile.self.cycles-pp.memset_orig
1.05 ± 14% -1.0 0.05 ± 8% perf-profile.self.cycles-pp.vfs_read
1.02 ± 6% -0.9 0.10 ± 4% perf-profile.self.cycles-pp.filemap_read
0.95 ± 5% -0.9 0.06 ± 6% perf-profile.self.cycles-pp.down_write
0.95 ± 2% -0.8 0.11 ± 6% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
0.93 ± 9% -0.8 0.11 ± 3% perf-profile.self.cycles-pp._copy_to_iter
0.73 ± 3% -0.7 0.04 ± 44% perf-profile.self.cycles-pp.iomap_write_end
0.73 ± 8% -0.6 0.10 ± 4% perf-profile.self.cycles-pp.acpi_safe_halt
0.72 ± 5% -0.6 0.12 ± 4% perf-profile.self.cycles-pp.up_write
0.62 ± 9% -0.6 0.06 ± 7% perf-profile.self.cycles-pp.__fdget_pos
0.55 ± 7% -0.5 0.07 ± 9% perf-profile.self.cycles-pp.vfs_write
0.50 ± 8% -0.4 0.07 perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.47 ± 6% -0.4 0.06 ± 6% perf-profile.self.cycles-pp.fault_in_readable
0.46 ± 8% -0.4 0.06 ± 6% perf-profile.self.cycles-pp.iomap_iter_advance
0.81 ± 8% -0.4 0.44 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.41 ± 10% -0.4 0.05 perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
0.47 ± 8% -0.3 0.21 ± 3% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.28 ± 10% -0.2 0.06 perf-profile.self.cycles-pp.xfs_iunlock
0.26 ± 7% -0.2 0.10 perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.12 ± 9% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.xfs_ilock_for_iomap
0.00 +0.1 0.07 ± 5% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.1 0.08 ± 4% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +0.2 0.22 ± 2% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +0.3 0.26 perf-profile.self.cycles-pp.__time_stats_update
0.26 ± 7% +91.9 92.16 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-icl-2sp2: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/xfs/x86_64-rhel-8.3/3000/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_rr/aim7
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.022e+09 +286.9% 1.556e+10 cpuidle..time
3109334 +191.8% 9073047 cpuidle..usage
87.82 ± 2% +147.7% 217.51 uptime.boot
10014 ± 3% +114.5% 21478 uptime.idle
82.91 -12.4% 72.59 iostat.cpu.idle
15.48 +74.5% 27.01 ± 2% iostat.cpu.system
1.60 -75.4% 0.39 ± 2% iostat.cpu.user
82.92 -12.5% 72.55 vmstat.cpu.id
20.62 ± 8% +76.7% 36.43 ± 3% vmstat.procs.r
58002 -73.3% 15496 ± 2% vmstat.system.cs
145812 -25.1% 109180 vmstat.system.in
1.83 ± 58% +21609.1% 398.00 ± 9% perf-c2c.DRAM.local
37.33 ± 12% +63336.6% 23683 perf-c2c.DRAM.remote
24.50 ± 24% +68055.1% 16698 ± 2% perf-c2c.HITM.local
9.50 ± 20% +1.1e+05% 10837 ± 3% perf-c2c.HITM.remote
34.00 ± 21% +80887.7% 27535 ± 2% perf-c2c.HITM.total
82.10 -9.8 72.28 mpstat.cpu.all.idle%
0.23 ± 2% -0.1 0.14 mpstat.cpu.all.irq%
0.09 -0.0 0.06 mpstat.cpu.all.soft%
15.95 +11.2 27.12 ± 2% mpstat.cpu.all.sys%
1.63 -1.2 0.39 ± 2% mpstat.cpu.all.usr%
28.68 ± 14% +210.6% 89.09 ± 2% mpstat.max_utilization_pct
506286 -78.5% 108634 ± 2% aim7.jobs-per-min
35.74 +364.4% 165.95 ± 2% aim7.time.elapsed_time
35.74 +364.4% 165.95 ± 2% aim7.time.elapsed_time.max
52012 ± 4% +286.3% 200918 aim7.time.involuntary_context_switches
972.83 ± 17% -55.8% 429.83 ± 64% aim7.time.major_page_faults
175190 +25.4% 219689 ± 4% aim7.time.minor_page_faults
659.89 +769.8% 5739 ± 4% aim7.time.system_time
485573 +2.9% 499583 aim7.time.voluntary_context_switches
40533 +659.1% 307702 meminfo.Active
36956 +730.5% 306930 meminfo.Active(anon)
3577 ± 4% -78.4% 771.31 ± 21% meminfo.Active(file)
41804 +245.2% 144289 meminfo.AnonHugePages
4204470 +20.7% 5074474 meminfo.Cached
3123403 +10.9% 3464843 meminfo.Committed_AS
1000298 +63.6% 1636831 meminfo.Dirty
1842596 +33.4% 2457188 meminfo.Inactive
1002406 +63.5% 1639021 meminfo.Inactive(file)
141036 +10.5% 155843 meminfo.KReclaimable
96676 -58.1% 40484 meminfo.Mapped
141036 +10.5% 155843 meminfo.SReclaimable
85961 +270.0% 318017 meminfo.Shmem
5516 ± 34% +562.1% 36526 ± 29% numa-meminfo.node0.Active
3861 ± 43% +834.2% 36069 ± 29% numa-meminfo.node0.Active(anon)
1655 ± 16% -72.4% 457.02 ± 19% numa-meminfo.node0.Active(file)
428365 ± 13% -51.9% 205980 ± 70% numa-meminfo.node0.AnonPages
499321 +64.0% 819091 numa-meminfo.node0.Dirty
1395085 ± 81% +100.5% 2797772 ± 45% numa-meminfo.node0.FilePages
430360 ± 13% -51.8% 207258 ± 70% numa-meminfo.node0.Inactive(anon)
500717 +63.8% 820179 numa-meminfo.node0.Inactive(file)
53923 ± 37% +51.1% 81464 ± 23% numa-meminfo.node0.KReclaimable
53923 ± 37% +51.1% 81464 ± 23% numa-meminfo.node0.SReclaimable
9148 ± 18% +342.5% 40479 ± 25% numa-meminfo.node0.Shmem
34935 ± 5% +676.5% 271264 ± 3% numa-meminfo.node1.Active
33127 ± 5% +717.7% 270893 ± 3% numa-meminfo.node1.Active(anon)
1807 ± 12% -79.5% 370.48 ± 27% numa-meminfo.node1.Active(file)
20785 ± 82% +593.8% 144199 ± 2% numa-meminfo.node1.AnonHugePages
366276 ± 15% +65.0% 604188 ± 24% numa-meminfo.node1.AnonPages
406028 ± 14% +64.5% 668110 ± 23% numa-meminfo.node1.AnonPages.max
501405 +63.0% 817411 numa-meminfo.node1.Dirty
912965 ± 6% +56.5% 1429144 ± 10% numa-meminfo.node1.Inactive
502134 +63.0% 818311 numa-meminfo.node1.Inactive(file)
63618 ± 16% -69.2% 19592 ± 65% numa-meminfo.node1.Mapped
77868 ± 2% +256.5% 277598 ± 3% numa-meminfo.node1.Shmem
9255 +729.0% 76720 proc-vmstat.nr_active_anon
858.16 ± 5% -76.8% 199.08 ± 20% proc-vmstat.nr_active_file
198626 +2.0% 202558 proc-vmstat.nr_anon_pages
250401 +63.4% 409075 proc-vmstat.nr_dirty
1051898 +20.6% 1268433 proc-vmstat.nr_file_pages
210445 -2.8% 204541 proc-vmstat.nr_inactive_anon
250890 +63.2% 409553 proc-vmstat.nr_inactive_file
68034 +4.8% 71287 proc-vmstat.nr_kernel_stack
24977 -58.2% 10429 proc-vmstat.nr_mapped
21943 +262.3% 79501 proc-vmstat.nr_shmem
35260 +10.5% 38957 proc-vmstat.nr_slab_reclaimable
90842 +4.2% 94700 proc-vmstat.nr_slab_unreclaimable
9255 +729.0% 76720 proc-vmstat.nr_zone_active_anon
858.71 ± 5% -76.8% 199.08 ± 20% proc-vmstat.nr_zone_active_file
210445 -2.8% 204541 proc-vmstat.nr_zone_inactive_anon
250890 +63.2% 409553 proc-vmstat.nr_zone_inactive_file
250401 +63.4% 409074 proc-vmstat.nr_zone_write_pending
10416 ± 46% +430.9% 55301 ± 16% proc-vmstat.numa_hint_faults
3192 ± 32% +840.3% 30016 ± 15% proc-vmstat.numa_hint_faults_local
10396 ± 83% +155.9% 26605 ± 23% proc-vmstat.numa_pages_migrated
510354 +68.3% 858737 ± 2% proc-vmstat.pgfault
10396 ± 83% +155.9% 26605 ± 23% proc-vmstat.pgmigrate_success
19583 ± 8% +167.5% 52381 ± 7% proc-vmstat.pgreuse
1586 +16.9% 1854 ± 17% proc-vmstat.unevictable_pgs_culled
961.29 ± 45% +838.0% 9017 ± 29% numa-vmstat.node0.nr_active_anon
472.89 ± 6% -78.4% 102.25 ± 12% numa-vmstat.node0.nr_active_file
107153 ± 13% -51.9% 51488 ± 70% numa-vmstat.node0.nr_anon_pages
125564 +62.9% 204492 numa-vmstat.node0.nr_dirty
349504 ± 81% +100.0% 699145 ± 45% numa-vmstat.node0.nr_file_pages
107650 ± 13% -51.9% 51811 ± 70% numa-vmstat.node0.nr_inactive_anon
125899 +62.6% 204765 numa-vmstat.node0.nr_inactive_file
2281 ± 20% +343.6% 10122 ± 25% numa-vmstat.node0.nr_shmem
13480 ± 37% +51.0% 20361 ± 23% numa-vmstat.node0.nr_slab_reclaimable
961.29 ± 45% +838.0% 9017 ± 29% numa-vmstat.node0.nr_zone_active_anon
477.37 ± 6% -78.5% 102.51 ± 11% numa-vmstat.node0.nr_zone_active_file
107650 ± 13% -51.9% 51811 ± 70% numa-vmstat.node0.nr_zone_inactive_anon
125894 +62.6% 204760 numa-vmstat.node0.nr_zone_inactive_file
125567 +62.9% 204491 numa-vmstat.node0.nr_zone_write_pending
8176 ± 4% +728.2% 67717 ± 3% numa-vmstat.node1.nr_active_anon
460.58 ± 11% -78.6% 98.78 ± 18% numa-vmstat.node1.nr_active_file
91619 ± 15% +64.8% 151031 ± 24% numa-vmstat.node1.nr_anon_pages
125690 ± 2% +62.5% 204264 numa-vmstat.node1.nr_dirty
125860 ± 2% +62.5% 204466 numa-vmstat.node1.nr_inactive_file
16188 ± 16% -69.0% 5022 ± 64% numa-vmstat.node1.nr_mapped
19539 ± 4% +255.2% 69408 ± 3% numa-vmstat.node1.nr_shmem
8176 ± 4% +728.2% 67717 ± 3% numa-vmstat.node1.nr_zone_active_anon
460.57 ± 12% -78.9% 97.40 ± 19% numa-vmstat.node1.nr_zone_active_file
125874 ± 2% +62.4% 204467 numa-vmstat.node1.nr_zone_inactive_file
125698 ± 2% +62.5% 204265 numa-vmstat.node1.nr_zone_write_pending
1.58 +66.2% 2.62 perf-stat.i.MPKI
1.192e+10 -62.6% 4.464e+09 perf-stat.i.branch-instructions
2.01 ± 2% -1.3 0.74 perf-stat.i.branch-miss-rate%
58629431 -63.5% 21406096 perf-stat.i.branch-misses
23.08 +9.9 33.03 perf-stat.i.cache-miss-rate%
1.258e+08 -54.0% 57816617 perf-stat.i.cache-misses
4.603e+08 -63.4% 1.683e+08 perf-stat.i.cache-references
59744 -73.9% 15587 ± 2% perf-stat.i.context-switches
0.85 +387.0% 4.13 ± 2% perf-stat.i.cpi
5.441e+10 +68.7% 9.177e+10 ± 2% perf-stat.i.cpu-cycles
1378 -49.0% 703.77 ± 2% perf-stat.i.cpu-migrations
1117 ± 2% +50.2% 1678 ± 3% perf-stat.i.cycles-between-cache-misses
6.012e+10 -64.7% 2.119e+10 perf-stat.i.instructions
1.21 -74.0% 0.31 ± 2% perf-stat.i.ipc
27.27 ± 21% -89.9% 2.74 ± 63% perf-stat.i.major-faults
12030 ± 4% -60.0% 4812 perf-stat.i.minor-faults
12057 ± 4% -60.1% 4815 perf-stat.i.page-faults
2.10 +29.9% 2.72 perf-stat.overall.MPKI
27.33 +7.0 34.33 perf-stat.overall.cache-miss-rate%
0.91 +379.0% 4.35 ± 2% perf-stat.overall.cpi
432.97 +269.0% 1597 ± 3% perf-stat.overall.cycles-between-cache-misses
1.10 -79.1% 0.23 ± 2% perf-stat.overall.ipc
1.176e+10 -62.1% 4.453e+09 perf-stat.ps.branch-instructions
56420555 -62.4% 21214593 perf-stat.ps.branch-misses
1.243e+08 -53.7% 57567703 perf-stat.ps.cache-misses
4.548e+08 -63.1% 1.677e+08 perf-stat.ps.cache-references
58873 -73.7% 15483 ± 2% perf-stat.ps.context-switches
124508 +2.2% 127224 perf-stat.ps.cpu-clock
5.382e+10 +70.8% 9.192e+10 ± 2% perf-stat.ps.cpu-cycles
1365 -48.6% 702.29 ± 2% perf-stat.ps.cpu-migrations
5.928e+10 -64.3% 2.114e+10 perf-stat.ps.instructions
26.56 ± 16% -90.1% 2.64 ± 63% perf-stat.ps.major-faults
11533 ± 3% -58.8% 4748 perf-stat.ps.minor-faults
11560 ± 3% -58.9% 4751 perf-stat.ps.page-faults
124508 +2.2% 127224 perf-stat.ps.task-clock
2.183e+12 +61.8% 3.531e+12 perf-stat.total.instructions
2663 ± 13% +26943.0% 720263 ± 2% sched_debug.cfs_rq:/.avg_vruntime.avg
45242 ± 25% +1760.4% 841668 ± 5% sched_debug.cfs_rq:/.avg_vruntime.max
71.13 ± 23% +9e+05% 643242 ± 4% sched_debug.cfs_rq:/.avg_vruntime.min
5502 ± 22% +443.6% 29912 ± 7% sched_debug.cfs_rq:/.avg_vruntime.stddev
0.12 ± 15% +82.1% 0.22 ± 12% sched_debug.cfs_rq:/.h_nr_running.avg
23.29 ±200% +17896.2% 4191 ± 48% sched_debug.cfs_rq:/.left_deadline.avg
2507 ±196% +14707.5% 371229 ± 46% sched_debug.cfs_rq:/.left_deadline.max
222.70 ±196% +16997.2% 38075 ± 43% sched_debug.cfs_rq:/.left_deadline.stddev
22.79 ±202% +18296.2% 4191 ± 48% sched_debug.cfs_rq:/.left_vruntime.avg
2443 ±198% +15094.3% 371219 ± 46% sched_debug.cfs_rq:/.left_vruntime.max
217.10 ±198% +17437.7% 38074 ± 43% sched_debug.cfs_rq:/.left_vruntime.stddev
2663 ± 13% +26940.2% 720263 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
45242 ± 25% +1760.4% 841668 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
71.13 ± 23% +9e+05% 643242 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
5503 ± 22% +443.5% 29912 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
0.12 ± 15% +82.1% 0.22 ± 12% sched_debug.cfs_rq:/.nr_running.avg
48.11 ± 38% -77.3% 10.91 ± 29% sched_debug.cfs_rq:/.removed.load_avg.avg
1024 -66.7% 341.33 sched_debug.cfs_rq:/.removed.load_avg.max
209.92 ± 17% -72.0% 58.79 ± 14% sched_debug.cfs_rq:/.removed.load_avg.stddev
21.13 ± 40% -77.0% 4.86 ± 35% sched_debug.cfs_rq:/.removed.runnable_avg.avg
544.33 ± 6% -68.1% 173.67 sched_debug.cfs_rq:/.removed.runnable_avg.max
96.69 ± 20% -72.1% 26.99 ± 19% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
21.13 ± 40% -77.0% 4.86 ± 35% sched_debug.cfs_rq:/.removed.util_avg.avg
544.17 ± 6% -68.1% 173.56 sched_debug.cfs_rq:/.removed.util_avg.max
96.68 ± 20% -72.1% 26.99 ± 19% sched_debug.cfs_rq:/.removed.util_avg.stddev
22.79 ±202% +18296.2% 4191 ± 48% sched_debug.cfs_rq:/.right_vruntime.avg
2443 ±198% +15094.3% 371219 ± 46% sched_debug.cfs_rq:/.right_vruntime.max
217.10 ±198% +17437.7% 38074 ± 43% sched_debug.cfs_rq:/.right_vruntime.stddev
1255 ± 12% -23.3% 962.56 ± 7% sched_debug.cfs_rq:/.runnable_avg.max
295.14 ± 5% -31.6% 201.94 ± 3% sched_debug.cfs_rq:/.runnable_avg.stddev
1254 ± 13% -23.7% 957.44 ± 8% sched_debug.cfs_rq:/.util_avg.max
294.47 ± 5% -32.4% 199.11 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
15.58 ± 33% +226.6% 50.87 ± 14% sched_debug.cfs_rq:/.util_est.avg
81.45 ± 17% +35.8% 110.58 ± 9% sched_debug.cfs_rq:/.util_est.stddev
4357 ± 20% +8859.1% 390390 ± 11% sched_debug.cpu.avg_idle.min
201773 ± 4% -47.9% 105067 ± 5% sched_debug.cpu.avg_idle.stddev
51582 ± 4% +115.5% 111173 sched_debug.cpu.clock.avg
51588 ± 4% +115.6% 111208 sched_debug.cpu.clock.max
51573 ± 4% +115.5% 111134 sched_debug.cpu.clock.min
3.41 ± 6% +506.1% 20.68 ± 4% sched_debug.cpu.clock.stddev
51437 ± 4% +115.6% 110910 sched_debug.cpu.clock_task.avg
51577 ± 4% +115.4% 111093 sched_debug.cpu.clock_task.max
42597 ± 5% +137.8% 101306 sched_debug.cpu.clock_task.min
337.29 ± 17% +215.0% 1062 ± 15% sched_debug.cpu.curr->pid.avg
3182 +129.2% 7293 sched_debug.cpu.curr->pid.max
948.04 ± 8% +107.8% 1969 ± 7% sched_debug.cpu.curr->pid.stddev
0.00 ± 9% +97.3% 0.00 ± 23% sched_debug.cpu.next_balance.stddev
0.12 ± 15% +78.3% 0.21 ± 12% sched_debug.cpu.nr_running.avg
1059 ± 4% +676.6% 8226 ± 2% sched_debug.cpu.nr_switches.avg
10400 ± 25% +265.0% 37961 ± 24% sched_debug.cpu.nr_switches.max
140.83 ± 16% +4023.2% 5806 ± 4% sched_debug.cpu.nr_switches.min
1551 ± 9% +147.3% 3836 ± 20% sched_debug.cpu.nr_switches.stddev
0.01 ± 57% +2e+05% 15.55 sched_debug.cpu.nr_uninterruptible.avg
30.00 ± 39% +139.6% 71.89 ± 23% sched_debug.cpu.nr_uninterruptible.max
5.83 ± 18% +163.9% 15.38 ± 5% sched_debug.cpu.nr_uninterruptible.stddev
51577 ± 4% +115.5% 111135 sched_debug.cpu_clk
50345 ± 4% +118.3% 109903 sched_debug.ktime
52458 ± 4% +113.8% 112137 sched_debug.sched_clk
27.37 -24.5 2.89 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
25.61 -22.5 3.10 ± 3% perf-profile.calltrace.cycles-pp.read
23.22 -20.4 2.85 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
22.95 -20.1 2.81 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
22.06 -19.4 2.71 ± 3% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
21.25 -18.6 2.60 ± 3% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
19.10 -16.8 2.34 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.71 -16.4 2.29 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read.do_syscall_64
17.15 -15.0 2.10 ± 3% perf-profile.calltrace.cycles-pp.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read.ksys_read
14.90 -13.4 1.49 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
9.47 -8.6 0.85 ± 3% perf-profile.calltrace.cycles-pp.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
7.08 -6.3 0.82 ± 5% perf-profile.calltrace.cycles-pp.__close
7.07 -6.3 0.82 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
7.07 -6.3 0.82 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
7.05 -6.2 0.81 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
7.02 -6.2 0.81 ± 5% perf-profile.calltrace.cycles-pp.__fput.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
6.85 -6.2 0.66 ± 6% perf-profile.calltrace.cycles-pp.dput.__fput.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.83 -6.2 0.66 ± 6% perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.__x64_sys_close.do_syscall_64
6.12 -6.1 0.00 perf-profile.calltrace.cycles-pp.filemap_add_folio.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
6.75 -6.1 0.65 ± 7% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.__x64_sys_close
6.71 -6.1 0.64 ± 6% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
6.54 -5.9 0.63 ± 6% perf-profile.calltrace.cycles-pp.folio_mark_accessed.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
6.18 -5.5 0.72 ± 2% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
5.36 -4.7 0.64 ± 4% perf-profile.calltrace.cycles-pp.unlink
5.33 -4.7 0.64 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.33 -4.7 0.64 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
5.27 -4.6 0.62 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.26 -4.6 0.62 ± 4% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.33 -4.6 0.71 ± 4% perf-profile.calltrace.cycles-pp.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.vfs_read
5.10 -4.4 0.68 ± 4% perf-profile.calltrace.cycles-pp._copy_to_iter.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
4.66 -4.2 0.43 ± 44% perf-profile.calltrace.cycles-pp.llseek
4.64 -4.1 0.56 ± 3% perf-profile.calltrace.cycles-pp.__iomap_write_begin.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
3.30 -2.5 0.84 ± 2% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.acpi_safe_halt.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter
3.55 -2.4 1.17 ± 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
3.55 -2.4 1.17 ± 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
3.54 -2.4 1.17 ± 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
3.58 -2.4 1.23 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
5.33 -2.3 3.08 ± 2% perf-profile.calltrace.cycles-pp.creat64
5.31 -2.2 3.07 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
5.31 -2.2 3.07 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
3.35 -2.2 1.14 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
5.24 -2.2 3.06 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
5.24 -2.2 3.06 ± 2% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
5.20 -2.1 3.06 ± 2% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.20 -2.1 3.06 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
3.14 -2.1 1.06 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
5.12 -2.1 3.04 ± 2% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
3.13 -2.1 1.06 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
3.08 -2.0 1.04 ± 2% perf-profile.calltrace.cycles-pp.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
3.96 -1.3 2.68 ± 2% perf-profile.calltrace.cycles-pp.down_write.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
1.92 ± 3% -1.3 0.65 perf-profile.calltrace.cycles-pp.acpi_safe_halt.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
3.96 -1.3 2.68 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write.open_last_lookups.path_openat.do_filp_open
3.39 -0.8 2.54 ± 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.open_last_lookups.path_openat
2.77 ± 2% -0.5 2.28 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write.open_last_lookups
2.57 -0.3 2.30 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
0.60 +0.2 0.76 perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +0.6 0.62 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve
0.00 +0.6 0.64 ± 10% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc
0.00 +0.6 0.65 ± 2% perf-profile.calltrace.cycles-pp.up_write.xfs_iunlock.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
0.00 +0.7 0.66 ± 9% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time
0.00 +0.7 0.72 ± 8% perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified
0.00 +0.7 0.72 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +0.7 0.74 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
0.00 +0.7 0.74 ± 5% perf-profile.calltrace.cycles-pp.time_stats_update_one.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter
1.12 +1.0 2.12 ± 5% perf-profile.calltrace.cycles-pp.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write
0.00 +1.1 1.11 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified
0.00 +1.1 1.12 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +1.2 1.15 ± 4% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
0.00 +2.0 2.02 ± 5% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write
45.91 +43.8 89.74 perf-profile.calltrace.cycles-pp.write
44.09 +45.5 89.54 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
43.82 +45.7 89.51 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
42.92 +46.5 89.41 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
42.04 +47.2 89.29 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
39.30 +49.7 88.97 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.70 +51.7 86.42 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
6.41 +77.0 83.42 perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
4.52 +78.7 83.18 perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.00 +79.2 79.22 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.00 +79.8 79.81 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter
1.09 ± 2% +80.7 81.79 perf-profile.calltrace.cycles-pp.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +81.2 81.22 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
27.50 -24.6 2.90 ± 3% perf-profile.children.cycles-pp.iomap_write_iter
25.82 -22.6 3.19 ± 3% perf-profile.children.cycles-pp.read
22.14 -19.4 2.72 ± 4% perf-profile.children.cycles-pp.ksys_read
21.31 -18.7 2.61 ± 3% perf-profile.children.cycles-pp.vfs_read
19.13 -16.8 2.35 ± 3% perf-profile.children.cycles-pp.xfs_file_read_iter
18.80 -16.5 2.30 ± 3% perf-profile.children.cycles-pp.xfs_file_buffered_read
17.26 -15.2 2.11 ± 4% perf-profile.children.cycles-pp.filemap_read
14.99 -13.5 1.50 ± 3% perf-profile.children.cycles-pp.iomap_write_begin
9.57 -8.7 0.86 ± 3% perf-profile.children.cycles-pp.__filemap_get_folio
7.72 -7.2 0.48 ± 7% perf-profile.children.cycles-pp.folio_batch_move_lru
7.55 -7.2 0.34 ± 15% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
7.08 -6.3 0.82 ± 5% perf-profile.children.cycles-pp.__close
7.05 -6.2 0.81 ± 5% perf-profile.children.cycles-pp.__x64_sys_close
9.50 -6.2 3.29 perf-profile.children.cycles-pp.down_write
7.02 -6.2 0.81 ± 5% perf-profile.children.cycles-pp.__fput
6.87 -6.2 0.67 ± 6% perf-profile.children.cycles-pp.dput
6.83 -6.2 0.66 ± 6% perf-profile.children.cycles-pp.__dentry_kill
6.75 -6.1 0.65 ± 7% perf-profile.children.cycles-pp.evict
6.72 -6.1 0.65 ± 6% perf-profile.children.cycles-pp.truncate_inode_pages_range
6.57 -5.9 0.64 ± 6% perf-profile.children.cycles-pp.folio_mark_accessed
6.13 -5.7 0.44 ± 3% perf-profile.children.cycles-pp.filemap_add_folio
6.27 -5.5 0.73 ± 3% perf-profile.children.cycles-pp.iomap_write_end
8.38 -5.2 3.14 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
5.38 -4.7 0.64 ± 4% perf-profile.children.cycles-pp.unlink
5.36 -4.6 0.72 ± 4% perf-profile.children.cycles-pp.copy_page_to_iter
5.27 -4.6 0.62 ± 4% perf-profile.children.cycles-pp.__x64_sys_unlink
5.26 -4.6 0.62 ± 4% perf-profile.children.cycles-pp.do_unlinkat
5.12 -4.4 0.69 ± 4% perf-profile.children.cycles-pp._copy_to_iter
7.25 -4.4 2.84 ± 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
4.70 -4.1 0.56 ± 3% perf-profile.children.cycles-pp.__iomap_write_begin
4.71 -4.1 0.60 ± 2% perf-profile.children.cycles-pp.llseek
4.22 -3.9 0.35 ± 8% perf-profile.children.cycles-pp.folio_activate
4.13 -3.8 0.34 ± 10% perf-profile.children.cycles-pp.__folio_batch_release
4.08 -3.7 0.34 ± 9% perf-profile.children.cycles-pp.release_pages
6.03 ± 2% -3.5 2.53 ± 2% perf-profile.children.cycles-pp.osq_lock
3.44 -3.3 0.14 ± 5% perf-profile.children.cycles-pp.folio_add_lru
3.51 -3.1 0.44 ± 3% perf-profile.children.cycles-pp.iomap_set_range_uptodate
2.99 -2.6 0.36 ± 4% perf-profile.children.cycles-pp.zero_user_segments
2.90 -2.6 0.35 ± 3% perf-profile.children.cycles-pp.memset_orig
2.76 -2.4 0.36 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
3.55 -2.4 1.17 ± 2% perf-profile.children.cycles-pp.start_secondary
2.65 -2.4 0.30 ± 3% perf-profile.children.cycles-pp.__filemap_add_folio
3.58 -2.4 1.23 ± 2% perf-profile.children.cycles-pp.do_idle
3.58 -2.4 1.23 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
3.58 -2.4 1.23 ± 2% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
5.35 -2.3 3.08 ± 2% perf-profile.children.cycles-pp.creat64
5.35 -2.3 3.08 ± 2% perf-profile.children.cycles-pp.do_sys_openat2
2.48 -2.3 0.21 ± 5% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
2.48 -2.2 0.25 ± 4% perf-profile.children.cycles-pp.filemap_dirty_folio
5.28 -2.2 3.07 ± 2% perf-profile.children.cycles-pp.do_filp_open
5.28 -2.2 3.07 ± 2% perf-profile.children.cycles-pp.path_openat
5.24 -2.2 3.06 ± 2% perf-profile.children.cycles-pp.__x64_sys_creat
3.38 -2.2 1.20 ± 2% perf-profile.children.cycles-pp.cpuidle_idle_call
2.40 -2.1 0.31 ± 4% perf-profile.children.cycles-pp.filemap_get_pages
5.12 -2.1 3.04 ± 2% perf-profile.children.cycles-pp.open_last_lookups
3.18 -2.1 1.12 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
3.16 -2.0 1.12 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
3.17 -2.0 1.17 ± 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
3.11 -2.0 1.10 ± 2% perf-profile.children.cycles-pp.acpi_idle_enter
3.10 -2.0 1.10 ± 2% perf-profile.children.cycles-pp.acpi_safe_halt
1.98 ± 3% -1.8 0.17 ± 5% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
2.06 -1.8 0.27 ± 3% perf-profile.children.cycles-pp.filemap_get_read_batch
2.26 -1.8 0.49 ± 3% perf-profile.children.cycles-pp.xfs_ilock
1.98 -1.7 0.25 ± 2% perf-profile.children.cycles-pp.__fdget_pos
1.90 -1.7 0.25 ± 4% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
1.87 -1.6 0.22 ± 3% perf-profile.children.cycles-pp.workingset_activation
1.80 -1.6 0.24 ± 4% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
1.68 -1.5 0.19 ± 4% perf-profile.children.cycles-pp.rw_verify_area
1.56 -1.4 0.15 ± 3% perf-profile.children.cycles-pp.__folio_mark_dirty
2.16 ± 3% -1.4 0.75 ± 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.49 -1.3 0.18 ± 4% perf-profile.children.cycles-pp.xas_load
1.51 ± 2% -1.3 0.21 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
1.46 -1.3 0.18 ± 3% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.76 -1.3 0.49 perf-profile.children.cycles-pp.rwsem_spin_on_owner
1.37 -1.2 0.16 ± 4% perf-profile.children.cycles-pp.workingset_age_nonresident
1.63 -1.2 0.46 ± 2% perf-profile.children.cycles-pp.ret_from_fork
1.63 -1.2 0.46 ± 2% perf-profile.children.cycles-pp.ret_from_fork_asm
1.63 -1.2 0.46 ± 2% perf-profile.children.cycles-pp.kthread
1.60 -1.2 0.44 ± 2% perf-profile.children.cycles-pp.worker_thread
1.31 ± 2% -1.1 0.18 ± 3% perf-profile.children.cycles-pp.xlog_cil_commit
1.25 ± 4% -1.1 0.13 ± 5% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
1.26 -1.1 0.15 ± 4% perf-profile.children.cycles-pp.folio_alloc
1.18 ± 2% -1.1 0.10 ± 3% perf-profile.children.cycles-pp.folio_account_dirtied
1.23 -1.1 0.15 ± 4% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
1.49 -1.1 0.41 ± 3% perf-profile.children.cycles-pp.process_one_work
1.20 -1.1 0.14 ± 4% perf-profile.children.cycles-pp.security_file_permission
1.45 -1.1 0.39 ± 2% perf-profile.children.cycles-pp.xfs_inodegc_worker
1.42 -1.0 0.38 ± 3% perf-profile.children.cycles-pp.xfs_inactive
1.20 -1.0 0.17 ± 4% perf-profile.children.cycles-pp.disk_rr
1.18 -1.0 0.15 ± 2% perf-profile.children.cycles-pp.ksys_lseek
1.17 -1.0 0.15 ± 6% perf-profile.children.cycles-pp.alloc_pages_mpol
1.14 -1.0 0.13 ± 5% perf-profile.children.cycles-pp.truncate_cleanup_folio
1.11 -1.0 0.13 ± 3% perf-profile.children.cycles-pp.folio_activate_fn
1.06 ± 4% -1.0 0.08 ± 5% perf-profile.children.cycles-pp.balance_dirty_pages
1.11 -1.0 0.14 ± 3% perf-profile.children.cycles-pp.iomap_iter_advance
1.10 -1.0 0.13 ± 5% perf-profile.children.cycles-pp.filemap_get_entry
1.92 -1.0 0.95 perf-profile.children.cycles-pp.xfs_iunlock
1.10 -1.0 0.14 ± 5% perf-profile.children.cycles-pp.__cond_resched
1.10 -1.0 0.14 ± 3% perf-profile.children.cycles-pp.fault_in_readable
1.18 -1.0 0.23 ± 4% perf-profile.children.cycles-pp.touch_atime
1.15 -0.9 0.22 ± 4% perf-profile.children.cycles-pp.__schedule
0.99 ± 4% -0.9 0.08 ± 6% perf-profile.children.cycles-pp.mem_cgroup_wb_stats
1.01 -0.9 0.12 ± 4% perf-profile.children.cycles-pp.__folio_cancel_dirty
1.01 -0.9 0.13 ± 5% perf-profile.children.cycles-pp.__alloc_pages
0.94 ± 5% -0.9 0.08 ± 6% perf-profile.children.cycles-pp.cgroup_rstat_flush
0.97 -0.8 0.12 ± 6% perf-profile.children.cycles-pp.delete_from_page_cache_batch
1.02 -0.8 0.19 ± 5% perf-profile.children.cycles-pp.schedule
0.90 -0.8 0.10 ± 3% perf-profile.children.cycles-pp.apparmor_file_permission
0.89 -0.8 0.10 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
1.26 ± 4% -0.8 0.51 ± 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.85 -0.7 0.10 ± 4% perf-profile.children.cycles-pp.atime_needs_update
1.23 ± 4% -0.7 0.49 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.82 ± 2% -0.7 0.09 ± 7% perf-profile.children.cycles-pp.folio_account_cleaned
1.06 -0.7 0.35 perf-profile.children.cycles-pp.lookup_open
0.92 -0.7 0.23 ± 2% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.86 -0.7 0.17 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
0.83 -0.7 0.16 ± 5% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.77 -0.7 0.10 ± 6% perf-profile.children.cycles-pp.get_page_from_freelist
0.82 -0.7 0.17 ± 5% perf-profile.children.cycles-pp.newidle_balance
0.85 -0.6 0.21 ± 4% perf-profile.children.cycles-pp.load_balance
0.72 -0.6 0.09 ± 6% perf-profile.children.cycles-pp.xas_descend
0.72 -0.6 0.09 ± 7% perf-profile.children.cycles-pp.xas_store
0.69 ± 4% -0.6 0.07 ± 5% perf-profile.children.cycles-pp.cgroup_rstat_flush_locked
0.85 -0.6 0.25 perf-profile.children.cycles-pp.xfs_generic_create
0.66 ± 3% -0.6 0.07 ± 8% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.65 -0.6 0.08 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_state
0.82 -0.6 0.24 ± 2% perf-profile.children.cycles-pp.xfs_create
0.74 ± 2% -0.6 0.18 ± 4% perf-profile.children.cycles-pp.find_busiest_group
0.73 -0.6 0.17 ± 4% perf-profile.children.cycles-pp.update_sd_lb_stats
0.64 -0.5 0.09 ± 5% perf-profile.children.cycles-pp.xfs_ifree
0.70 -0.5 0.16 ± 4% perf-profile.children.cycles-pp.vfs_unlink
0.60 ± 2% -0.5 0.06 ± 6% perf-profile.children.cycles-pp.lru_add_fn
0.62 -0.5 0.08 ± 5% perf-profile.children.cycles-pp.xfs_inode_uninit
0.68 -0.5 0.15 ± 4% perf-profile.children.cycles-pp.xfs_buf_read_map
0.68 -0.5 0.15 ± 4% perf-profile.children.cycles-pp.xfs_remove
0.68 -0.5 0.15 ± 4% perf-profile.children.cycles-pp.xfs_vn_unlink
0.66 -0.5 0.15 ± 4% perf-profile.children.cycles-pp.xfs_buf_get_map
0.83 -0.5 0.32 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.67 ± 2% -0.5 0.16 ± 4% perf-profile.children.cycles-pp.update_sg_lb_stats
0.60 -0.5 0.08 ± 5% perf-profile.children.cycles-pp.xfs_difree
0.58 ± 2% -0.5 0.08 perf-profile.children.cycles-pp.down_read
0.56 -0.5 0.07 ± 10% perf-profile.children.cycles-pp.inode_needs_update_time
0.56 -0.5 0.07 ± 5% perf-profile.children.cycles-pp.xfs_break_layouts
0.60 -0.5 0.11 ± 6% perf-profile.children.cycles-pp.xfs_buf_lookup
0.57 -0.5 0.08 ± 5% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.56 -0.5 0.08 ± 4% perf-profile.children.cycles-pp.xfs_read_agi
0.76 -0.5 0.29 perf-profile.children.cycles-pp.tick_nohz_highres_handler
0.54 -0.5 0.07 ± 7% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.51 -0.5 0.06 ± 6% perf-profile.children.cycles-pp.__mod_node_page_state
0.70 -0.4 0.26 ± 2% perf-profile.children.cycles-pp.irq_exit_rcu
0.52 -0.4 0.07 ± 5% perf-profile.children.cycles-pp.xfs_ialloc_read_agi
0.51 -0.4 0.07 ± 7% perf-profile.children.cycles-pp.xlog_cil_insert_format_items
0.49 ± 2% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.task_work_run
0.48 ± 2% -0.4 0.06 ± 6% perf-profile.children.cycles-pp.task_mm_cid_work
0.49 -0.4 0.06 perf-profile.children.cycles-pp.folio_unlock
0.48 -0.4 0.06 ± 9% perf-profile.children.cycles-pp.__count_memcg_events
0.67 -0.4 0.24 ± 3% perf-profile.children.cycles-pp.__do_softirq
0.47 -0.4 0.05 ± 8% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.66 -0.4 0.24 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
0.46 -0.4 0.05 perf-profile.children.cycles-pp.cgroup_rstat_updated
0.47 ± 2% -0.4 0.06 ± 8% perf-profile.children.cycles-pp.generic_write_checks
0.47 -0.4 0.06 perf-profile.children.cycles-pp.down
0.65 -0.4 0.24 ± 3% perf-profile.children.cycles-pp.update_process_times
0.47 ± 2% -0.4 0.06 ± 7% perf-profile.children.cycles-pp.xfs_is_always_cow_inode
0.49 -0.4 0.09 ± 5% perf-profile.children.cycles-pp.xfs_buf_find_lock
0.50 -0.4 0.10 ± 6% perf-profile.children.cycles-pp.schedule_preempt_disabled
2.70 -0.4 2.31 ± 4% perf-profile.children.cycles-pp.xfs_file_write_checks
0.48 -0.4 0.09 ± 5% perf-profile.children.cycles-pp.xfs_buf_lock
0.42 -0.4 0.04 ± 44% perf-profile.children.cycles-pp.rcu_all_qs
0.50 ± 2% -0.4 0.12 ± 3% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.42 -0.4 0.06 ± 9% perf-profile.children.cycles-pp.up_read
0.42 -0.4 0.06 ± 8% perf-profile.children.cycles-pp.__down
0.41 -0.4 0.05 perf-profile.children.cycles-pp.xas_start
0.42 -0.4 0.06 ± 8% perf-profile.children.cycles-pp.___down_common
0.42 -0.4 0.07 ± 7% perf-profile.children.cycles-pp.schedule_timeout
0.50 ± 2% -0.3 0.15 ± 4% perf-profile.children.cycles-pp.xfs_inactive_truncate
0.36 ± 2% -0.3 0.03 ± 70% perf-profile.children.cycles-pp.find_lock_entries
0.36 ± 2% -0.3 0.02 ± 99% perf-profile.children.cycles-pp.current_time
0.38 ± 2% -0.3 0.05 perf-profile.children.cycles-pp.xfs_file_write_iter
0.37 -0.3 0.05 ± 7% perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
0.49 -0.3 0.18 ± 2% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.39 -0.3 0.11 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
0.45 ± 10% -0.3 0.18 ± 5% perf-profile.children.cycles-pp.ktime_get
0.43 -0.3 0.17 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.96 -0.2 0.72 ± 2% perf-profile.children.cycles-pp.up_write
0.29 -0.2 0.05 ± 8% perf-profile.children.cycles-pp.xfs_dir_remove_child
0.24 ± 2% -0.2 0.06 ± 8% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
0.30 ± 14% -0.2 0.12 ± 5% perf-profile.children.cycles-pp.clockevents_program_event
0.24 ± 2% -0.2 0.06 ± 7% perf-profile.children.cycles-pp.xfs_da_read_buf
0.17 ± 6% -0.1 0.06 ± 6% perf-profile.children.cycles-pp.irq_enter_rcu
0.18 ± 3% -0.1 0.08 ± 6% perf-profile.children.cycles-pp.task_tick_fair
0.15 ± 8% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.tick_irq_enter
0.16 ± 3% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.rebalance_domains
0.24 ± 2% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.sysvec_call_function_single
0.14 ± 4% -0.1 0.06 perf-profile.children.cycles-pp.update_load_avg
0.14 ± 3% -0.1 0.09 perf-profile.children.cycles-pp.xfs_vn_lookup
0.14 ± 3% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.xfs_lookup
0.13 ± 2% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.xfs_dir_lookup
0.11 ± 3% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.xfs_free_eofblocks
0.08 ± 5% -0.0 0.05 perf-profile.children.cycles-pp.__update_blocked_fair
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.update_blocked_averages
0.07 ± 10% -0.0 0.04 ± 44% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.08 ± 4% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.xfs_icreate
0.05 +0.0 0.06 perf-profile.children.cycles-pp.xfs_iget
0.03 ± 70% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.xfs_trans_alloc_dir
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.arch_call_rest_init
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.rest_init
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.start_kernel
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.x86_64_start_kernel
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.x86_64_start_reservations
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.xfs_lock_two_inodes
0.00 +0.1 0.06 perf-profile.children.cycles-pp.xfs_can_free_eofblocks
0.00 +0.1 0.06 perf-profile.children.cycles-pp.xfs_iget_cache_hit
0.00 +0.1 0.06 perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp._nohz_idle_balance
0.06 +0.1 0.19 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.2 0.20 ± 4% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.2 0.20 ± 6% perf-profile.children.cycles-pp.local_clock
0.00 +0.2 0.21 ± 5% perf-profile.children.cycles-pp.mean_and_variance_weighted_update
0.47 ± 3% +0.4 0.85 ± 7% perf-profile.children.cycles-pp.xfs_trans_alloc
0.44 ± 4% +0.4 0.84 ± 7% perf-profile.children.cycles-pp.xfs_trans_reserve
0.42 ± 4% +0.4 0.83 ± 7% perf-profile.children.cycles-pp.xfs_log_reserve
0.00 +0.8 0.80 ± 4% perf-profile.children.cycles-pp.time_stats_update_one
1.18 +1.0 2.13 ± 5% perf-profile.children.cycles-pp.kiocb_modified
0.59 ± 5% +1.6 2.14 ± 5% perf-profile.children.cycles-pp.xfs_vn_update_time
87.44 +9.8 97.29 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
86.83 +10.4 97.21 perf-profile.children.cycles-pp.do_syscall_64
46.10 +43.8 89.86 perf-profile.children.cycles-pp.write
43.02 +46.4 89.44 perf-profile.children.cycles-pp.ksys_write
42.16 +47.2 89.33 perf-profile.children.cycles-pp.vfs_write
39.41 +49.6 88.98 perf-profile.children.cycles-pp.xfs_file_buffered_write
34.77 +51.7 86.42 perf-profile.children.cycles-pp.iomap_file_buffered_write
7.68 +74.2 81.93 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
8.03 +74.6 82.65 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
6.53 +76.9 83.44 perf-profile.children.cycles-pp.iomap_iter
4.74 +78.5 83.22 perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
1.17 ± 2% +80.7 81.83 perf-profile.children.cycles-pp.xfs_ilock_for_iomap
0.00 +83.7 83.74 perf-profile.children.cycles-pp.__time_stats_update
5.06 -4.4 0.68 ± 4% perf-profile.self.cycles-pp._copy_to_iter
5.93 ± 2% -3.4 2.52 ± 2% perf-profile.self.cycles-pp.osq_lock
3.45 -3.0 0.43 ± 3% perf-profile.self.cycles-pp.iomap_set_range_uptodate
2.87 -2.5 0.34 ± 4% perf-profile.self.cycles-pp.memset_orig
1.90 -1.7 0.24 ± 3% perf-profile.self.cycles-pp.__fdget_pos
1.84 -1.6 0.24 ± 3% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
1.77 -1.5 0.23 ± 3% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
1.65 -1.4 0.21 ± 3% perf-profile.self.cycles-pp.filemap_read
1.42 -1.3 0.15 ± 3% perf-profile.self.cycles-pp.vfs_write
1.74 -1.2 0.49 perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.42 -1.2 0.18 ± 5% perf-profile.self.cycles-pp.filemap_get_read_batch
1.36 -1.2 0.16 ± 5% perf-profile.self.cycles-pp.workingset_age_nonresident
1.21 -1.1 0.14 ± 4% perf-profile.self.cycles-pp.vfs_read
1.20 -1.0 0.15 ± 4% perf-profile.self.cycles-pp.do_syscall_64
1.07 -0.9 0.13 ± 4% perf-profile.self.cycles-pp.iomap_iter_advance
1.07 -0.9 0.13 ± 3% perf-profile.self.cycles-pp.fault_in_readable
1.03 ± 5% -0.9 0.10 ± 4% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
1.06 -0.9 0.14 ± 2% perf-profile.self.cycles-pp.disk_rr
0.98 ± 2% -0.9 0.08 ± 8% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
1.15 ± 2% -0.9 0.29 perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
1.30 -0.8 0.46 ± 2% perf-profile.self.cycles-pp.acpi_safe_halt
0.90 ± 3% -0.8 0.06 ± 9% perf-profile.self.cycles-pp.__lruvec_stat_mod_folio
0.88 -0.8 0.11 ± 6% perf-profile.self.cycles-pp.iomap_write_end
0.82 -0.7 0.10 ± 7% perf-profile.self.cycles-pp.__filemap_get_folio
0.78 -0.7 0.11 ± 4% perf-profile.self.cycles-pp.write
0.76 -0.7 0.09 perf-profile.self.cycles-pp.iomap_file_buffered_write
0.77 -0.7 0.10 ± 3% perf-profile.self.cycles-pp.llseek
0.77 -0.7 0.12 ± 4% perf-profile.self.cycles-pp.down_write
0.75 -0.7 0.10 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.72 -0.6 0.09 ± 4% perf-profile.self.cycles-pp.iomap_write_iter
0.72 -0.6 0.09 ± 5% perf-profile.self.cycles-pp.read
0.68 -0.6 0.09 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.66 -0.6 0.08 ± 5% perf-profile.self.cycles-pp.__cond_resched
0.64 -0.6 0.08 ± 4% perf-profile.self.cycles-pp.xas_descend
0.64 ± 2% -0.6 0.08 ± 5% perf-profile.self.cycles-pp.xfs_ilock
0.62 -0.6 0.07 ± 6% perf-profile.self.cycles-pp.release_pages
0.59 -0.5 0.07 ± 5% perf-profile.self.cycles-pp.folio_activate_fn
0.60 -0.5 0.08 ± 6% perf-profile.self.cycles-pp.xfs_file_buffered_write
0.59 -0.5 0.06 ± 7% perf-profile.self.cycles-pp.folio_batch_move_lru
0.58 ± 2% -0.5 0.06 ± 7% perf-profile.self.cycles-pp.iomap_write_begin
0.55 -0.5 0.06 ± 6% perf-profile.self.cycles-pp.apparmor_file_permission
0.56 -0.5 0.07 perf-profile.self.cycles-pp.filemap_get_entry
0.69 -0.5 0.20 ± 4% perf-profile.self.cycles-pp.xfs_iunlock
0.54 ± 4% -0.5 0.06 ± 9% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.54 ± 2% -0.5 0.06 ± 6% perf-profile.self.cycles-pp.__iomap_write_begin
0.53 ± 2% -0.5 0.06 ± 7% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.52 -0.5 0.06 ± 6% perf-profile.self.cycles-pp.iomap_iter
0.50 ± 2% -0.4 0.06 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.50 -0.4 0.06 ± 6% perf-profile.self.cycles-pp.xas_load
0.49 ± 2% -0.4 0.06 ± 9% perf-profile.self.cycles-pp.workingset_activation
0.48 -0.4 0.06 perf-profile.self.cycles-pp.filemap_dirty_folio
0.47 -0.4 0.05 ± 7% perf-profile.self.cycles-pp.__mod_node_page_state
0.48 ± 2% -0.4 0.06 ± 7% perf-profile.self.cycles-pp.folio_mark_accessed
0.47 ± 2% -0.4 0.05 ± 8% perf-profile.self.cycles-pp.rw_verify_area
0.46 -0.4 0.06 ± 6% perf-profile.self.cycles-pp.folio_unlock
0.44 ± 2% -0.4 0.04 ± 44% perf-profile.self.cycles-pp.task_mm_cid_work
0.39 -0.4 0.02 ± 99% perf-profile.self.cycles-pp.__filemap_add_folio
0.42 ± 2% -0.4 0.06 perf-profile.self.cycles-pp.down_read
0.48 ± 3% -0.4 0.12 ± 3% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.40 -0.4 0.04 ± 44% perf-profile.self.cycles-pp.xfs_file_write_checks
0.40 -0.4 0.04 ± 45% perf-profile.self.cycles-pp.up_read
0.38 -0.3 0.03 ± 70% perf-profile.self.cycles-pp.atime_needs_update
0.46 ± 2% -0.3 0.12 ± 3% perf-profile.self.cycles-pp.update_sg_lb_stats
0.38 -0.3 0.04 ± 44% perf-profile.self.cycles-pp.xas_store
0.36 ± 3% -0.3 0.05 ± 7% perf-profile.self.cycles-pp.xfs_is_always_cow_inode
0.35 -0.3 0.05 perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
0.34 ± 4% -0.3 0.05 perf-profile.self.cycles-pp.xfs_file_read_iter
0.42 ± 11% -0.3 0.17 ± 6% perf-profile.self.cycles-pp.ktime_get
0.36 -0.3 0.11 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.28 ± 3% -0.2 0.07 perf-profile.self.cycles-pp.xfs_ilock_for_iomap
0.78 -0.1 0.68 ± 2% perf-profile.self.cycles-pp.up_write
0.63 +0.1 0.73 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.06 +0.1 0.19 ± 5% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.2 0.20 ± 5% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +0.6 0.60 ± 5% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +0.7 0.69 ± 2% perf-profile.self.cycles-pp.__time_stats_update
7.68 +74.2 81.93 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-icl-2sp2: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/xfs/x86_64-rhel-8.3/3000/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_rw/aim7
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
4.092e+09 +56.1% 6.388e+09 ± 21% cpuidle..time
89.56 ± 3% +322.7% 378.57 uptime.boot
10178 ± 3% +25.6% 12785 ± 10% uptime.idle
82.27 -80.8% 15.82 ± 23% iostat.cpu.idle
16.13 +420.3% 83.93 ± 4% iostat.cpu.system
1.59 -84.6% 0.25 iostat.cpu.user
3.17 ± 28% +6500.0% 209.00 ± 6% perf-c2c.DRAM.local
38.00 ± 19% +62325.4% 23721 perf-c2c.DRAM.remote
30.33 ± 22% +90835.2% 27583 perf-c2c.HITM.local
6.83 ± 29% +2.6e+05% 18029 perf-c2c.HITM.remote
37.17 ± 20% +1.2e+05% 45612 perf-c2c.HITM.total
82.24 -80.8% 15.80 ± 23% vmstat.cpu.id
16.15 +419.9% 83.94 ± 4% vmstat.cpu.sy
21.85 ± 7% +462.3% 122.84 ± 10% vmstat.procs.r
58000 -83.0% 9838 ± 11% vmstat.system.cs
147400 +4.1% 153458 vmstat.system.in
81.46 -66.1 15.36 ± 24% mpstat.cpu.all.idle%
0.25 ± 3% -0.1 0.15 mpstat.cpu.all.irq%
0.10 -0.1 0.02 ± 15% mpstat.cpu.all.soft%
16.57 +67.7 84.23 ± 4% mpstat.cpu.all.sys%
1.62 -1.4 0.24 mpstat.cpu.all.usr%
28.64 ± 3% +249.0% 99.97 mpstat.max_utilization_pct
491457 -88.7% 55715 ± 2% aim7.jobs-per-min
36.81 +778.6% 323.40 ± 2% aim7.time.elapsed_time
36.81 +778.6% 323.40 ± 2% aim7.time.elapsed_time.max
46646 ± 4% +3038.8% 1464149 ± 43% aim7.time.involuntary_context_switches
948.00 ± 16% -65.4% 327.83 ± 54% aim7.time.major_page_faults
174670 +59.5% 278569 ± 3% aim7.time.minor_page_faults
713.96 +4799.5% 34980 ± 6% aim7.time.system_time
38.75 +25.3% 48.54 aim7.time.user_time
497817 -39.8% 299645 ± 18% aim7.time.voluntary_context_switches
39532 ± 2% +2281.0% 941267 ± 2% meminfo.Active
39452 ± 2% +2285.6% 941187 ± 2% meminfo.Active(anon)
42754 ± 8% +449.8% 235082 ± 24% meminfo.AnonHugePages
4216054 +30.3% 5493552 meminfo.Cached
3144630 +28.6% 4045028 meminfo.Committed_AS
1003816 +39.5% 1400732 meminfo.Dirty
1856110 +20.9% 2243634 meminfo.Inactive
1007177 +39.5% 1405244 meminfo.Inactive(file)
141074 +13.1% 159548 meminfo.KReclaimable
108107 ± 2% -49.9% 54174 ± 6% meminfo.Mapped
141074 +13.1% 159548 meminfo.SReclaimable
93866 ± 2% +932.5% 969158 ± 2% meminfo.Shmem
8717280 +11.4% 9712081 ± 2% meminfo.max_used_kB
5319 ± 28% +2726.0% 150325 ± 49% numa-meminfo.node0.Active
5239 ± 28% +2767.7% 150245 ± 49% numa-meminfo.node0.Active(anon)
510121 ± 3% +37.7% 702606 numa-meminfo.node0.Dirty
512156 ± 3% +37.8% 705662 numa-meminfo.node0.Inactive(file)
205841 ± 25% +26.1% 259615 ± 6% numa-meminfo.node0.SUnreclaim
10668 ± 16% +1370.5% 156879 ± 48% numa-meminfo.node0.Shmem
34264 ± 7% +2209.4% 791285 ± 10% numa-meminfo.node1.Active
34264 ± 7% +2209.4% 791285 ± 10% numa-meminfo.node1.Active(anon)
39664 ± 22% +412.2% 203165 ± 33% numa-meminfo.node1.AnonHugePages
493715 ± 2% +41.4% 697996 numa-meminfo.node1.Dirty
495139 ± 2% +41.3% 699479 numa-meminfo.node1.Inactive(file)
72700 ± 14% -43.4% 41137 ± 26% numa-meminfo.node1.Mapped
45256 ±131% -94.3% 2597 ± 15% numa-meminfo.node1.PageTables
84096 ± 4% +865.9% 812326 ± 10% numa-meminfo.node1.Shmem
1320 ± 28% +2744.0% 37568 ± 49% numa-vmstat.node0.nr_active_anon
127287 ± 3% +37.9% 175532 numa-vmstat.node0.nr_dirty
127777 ± 3% +38.0% 176272 numa-vmstat.node0.nr_inactive_file
2683 ± 16% +1361.6% 39218 ± 48% numa-vmstat.node0.nr_shmem
51429 ± 25% +26.2% 64886 ± 6% numa-vmstat.node0.nr_slab_unreclaimable
1320 ± 28% +2744.0% 37568 ± 49% numa-vmstat.node0.nr_zone_active_anon
127766 ± 3% +38.0% 176272 numa-vmstat.node0.nr_zone_inactive_file
127273 ± 3% +37.9% 175532 numa-vmstat.node0.nr_zone_write_pending
8585 ± 7% +2204.2% 197814 ± 10% numa-vmstat.node1.nr_active_anon
19.37 ± 23% +412.1% 99.20 ± 33% numa-vmstat.node1.nr_anon_transparent_hugepages
122933 ± 2% +41.8% 174349 numa-vmstat.node1.nr_dirty
123307 ± 2% +41.7% 174673 numa-vmstat.node1.nr_inactive_file
18781 ± 13% -44.3% 10468 ± 27% numa-vmstat.node1.nr_mapped
11299 ±131% -94.3% 649.55 ± 15% numa-vmstat.node1.nr_page_table_pages
21618 ± 3% +839.5% 203091 ± 10% numa-vmstat.node1.nr_shmem
8585 ± 7% +2204.2% 197814 ± 10% numa-vmstat.node1.nr_zone_active_anon
123307 ± 2% +41.7% 174673 numa-vmstat.node1.nr_zone_inactive_file
122937 ± 2% +41.8% 174350 numa-vmstat.node1.nr_zone_write_pending
9882 ± 2% +2282.1% 235403 ± 2% proc-vmstat.nr_active_anon
199413 +2.0% 203340 proc-vmstat.nr_anon_pages
20.80 ± 8% +452.2% 114.87 ± 24% proc-vmstat.nr_anon_transparent_hugepages
250598 +39.6% 349822 proc-vmstat.nr_dirty
1054061 +30.3% 1373091 proc-vmstat.nr_file_pages
212518 -1.4% 209456 proc-vmstat.nr_inactive_anon
251461 +39.6% 350977 proc-vmstat.nr_inactive_file
68209 +2.1% 69671 proc-vmstat.nr_kernel_stack
27785 ± 2% -50.9% 13639 ± 6% proc-vmstat.nr_mapped
23849 ± 2% +916.0% 242311 ± 2% proc-vmstat.nr_shmem
35265 +13.1% 39882 proc-vmstat.nr_slab_reclaimable
90827 +4.7% 95061 proc-vmstat.nr_slab_unreclaimable
9882 ± 2% +2282.1% 235403 ± 2% proc-vmstat.nr_zone_active_anon
212518 -1.4% 209456 proc-vmstat.nr_zone_inactive_anon
251460 +39.6% 350977 proc-vmstat.nr_zone_inactive_file
250598 +39.6% 349824 proc-vmstat.nr_zone_write_pending
20152 ± 16% +354.2% 91526 ± 9% proc-vmstat.numa_hint_faults
11027 ± 24% +181.3% 31016 ± 10% proc-vmstat.numa_hint_faults_local
134138 +1.1% 135631 proc-vmstat.numa_other
10519 ± 60% +409.8% 53631 ± 22% proc-vmstat.numa_pages_migrated
26727 ± 2% +551.7% 174180 ± 9% proc-vmstat.pgactivate
77903322 +1.1% 78778278 proc-vmstat.pgalloc_normal
520430 +153.9% 1321137 proc-vmstat.pgfault
77544961 +1.0% 78338147 proc-vmstat.pgfree
10519 ± 60% +409.8% 53631 ± 22% proc-vmstat.pgmigrate_success
18267 ± 11% +396.2% 90639 ± 5% proc-vmstat.pgreuse
1.29 +22.4% 1.58 ± 2% perf-stat.i.MPKI
1.378e+10 -50.4% 6.836e+09 ± 2% perf-stat.i.branch-instructions
1.92 ± 4% -1.5 0.37 ± 3% perf-stat.i.branch-miss-rate%
55160833 -73.8% 14453891 ± 2% perf-stat.i.branch-misses
23.93 +11.4 35.30 perf-stat.i.cache-miss-rate%
1.166e+08 -59.0% 47763711 perf-stat.i.cache-misses
4.074e+08 -67.4% 1.329e+08 perf-stat.i.cache-references
58692 -83.2% 9857 ± 11% perf-stat.i.context-switches
0.78 +1082.1% 9.26 ± 2% perf-stat.i.cpi
5.656e+10 +397.6% 2.814e+11 ± 4% perf-stat.i.cpu-cycles
1507 ± 2% -39.8% 907.28 ± 9% perf-stat.i.cpu-migrations
1188 ± 9% +387.8% 5797 ± 5% perf-stat.i.cycles-between-cache-misses
6.918e+10 -57.2% 2.958e+10 perf-stat.i.instructions
1.29 -88.3% 0.15 ± 2% perf-stat.i.ipc
26.74 ± 16% -96.0% 1.07 ± 55% perf-stat.i.major-faults
12211 -68.3% 3871 perf-stat.i.minor-faults
12238 -68.4% 3872 perf-stat.i.page-faults
0.39 -0.2 0.21 ± 4% perf-stat.overall.branch-miss-rate%
28.60 +7.3 35.90 perf-stat.overall.cache-miss-rate%
0.82 +1060.1% 9.50 ± 2% perf-stat.overall.cpi
485.33 +1113.9% 5891 ± 5% perf-stat.overall.cycles-between-cache-misses
1.22 -91.4% 0.11 ± 2% perf-stat.overall.ipc
1.365e+10 -50.0% 6.825e+09 ± 2% perf-stat.ps.branch-instructions
52868687 -72.7% 14449111 ± 2% perf-stat.ps.branch-misses
1.156e+08 -58.8% 47686827 perf-stat.ps.cache-misses
4.043e+08 -67.1% 1.328e+08 perf-stat.ps.cache-references
58046 -83.1% 9815 ± 11% perf-stat.ps.context-switches
124590 +2.4% 127598 perf-stat.ps.cpu-clock
5.612e+10 +400.3% 2.808e+11 ± 4% perf-stat.ps.cpu-cycles
1493 ± 2% -39.4% 905.34 ± 9% perf-stat.ps.cpu-migrations
6.85e+10 -56.9% 2.953e+10 perf-stat.ps.instructions
25.26 ± 16% -95.9% 1.04 ± 54% perf-stat.ps.major-faults
11556 -66.5% 3867 perf-stat.ps.minor-faults
11581 -66.6% 3868 perf-stat.ps.page-faults
124590 +2.4% 127598 perf-stat.ps.task-clock
2.588e+12 +270.4% 9.585e+12 ± 4% perf-stat.total.instructions
2685 ± 10% +5.6e+05% 15041909 ± 11% sched_debug.cfs_rq:/.avg_vruntime.avg
46128 ± 28% +33525.5% 15510760 ± 12% sched_debug.cfs_rq:/.avg_vruntime.max
66.88 ± 34% +2.1e+07% 14078207 ± 12% sched_debug.cfs_rq:/.avg_vruntime.min
5569 ± 19% +3338.4% 191488 ± 27% sched_debug.cfs_rq:/.avg_vruntime.stddev
0.12 ± 9% +555.4% 0.79 ± 8% sched_debug.cfs_rq:/.h_nr_running.avg
0.33 ± 6% +30.5% 0.43 ± 4% sched_debug.cfs_rq:/.h_nr_running.stddev
4835 ± 83% +123.5% 10807 ± 23% sched_debug.cfs_rq:/.load.avg
1673 ± 82% -87.0% 217.07 ± 54% sched_debug.cfs_rq:/.load_avg.avg
2685 ± 10% +5.6e+05% 15041909 ± 11% sched_debug.cfs_rq:/.min_vruntime.avg
46128 ± 28% +33525.5% 15510760 ± 12% sched_debug.cfs_rq:/.min_vruntime.max
66.88 ± 34% +2.1e+07% 14078207 ± 12% sched_debug.cfs_rq:/.min_vruntime.min
5569 ± 19% +3338.4% 191488 ± 27% sched_debug.cfs_rq:/.min_vruntime.stddev
0.12 ± 9% +502.5% 0.72 ± 3% sched_debug.cfs_rq:/.nr_running.avg
26.53 ± 42% -86.2% 3.67 ± 44% sched_debug.cfs_rq:/.removed.runnable_avg.avg
532.83 ± 3% -83.2% 89.72 ± 8% sched_debug.cfs_rq:/.removed.runnable_avg.max
106.34 ± 22% -84.9% 16.10 ± 24% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
26.53 ± 42% -86.2% 3.67 ± 44% sched_debug.cfs_rq:/.removed.util_avg.avg
532.83 ± 3% -83.2% 89.44 ± 9% sched_debug.cfs_rq:/.removed.util_avg.max
106.34 ± 22% -84.9% 16.08 ± 25% sched_debug.cfs_rq:/.removed.util_avg.stddev
268.22 ± 7% +198.9% 801.70 ± 9% sched_debug.cfs_rq:/.runnable_avg.avg
266.99 ± 7% +174.7% 733.53 ± 4% sched_debug.cfs_rq:/.util_avg.avg
306.90 ± 9% -18.0% 251.75 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
16.67 ± 15% +4066.8% 694.53 ± 11% sched_debug.cfs_rq:/.util_est.avg
519.33 ± 6% +262.4% 1882 ± 14% sched_debug.cfs_rq:/.util_est.max
84.40 ± 9% +332.6% 365.14 ± 6% sched_debug.cfs_rq:/.util_est.stddev
861301 +11.4% 959063 sched_debug.cpu.avg_idle.avg
4140 ± 6% +13041.4% 544165 ± 10% sched_debug.cpu.avg_idle.min
214548 ± 4% -62.3% 80963 ± 6% sched_debug.cpu.avg_idle.stddev
52341 ± 5% +291.7% 205031 sched_debug.cpu.clock.avg
52347 ± 5% +291.8% 205082 sched_debug.cpu.clock.max
52332 ± 5% +291.7% 204974 sched_debug.cpu.clock.min
3.24 ± 4% +836.0% 30.32 ± 7% sched_debug.cpu.clock.stddev
52196 ± 5% +292.0% 204629 sched_debug.cpu.clock_task.avg
52335 ± 5% +291.3% 204814 sched_debug.cpu.clock_task.max
43435 ± 6% +350.5% 195667 sched_debug.cpu.clock_task.min
340.16 ± 11% +1060.6% 3948 ± 5% sched_debug.cpu.curr->pid.avg
3184 +210.6% 9891 ± 3% sched_debug.cpu.curr->pid.max
953.47 ± 5% +101.5% 1921 ± 10% sched_debug.cpu.curr->pid.stddev
0.00 ± 10% +137.2% 0.00 ± 6% sched_debug.cpu.next_balance.stddev
0.12 ± 12% +568.3% 0.78 ± 9% sched_debug.cpu.nr_running.avg
0.33 ± 7% +32.0% 0.43 ± 4% sched_debug.cpu.nr_running.stddev
1078 ± 3% +1119.6% 13149 ± 10% sched_debug.cpu.nr_switches.avg
11461 ± 21% +304.3% 46341 ± 19% sched_debug.cpu.nr_switches.max
124.50 ± 5% +8251.1% 10397 ± 12% sched_debug.cpu.nr_switches.min
1719 ± 9% +149.6% 4292 ± 20% sched_debug.cpu.nr_switches.stddev
0.01 ± 70% +1.7e+05% 17.66 sched_debug.cpu.nr_uninterruptible.avg
26.33 ± 17% +285.3% 101.47 ± 16% sched_debug.cpu.nr_uninterruptible.max
-18.67 +125.6% -42.11 sched_debug.cpu.nr_uninterruptible.min
5.47 ± 7% +368.8% 25.66 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
52336 ± 5% +291.6% 204973 sched_debug.cpu_clk
51103 ± 5% +298.7% 203741 sched_debug.ktime
53300 ± 5% +286.3% 205885 sched_debug.sched_clk
38.98 -38.1 0.86 ± 3% perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
15.89 -15.9 0.00 perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
12.47 -12.5 0.00 perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
9.50 -9.5 0.00 perf-profile.calltrace.cycles-pp.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
7.45 -7.4 0.00 perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_end.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write
6.23 ± 5% -6.2 0.00 perf-profile.calltrace.cycles-pp.creat64
6.21 ± 5% -6.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
6.20 ± 5% -6.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
6.14 ± 5% -6.1 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
6.14 ± 5% -6.1 0.00 perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
6.10 ± 6% -6.1 0.00 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.10 ± 6% -6.1 0.00 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
6.03 ± 6% -6.0 0.00 perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
5.72 ± 6% -5.7 0.00 perf-profile.calltrace.cycles-pp.unlink
5.69 ± 6% -5.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.69 ± 6% -5.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
5.64 ± 6% -5.6 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
5.63 ± 6% -5.6 0.00 perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
0.00 +1.5 1.49 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve
0.00 +1.5 1.51 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc
0.00 +1.5 1.53 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time
0.00 +1.6 1.59 perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified
0.00 +1.6 1.60 perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +1.6 1.62 perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
4.84 +1.9 6.76 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
2.09 ± 2% +4.6 6.70 ± 3% perf-profile.calltrace.cycles-pp.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write
0.00 +4.8 4.85 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified
0.00 +4.9 4.86 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
0.00 +4.9 4.89 ± 4% perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
0.58 ± 5% +6.1 6.66 ± 3% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write
74.04 +25.0 99.07 perf-profile.calltrace.cycles-pp.write
70.25 +28.7 98.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
69.76 +29.2 98.97 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
68.04 +30.9 98.93 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
66.43 +32.5 98.89 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
61.26 +37.5 98.77 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
52.58 +39.3 91.92 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
11.86 +79.2 91.02 perf-profile.calltrace.cycles-pp.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
8.24 +82.7 90.93 perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
2.02 +88.5 90.51 perf-profile.calltrace.cycles-pp.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write.xfs_file_buffered_write
0.00 +89.7 89.67 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.00 +89.8 89.83 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter
0.00 +90.2 90.23 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin.iomap_iter.iomap_file_buffered_write
39.24 -38.4 0.86 ± 3% perf-profile.children.cycles-pp.iomap_write_iter
16.07 -15.7 0.36 ± 4% perf-profile.children.cycles-pp.iomap_write_begin
12.63 -12.4 0.26 ± 2% perf-profile.children.cycles-pp.iomap_write_end
11.91 ± 6% -11.7 0.23 ± 12% perf-profile.children.cycles-pp.down_write
9.80 ± 7% -9.6 0.17 ± 15% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
9.66 -9.4 0.23 ± 3% perf-profile.children.cycles-pp.__filemap_get_folio
8.78 ± 8% -8.6 0.16 ± 13% perf-profile.children.cycles-pp.rwsem_optimistic_spin
8.38 -8.2 0.16 ± 3% perf-profile.children.cycles-pp.iomap_set_range_uptodate
7.70 ± 9% -7.6 0.09 ± 19% perf-profile.children.cycles-pp.osq_lock
6.25 ± 5% -6.0 0.25 ± 6% perf-profile.children.cycles-pp.do_sys_openat2
6.24 ± 5% -6.0 0.25 ± 6% perf-profile.children.cycles-pp.creat64
6.19 ± 5% -5.9 0.25 ± 6% perf-profile.children.cycles-pp.do_filp_open
6.18 ± 5% -5.9 0.25 ± 5% perf-profile.children.cycles-pp.path_openat
6.14 ± 5% -5.9 0.25 ± 6% perf-profile.children.cycles-pp.__x64_sys_creat
6.03 ± 6% -5.8 0.24 ± 6% perf-profile.children.cycles-pp.open_last_lookups
5.74 ± 6% -5.6 0.15 ± 8% perf-profile.children.cycles-pp.unlink
5.64 ± 6% -5.5 0.15 ± 8% perf-profile.children.cycles-pp.__x64_sys_unlink
5.63 ± 6% -5.5 0.15 ± 8% perf-profile.children.cycles-pp.do_unlinkat
5.00 -4.9 0.10 ± 5% perf-profile.children.cycles-pp.__iomap_write_begin
4.35 -4.3 0.10 ± 5% perf-profile.children.cycles-pp.llseek
3.96 -3.8 0.17 ± 2% perf-profile.children.cycles-pp.__close
3.94 -3.8 0.16 ± 3% perf-profile.children.cycles-pp.__x64_sys_close
3.91 -3.7 0.16 ± 3% perf-profile.children.cycles-pp.__fput
3.77 -3.7 0.07 ± 6% perf-profile.children.cycles-pp.dput
3.74 -3.7 0.07 ± 6% perf-profile.children.cycles-pp.__dentry_kill
3.66 -3.6 0.07 ± 5% perf-profile.children.cycles-pp.evict
3.64 -3.6 0.07 ± 5% perf-profile.children.cycles-pp.truncate_inode_pages_range
3.50 -3.4 0.08 ± 6% perf-profile.children.cycles-pp.filemap_add_folio
3.44 ± 2% -3.4 0.07 ± 9% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
3.34 -3.2 0.09 ± 5% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
3.08 -2.9 0.21 ± 5% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
2.96 -2.8 0.14 ± 5% perf-profile.children.cycles-pp.xfs_ilock
2.87 -2.8 0.06 ± 9% perf-profile.children.cycles-pp.zero_user_segments
2.88 -2.8 0.07 perf-profile.children.cycles-pp.filemap_get_entry
2.81 -2.8 0.06 ± 6% perf-profile.children.cycles-pp.filemap_dirty_folio
2.79 -2.7 0.06 ± 6% perf-profile.children.cycles-pp.memset_orig
2.49 -2.4 0.06 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.45 -2.4 0.08 ± 12% perf-profile.children.cycles-pp.xfs_iunlock
2.32 -2.3 0.06 ± 6% perf-profile.children.cycles-pp.fault_in_iov_iter_readable
2.28 -2.2 0.06 ± 8% perf-profile.children.cycles-pp.__filemap_add_folio
2.11 -2.1 0.05 perf-profile.children.cycles-pp.iomap_iter_advance
2.06 -2.0 0.05 ± 7% perf-profile.children.cycles-pp.fault_in_readable
2.13 ± 2% -2.0 0.17 ± 5% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.74 -1.7 0.02 ± 99% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
1.52 -1.5 0.07 ± 13% perf-profile.children.cycles-pp.rwsem_spin_on_owner
1.50 ± 2% -1.4 0.14 ± 11% perf-profile.children.cycles-pp.kthread
1.50 ± 2% -1.4 0.14 ± 11% perf-profile.children.cycles-pp.ret_from_fork
1.50 ± 2% -1.4 0.14 ± 11% perf-profile.children.cycles-pp.ret_from_fork_asm
1.47 ± 2% -1.3 0.14 ± 12% perf-profile.children.cycles-pp.worker_thread
1.42 ± 3% -1.3 0.12 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
1.37 ± 2% -1.2 0.14 ± 10% perf-profile.children.cycles-pp.process_one_work
1.33 ± 2% -1.2 0.11 perf-profile.children.cycles-pp.xfs_inodegc_worker
1.31 ± 2% -1.2 0.11 perf-profile.children.cycles-pp.xfs_inactive
1.28 ± 2% -1.1 0.15 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.23 ± 3% -1.1 0.10 perf-profile.children.cycles-pp.xlog_cil_commit
1.25 ± 2% -1.1 0.14 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
1.00 -0.8 0.16 perf-profile.children.cycles-pp.lookup_open
0.92 -0.8 0.10 perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.84 ± 2% -0.8 0.06 perf-profile.children.cycles-pp.xfs_inactive_ifree
0.88 -0.8 0.11 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.82 -0.7 0.10 ± 4% perf-profile.children.cycles-pp.tick_nohz_highres_handler
0.79 -0.7 0.11 perf-profile.children.cycles-pp.xfs_generic_create
0.76 -0.6 0.11 perf-profile.children.cycles-pp.xfs_create
0.72 -0.6 0.10 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.71 -0.6 0.10 ± 4% perf-profile.children.cycles-pp.update_process_times
0.64 -0.6 0.06 ± 6% perf-profile.children.cycles-pp.vfs_unlink
0.62 -0.6 0.06 perf-profile.children.cycles-pp.xfs_remove
0.62 -0.6 0.06 ± 6% perf-profile.children.cycles-pp.xfs_vn_unlink
0.46 ± 2% -0.4 0.05 perf-profile.children.cycles-pp.xfs_inactive_truncate
0.47 -0.4 0.08 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.21 ± 4% -0.2 0.05 perf-profile.children.cycles-pp.task_tick_fair
0.14 ± 2% -0.1 0.05 perf-profile.children.cycles-pp.xfs_vn_lookup
0.13 ± 2% -0.1 0.05 perf-profile.children.cycles-pp.xfs_lookup
0.12 ± 4% -0.1 0.05 perf-profile.children.cycles-pp.xfs_dir_lookup
0.10 ± 3% -0.1 0.05 perf-profile.children.cycles-pp.xfs_free_eofblocks
0.12 ± 3% -0.0 0.09 perf-profile.children.cycles-pp.xfs_release
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.local_clock
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.mean_and_variance_weighted_update
0.00 +0.2 0.25 ± 2% perf-profile.children.cycles-pp.time_stats_update_one
0.45 ± 4% +1.2 1.64 perf-profile.children.cycles-pp.xfs_trans_alloc
0.42 ± 3% +1.2 1.62 perf-profile.children.cycles-pp.xfs_trans_reserve
0.39 ± 4% +1.2 1.61 perf-profile.children.cycles-pp.xfs_log_reserve
5.10 +1.7 6.77 ± 3% perf-profile.children.cycles-pp.xfs_file_write_checks
2.21 ± 2% +4.5 6.70 ± 3% perf-profile.children.cycles-pp.kiocb_modified
0.58 ± 5% +6.1 6.66 ± 3% perf-profile.children.cycles-pp.xfs_vn_update_time
88.41 +11.2 99.65 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
87.83 +11.8 99.64 perf-profile.children.cycles-pp.do_syscall_64
74.36 +24.8 99.13 perf-profile.children.cycles-pp.write
68.20 +30.8 98.96 perf-profile.children.cycles-pp.ksys_write
66.63 +32.3 98.92 perf-profile.children.cycles-pp.vfs_write
61.46 +37.3 98.78 perf-profile.children.cycles-pp.xfs_file_buffered_write
52.72 +39.2 91.92 perf-profile.children.cycles-pp.iomap_file_buffered_write
12.07 +79.0 91.03 perf-profile.children.cycles-pp.iomap_iter
8.66 +82.3 90.95 perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
2.18 +88.4 90.55 perf-profile.children.cycles-pp.xfs_ilock_for_iomap
0.90 ± 2% +95.7 96.58 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.65 ± 2% +95.7 96.36 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +97.0 97.02 perf-profile.children.cycles-pp.__time_stats_update
8.27 -8.1 0.16 ± 3% perf-profile.self.cycles-pp.iomap_set_range_uptodate
7.57 ± 9% -7.5 0.09 ± 19% perf-profile.self.cycles-pp.osq_lock
3.28 -3.2 0.09 ± 5% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
2.77 -2.7 0.06 ± 6% perf-profile.self.cycles-pp.memset_orig
2.67 -2.6 0.06 ± 9% perf-profile.self.cycles-pp.vfs_write
2.03 -2.0 0.04 ± 44% perf-profile.self.cycles-pp.iomap_iter_advance
2.00 -2.0 0.04 ± 44% perf-profile.self.cycles-pp.fault_in_readable
2.12 -1.9 0.18 ± 4% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
1.69 -1.7 0.02 ± 99% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
1.50 -1.4 0.06 ± 7% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.88 ± 2% -0.8 0.10 ± 3% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.52 ± 2% -0.5 0.06 ± 6% perf-profile.self.cycles-pp.xfs_ilock_for_iomap
0.45 ± 2% -0.2 0.22 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +0.2 0.18 ± 4% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +0.2 0.21 ± 3% perf-profile.self.cycles-pp.__time_stats_update
0.64 ± 2% +95.7 96.36 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-icl-2sp5: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
gcc-12/performance/bufferedio/1SSD/xfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/MRDM/fxmark/18
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.72 -0.2 0.49 ± 4% mpstat.cpu.all.usr%
130035 ± 9% -21.4% 102208 ± 23% numa-meminfo.node1.SUnreclaim
32512 ± 9% -21.4% 25556 ± 23% numa-vmstat.node1.nr_slab_unreclaimable
0.08 ±143% -87.1% 0.01 ± 73% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.__flush_work.isra.0
60568 ± 10% -41.6% 35398 ± 11% turbostat.POLL
3021562 -10.8% 2694935 ± 3% meminfo.KReclaimable
3021562 -10.8% 2694935 ± 3% meminfo.SReclaimable
24629 -10.8% 21974 ± 3% vmstat.io.bo
26873 ± 2% -12.2% 23585 ± 4% vmstat.system.cs
111867 ± 3% -11.2% 99343 sched_debug.cpu.nr_switches.avg
468160 ± 8% -18.4% 381917 ± 6% sched_debug.cpu.nr_switches.max
96264 ± 9% -17.1% 79801 ± 6% sched_debug.cpu.nr_switches.stddev
13.28 -40.3% 7.93 ± 5% fxmark.ssd_xfs_MRDM_18_bufferedio.user_sec
1.47 -40.3% 0.88 ± 5% fxmark.ssd_xfs_MRDM_18_bufferedio.user_util
4.476e+08 -39.0% 2.728e+08 ± 3% fxmark.ssd_xfs_MRDM_18_bufferedio.works
8952400 -39.0% 5456929 ± 3% fxmark.ssd_xfs_MRDM_18_bufferedio.works/sec
755303 -10.8% 673770 ± 3% proc-vmstat.nr_slab_reclaimable
76184 -1.7% 74884 proc-vmstat.nr_slab_unreclaimable
1634297 -4.6% 1559025 proc-vmstat.numa_hit
1500937 -5.0% 1425733 proc-vmstat.numa_local
2847649 -5.9% 2680074 proc-vmstat.pgalloc_normal
2772462 -5.7% 2614732 proc-vmstat.pgfree
3620160 -8.9% 3299175 proc-vmstat.pgpgout
48293239 -7.5% 44664801 ± 5% perf-stat.i.branch-instructions
26937 ± 2% -12.4% 23585 ± 4% perf-stat.i.context-switches
53014632 -6.9% 49367688 ± 5% perf-stat.i.dTLB-loads
17975906 -6.4% 16824039 ± 5% perf-stat.i.dTLB-stores
2.386e+08 -7.4% 2.211e+08 ± 5% perf-stat.i.instructions
14.26 -3.7% 13.73 ± 2% perf-stat.i.metric.K/sec
0.95 -6.8% 0.89 ± 5% perf-stat.i.metric.M/sec
10071 ± 8% -13.5% 8707 ± 7% perf-stat.i.node-stores
48002301 ± 2% -7.6% 44344423 ± 5% perf-stat.ps.branch-instructions
27015 ± 2% -12.4% 23660 ± 4% perf-stat.ps.context-switches
52705638 -7.0% 49022925 ± 5% perf-stat.ps.dTLB-loads
17882324 -6.5% 16719436 ± 5% perf-stat.ps.dTLB-stores
2.372e+08 -7.5% 2.195e+08 ± 5% perf-stat.ps.instructions
10008 ± 8% -13.6% 8648 ± 7% perf-stat.ps.node-stores
22.94 ± 12% -21.6 1.38 ± 6% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
22.82 ± 12% -21.5 1.30 ± 6% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64
20.80 ± 14% -20.8 0.00 perf-profile.calltrace.cycles-pp.down_read.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
19.35 ± 15% -18.6 0.79 ± 19% perf-profile.calltrace.cycles-pp.down_read_killable.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.54 ± 15% -16.5 0.08 ±223% perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64
16.62 ± 15% -16.4 0.18 ±141% perf-profile.calltrace.cycles-pp.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.42 ± 8% -5.4 0.00 perf-profile.calltrace.cycles-pp.up_read.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.12 ± 18% -5.1 0.00 perf-profile.calltrace.cycles-pp.xfs_ifork_zapped.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
2.54 ± 10% -0.9 1.63 ± 4% perf-profile.calltrace.cycles-pp.xfs_dir2_leaf_getdents.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
0.00 +0.8 0.83 ± 9% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
97.98 +1.0 98.96 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.getdents64
97.79 +1.1 98.86 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
0.00 +1.3 1.30 ± 7% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
97.32 +1.3 98.62 perf-profile.calltrace.cycles-pp.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
0.00 +1.9 1.90 ± 3% perf-profile.calltrace.cycles-pp.time_stats_update_one.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
95.87 +2.0 97.88 perf-profile.calltrace.cycles-pp.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
53.85 ± 11% +42.3 96.18 perf-profile.calltrace.cycles-pp.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
20.98 ± 14% +70.2 91.19 perf-profile.calltrace.cycles-pp.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
0.00 +85.3 85.26 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir
0.00 +86.5 86.47 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
0.00 +89.7 89.69 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
28.26 ± 8% -26.8 1.49 ± 6% perf-profile.children.cycles-pp.up_read
22.97 ± 12% -21.6 1.39 ± 6% perf-profile.children.cycles-pp.xfs_iunlock
20.83 ± 14% -20.0 0.84 ± 9% perf-profile.children.cycles-pp.down_read
19.38 ± 15% -18.6 0.80 ± 19% perf-profile.children.cycles-pp.down_read_killable
16.65 ± 15% -16.2 0.43 ± 21% perf-profile.children.cycles-pp.touch_atime
16.60 ± 15% -16.2 0.40 ± 21% perf-profile.children.cycles-pp.atime_needs_update
5.14 ± 18% -4.8 0.34 ± 12% perf-profile.children.cycles-pp.xfs_ifork_zapped
2.56 ± 9% -0.9 1.65 ± 4% perf-profile.children.cycles-pp.xfs_dir2_leaf_getdents
0.92 ± 3% -0.4 0.48 ± 5% perf-profile.children.cycles-pp.__fdget_pos
0.54 -0.3 0.23 ± 20% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.63 ± 4% -0.3 0.33 ± 8% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.49 ± 2% -0.2 0.26 ± 7% perf-profile.children.cycles-pp.readdir64_r
0.41 ± 4% -0.2 0.22 ± 5% perf-profile.children.cycles-pp.mutex_lock
0.31 ± 3% -0.2 0.16 ± 10% perf-profile.children.cycles-pp.security_file_permission
0.29 ± 2% -0.1 0.14 ± 10% perf-profile.children.cycles-pp.mutex_unlock
0.31 ± 3% -0.1 0.16 ± 6% perf-profile.children.cycles-pp.__cond_resched
0.24 ± 4% -0.1 0.12 ± 5% perf-profile.children.cycles-pp.current_time
0.25 ± 3% -0.1 0.13 ± 10% perf-profile.children.cycles-pp.apparmor_file_permission
0.21 ± 3% -0.1 0.11 ± 10% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.10 ± 7% -0.1 0.03 ± 70% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.13 ± 3% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.rcu_all_qs
0.10 ± 6% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.aa_file_perm
0.08 ± 5% -0.0 0.05 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
99.42 +0.2 99.65 perf-profile.children.cycles-pp.getdents64
0.00 +0.3 0.34 ± 6% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.4 0.39 ± 6% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.4 0.42 ± 6% perf-profile.children.cycles-pp.local_clock
0.00 +0.5 0.51 ± 8% perf-profile.children.cycles-pp.mean_and_variance_weighted_update
98.06 +1.0 99.03 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
97.92 +1.0 98.96 perf-profile.children.cycles-pp.do_syscall_64
0.14 ± 3% +1.2 1.32 ± 7% perf-profile.children.cycles-pp.xfs_ilock
97.39 +1.3 98.66 perf-profile.children.cycles-pp.__x64_sys_getdents64
0.00 +1.9 1.93 ± 3% perf-profile.children.cycles-pp.time_stats_update_one
95.96 +2.0 97.93 perf-profile.children.cycles-pp.iterate_dir
53.94 ± 11% +42.3 96.23 perf-profile.children.cycles-pp.xfs_readdir
21.01 ± 13% +70.2 91.22 perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +85.3 85.27 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +86.5 86.49 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +89.7 89.73 perf-profile.children.cycles-pp.__time_stats_update
28.21 ± 8% -26.7 1.47 ± 6% perf-profile.self.cycles-pp.up_read
20.70 ± 14% -19.9 0.79 ± 10% perf-profile.self.cycles-pp.down_read
19.25 ± 15% -18.5 0.74 ± 20% perf-profile.self.cycles-pp.down_read_killable
16.26 ± 15% -16.0 0.24 ± 37% perf-profile.self.cycles-pp.atime_needs_update
5.12 ± 18% -4.8 0.34 ± 12% perf-profile.self.cycles-pp.xfs_ifork_zapped
1.52 ± 16% -0.7 0.78 ± 5% perf-profile.self.cycles-pp.xfs_dir2_leaf_getdents
0.67 ± 2% -0.3 0.35 ± 4% perf-profile.self.cycles-pp.xfs_readdir
0.53 -0.3 0.22 ± 20% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.50 ± 5% -0.3 0.25 ± 8% perf-profile.self.cycles-pp.getdents64
0.52 ± 4% -0.3 0.27 ± 7% perf-profile.self.cycles-pp.__fdget_pos
0.46 ± 3% -0.2 0.25 ± 6% perf-profile.self.cycles-pp.readdir64_r
0.29 ± 5% -0.1 0.14 ± 9% perf-profile.self.cycles-pp.do_syscall_64
0.27 -0.1 0.14 ± 10% perf-profile.self.cycles-pp.mutex_unlock
0.29 ± 5% -0.1 0.15 ± 6% perf-profile.self.cycles-pp.mutex_lock
0.24 ± 8% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.__x64_sys_getdents64
0.26 ± 3% -0.1 0.13 ± 7% perf-profile.self.cycles-pp.iterate_dir
0.20 ± 5% -0.1 0.11 ± 8% perf-profile.self.cycles-pp.xfs_bmap_last_extent
0.19 ± 3% -0.1 0.10 ± 4% perf-profile.self.cycles-pp.__cond_resched
0.17 ± 5% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.current_time
0.18 ± 4% -0.1 0.10 ± 10% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.16 ± 4% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.10 ± 7% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.22 ± 8% -0.1 0.15 ± 4% perf-profile.self.cycles-pp.xfs_dir2_leaf_readbuf
0.09 ± 4% -0.1 0.03 ±100% perf-profile.self.cycles-pp.aa_file_perm
0.13 ± 6% -0.1 0.07 ± 11% perf-profile.self.cycles-pp.apparmor_file_permission
0.12 ± 4% -0.1 0.06 ± 13% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.13 ± 10% -0.1 0.08 ± 6% perf-profile.self.cycles-pp.xfs_iunlock
0.13 ± 2% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.xfs_ilock
0.08 ± 5% -0.0 0.05 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.20 ± 6% -0.0 0.17 ± 2% perf-profile.self.cycles-pp.xfs_bmap_last_offset
0.15 ± 6% +0.0 0.18 ± 7% perf-profile.self.cycles-pp.xfs_dir2_isblock
0.00 +0.3 0.32 ± 6% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.5 0.49 ± 8% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +1.2 1.22 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +1.3 1.30 ± 3% perf-profile.self.cycles-pp.__time_stats_update
0.00 +1.4 1.43 ± 6% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +85.3 85.26 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
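The profile above attributes nearly all cycles to `native_queued_spin_lock_slowpath` under `__time_stats_update`, reached via `time_stats_update_one` and `mean_and_variance_weighted_update`. As a rough illustration of the per-sample work being serialized, here is a sketch of an exponentially-weighted running mean/variance update of the kind `mean_and_variance_weighted_update` appears to perform; the class name, field names, and smoothing constant are illustrative assumptions, not taken from the kernel source.

```python
# Hedged sketch: an exponentially-weighted mean/variance recurrence of the
# kind the time_stats code appears to maintain per sample.  All names and
# the weight constant below are assumptions for illustration only.

class WeightedTimeStats:
    """Running duration statistics with exponential weighting."""

    def __init__(self, weight: float = 0.125):
        self.weight = weight      # smoothing factor (assumed value)
        self.mean = 0.0           # weighted mean of observed samples
        self.variance = 0.0       # weighted variance of observed samples

    def update_one(self, sample: float) -> None:
        # Classic EWMA/EWMV recurrence: each sample pulls the mean toward
        # itself by `weight`; the variance tracks the squared deviation
        # with the same smoothing, so it decays to zero on constant input.
        delta = sample - self.mean
        self.mean += self.weight * delta
        self.variance = (1.0 - self.weight) * (
            self.variance + self.weight * delta * delta
        )


# The arithmetic itself is cheap; the regression the tables above show
# comes from the update running under one shared spin_lock_irqsave, so
# every lock acquisition on every CPU contends on a single cache line.
stats = WeightedTimeStats()
for _ in range(1000):
    stats.update_one(10.0)
print(round(stats.mean, 6))   # converges to the constant input, 10.0
```

The cost pattern in the profile is consistent with this: the statistics math (`mean_and_variance_weighted_update`, `time_stats_update_one`) accounts for only a few percent of cycles, while the spinlock protecting it accounts for 85%+ once 18-128 threads hammer the same lock word.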
***************************************************************************************************
lkp-icl-2sp5: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
gcc-12/performance/directio/1SSD/xfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/MRDL/fxmark/4
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
1723975 ± 2% -23.9% 1311571 cpuidle..usage
3662221 -25.1% 2743157 ± 3% numa-numastat.node0.local_node
3744047 -24.9% 2812332 ± 3% numa-numastat.node0.numa_hit
231.00 ± 9% -49.6% 116.50 ± 13% perf-c2c.DRAM.local
105.67 ± 6% +700.5% 845.83 ± 13% perf-c2c.HITM.local
33.95 ± 2% -12.2% 29.81 iostat.cpu.idle
57.47 +13.0% 64.95 iostat.cpu.system
4.86 -64.5% 1.73 ± 5% iostat.cpu.user
33.09 ± 2% -4.2 28.94 mpstat.cpu.all.idle%
0.86 -0.2 0.64 ± 4% mpstat.cpu.all.soft%
56.06 +7.8 63.87 mpstat.cpu.all.sys%
4.92 -3.2 1.74 ± 5% mpstat.cpu.all.usr%
2864831 ± 2% -29.0% 2034962 ± 4% numa-vmstat.node0.nr_slab_reclaimable
81544 ± 9% -21.6% 63892 ± 5% numa-vmstat.node0.nr_slab_unreclaimable
3743733 -24.9% 2812617 ± 3% numa-vmstat.node0.numa_hit
3661917 -25.1% 2743443 ± 3% numa-vmstat.node0.numa_local
11449352 -28.9% 8146085 ± 4% numa-meminfo.node0.KReclaimable
19416949 ± 6% -23.3% 14885538 ± 7% numa-meminfo.node0.MemUsed
11449352 -28.9% 8146085 ± 4% numa-meminfo.node0.SReclaimable
326039 ± 9% -21.6% 255663 ± 5% numa-meminfo.node0.SUnreclaim
11775391 ± 2% -28.6% 8401749 ± 4% numa-meminfo.node0.Slab
92824 ± 2% -29.0% 65900 ± 4% vmstat.io.bo
14364104 -23.2% 11035980 ± 3% vmstat.memory.cache
0.29 ± 5% -20.8% 0.23 ± 8% vmstat.procs.b
10486 ± 2% -53.5% 4873 ± 5% vmstat.system.cs
13976 -3.7% 13454 vmstat.system.in
11477494 ± 2% -28.9% 8164752 ± 4% meminfo.KReclaimable
20764067 -22.9% 16005871 ± 3% meminfo.Memused
11477494 ± 2% -28.9% 8164752 ± 4% meminfo.SReclaimable
436124 -14.4% 373502 ± 2% meminfo.SUnreclaim
11913618 -28.3% 8538254 ± 4% meminfo.Slab
23134924 -23.3% 17748439 ± 3% meminfo.max_used_kB
136500 ± 7% -86.0% 19068 ± 21% turbostat.C1
0.01 -0.0 0.00 turbostat.C1%
322742 ± 6% -75.6% 78887 ± 8% turbostat.C1E
0.26 ± 4% -0.1 0.19 ± 3% turbostat.C1E%
0.62 -52.5% 0.30 ± 4% turbostat.IPC
24402 ± 11% -72.8% 6631 ± 9% turbostat.POLL
33.84 -62.6% 12.66 ± 54% turbostat.Pkg%pc2
47.05 -1.3% 46.44 turbostat.RAMWatt
173.95 +11.1% 193.21 fxmark.ssd_xfs_MRDL_4_directio.sys_sec
86.92 +11.1% 96.54 fxmark.ssd_xfs_MRDL_4_directio.sys_util
25.32 -76.1% 6.06 ± 4% fxmark.ssd_xfs_MRDL_4_directio.user_sec
12.65 -76.1% 3.03 ± 4% fxmark.ssd_xfs_MRDL_4_directio.user_util
8.749e+08 -77.8% 1.946e+08 ± 4% fxmark.ssd_xfs_MRDL_4_directio.works
17497044 -77.8% 3891628 ± 4% fxmark.ssd_xfs_MRDL_4_directio.works/sec
21466 ± 5% -16.3% 17966 ± 11% fxmark.time.involuntary_context_switches
65.83 ± 2% +6.1% 69.83 fxmark.time.percent_of_cpu_this_job_got
97.95 +7.8% 105.58 fxmark.time.system_time
86078 ± 6% -93.0% 6001 ± 41% fxmark.time.voluntary_context_switches
10340 ± 2% -55.5% 4598 ± 5% perf-stat.i.context-switches
0.09 ± 2% +9.4% 0.10 ± 10% perf-stat.i.cpi
69.59 -15.8% 58.60 perf-stat.i.cpu-migrations
0.03 ± 2% +0.0 0.03 ± 12% perf-stat.i.dTLB-load-miss-rate%
156462 ± 2% +5.5% 165050 ± 2% perf-stat.i.dTLB-load-misses
0.01 +0.0 0.01 ± 10% perf-stat.i.dTLB-store-miss-rate%
2.21 ± 2% +0.1 2.34 ± 2% perf-stat.i.node-load-miss-rate%
10272 ± 2% -55.5% 4566 ± 5% perf-stat.ps.context-switches
69.21 -15.8% 58.29 perf-stat.ps.cpu-migrations
157254 ± 2% +5.5% 165912 ± 2% perf-stat.ps.dTLB-load-misses
92433 +1.0% 93337 proc-vmstat.nr_anon_pages
2752726 +4.3% 2871524 proc-vmstat.nr_dirty_background_threshold
5512183 +4.3% 5750069 proc-vmstat.nr_dirty_threshold
27721727 +4.3% 28911448 proc-vmstat.nr_free_pages
93894 +1.1% 94927 proc-vmstat.nr_inactive_anon
2871541 -28.9% 2042995 ± 4% proc-vmstat.nr_slab_reclaimable
109042 -14.3% 93417 ± 2% proc-vmstat.nr_slab_unreclaimable
93894 +1.1% 94927 proc-vmstat.nr_zone_inactive_anon
3962264 -22.7% 3063301 ± 2% proc-vmstat.numa_hit
3828849 -23.5% 2928624 ± 2% proc-vmstat.numa_local
8143997 -25.0% 6111379 ± 3% proc-vmstat.pgalloc_normal
8115157 -25.0% 6086423 ± 3% proc-vmstat.pgfree
15033507 -31.1% 10353219 ± 4% proc-vmstat.pgpgout
133109 ± 4% +13.9% 151595 ± 2% sched_debug.cfs_rq:/.avg_vruntime.avg
146323 ± 4% +14.7% 167760 ± 4% sched_debug.cfs_rq:/.avg_vruntime.max
125246 ± 5% +14.9% 143936 ± 3% sched_debug.cfs_rq:/.avg_vruntime.min
133109 ± 4% +13.9% 151595 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
146323 ± 4% +14.7% 167760 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
125247 ± 5% +14.9% 143936 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
391.28 ± 36% +60.6% 628.44 ± 11% sched_debug.cfs_rq:/.util_est.min
226.82 ± 18% -34.7% 148.21 ± 28% sched_debug.cfs_rq:/.util_est.stddev
0.77 ± 16% -28.2% 0.55 ± 13% sched_debug.cpu.nr_running.stddev
193017 ± 2% -58.5% 80137 ± 6% sched_debug.cpu.nr_switches.avg
337313 ± 3% -44.7% 186423 ± 4% sched_debug.cpu.nr_switches.max
143865 ± 3% -70.1% 43034 ± 9% sched_debug.cpu.nr_switches.min
76395 ± 7% -26.9% 55870 ± 4% sched_debug.cpu.nr_switches.stddev
56.72 ± 16% -71.5% 16.17 ± 28% sched_debug.cpu.nr_uninterruptible.max
39.09 ± 12% -47.9% 20.35 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
13.00 ± 3% -10.4 2.57 ± 6% perf-profile.calltrace.cycles-pp.xfs_dir2_leaf_readbuf.xfs_dir2_leaf_getdents.xfs_readdir.iterate_dir.__x64_sys_getdents64
15.65 ± 3% -10.0 5.68 ± 5% perf-profile.calltrace.cycles-pp.xfs_dir2_leaf_getdents.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
10.96 ± 2% -8.6 2.31 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.getdents64
14.76 ± 2% -8.3 6.46 ± 5% perf-profile.calltrace.cycles-pp.xfs_dir2_isblock.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
13.28 ± 2% -7.8 5.48 ± 6% perf-profile.calltrace.cycles-pp.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir.iterate_dir.__x64_sys_getdents64
8.70 ± 6% -7.0 1.68 ± 8% perf-profile.calltrace.cycles-pp.xfs_iext_lookup_extent.xfs_dir2_leaf_readbuf.xfs_dir2_leaf_getdents.xfs_readdir.iterate_dir
11.50 ± 2% -6.9 4.57 ± 7% perf-profile.calltrace.cycles-pp.xfs_bmap_last_extent.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir.iterate_dir
8.57 -6.8 1.78 ± 4% perf-profile.calltrace.cycles-pp.__fdget_pos.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
7.40 ± 2% -5.6 1.83 ± 6% perf-profile.calltrace.cycles-pp.xfs_iext_last.xfs_bmap_last_extent.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir
5.33 ± 2% -4.2 1.10 ± 3% perf-profile.calltrace.cycles-pp.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.60 ± 2% -3.7 0.94 ± 4% perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64
4.34 ± 3% -3.4 0.94 ± 4% perf-profile.calltrace.cycles-pp.readdir64_r
3.92 ± 4% -3.1 0.86 ± 9% perf-profile.calltrace.cycles-pp.down_read_killable.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.75 ± 2% -3.0 0.76 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.__fdget_pos.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.77 ± 2% -2.9 0.83 ± 9% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
2.90 ± 13% -2.3 0.60 ± 11% perf-profile.calltrace.cycles-pp.security_file_permission.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.56 ± 3% -2.1 0.48 ± 45% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64
2.62 ± 2% -2.1 0.56 ± 7% perf-profile.calltrace.cycles-pp.up_read.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.24 ± 2% -2.0 0.26 ±100% perf-profile.calltrace.cycles-pp.mutex_unlock.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
1.59 ± 7% -1.1 0.44 ± 44% perf-profile.calltrace.cycles-pp.xfs_iext_get_extent.xfs_bmap_last_extent.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir
0.00 +0.6 0.63 ± 6% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
0.82 ± 2% +1.0 1.82 ± 12% perf-profile.calltrace.cycles-pp.xfs_iread_extents.xfs_bmap_last_extent.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir
0.00 +1.0 1.03 ± 7% perf-profile.calltrace.cycles-pp.native_sched_clock.local_clock_noinstr.local_clock.xfs_ilock.xfs_ilock_data_map_shared
0.00 +1.2 1.20 ± 7% perf-profile.calltrace.cycles-pp.local_clock_noinstr.local_clock.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir
0.00 +1.3 1.27 ± 7% perf-profile.calltrace.cycles-pp.local_clock.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
0.77 ± 17% +1.4 2.15 ± 6% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
0.00 +1.9 1.85 ± 15% perf-profile.calltrace.cycles-pp.mean_and_variance_weighted_update.time_stats_update_one.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir
93.48 +4.9 98.35 perf-profile.calltrace.cycles-pp.getdents64
0.00 +7.2 7.19 ± 3% perf-profile.calltrace.cycles-pp.time_stats_update_one.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
82.33 +13.6 95.91 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.getdents64
80.40 +15.1 95.51 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
75.96 +18.6 94.58 perf-profile.calltrace.cycles-pp.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
62.35 +29.4 91.71 perf-profile.calltrace.cycles-pp.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
44.29 +43.6 87.85 perf-profile.calltrace.cycles-pp.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +54.6 54.61 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir
0.00 +58.3 58.26 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
3.29 ± 6% +70.2 73.50 perf-profile.calltrace.cycles-pp.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
0.00 +70.8 70.85 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
13.50 ± 3% -10.8 2.68 ± 5% perf-profile.children.cycles-pp.xfs_dir2_leaf_readbuf
15.90 ± 3% -10.2 5.72 ± 5% perf-profile.children.cycles-pp.xfs_dir2_leaf_getdents
15.02 ± 2% -8.5 6.50 ± 5% perf-profile.children.cycles-pp.xfs_dir2_isblock
13.64 ± 2% -8.1 5.55 ± 6% perf-profile.children.cycles-pp.xfs_bmap_last_offset
11.86 ± 2% -7.2 4.66 ± 7% perf-profile.children.cycles-pp.xfs_bmap_last_extent
8.83 ± 6% -7.1 1.70 ± 8% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
8.97 -7.1 1.86 ± 4% perf-profile.children.cycles-pp.__fdget_pos
7.52 ± 2% -5.7 1.86 ± 6% perf-profile.children.cycles-pp.xfs_iext_last
6.38 -5.0 1.34 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64
5.58 ± 2% -4.4 1.16 ± 3% perf-profile.children.cycles-pp.touch_atime
5.42 -4.2 1.17 ± 7% perf-profile.children.cycles-pp.up_read
5.09 ± 3% -4.0 1.05 ± 4% perf-profile.children.cycles-pp.atime_needs_update
4.85 ± 3% -3.8 1.04 ± 4% perf-profile.children.cycles-pp.readdir64_r
4.17 ± 3% -3.3 0.92 ± 9% perf-profile.children.cycles-pp.down_read_killable
4.02 ± 2% -3.2 0.82 ± 4% perf-profile.children.cycles-pp.mutex_lock
4.03 ± 2% -3.2 0.88 ± 8% perf-profile.children.cycles-pp.xfs_iunlock
3.64 ± 3% -2.8 0.80 ± 6% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
3.18 ± 12% -2.5 0.67 ± 11% perf-profile.children.cycles-pp.security_file_permission
2.56 ± 12% -2.0 0.53 ± 10% perf-profile.children.cycles-pp.apparmor_file_permission
2.62 ± 4% -2.0 0.63 ± 6% perf-profile.children.cycles-pp.__cond_resched
2.35 ± 2% -1.8 0.52 ± 9% perf-profile.children.cycles-pp.mutex_unlock
2.26 -1.8 0.46 ± 3% perf-profile.children.cycles-pp.current_time
2.08 -1.6 0.45 ± 10% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
2.22 ± 6% -1.5 0.67 ± 6% perf-profile.children.cycles-pp.down_read
1.70 ± 6% -1.1 0.55 ± 4% perf-profile.children.cycles-pp.xfs_iext_get_extent
1.03 ± 8% -0.8 0.19 ± 10% perf-profile.children.cycles-pp.aa_file_perm
0.99 ± 2% -0.8 0.22 ± 10% perf-profile.children.cycles-pp.syscall_return_via_sysret
1.01 ± 4% -0.8 0.24 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.92 ± 2% -0.7 0.21 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.66 ± 3% -0.5 0.13 ± 10% perf-profile.children.cycles-pp.xfs_ifork_zapped
0.65 ± 3% -0.5 0.13 ± 6% perf-profile.children.cycles-pp.xfs_file_readdir
0.63 ± 5% -0.5 0.13 ± 14% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.61 ± 5% -0.5 0.13 ± 10% perf-profile.children.cycles-pp.main_work
2.66 -0.4 2.24 ± 9% perf-profile.children.cycles-pp.xfs_iread_extents
0.51 ± 3% -0.4 0.10 ± 10% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.52 ± 5% -0.4 0.11 ± 14% perf-profile.children.cycles-pp.xfs_isilocked
0.49 ± 6% -0.4 0.10 ± 8% perf-profile.children.cycles-pp.make_vfsgid
0.48 ± 6% -0.4 0.10 ± 21% perf-profile.children.cycles-pp.make_vfsuid
0.40 ± 2% -0.3 0.08 ± 17% perf-profile.children.cycles-pp.amd_clear_divider
0.28 ± 4% -0.2 0.08 ± 16% perf-profile.children.cycles-pp.readdir_r@plt
0.23 ± 11% -0.2 0.03 ±102% perf-profile.children.cycles-pp.__f_unlock_pos
0.00 +0.1 0.09 ± 15% perf-profile.children.cycles-pp.sched_clock_noinstr
0.00 +0.1 0.14 ± 8% perf-profile.children.cycles-pp.mean_and_variance_weighted_get_mean
0.00 +0.2 0.19 ± 15% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +1.1 1.07 ± 6% perf-profile.children.cycles-pp.native_sched_clock
0.00 +1.3 1.29 ± 6% perf-profile.children.cycles-pp.local_clock_noinstr
0.90 ± 15% +1.3 2.22 ± 6% perf-profile.children.cycles-pp.xfs_ilock
0.00 +1.4 1.36 ± 6% perf-profile.children.cycles-pp.local_clock
0.00 +1.9 1.91 ± 15% perf-profile.children.cycles-pp.mean_and_variance_weighted_update
94.78 +3.8 98.60 perf-profile.children.cycles-pp.getdents64
0.00 +7.3 7.28 ± 3% perf-profile.children.cycles-pp.time_stats_update_one
82.66 +13.5 96.16 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
81.23 +14.6 95.87 perf-profile.children.cycles-pp.do_syscall_64
76.58 +18.1 94.71 perf-profile.children.cycles-pp.__x64_sys_getdents64
63.21 +28.7 91.89 perf-profile.children.cycles-pp.iterate_dir
45.04 +43.0 88.05 perf-profile.children.cycles-pp.xfs_readdir
0.00 +54.6 54.64 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +58.3 58.32 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.46 ± 6% +70.1 73.54 perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +71.0 70.98 perf-profile.children.cycles-pp.__time_stats_update
8.56 ± 6% -6.9 1.65 ± 8% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
7.45 ± 2% -5.6 1.82 ± 6% perf-profile.self.cycles-pp.xfs_iext_last
5.96 -4.6 1.30 ± 5% perf-profile.self.cycles-pp.xfs_readdir
5.16 -4.0 1.12 ± 7% perf-profile.self.cycles-pp.up_read
5.07 -4.0 1.06 ± 8% perf-profile.self.cycles-pp.__fdget_pos
4.85 ± 2% -3.8 1.04 ± 8% perf-profile.self.cycles-pp.getdents64
4.67 ± 2% -3.7 1.00 ± 4% perf-profile.self.cycles-pp.readdir64_r
3.53 ± 3% -2.8 0.78 ± 6% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
3.14 -2.5 0.64 ± 6% perf-profile.self.cycles-pp.xfs_dir2_leaf_readbuf
3.14 ± 3% -2.4 0.70 ± 12% perf-profile.self.cycles-pp.down_read_killable
2.87 ± 2% -2.3 0.57 ± 5% perf-profile.self.cycles-pp.mutex_lock
2.74 ± 3% -2.2 0.58 ± 5% perf-profile.self.cycles-pp.do_syscall_64
2.49 ± 2% -2.0 0.53 ± 7% perf-profile.self.cycles-pp.iterate_dir
2.21 ± 7% -1.7 0.46 ± 11% perf-profile.self.cycles-pp.__x64_sys_getdents64
2.22 ± 2% -1.7 0.49 ± 8% perf-profile.self.cycles-pp.mutex_unlock
2.10 ± 5% -1.7 0.44 ± 4% perf-profile.self.cycles-pp.atime_needs_update
1.79 ± 2% -1.4 0.37 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64
1.75 ± 3% -1.3 0.43 ± 11% perf-profile.self.cycles-pp.xfs_bmap_last_extent
1.62 -1.3 0.33 ± 3% perf-profile.self.cycles-pp.current_time
1.66 ± 4% -1.3 0.38 ± 6% perf-profile.self.cycles-pp.__cond_resched
1.54 ± 3% -1.2 0.32 ± 10% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.38 ± 18% -1.1 0.30 ± 15% perf-profile.self.cycles-pp.apparmor_file_permission
1.56 ± 9% -1.1 0.49 ± 8% perf-profile.self.cycles-pp.down_read
1.34 ± 3% -1.1 0.28 ± 6% perf-profile.self.cycles-pp.xfs_iunlock
1.56 ± 7% -1.0 0.53 ± 5% perf-profile.self.cycles-pp.xfs_iext_get_extent
1.89 ± 2% -1.0 0.91 ± 9% perf-profile.self.cycles-pp.xfs_bmap_last_offset
1.15 -0.9 0.24 ± 12% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.99 ± 2% -0.8 0.21 ± 10% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.92 ± 11% -0.7 0.17 ± 9% perf-profile.self.cycles-pp.aa_file_perm
0.92 ± 3% -0.7 0.21 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.84 ± 15% -0.6 0.22 ± 15% perf-profile.self.cycles-pp.xfs_ilock
0.76 ± 13% -0.6 0.17 ± 13% perf-profile.self.cycles-pp.security_file_permission
1.51 ± 10% -0.6 0.94 ± 4% perf-profile.self.cycles-pp.xfs_dir2_isblock
0.66 ± 4% -0.5 0.17 ± 6% perf-profile.self.cycles-pp.rcu_all_qs
0.52 ± 2% -0.4 0.10 ± 10% perf-profile.self.cycles-pp.xfs_ifork_zapped
0.50 ± 6% -0.4 0.10 ± 13% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.50 ± 4% -0.4 0.11 ± 10% perf-profile.self.cycles-pp.touch_atime
0.40 ± 5% -0.3 0.08 ± 6% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.40 ± 7% -0.3 0.08 ± 11% perf-profile.self.cycles-pp.xfs_isilocked
0.39 ± 4% -0.3 0.08 ± 16% perf-profile.self.cycles-pp.xfs_file_readdir
0.40 ± 9% -0.3 0.10 ± 14% perf-profile.self.cycles-pp.xfs_ilock_data_map_shared
0.37 ± 6% -0.3 0.08 ± 17% perf-profile.self.cycles-pp.make_vfsuid
0.36 ± 6% -0.3 0.07 ± 10% perf-profile.self.cycles-pp.make_vfsgid
0.36 ± 5% -0.3 0.08 ± 13% perf-profile.self.cycles-pp.main_work
0.26 ± 3% -0.2 0.04 ± 75% perf-profile.self.cycles-pp.amd_clear_divider
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.sched_clock_noinstr
0.00 +0.1 0.08 ± 20% perf-profile.self.cycles-pp.local_clock
0.00 +0.1 0.09 ± 7% perf-profile.self.cycles-pp.mean_and_variance_weighted_get_mean
0.00 +0.2 0.16 ± 17% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.2 0.16 ± 12% perf-profile.self.cycles-pp.local_clock_noinstr
2.41 ± 4% +0.6 3.04 ± 9% perf-profile.self.cycles-pp.xfs_dir2_leaf_getdents
0.00 +1.0 1.02 ± 7% perf-profile.self.cycles-pp.native_sched_clock
0.00 +1.8 1.84 ± 15% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +3.7 3.68 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +5.3 5.33 ± 5% perf-profile.self.cycles-pp.__time_stats_update
0.00 +5.4 5.41 ± 6% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +54.6 54.61 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-icl-2sp5: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
gcc-12/performance/directio/1SSD/xfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/MRDM/fxmark/18
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.71 -0.2 0.49 ± 3% mpstat.cpu.all.usr%
746408 -8.8% 681016 numa-vmstat.node0.nr_slab_reclaimable
2986839 -8.8% 2724869 numa-meminfo.node0.KReclaimable
2986839 -8.8% 2724869 numa-meminfo.node0.SReclaimable
24725 -9.1% 22474 vmstat.io.bo
26976 ± 2% -11.7% 23822 ± 3% vmstat.system.cs
27023 ± 2% -11.9% 23819 ± 3% perf-stat.i.context-switches
2.58 ± 2% -0.2 2.43 ± 4% perf-stat.i.node-load-miss-rate%
27122 ± 2% -11.9% 23897 ± 3% perf-stat.ps.context-switches
13.31 -41.6% 7.78 ± 3% fxmark.ssd_xfs_MRDM_18_directio.user_sec
1.48 -41.6% 0.86 ± 3% fxmark.ssd_xfs_MRDM_18_directio.user_util
4.484e+08 -39.9% 2.694e+08 ± 3% fxmark.ssd_xfs_MRDM_18_directio.works
8968572 -39.9% 5387033 ± 3% fxmark.ssd_xfs_MRDM_18_directio.works/sec
132653 ± 6% -19.1% 107342 ± 10% turbostat.C1
1744751 ± 2% -9.0% 1587377 ± 3% turbostat.C1E
57030 ± 6% -36.8% 36060 ± 8% turbostat.POLL
60.33 -6.4% 56.50 ± 3% turbostat.PkgTmp
373.90 ±169% -93.6% 24.07 ±101% perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
32.17 ± 30% -59.6% 13.00 ± 84% perf-sched.wait_and_delay.count.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
1024 ± 64% -77.0% 235.34 ±112% perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
373.72 ±169% -94.4% 20.88 ±113% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
7.32 ± 69% +6821.3% 506.49 ± 97% perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
1022 ± 64% -77.1% 233.78 ±112% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
758714 -9.0% 690051 proc-vmstat.nr_slab_reclaimable
76373 -1.5% 75262 proc-vmstat.nr_slab_unreclaimable
1637725 -5.2% 1553110 proc-vmstat.numa_hit
1502959 -5.5% 1419696 proc-vmstat.numa_local
2850414 -6.1% 2676416 proc-vmstat.pgalloc_normal
2771928 -5.9% 2607971 proc-vmstat.pgfree
3609605 -8.5% 3302858 proc-vmstat.pgpgout
1073 ±223% +315.7% 4460 ±109% sched_debug.cfs_rq:/.left_deadline.avg
19352 ±222% +226.6% 63207 ± 98% sched_debug.cfs_rq:/.left_deadline.max
4427 ±223% +254.4% 15691 ±100% sched_debug.cfs_rq:/.left_deadline.stddev
1072 ±223% +315.7% 4460 ±109% sched_debug.cfs_rq:/.left_vruntime.avg
19339 ±223% +226.7% 63184 ± 98% sched_debug.cfs_rq:/.left_vruntime.max
4425 ±223% +254.5% 15689 ±100% sched_debug.cfs_rq:/.left_vruntime.stddev
220636 ± 72% +4652.2% 10485088 ±134% sched_debug.cfs_rq:/.load.max
34889 ± 77% +6694.7% 2370641 ±136% sched_debug.cfs_rq:/.load.stddev
1072 ±223% +315.7% 4460 ±109% sched_debug.cfs_rq:/.right_vruntime.avg
19339 ±223% +226.7% 63184 ± 98% sched_debug.cfs_rq:/.right_vruntime.max
4425 ±223% +254.5% 15689 ±100% sched_debug.cfs_rq:/.right_vruntime.stddev
113555 ± 2% -12.2% 99730 ± 3% sched_debug.cpu.nr_switches.avg
446947 ± 11% -14.7% 381267 ± 10% sched_debug.cpu.nr_switches.max
22.90 ± 14% -22.1 0.77 ± 17% perf-profile.calltrace.cycles-pp.down_read_killable.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.52 ± 14% -19.5 0.00 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64
19.60 ± 14% -19.5 0.09 ±223% perf-profile.calltrace.cycles-pp.touch_atime.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.69 ± 16% -18.3 1.39 ± 11% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
19.57 ± 16% -18.2 1.32 ± 11% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_readdir.iterate_dir.__x64_sys_getdents64
17.59 ± 17% -17.6 0.00 perf-profile.calltrace.cycles-pp.down_read.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
5.86 ± 5% -5.9 0.00 perf-profile.calltrace.cycles-pp.up_read.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.43 ± 7% -0.8 1.59 ± 5% perf-profile.calltrace.cycles-pp.xfs_dir2_leaf_getdents.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
1.52 ± 6% -0.4 1.09 ± 11% perf-profile.calltrace.cycles-pp.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir.iterate_dir.__x64_sys_getdents64
1.34 ± 7% -0.4 0.91 ± 13% perf-profile.calltrace.cycles-pp.xfs_bmap_last_extent.xfs_bmap_last_offset.xfs_dir2_isblock.xfs_readdir.iterate_dir
1.68 ± 4% -0.4 1.27 ± 9% perf-profile.calltrace.cycles-pp.xfs_dir2_isblock.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
0.00 +0.8 0.84 ± 10% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
97.95 +1.0 98.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.getdents64
97.76 +1.1 98.88 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
0.00 +1.3 1.30 ± 8% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
97.30 +1.3 98.64 perf-profile.calltrace.cycles-pp.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
0.00 +1.9 1.91 ± 3% perf-profile.calltrace.cycles-pp.time_stats_update_one.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
95.85 +2.1 97.93 perf-profile.calltrace.cycles-pp.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe.getdents64
46.84 ± 13% +49.4 96.26 perf-profile.calltrace.cycles-pp.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.78 ± 17% +73.6 91.35 perf-profile.calltrace.cycles-pp.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64.do_syscall_64
0.00 +85.4 85.45 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir
0.00 +86.6 86.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir
0.00 +89.9 89.86 perf-profile.calltrace.cycles-pp.__time_stats_update.xfs_ilock_data_map_shared.xfs_readdir.iterate_dir.__x64_sys_getdents64
25.46 ± 11% -24.0 1.50 ± 11% perf-profile.children.cycles-pp.up_read
22.93 ± 14% -22.2 0.78 ± 17% perf-profile.children.cycles-pp.down_read_killable
19.63 ± 14% -19.2 0.42 ± 16% perf-profile.children.cycles-pp.touch_atime
19.58 ± 14% -19.2 0.39 ± 17% perf-profile.children.cycles-pp.atime_needs_update
19.72 ± 16% -18.3 1.40 ± 11% perf-profile.children.cycles-pp.xfs_iunlock
17.62 ± 17% -16.8 0.85 ± 10% perf-profile.children.cycles-pp.down_read
4.59 ± 12% -4.3 0.33 ± 16% perf-profile.children.cycles-pp.xfs_ifork_zapped
2.45 ± 7% -0.8 1.60 ± 5% perf-profile.children.cycles-pp.xfs_dir2_leaf_getdents
0.92 ± 2% -0.5 0.45 ± 4% perf-profile.children.cycles-pp.__fdget_pos
1.39 ± 7% -0.5 0.93 ± 13% perf-profile.children.cycles-pp.xfs_bmap_last_extent
1.56 ± 5% -0.5 1.11 ± 11% perf-profile.children.cycles-pp.xfs_bmap_last_offset
1.70 ± 4% -0.4 1.28 ± 9% perf-profile.children.cycles-pp.xfs_dir2_isblock
0.85 ± 12% -0.4 0.47 ± 21% perf-profile.children.cycles-pp.xfs_iext_last
0.64 -0.3 0.32 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.53 -0.3 0.22 ± 21% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.50 ± 2% -0.2 0.25 ± 4% perf-profile.children.cycles-pp.readdir64_r
0.40 -0.2 0.20 ± 5% perf-profile.children.cycles-pp.mutex_lock
0.32 ± 3% -0.2 0.16 ± 3% perf-profile.children.cycles-pp.__cond_resched
0.29 ± 4% -0.1 0.14 ± 8% perf-profile.children.cycles-pp.mutex_unlock
0.31 ± 6% -0.1 0.17 ± 7% perf-profile.children.cycles-pp.security_file_permission
0.24 ± 3% -0.1 0.12 ± 5% perf-profile.children.cycles-pp.current_time
0.25 ± 6% -0.1 0.14 ± 9% perf-profile.children.cycles-pp.apparmor_file_permission
0.21 ± 6% -0.1 0.11 ± 6% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.11 ± 3% -0.1 0.03 ± 70% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.12 ± 5% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.rcu_all_qs
0.10 ± 8% -0.1 0.03 ± 70% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.10 ± 6% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.aa_file_perm
0.16 ± 5% -0.0 0.13 ± 8% perf-profile.children.cycles-pp.xfs_iext_get_extent
99.40 +0.2 99.65 perf-profile.children.cycles-pp.getdents64
0.00 +0.3 0.33 ± 6% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.4 0.38 ± 6% perf-profile.children.cycles-pp.local_clock_noinstr
0.00 +0.4 0.41 ± 6% perf-profile.children.cycles-pp.local_clock
0.00 +0.5 0.49 ± 9% perf-profile.children.cycles-pp.mean_and_variance_weighted_update
98.03 +1.0 99.04 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
97.89 +1.1 98.97 perf-profile.children.cycles-pp.do_syscall_64
0.14 ± 4% +1.2 1.32 ± 8% perf-profile.children.cycles-pp.xfs_ilock
97.37 +1.3 98.68 perf-profile.children.cycles-pp.__x64_sys_getdents64
0.00 +1.9 1.93 ± 3% perf-profile.children.cycles-pp.time_stats_update_one
95.94 +2.0 97.98 perf-profile.children.cycles-pp.iterate_dir
46.93 ± 13% +49.4 96.30 perf-profile.children.cycles-pp.xfs_readdir
17.80 ± 17% +73.6 91.37 perf-profile.children.cycles-pp.xfs_ilock_data_map_shared
0.00 +85.5 85.46 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +86.6 86.64 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +89.9 89.90 perf-profile.children.cycles-pp.__time_stats_update
25.41 ± 11% -23.9 1.48 ± 11% perf-profile.self.cycles-pp.up_read
22.80 ± 14% -22.1 0.73 ± 19% perf-profile.self.cycles-pp.down_read_killable
19.23 ± 14% -19.0 0.24 ± 28% perf-profile.self.cycles-pp.atime_needs_update
17.50 ± 17% -16.7 0.80 ± 11% perf-profile.self.cycles-pp.down_read
4.58 ± 12% -4.3 0.32 ± 15% perf-profile.self.cycles-pp.xfs_ifork_zapped
1.48 ± 20% -0.7 0.80 ± 6% perf-profile.self.cycles-pp.xfs_dir2_leaf_getdents
0.83 ± 12% -0.4 0.46 ± 22% perf-profile.self.cycles-pp.xfs_iext_last
0.66 ± 3% -0.3 0.33 ± 6% perf-profile.self.cycles-pp.xfs_readdir
0.52 -0.3 0.22 ± 20% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.52 ± 4% -0.3 0.26 ± 6% perf-profile.self.cycles-pp.__fdget_pos
0.50 ± 2% -0.2 0.25 ± 7% perf-profile.self.cycles-pp.getdents64
0.48 -0.2 0.24 ± 4% perf-profile.self.cycles-pp.readdir64_r
0.29 ± 3% -0.2 0.14 ± 6% perf-profile.self.cycles-pp.mutex_lock
0.29 ± 3% -0.1 0.14 ± 9% perf-profile.self.cycles-pp.do_syscall_64
0.27 ± 5% -0.1 0.13 ± 11% perf-profile.self.cycles-pp.mutex_unlock
0.26 ± 4% -0.1 0.13 ± 7% perf-profile.self.cycles-pp.iterate_dir
0.24 ± 4% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.__x64_sys_getdents64
0.20 ± 3% -0.1 0.10 ± 7% perf-profile.self.cycles-pp.xfs_bmap_last_extent
0.20 ± 2% -0.1 0.10 perf-profile.self.cycles-pp.__cond_resched
0.18 ± 4% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.current_time
0.18 ± 3% -0.1 0.09 ± 11% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.16 ± 3% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.11 ± 3% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.23 ± 5% -0.1 0.16 ± 3% perf-profile.self.cycles-pp.xfs_dir2_leaf_readbuf
0.14 ± 6% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.apparmor_file_permission
0.14 ± 9% -0.1 0.08 ± 12% perf-profile.self.cycles-pp.xfs_iunlock
0.12 ± 10% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.09 ± 5% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.09 ± 6% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.aa_file_perm
0.13 ± 5% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.xfs_ilock
0.15 ± 4% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.xfs_iext_get_extent
0.00 +0.3 0.31 ± 6% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.5 0.48 ± 10% perf-profile.self.cycles-pp.mean_and_variance_weighted_update
0.00 +1.2 1.18 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +1.3 1.31 ± 2% perf-profile.self.cycles-pp.__time_stats_update
0.00 +1.5 1.45 ± 6% perf-profile.self.cycles-pp.time_stats_update_one
0.00 +85.4 85.45 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
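Nearly all of the regression above sits in native_queued_spin_lock_slowpath reached via __time_stats_update: with the patch applied, every xfs_ilock_data_map_shared call funnels through a single spinlock to update the wait-time statistics, and 18 readdir threads serialize on it. For reference, mean_and_variance_weighted_update in the time_stats code maintains exponentially weighted running statistics; a rough sketch of the update per sample x is (the in-kernel version works in fixed point with shift-based weights, so the weight w and this exact form are an approximation, not the kernel's arithmetic):

  mu'     = mu     + (x - mu) / 2^w
  sigma2' = sigma2 + ((x - mu)^2 - sigma2) / 2^w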
***************************************************************************************************
lkp-icl-2sp5: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
gcc-12/performance/bufferedio/1SSD/xfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/DWOL/fxmark/54
commit:
57c9d40720 ("time_stats: Kill TIME_STATS_HAVE_QUANTILES")
eacb32cc55 ("xfs: present wait time statistics")
57c9d4072054333b eacb32cc553342496b6bcd44127
---------------- ---------------------------
%stddev %change %stddev
\ | \
3.61 ± 2% -3.4 0.26 ± 2% mpstat.cpu.all.usr%
54.21 ± 2% +8.8% 59.01 iostat.cpu.system
3.54 ± 2% -92.5% 0.27 ± 2% iostat.cpu.user
1195 ± 2% +7.6% 1286 ± 2% vmstat.io.bo
72779 -4.0% 69873 vmstat.system.in
16872 ± 7% -33.2% 11273 ± 6% numa-meminfo.node0.Active
16869 ± 7% -33.2% 11268 ± 6% numa-meminfo.node0.Active(anon)
23914 ± 5% -27.3% 17388 ± 6% numa-meminfo.node0.Shmem
4217 ± 7% -33.2% 2817 ± 6% numa-vmstat.node0.nr_active_anon
5979 ± 5% -27.3% 4347 ± 6% numa-vmstat.node0.nr_shmem
4217 ± 7% -33.2% 2817 ± 6% numa-vmstat.node0.nr_zone_active_anon
3.17 ± 49% +863.2% 30.50 ± 23% perf-c2c.DRAM.local
21.50 ± 18% +30876.0% 6659 perf-c2c.DRAM.remote
42.17 ± 15% +21792.5% 9231 ± 3% perf-c2c.HITM.local
17.17 ± 25% +33079.6% 5695 ± 2% perf-c2c.HITM.remote
59.33 ± 13% +25058.1% 14927 ± 3% perf-c2c.HITM.total
1737 +4.5% 1815 ± 2% perf-stat.i.context-switches
515.02 ± 5% +12.9% 581.35 ± 7% perf-stat.i.cycles-between-cache-misses
0.26 ± 4% -10.6% 0.24 ± 8% perf-stat.overall.MPKI
4854 ± 5% +14.1% 5540 ± 10% perf-stat.overall.cycles-between-cache-misses
1721 +4.5% 1797 ± 2% perf-stat.ps.context-switches
3129 +2.6% 3210 turbostat.Bzy_MHz
5.84 ±125% -99.1% 0.05 turbostat.IPC
55.14 ± 3% -55.1 0.00 turbostat.PKG_%
67.67 ± 3% -9.9% 61.00 ± 2% turbostat.PkgTmp
348.45 -18.5% 284.12 turbostat.PkgWatt
45.75 +2.3% 46.80 turbostat.RAMWatt
1.47 +27.3% 1.87 ± 14% fxmark.ssd_xfs_DWOL_54_bufferedio.idle_sec
0.05 +27.3% 0.07 ± 14% fxmark.ssd_xfs_DWOL_54_bufferedio.idle_util
14.54 ± 3% -11.8% 12.83 fxmark.ssd_xfs_DWOL_54_bufferedio.irq_sec
0.54 ± 3% -11.8% 0.47 fxmark.ssd_xfs_DWOL_54_bufferedio.irq_util
163.32 -97.3% 4.35 ± 4% fxmark.ssd_xfs_DWOL_54_bufferedio.user_sec
6.04 -97.3% 0.16 ± 4% fxmark.ssd_xfs_DWOL_54_bufferedio.user_util
5.931e+09 -98.1% 1.136e+08 ± 4% fxmark.ssd_xfs_DWOL_54_bufferedio.works
1.186e+08 -98.1% 2271986 ± 4% fxmark.ssd_xfs_DWOL_54_bufferedio.works/sec
14.28 ± 19% -6.4 7.91 ± 33% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
14.28 ± 19% -6.4 7.91 ± 33% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
14.28 ± 19% -6.4 7.91 ± 33% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
14.28 ± 19% -6.1 8.15 ± 31% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
14.28 ± 19% -6.4 7.91 ± 33% perf-profile.children.cycles-pp.start_secondary
14.28 ± 19% -6.1 8.15 ± 31% perf-profile.children.cycles-pp.cpu_startup_entry
14.28 ± 19% -6.1 8.15 ± 31% perf-profile.children.cycles-pp.do_idle
14.28 ± 19% -6.1 8.15 ± 31% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.03 ± 57% +137.1% 0.08 ± 36% perf-sched.sch_delay.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.00 ±141% +7030.0% 0.12 ± 66% perf-sched.sch_delay.max.ms.__cond_resched.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
2.01 ± 75% -91.5% 0.17 ±215% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
54.01 ± 36% +56.0% 84.25 ± 21% perf-sched.wait_and_delay.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
14.51 ± 71% -61.4% 5.60 ± 4% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
392.17 ± 81% +541.9% 2517 ± 3% perf-sched.wait_and_delay.count.__cond_resched.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
470.67 ± 44% +89.3% 890.83 ± 4% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
475.56 ± 10% -37.1% 299.00 ± 22% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.00 ±223% +2600.0% 0.02 ± 25% perf-sched.wait_time.avg.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
53.98 ± 36% +56.0% 84.23 ± 21% perf-sched.wait_time.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
3.42 ± 22% -66.8% 1.13 ± 16% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
0.00 ±147% +937.5% 0.01 ± 43% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xfs_ilock
14.50 ± 71% -61.4% 5.60 ± 4% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.07 ±215% +845.9% 0.65 ± 40% perf-sched.wait_time.max.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time
0.00 ±223% +2600.0% 0.02 ± 25% perf-sched.wait_time.max.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
0.00 ±147% +2350.0% 0.03 ± 37% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xfs_ilock
475.51 ± 10% -37.1% 298.99 ± 22% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

* [linus:master] [btrfs] e06cc89475: aim7.jobs-per-min -12.9% regression
@ 2024-03-11 13:12 kernel test robot
From: kernel test robot @ 2024-03-11 13:12 UTC (permalink / raw)
To: Filipe Manana
Cc: oe-lkp, lkp, linux-kernel, David Sterba, linux-btrfs, ying.huang,
feng.tang, fengwei.yin, oliver.sang
Hello,
kernel test robot noticed a -12.9% regression of aim7.jobs-per-min on:
commit: e06cc89475eddc1f3a7a4d471524256152c68166 ("btrfs: fix data races when accessing the reserved amount of block reserves")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
[test failed on linus/master 09e5c48fea173b72f1c763776136eeb379b1bc47]
testcase: aim7
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_cp
load: 1500
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202403112002.cc4b1158-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240311/202403112002.cc4b1158-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/btrfs/x86_64-rhel-8.3/1500/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_cp/aim7
commit:
5897710b28 ("btrfs: send: don't issue unnecessary zero writes for trailing hole")
e06cc89475 ("btrfs: fix data races when accessing the reserved amount of block reserves")
5897710b28cabab0 e06cc89475eddc1f3a7a4d47152
---------------- ---------------------------
%stddev %change %stddev
\ | \
13.71 -6.3% 12.84 iostat.cpu.idle
86109 ± 5% -10.3% 77204 ± 2% meminfo.Mapped
0.29 ± 2% -0.0 0.25 ± 2% mpstat.cpu.all.usr%
249.60 +12.6% 280.99 ± 2% uptime.boot
148704 ± 3% +11.9% 166363 ± 3% numa-vmstat.node0.nr_written
148026 ± 4% +10.5% 163536 ± 3% numa-vmstat.node1.nr_written
83929 -8.8% 76554 vmstat.system.cs
202906 -4.6% 193642 vmstat.system.in
21940 ± 5% -10.2% 19706 ± 2% proc-vmstat.nr_mapped
296731 ± 4% +11.2% 329900 ± 3% proc-vmstat.nr_written
971976 +6.8% 1037759 proc-vmstat.pgfault
1190113 ± 4% +11.2% 1323358 ± 3% proc-vmstat.pgpgout
61472 ± 3% +9.8% 67507 ± 3% proc-vmstat.pgreuse
45149 -12.9% 39308 ± 2% aim7.jobs-per-min
199.49 +14.9% 229.19 ± 2% aim7.time.elapsed_time
199.49 +14.9% 229.19 ± 2% aim7.time.elapsed_time.max
106461 ± 3% +20.1% 127873 ± 2% aim7.time.involuntary_context_switches
153317 +4.7% 160598 aim7.time.minor_page_faults
22001 +16.1% 25542 ± 2% aim7.time.system_time
8341344 +4.7% 8730263 aim7.time.voluntary_context_switches
1.52 +10.0% 1.67 perf-stat.i.MPKI
7.428e+09 -2.7% 7.229e+09 perf-stat.i.branch-instructions
0.62 ± 2% -0.1 0.56 perf-stat.i.branch-miss-rate%
27712058 -10.6% 24784125 ± 2% perf-stat.i.branch-misses
24.15 +1.3 25.40 perf-stat.i.cache-miss-rate%
51305985 +5.9% 54318013 perf-stat.i.cache-misses
84790 -8.9% 77275 perf-stat.i.context-switches
8.56 +5.1% 9.00 perf-stat.i.cpi
3464 -3.4% 3346 perf-stat.i.cpu-migrations
5494 -4.1% 5271 perf-stat.i.cycles-between-cache-misses
3.253e+10 -3.4% 3.141e+10 perf-stat.i.instructions
0.18 -7.5% 0.17 perf-stat.i.ipc
4301 ± 3% -6.5% 4022 ± 2% perf-stat.i.minor-faults
4303 ± 3% -6.5% 4024 ± 2% perf-stat.i.page-faults
1.58 +9.6% 1.73 perf-stat.overall.MPKI
0.37 -0.0 0.34 perf-stat.overall.branch-miss-rate%
24.56 +1.3 25.83 perf-stat.overall.cache-miss-rate%
8.90 +4.6% 9.31 perf-stat.overall.cpi
5642 -4.5% 5386 perf-stat.overall.cycles-between-cache-misses
0.11 -4.4% 0.11 perf-stat.overall.ipc
7.412e+09 -2.6% 7.216e+09 perf-stat.ps.branch-instructions
27605707 ± 2% -10.4% 24743238 perf-stat.ps.branch-misses
51201807 +5.9% 54221492 perf-stat.ps.cache-misses
84459 -8.8% 77008 perf-stat.ps.context-switches
2.889e+11 +1.1% 2.92e+11 perf-stat.ps.cpu-cycles
3468 -3.4% 3349 perf-stat.ps.cpu-migrations
3.246e+10 -3.4% 3.135e+10 perf-stat.ps.instructions
4534 -6.7% 4228 perf-stat.ps.minor-faults
4537 -6.8% 4230 perf-stat.ps.page-faults
6.503e+12 +11.0% 7.221e+12 perf-stat.total.instructions
33.78 -0.2 33.57 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter
33.84 -0.2 33.66 perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
33.84 -0.2 33.66 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
33.66 -0.2 33.49 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write
1.00 ± 4% -0.1 0.88 ± 2% perf-profile.calltrace.cycles-pp.btrfs_set_extent_delalloc.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
0.94 ± 4% -0.1 0.83 ± 2% perf-profile.calltrace.cycles-pp.btrfs_get_extent.btrfs_set_extent_delalloc.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter
0.77 ± 4% -0.1 0.68 ± 2% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_set_extent_delalloc.btrfs_dirty_pages
0.77 ± 4% -0.1 0.68 ± 2% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_set_extent_delalloc.btrfs_dirty_pages.btrfs_buffered_write
0.57 -0.1 0.52 ± 3% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_set_extent_delalloc
27.57 +0.1 27.71 perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
98.02 +0.2 98.16 perf-profile.calltrace.cycles-pp.write
97.96 +0.2 98.12 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.96 +0.2 98.12 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
97.92 +0.2 98.08 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.82 +0.2 97.98 perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write.do_syscall_64
97.90 +0.2 98.07 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.85 +0.2 98.02 perf-profile.calltrace.cycles-pp.btrfs_do_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.80 +0.3 35.06 perf-profile.calltrace.cycles-pp._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
34.69 +0.3 34.96 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata
26.02 +0.3 26.31 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit
26.39 +0.3 26.68 perf-profile.calltrace.cycles-pp.__clear_extent_bit.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
26.34 +0.3 26.63 perf-profile.calltrace.cycles-pp.clear_state_bit.__clear_extent_bit.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter
35.09 +0.3 35.38 perf-profile.calltrace.cycles-pp.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter
35.09 +0.3 35.38 perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
26.33 +0.3 26.63 perf-profile.calltrace.cycles-pp.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit.btrfs_dirty_pages.btrfs_buffered_write
26.07 +0.3 26.37 perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit
26.08 +0.3 26.38 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit.btrfs_dirty_pages
35.18 +0.3 35.50 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
25.94 +0.3 26.26 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent
1.01 ± 4% -0.1 0.88 ± 2% perf-profile.children.cycles-pp.btrfs_set_extent_delalloc
0.94 ± 4% -0.1 0.83 ± 2% perf-profile.children.cycles-pp.btrfs_get_extent
0.83 ± 4% -0.1 0.74 ± 2% perf-profile.children.cycles-pp.btrfs_search_slot
0.77 ± 4% -0.1 0.68 ± 2% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.36 ± 3% -0.1 0.30 ± 5% perf-profile.children.cycles-pp.read
0.58 -0.1 0.52 ± 3% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
0.56 -0.1 0.51 ± 2% perf-profile.children.cycles-pp.__btrfs_tree_read_lock
0.56 -0.1 0.51 ± 2% perf-profile.children.cycles-pp.down_read
0.54 -0.1 0.49 ± 2% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
0.11 ± 4% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.calc_available_free_space
0.28 ± 4% -0.0 0.24 ± 5% perf-profile.children.cycles-pp.ksys_read
0.36 ± 2% -0.0 0.31 ± 2% perf-profile.children.cycles-pp.prepare_pages
0.26 ± 4% -0.0 0.22 ± 6% perf-profile.children.cycles-pp.vfs_read
0.45 ± 2% -0.0 0.42 perf-profile.children.cycles-pp.__schedule
0.43 ± 2% -0.0 0.40 perf-profile.children.cycles-pp.schedule
0.23 ± 3% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.__set_extent_bit
0.14 ± 4% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.btrfs_space_info_update_bytes_may_use
0.42 ± 2% -0.0 0.40 perf-profile.children.cycles-pp.schedule_preempt_disabled
0.19 -0.0 0.16 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
0.36 ± 2% -0.0 0.34 perf-profile.children.cycles-pp.load_balance
0.19 ± 3% -0.0 0.16 ± 3% perf-profile.children.cycles-pp.__filemap_get_folio
0.37 ± 2% -0.0 0.35 perf-profile.children.cycles-pp.newidle_balance
0.31 ± 2% -0.0 0.28 perf-profile.children.cycles-pp.cpu_startup_entry
0.31 ± 2% -0.0 0.28 perf-profile.children.cycles-pp.do_idle
0.31 ± 3% -0.0 0.29 perf-profile.children.cycles-pp.find_busiest_group
0.31 ± 2% -0.0 0.28 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.38 ± 2% -0.0 0.35 perf-profile.children.cycles-pp.pick_next_task_fair
0.30 -0.0 0.28 ± 2% perf-profile.children.cycles-pp.start_secondary
0.29 ± 3% -0.0 0.27 perf-profile.children.cycles-pp.update_sg_lb_stats
0.35 -0.0 0.33 perf-profile.children.cycles-pp.__close
0.27 ± 2% -0.0 0.25 ± 2% perf-profile.children.cycles-pp.cpuidle_idle_call
0.34 -0.0 0.32 perf-profile.children.cycles-pp.btrfs_evict_inode
0.14 -0.0 0.12 ± 4% perf-profile.children.cycles-pp.btrfs_read_folio
0.34 -0.0 0.32 perf-profile.children.cycles-pp.evict
0.15 ± 2% -0.0 0.13 ± 5% perf-profile.children.cycles-pp.prepare_uptodate_page
0.35 -0.0 0.33 perf-profile.children.cycles-pp.__x64_sys_close
0.20 ± 2% -0.0 0.18 ± 2% perf-profile.children.cycles-pp.acpi_idle_enter
0.34 -0.0 0.33 perf-profile.children.cycles-pp.__dentry_kill
0.35 -0.0 0.33 perf-profile.children.cycles-pp.__fput
0.34 -0.0 0.33 perf-profile.children.cycles-pp.dentry_kill
0.35 -0.0 0.33 perf-profile.children.cycles-pp.dput
0.31 -0.0 0.29 perf-profile.children.cycles-pp.update_sd_lb_stats
0.20 -0.0 0.18 ± 2% perf-profile.children.cycles-pp.acpi_safe_halt
0.12 ± 3% -0.0 0.10 ± 3% perf-profile.children.cycles-pp.alloc_extent_state
0.12 -0.0 0.10 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc
0.15 ± 3% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.set_extent_bit
0.21 -0.0 0.20 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
0.09 ± 5% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
0.21 -0.0 0.19 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
0.07 ± 6% -0.0 0.06 perf-profile.children.cycles-pp.btrfs_folio_clamp_clear_checked
0.11 -0.0 0.10 ± 4% perf-profile.children.cycles-pp.btrfs_do_readpage
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.btrfs_drop_pages
0.09 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.truncate_inode_pages_range
0.08 ± 5% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.btrfs_write_check
0.07 -0.0 0.06 perf-profile.children.cycles-pp.btrfs_create_new_inode
0.08 -0.0 0.07 perf-profile.children.cycles-pp.lock_extent
0.06 -0.0 0.05 perf-profile.children.cycles-pp.kmem_cache_free
0.15 -0.0 0.14 perf-profile.children.cycles-pp.asm_sysvec_call_function_single
99.48 +0.0 99.52 perf-profile.children.cycles-pp.do_syscall_64
99.49 +0.0 99.53 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.21 +0.1 0.26 perf-profile.children.cycles-pp.need_preemptive_reclaim
60.25 +0.1 60.37 perf-profile.children.cycles-pp.btrfs_block_rsv_release
59.96 +0.1 60.07 perf-profile.children.cycles-pp.btrfs_inode_rsv_release
27.57 +0.1 27.71 perf-profile.children.cycles-pp.btrfs_dirty_pages
98.06 +0.1 98.21 perf-profile.children.cycles-pp.write
97.95 +0.2 98.11 perf-profile.children.cycles-pp.ksys_write
97.83 +0.2 97.99 perf-profile.children.cycles-pp.btrfs_buffered_write
97.94 +0.2 98.10 perf-profile.children.cycles-pp.vfs_write
97.86 +0.2 98.02 perf-profile.children.cycles-pp.btrfs_do_write_iter
26.57 +0.3 26.84 perf-profile.children.cycles-pp.__clear_extent_bit
26.44 +0.3 26.72 perf-profile.children.cycles-pp.clear_state_bit
35.50 +0.3 35.79 perf-profile.children.cycles-pp.__reserve_bytes
26.37 +0.3 26.67 perf-profile.children.cycles-pp.btrfs_clear_delalloc_extent
35.23 +0.3 35.52 perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
35.19 +0.3 35.50 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
95.80 +0.4 96.18 perf-profile.children.cycles-pp._raw_spin_lock
95.21 +0.4 95.60 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.70 -0.0 0.67 perf-profile.self.cycles-pp._raw_spin_lock
0.13 ± 2% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.btrfs_space_info_update_bytes_may_use
0.07 ± 5% -0.0 0.06 perf-profile.self.cycles-pp.btrfs_folio_clamp_clear_checked
0.08 ± 4% -0.0 0.07 perf-profile.self.cycles-pp.kmem_cache_alloc
0.07 -0.0 0.06 perf-profile.self.cycles-pp.memset_orig
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.btrfs_block_rsv_release
94.48 +0.4 94.88 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [relevance 3%]
* Re: [PATCH v5 37/37] memprofiling: Documentation
2024-03-07 3:18 0% ` Randy Dunlap
2024-03-07 16:51 0% ` Suren Baghdasaryan
@ 2024-03-07 18:17 0% ` Kent Overstreet
1 sibling, 0 replies; 200+ results
From: Kent Overstreet @ 2024-03-07 18:17 UTC (permalink / raw)
To: Randy Dunlap
Cc: Suren Baghdasaryan, akpm, mhocko, vbabka, hannes, roman.gushchin,
mgorman, dave, willy, liam.howlett, penguin-kernel, corbet, void,
peterz, juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, kernel-team, linux-doc, linux-kernel,
iommu, linux-arch, linux-fsdevel, linux-mm, linux-modules,
kasan-dev, cgroups
On Wed, Mar 06, 2024 at 07:18:57PM -0800, Randy Dunlap wrote:
> Hi,
> This includes some editing suggestions and some doc build fixes.
>
>
> On 3/6/24 10:24, Suren Baghdasaryan wrote:
> > From: Kent Overstreet <kent.overstreet@linux.dev>
> >
> > Provide documentation for memory allocation profiling.
> >
> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > Documentation/mm/allocation-profiling.rst | 91 +++++++++++++++++++++++
> > 1 file changed, 91 insertions(+)
> > create mode 100644 Documentation/mm/allocation-profiling.rst
> >
> > diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
> > new file mode 100644
> > index 000000000000..8a862c7d3aab
> > --- /dev/null
> > +++ b/Documentation/mm/allocation-profiling.rst
> > @@ -0,0 +1,91 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +===========================
> > +MEMORY ALLOCATION PROFILING
> > +===========================
> > +
> > +Low overhead (suitable for production) accounting of all memory allocations,
> > +tracked by file and line number.
> > +
> > +Usage:
> > +kconfig options:
> > + - CONFIG_MEM_ALLOC_PROFILING
> > + - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> > + - CONFIG_MEM_ALLOC_PROFILING_DEBUG
> > + adds warnings for allocations that weren't accounted because of a
> > + missing annotation
> > +
> > +Boot parameter:
> > + sysctl.vm.mem_profiling=0|1|never
> > +
> > + When set to "never", memory allocation profiling overheads is minimized and it
>
> overhead is
>
> > + cannot be enabled at runtime (sysctl becomes read-only).
> > + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
> > + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
> > +
> > +sysctl:
> > + /proc/sys/vm/mem_profiling
> > +
> > +Runtime info:
> > + /proc/allocinfo
> > +
> > +Example output:
> > + root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
> > + 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
> > + 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
> > + 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> > + 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> > + 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
> > + 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
> > + 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> > + 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> > + 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
> > + 55M 4887 mm/slub.c:2259 func:alloc_slab_page
> > + 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
> > +===================
> > +Theory of operation
> > +===================
> > +
> > +Memory allocation profiling builds off of code tagging, which is a library for
> > +declaring static structs (that typcially describe a file and line number in
>
> typically
>
> > +some way, hence code tagging) and then finding and operating on them at runtime
>
> at runtime,
>
> > +- i.e. iterating over them to print them in debugfs/procfs.
>
> i.e., iterating
"i.e." is Latin "id est" ("that is"); grammatically my version is fine
>
> > +
> > +To add accounting for an allocation call, we replace it with a macro
> > +invocation, alloc_hooks(), that
> > + - declares a code tag
> > + - stashes a pointer to it in task_struct
> > + - calls the real allocation function
> > + - and finally, restores the task_struct alloc tag pointer to its previous value.
> > +
> > +This allows for alloc_hooks() calls to be nested, with the most recent one
> > +taking effect. This is important for allocations internal to the mm/ code that
> > +do not properly belong to the outer allocation context and should be counted
> > +separately: for example, slab object extension vectors, or when the slab
> > +allocates pages from the page allocator.
> > +
> > +Thus, proper usage requires determining which function in an allocation call
> > +stack should be tagged. There are many helper functions that essentially wrap
> > +e.g. kmalloc() and do a little more work, then are called in multiple places;
> > +we'll generally want the accounting to happen in the callers of these helpers,
> > +not in the helpers themselves.
> > +
> > +To fix up a given helper, for example foo(), do the following:
> > + - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
> > + - rename it to foo_noprof()
> > + - define a macro version of foo() like so:
> > + #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
> > +
> > +It's also possible to stash a pointer to an alloc tag in your own data structures.
> > +
> > +Do this when you're implementing a generic data structure that does allocations
> > +"on behalf of" some other code - for example, the rhashtable code. This way,
> > +instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
> > +break it out by rhashtable type.
> > +
> > +To do so:
> > + - Hook your data structure's init function, like any other allocation function
>
> maybe end the line above with a '.' like the following line.
>
> > + - Within your init function, use the convenience macro alloc_tag_record() to
> > + record alloc tag in your data structure.
> > + - Then, use the following form for your allocations:
> > + alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
>
>
> Finally, there are a number of documentation build warnings in this patch.
> I'm no ReST expert, but the attached patch fixes them for me.
>
> --
> #Randy
^ permalink raw reply [relevance 0%]
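[Editor's note: the alloc_hooks() save/restore discipline discussed in the messages above can be modeled in plain user-space C. The sketch below illustrates only the nesting behavior (stash a static code tag, call the real allocator, restore the previous tag); the tag layout and all names here are illustrative assumptions, not the kernel's actual implementation.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* User-space model of the alloc_hooks() pattern described in the patch:
 * each hooked allocation declares a static code tag, stashes a pointer
 * to it in the "task", calls the real allocator, then restores the
 * previous tag so nested hooked calls account to the innermost site. */

struct alloc_tag {
	const char *file;
	int line;
	size_t bytes;		/* bytes accounted to this call site */
};

static struct alloc_tag *current_tag;	/* stand-in for current->alloc_tag */
static size_t total_accounted;		/* sum over all tags, for checking */

/* The "real" allocation function: charges whatever tag is current. */
static void *alloc_noprof(size_t size)
{
	if (current_tag) {
		current_tag->bytes += size;
		total_accounted += size;
	}
	return malloc(size);
}

/* Model of alloc_hooks(): save the old tag, install ours, call the
 * _noprof function, restore.  Uses GNU C statement expressions, as
 * kernel code does. */
#define alloc_hooks(call)						\
({									\
	static struct alloc_tag _tag = { __FILE__, __LINE__, 0 };	\
	struct alloc_tag *_saved = current_tag;				\
	current_tag = &_tag;						\
	__typeof__(call) _res = (call);					\
	current_tag = _saved;						\
	_res;								\
})

/* Hooked wrapper, following the foo()/foo_noprof() naming convention. */
#define my_alloc(size) alloc_hooks(alloc_noprof(size))
```

Because the saved pointer is restored on exit, a _noprof allocation made inside another hooked call is charged to the innermost tag, which is the nesting property the document calls out.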
* Re: [PATCH v5 37/37] memprofiling: Documentation
2024-03-07 3:18 0% ` Randy Dunlap
@ 2024-03-07 16:51 0% ` Suren Baghdasaryan
2024-03-07 18:17 0% ` Kent Overstreet
1 sibling, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-03-07 16:51 UTC (permalink / raw)
To: Randy Dunlap
Cc: akpm, kent.overstreet, mhocko, vbabka, hannes, roman.gushchin,
mgorman, dave, willy, liam.howlett, penguin-kernel, corbet, void,
peterz, juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, kernel-team, linux-doc, linux-kernel,
iommu, linux-arch, linux-fsdevel, linux-mm, linux-modules,
kasan-dev, cgroups
On Thu, Mar 7, 2024 at 3:19 AM Randy Dunlap <rdunlap@infradead.org> wrote:
>
> Hi,
> This includes some editing suggestions and some doc build fixes.
>
>
> On 3/6/24 10:24, Suren Baghdasaryan wrote:
> > From: Kent Overstreet <kent.overstreet@linux.dev>
> >
> > Provide documentation for memory allocation profiling.
> >
> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > Documentation/mm/allocation-profiling.rst | 91 +++++++++++++++++++++++
> > 1 file changed, 91 insertions(+)
> > create mode 100644 Documentation/mm/allocation-profiling.rst
> >
> > diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
> > new file mode 100644
> > index 000000000000..8a862c7d3aab
> > --- /dev/null
> > +++ b/Documentation/mm/allocation-profiling.rst
> > @@ -0,0 +1,91 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +===========================
> > +MEMORY ALLOCATION PROFILING
> > +===========================
> > +
> > +Low overhead (suitable for production) accounting of all memory allocations,
> > +tracked by file and line number.
> > +
> > +Usage:
> > +kconfig options:
> > + - CONFIG_MEM_ALLOC_PROFILING
> > + - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> > + - CONFIG_MEM_ALLOC_PROFILING_DEBUG
> > + adds warnings for allocations that weren't accounted because of a
> > + missing annotation
> > +
> > +Boot parameter:
> > + sysctl.vm.mem_profiling=0|1|never
> > +
> > + When set to "never", memory allocation profiling overheads is minimized and it
>
> overhead is
>
> > + cannot be enabled at runtime (sysctl becomes read-only).
> > + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
> > + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
> > +
> > +sysctl:
> > + /proc/sys/vm/mem_profiling
> > +
> > +Runtime info:
> > + /proc/allocinfo
> > +
> > +Example output:
> > + root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
> > + 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
> > + 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
> > + 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> > + 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> > + 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
> > + 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
> > + 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> > + 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> > + 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
> > + 55M 4887 mm/slub.c:2259 func:alloc_slab_page
> > + 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
> > +===================
> > +Theory of operation
> > +===================
> > +
> > +Memory allocation profiling builds off of code tagging, which is a library for
> > +declaring static structs (that typcially describe a file and line number in
>
> typically
>
> > +some way, hence code tagging) and then finding and operating on them at runtime
>
> at runtime,
>
> > +- i.e. iterating over them to print them in debugfs/procfs.
>
> i.e., iterating
>
> > +
> > +To add accounting for an allocation call, we replace it with a macro
> > +invocation, alloc_hooks(), that
> > + - declares a code tag
> > + - stashes a pointer to it in task_struct
> > + - calls the real allocation function
> > + - and finally, restores the task_struct alloc tag pointer to its previous value.
> > +
> > +This allows for alloc_hooks() calls to be nested, with the most recent one
> > +taking effect. This is important for allocations internal to the mm/ code that
> > +do not properly belong to the outer allocation context and should be counted
> > +separately: for example, slab object extension vectors, or when the slab
> > +allocates pages from the page allocator.
> > +
> > +Thus, proper usage requires determining which function in an allocation call
> > +stack should be tagged. There are many helper functions that essentially wrap
> > +e.g. kmalloc() and do a little more work, then are called in multiple places;
> > +we'll generally want the accounting to happen in the callers of these helpers,
> > +not in the helpers themselves.
> > +
> > +To fix up a given helper, for example foo(), do the following:
> > + - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
> > + - rename it to foo_noprof()
> > + - define a macro version of foo() like so:
> > + #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
> > +
> > +It's also possible to stash a pointer to an alloc tag in your own data structures.
> > +
> > +Do this when you're implementing a generic data structure that does allocations
> > +"on behalf of" some other code - for example, the rhashtable code. This way,
> > +instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
> > +break it out by rhashtable type.
> > +
> > +To do so:
> > + - Hook your data structure's init function, like any other allocation function
>
> maybe end the line above with a '.' like the following line.
>
> > + - Within your init function, use the convenience macro alloc_tag_record() to
> > + record alloc tag in your data structure.
> > + - Then, use the following form for your allocations:
> > + alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
>
>
> Finally, there are a number of documentation build warnings in this patch.
> I'm no ReST expert, but the attached patch fixes them for me.
Thanks Randy! I'll use your cleaned-up patch in the next submission.
Cheers,
Suren.
>
> --
> #Randy
^ permalink raw reply [relevance 0%]
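[Editor's note: the `sort -g /proc/allocinfo|tail|numfmt --to=iec` readout quoted in the patch above can be reproduced with standard tools on a captured sample. The sample file path and its contents below are made up for illustration; on a real kernel with CONFIG_MEM_ALLOC_PROFILING you would read /proc/allocinfo directly. Assumes GNU coreutils.]

```shell
# Post-process an allocinfo-style dump the same way as the patch's
# example: numeric sort on the first column (bytes), keep the largest
# entries, humanize sizes.  Byte counts here are exact multiples of
# 1 MiB so the iec conversion is unambiguous.
cat > /tmp/allocinfo.sample <<'EOF'
2936012 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
57671680 4887 mm/slub.c:2259 func:alloc_slab_page
127926272 31168 mm/page_ext.c:270 func:alloc_page_ext
EOF

# numfmt converts field 1 by default and leaves the rest of the line alone.
sort -g /tmp/allocinfo.sample | tail -n 2 | numfmt --to=iec
```

This prints the two largest consumers with their byte counts rewritten as human-readable IEC sizes, matching the `55M` / `122M`-style lines in the example output.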
* Re: [PATCH v5 37/37] memprofiling: Documentation
2024-03-06 18:24 4% ` [PATCH v5 37/37] memprofiling: Documentation Suren Baghdasaryan
@ 2024-03-07 3:18 0% ` Randy Dunlap
2024-03-07 16:51 0% ` Suren Baghdasaryan
2024-03-07 18:17 0% ` Kent Overstreet
0 siblings, 2 replies; 200+ results
From: Randy Dunlap @ 2024-03-07 3:18 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, kernel-team, linux-doc, linux-kernel,
iommu, linux-arch, linux-fsdevel, linux-mm, linux-modules,
kasan-dev, cgroups
[-- Attachment #1: Type: text/plain, Size: 5488 bytes --]
Hi,
This includes some editing suggestions and some doc build fixes.
On 3/6/24 10:24, Suren Baghdasaryan wrote:
> From: Kent Overstreet <kent.overstreet@linux.dev>
>
> Provide documentation for memory allocation profiling.
>
> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
> Documentation/mm/allocation-profiling.rst | 91 +++++++++++++++++++++++
> 1 file changed, 91 insertions(+)
> create mode 100644 Documentation/mm/allocation-profiling.rst
>
> diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
> new file mode 100644
> index 000000000000..8a862c7d3aab
> --- /dev/null
> +++ b/Documentation/mm/allocation-profiling.rst
> @@ -0,0 +1,91 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +===========================
> +MEMORY ALLOCATION PROFILING
> +===========================
> +
> +Low overhead (suitable for production) accounting of all memory allocations,
> +tracked by file and line number.
> +
> +Usage:
> +kconfig options:
> + - CONFIG_MEM_ALLOC_PROFILING
> + - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> + - CONFIG_MEM_ALLOC_PROFILING_DEBUG
> + adds warnings for allocations that weren't accounted because of a
> + missing annotation
> +
> +Boot parameter:
> + sysctl.vm.mem_profiling=0|1|never
> +
> + When set to "never", memory allocation profiling overheads is minimized and it
overhead is
> + cannot be enabled at runtime (sysctl becomes read-only).
> + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
> + When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
> +
> +sysctl:
> + /proc/sys/vm/mem_profiling
> +
> +Runtime info:
> + /proc/allocinfo
> +
> +Example output:
> + root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
> + 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
> + 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
> + 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> + 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> + 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
> + 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
> + 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> + 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> + 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
> + 55M 4887 mm/slub.c:2259 func:alloc_slab_page
> + 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
> +===================
> +Theory of operation
> +===================
> +
> +Memory allocation profiling builds off of code tagging, which is a library for
> +declaring static structs (that typcially describe a file and line number in
typically
> +some way, hence code tagging) and then finding and operating on them at runtime
at runtime,
> +- i.e. iterating over them to print them in debugfs/procfs.
i.e., iterating
> +
> +To add accounting for an allocation call, we replace it with a macro
> +invocation, alloc_hooks(), that
> + - declares a code tag
> + - stashes a pointer to it in task_struct
> + - calls the real allocation function
> + - and finally, restores the task_struct alloc tag pointer to its previous value.
> +
> +This allows for alloc_hooks() calls to be nested, with the most recent one
> +taking effect. This is important for allocations internal to the mm/ code that
> +do not properly belong to the outer allocation context and should be counted
> +separately: for example, slab object extension vectors, or when the slab
> +allocates pages from the page allocator.
> +
> +Thus, proper usage requires determining which function in an allocation call
> +stack should be tagged. There are many helper functions that essentially wrap
> +e.g. kmalloc() and do a little more work, then are called in multiple places;
> +we'll generally want the accounting to happen in the callers of these helpers,
> +not in the helpers themselves.
> +
> +To fix up a given helper, for example foo(), do the following:
> + - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
> + - rename it to foo_noprof()
> + - define a macro version of foo() like so:
> + #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
> +
> +It's also possible to stash a pointer to an alloc tag in your own data structures.
> +
> +Do this when you're implementing a generic data structure that does allocations
> +"on behalf of" some other code - for example, the rhashtable code. This way,
> +instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
> +break it out by rhashtable type.
> +
> +To do so:
> + - Hook your data structure's init function, like any other allocation function
maybe end the line above with a '.' like the following line.
> + - Within your init function, use the convenience macro alloc_tag_record() to
> + record alloc tag in your data structure.
> + - Then, use the following form for your allocations:
> + alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
Finally, there are a number of documentation build warnings in this patch.
I'm no ReST expert, but the attached patch fixes them for me.
--
#Randy
[-- Attachment #2: docum-mm-alloc-profiling-fix403.patch --]
[-- Type: text/x-patch, Size: 2965 bytes --]
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
---
Documentation/mm/allocation-profiling.rst | 28 ++++++++++----------
Documentation/mm/index.rst | 1
2 files changed, 16 insertions(+), 13 deletions(-)
diff -- a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
--- a/Documentation/mm/allocation-profiling.rst
+++ b/Documentation/mm/allocation-profiling.rst
@@ -9,11 +9,11 @@ tracked by file and line number.
Usage:
kconfig options:
- - CONFIG_MEM_ALLOC_PROFILING
- - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- - CONFIG_MEM_ALLOC_PROFILING_DEBUG
- adds warnings for allocations that weren't accounted because of a
- missing annotation
+- CONFIG_MEM_ALLOC_PROFILING
+- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+- CONFIG_MEM_ALLOC_PROFILING_DEBUG
+adds warnings for allocations that weren't accounted because of a
+missing annotation
Boot parameter:
sysctl.vm.mem_profiling=0|1|never
@@ -29,7 +29,8 @@ sysctl:
Runtime info:
/proc/allocinfo
-Example output:
+Example output::
+
root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
@@ -42,21 +43,22 @@ Example output:
15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
55M 4887 mm/slub.c:2259 func:alloc_slab_page
122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+
===================
Theory of operation
===================
Memory allocation profiling builds off of code tagging, which is a library for
declaring static structs (that typcially describe a file and line number in
-some way, hence code tagging) and then finding and operating on them at runtime
-- i.e. iterating over them to print them in debugfs/procfs.
+some way, hence code tagging) and then finding and operating on them at runtime,
+i.e., iterating over them to print them in debugfs/procfs.
To add accounting for an allocation call, we replace it with a macro
-invocation, alloc_hooks(), that
- - declares a code tag
- - stashes a pointer to it in task_struct
- - calls the real allocation function
- - and finally, restores the task_struct alloc tag pointer to its previous value.
+invocation, alloc_hooks(), that:
+- declares a code tag
+- stashes a pointer to it in task_struct
+- calls the real allocation function
+- and finally, restores the task_struct alloc tag pointer to its previous value.
This allows for alloc_hooks() calls to be nested, with the most recent one
taking effect. This is important for allocations internal to the mm/ code that
diff -- a/Documentation/mm/index.rst b/Documentation/mm/index.rst
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -26,6 +26,7 @@ see the :doc:`admin guide <../admin-guid
page_cache
shmfs
oom
+ allocation-profiling
Legacy Documentation
====================
^ permalink raw reply [relevance 0%]
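[Editor's note: the patch's other pattern — stashing an alloc tag in your own data structure so a generic container (the rhashtable example) accounts allocations to its creator rather than to itself — can also be sketched in user-space C. All names here (ht_init, ht_alloc_buckets, the tag layout) are illustrative assumptions, not the kernel's alloc_tag_record()/alloc_hooks_tag() API.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Model of the "stash a tag in your own data structure" pattern:
 * the init function records which call site created the structure,
 * and later internal allocations are charged to that saved tag
 * instead of to the helper's own file:line. */

struct alloc_tag {
	const char *site;	/* e.g. "caller.c:42" */
	size_t bytes;
};

struct htable {
	struct alloc_tag *tag;	/* saved at init time */
	void *buckets;
};

/* Model of alloc_tag_record(): remember the creator's tag. */
static void ht_init(struct htable *ht, struct alloc_tag *tag)
{
	ht->tag = tag;
	ht->buckets = NULL;
}

/* Model of alloc_hooks_tag(ht->tag, kmalloc_noprof(...)):
 * the allocation is accounted to the saved tag, so /proc/allocinfo
 * would show one line per creating call site, not one big line
 * for the container implementation. */
static void ht_alloc_buckets(struct htable *ht, size_t n)
{
	ht->buckets = calloc(n, sizeof(void *));
	if (ht->buckets)
		ht->tag->bytes += n * sizeof(void *);
}
```

The design point is the indirection: the container never names its own tag, so every allocation it makes on a caller's behalf is attributed to that caller's site.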
* [PATCH v5 37/37] memprofiling: Documentation
2024-03-06 18:23 3% [PATCH v5 00/37] Memory allocation profiling Suren Baghdasaryan
2024-03-06 18:24 3% ` [PATCH v5 13/37] lib: add allocation tagging support for memory " Suren Baghdasaryan
@ 2024-03-06 18:24 4% ` Suren Baghdasaryan
2024-03-07 3:18 0% ` Randy Dunlap
1 sibling, 1 reply; 200+ results
From: Suren Baghdasaryan @ 2024-03-06 18:24 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, surenb, kernel-team, linux-doc,
linux-kernel, iommu, linux-arch, linux-fsdevel, linux-mm,
linux-modules, kasan-dev, cgroups
From: Kent Overstreet <kent.overstreet@linux.dev>
Provide documentation for memory allocation profiling.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Documentation/mm/allocation-profiling.rst | 91 +++++++++++++++++++++++
1 file changed, 91 insertions(+)
create mode 100644 Documentation/mm/allocation-profiling.rst
diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
new file mode 100644
index 000000000000..8a862c7d3aab
--- /dev/null
+++ b/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,91 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+ - CONFIG_MEM_ALLOC_PROFILING
+ - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ - CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling=0|1|never
+
+ When set to "never", the memory allocation profiling overhead is minimized and it
+ cannot be enabled at runtime (sysctl becomes read-only).
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, the default value is "1".
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, the default value is "never".
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output:
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+===================
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at runtime
+- i.e. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+ - declares a code tag
+ - stashes a pointer to it in task_struct
+ - calls the real allocation function
+ - and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+ - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+ - rename it to foo_noprof()
+ - define a macro version of foo() like so:
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+ - Hook your data structure's init function, like any other allocation function
+ - Within your init function, use the convenience macro alloc_tag_record() to
+ record alloc tag in your data structure.
+ - Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
--
2.44.0.278.ge034bb2e1d-goog
^ permalink raw reply related [relevance 4%]
* [PATCH v5 13/37] lib: add allocation tagging support for memory allocation profiling
2024-03-06 18:23 3% [PATCH v5 00/37] Memory allocation profiling Suren Baghdasaryan
@ 2024-03-06 18:24 3% ` Suren Baghdasaryan
2024-03-06 18:24 4% ` [PATCH v5 37/37] memprofiling: Documentation Suren Baghdasaryan
1 sibling, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-03-06 18:24 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, surenb, kernel-team, linux-doc,
linux-kernel, iommu, linux-arch, linux-fsdevel, linux-mm,
linux-modules, kasan-dev, cgroups
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
Documentation/admin-guide/sysctl/vm.rst | 16 +++
Documentation/filesystems/proc.rst | 29 +++++
include/asm-generic/codetag.lds.h | 14 +++
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 145 +++++++++++++++++++++++
include/linux/sched.h | 24 ++++
lib/Kconfig.debug | 25 ++++
lib/Makefile | 2 +
lib/alloc_tag.c | 149 ++++++++++++++++++++++++
scripts/module.lds.S | 7 ++
10 files changed, 414 insertions(+)
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 lib/alloc_tag.c
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index c59889de122b..e86c968a7a0e 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocation.
The default value is 65530.
+mem_profiling
+==============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
memory_failure_early_kill:
==========================
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 104c6d047d9b..8150dc3d689c 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
============ ===============================================================
File Content
============ ===============================================================
+ allocinfo Memory allocation profiling information
apm Advanced power management info
bootconfig Kernel command line obtained from boot config,
and, if there were kernel parameters from the
@@ -953,6 +954,34 @@ also be allocatable although a lot of filesystem metadata may have to be
reclaimed to achieve this.
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module (if it originates from a loadable module) and the function calling
+the allocation. The number of bytes allocated and number of calls at each
+location are reported.
+
+Example output.
+
+::
+
+ > sort -rn /proc/allocinfo
+ 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
+ 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
+ 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
+ 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
+ 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
+ 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ ...
+
+
meminfo
~~~~~~~
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h
new file mode 100644
index 000000000000..64f536b80380
--- /dev/null
+++ b/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name) \
+ . = ALIGN(8); \
+ __start_##_name = .; \
+ KEEP(*(_name)) \
+ __stop_##_name = .;
+
+#define CODETAG_SECTIONS() \
+ SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 5dd3a61d673d..c9997dc50c50 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
* [__nosave_begin, __nosave_end] for the nosave data
*/
+#include <asm-generic/codetag.lds.h>
+
#ifndef LOAD_OFFSET
#define LOAD_OFFSET 0
#endif
@@ -366,6 +368,7 @@
. = ALIGN(8); \
BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \
BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \
+ CODETAG_SECTIONS() \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
TRACE_PRINTKS() \
diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
new file mode 100644
index 000000000000..b970ff1c80dc
--- /dev/null
+++ b/include/linux/alloc_tag.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include <linux/bug.h>
+#include <linux/codetag.h>
+#include <linux/container_of.h>
+#include <linux/preempt.h>
+#include <asm/percpu.h>
+#include <linux/cpumask.h>
+#include <linux/static_key.h>
+
+struct alloc_tag_counters {
+ u64 bytes;
+ u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. The embedded codetag uses the codetag framework.
+ */
+struct alloc_tag {
+ struct codetag ct;
+ struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+ return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+ return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+ struct alloc_tag_counters v = { 0, 0 };
+ struct alloc_tag_counters *counter;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ counter = per_cpu_ptr(tag->counters, cpu);
+ v.bytes += counter->bytes;
+ v.calls += counter->calls;
+ }
+
+ return v;
+}
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ WARN_ONCE(ref && ref->ct,
+ "alloc_tag was not cleared (got tag for %s:%u)\n",
+ ref->ct->filename, ref->ct->lineno);
+
+ WARN_ONCE(!tag, "current->alloc_tag not set");
+}
+
+static inline void alloc_tag_sub_check(union codetag_ref *ref)
+{
+ WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+}
+#else
+static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) {}
+static inline void alloc_tag_sub_check(union codetag_ref *ref) {}
+#endif
+
+/* Caller should verify both ref and tag to be valid */
+static inline void __alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
+{
+ ref->ct = &tag->ct;
+ /*
+ * We need to increment the call counter every time we have a new
+ * allocation or when we split a large allocation into smaller ones.
+ * Each new reference for every sub-allocation needs to increment call
+ * counter because when we free each part the counter will be decremented.
+ */
+ this_cpu_inc(tag->counters->calls);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+ alloc_tag_add_check(ref, tag);
+ if (!ref || !tag)
+ return;
+
+ __alloc_tag_ref_set(ref, tag);
+ this_cpu_add(tag->counters->bytes, bytes);
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ struct alloc_tag *tag;
+
+ alloc_tag_sub_check(ref);
+ if (!ref || !ref->ct)
+ return;
+
+ tag = ct_to_alloc_tag(ref->ct);
+
+ this_cpu_sub(tag->counters->bytes, bytes);
+ this_cpu_dec(tag->counters->calls);
+
+ ref->ct = NULL;
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag)
+static inline bool mem_alloc_profiling_enabled(void) { return false; }
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+ size_t bytes) {}
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 998861865b84..f85b58e385a3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
unsigned int flags;
unsigned int ptrace;
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+ struct alloc_tag *alloc_tag;
+#endif
+
#ifdef CONFIG_SMP
int on_cpu;
struct __call_single_node wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
struct task_group *sched_task_group;
#endif
+
#ifdef CONFIG_UCLAMP_TASK
/*
* Clamp values requested for a scheduling entity.
@@ -2185,4 +2190,23 @@ static inline int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }
extern void sched_set_stop_task(int cpu, struct task_struct *stop);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+ swap(current->alloc_tag, tag);
+ return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+ current->alloc_tag = old;
+}
+#else
+#define alloc_tag_save(_tag) NULL
+#define alloc_tag_restore(_tag, _old) do {} while (0)
+#endif
+
#endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5485a5780fa7..0dd6ab986246 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
bool
select KALLSYMS
+config MEM_ALLOC_PROFILING
+ bool "Enable memory allocation profiling"
+ default n
+ depends on PROC_FS
+ depends on !DEBUG_FORCE_WEAK_PER_CPU
+ select CODE_TAGGING
+ help
+ Track allocation source locations and record the total allocation size
+ initiated at each code location. The mechanism can be used to track
+ memory leaks with low performance and memory impact.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ bool "Enable memory allocation profiling by default"
+ default y
+ depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+ bool "Memory allocation profiler debugging"
+ default n
+ depends on MEM_ALLOC_PROFILING
+ select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ help
+ Adds warnings with helpful error messages for memory allocation
+ profiling.
+
source "lib/Kconfig.kasan"
source "lib/Kconfig.kfence"
source "lib/Kconfig.kmsan"
diff --git a/lib/Makefile b/lib/Makefile
index 6b48b22fdfac..859112f09bf5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -236,6 +236,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
lib-$(CONFIG_GENERIC_BUG) += bug.o
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
new file mode 100644
index 000000000000..f09c8a422bc2
--- /dev/null
+++ b/lib/alloc_tag.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/alloc_tag.h>
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_buf.h>
+#include <linux/seq_file.h>
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+ struct codetag_iterator *iter;
+ struct codetag *ct;
+ loff_t node = *pos;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ m->private = iter;
+ if (!iter)
+ return NULL;
+
+ codetag_lock_module_list(alloc_tag_cttype, true);
+ *iter = codetag_get_ct_iter(alloc_tag_cttype);
+ while ((ct = codetag_next_ct(iter)) != NULL && node)
+ node--;
+
+ return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ struct codetag *ct = codetag_next_ct(iter);
+
+ (*pos)++;
+ if (!ct)
+ return NULL;
+
+ return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+ if (iter) {
+ codetag_lock_module_list(alloc_tag_cttype, false);
+ kfree(iter);
+ }
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+ struct alloc_tag *tag = ct_to_alloc_tag(ct);
+ struct alloc_tag_counters counter = alloc_tag_read(tag);
+ s64 bytes = counter.bytes;
+
+ seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
+ codetag_to_text(out, ct);
+ seq_buf_putc(out, ' ');
+ seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ char *bufp;
+ size_t n = seq_get_buf(m, &bufp);
+ struct seq_buf buf;
+
+ seq_buf_init(&buf, bufp, n);
+ alloc_tag_to_text(&buf, iter->ct);
+ seq_commit(m, seq_buf_used(&buf));
+ return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+ .start = allocinfo_start,
+ .next = allocinfo_next,
+ .stop = allocinfo_stop,
+ .show = allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+ proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+ struct codetag_module *cmod)
+{
+ struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+ struct alloc_tag_counters counter;
+ bool module_unused = true;
+ struct alloc_tag *tag;
+ struct codetag *ct;
+
+ for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+ if (iter.cmod != cmod)
+ continue;
+
+ tag = ct_to_alloc_tag(ct);
+ counter = alloc_tag_read(tag);
+
+ if (WARN(counter.bytes,
+ "%s:%u module %s func:%s has %llu allocated at module unload",
+ ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+ module_unused = false;
+ }
+
+ return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+ {
+ .procname = "mem_profiling",
+ .data = &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = proc_do_static_key,
+ },
+ { }
+};
+
+static int __init alloc_tag_init(void)
+{
+ const struct codetag_type_desc desc = {
+ .section = "alloc_tags",
+ .tag_size = sizeof(struct alloc_tag),
+ .module_unload = alloc_tag_module_unload,
+ };
+
+ alloc_tag_cttype = codetag_register_type(&desc);
+ if (IS_ERR_OR_NULL(alloc_tag_cttype))
+ return PTR_ERR(alloc_tag_cttype);
+
+ register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+ procfs_init();
+
+ return 0;
+}
+module_init(alloc_tag_init);
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index bf5bcf2836d8..45c67a0994f3 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -9,6 +9,8 @@
#define DISCARD_EH_FRAME *(.eh_frame)
#endif
+#include <asm-generic/codetag.lds.h>
+
SECTIONS {
/DISCARD/ : {
*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
.data : {
*(.data .data.[0-9a-zA-Z_]*)
*(.data..L*)
+ CODETAG_SECTIONS()
}
.rodata : {
*(.rodata .rodata.[0-9a-zA-Z_]*)
*(.rodata..L*)
}
+#else
+ .data : {
+ CODETAG_SECTIONS()
+ }
#endif
}
--
2.44.0.278.ge034bb2e1d-goog
^ permalink raw reply related [relevance 3%]
* [PATCH v5 00/37] Memory allocation profiling
@ 2024-03-06 18:23 3% Suren Baghdasaryan
2024-03-06 18:24 3% ` [PATCH v5 13/37] lib: add allocation tagging support for memory " Suren Baghdasaryan
2024-03-06 18:24 4% ` [PATCH v5 37/37] memprofiling: Documentation Suren Baghdasaryan
0 siblings, 2 replies; 200+ results
From: Suren Baghdasaryan @ 2024-03-06 18:23 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, jhubbard, tj, muchun.song, rppt, paulmck,
pasha.tatashin, yosryahmed, yuzhao, dhowells, hughd, andreyknvl,
keescook, ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, bristot,
vschneid, cl, penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver,
dvyukov, shakeelb, songmuchun, jbaron, aliceryhl, rientjes,
minchan, kaleshsingh, surenb, kernel-team, linux-doc,
linux-kernel, iommu, linux-arch, linux-fsdevel, linux-mm,
linux-modules, kasan-dev, cgroups
Rebased over mm-unstable.
Overview:
Low overhead [1] per-callsite memory allocation profiling. Not just for
debug kernels: the overhead is low enough to be deployed in production.
Example output:
root@moria-kvm:~# sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Since v4 [2]:
- Added Reviewed-by, per Pasha Tatashin, Vlastimil Babka, Alice Ryhl
- Changed slab_free_freelist_hook() to use __fastpath_inline,
per Pasha Tatashin
- Removed [3] as it is already Ack'ed and merged into in mm-unstable
- Moved alloc_slab_obj_exts(), prepare_slab_obj_exts_hook() and
alloc_tagging_slab_free_hook() into slub.c, per Vlastimil Babka
- Removed drive-by spacing fixups, per Vlastimil Babka
- Restored early memcg_kmem_online() check before calling
free_slab_obj_exts(), per Vlastimil Babka
- Added pr_warn() when module can't be unloaded, per Vlastimil Babka
- Dropped __alloc_tag_sub() and alloc_tag_sub_noalloc(),
per Vlastimil Babka
- Fixed alloc_tag_add() to check for tag to be valid, per Vlastimil Babka
- Moved alloc_tag_ref_set() where it's first used
- Added a patch introducing a tristate early boot parameter,
per Vlastimil Babka
- Updated description for page splitting patch, per Vlastimil Babka
- Added a patch fixing non-compound page accounting in __free_pages(),
per Vlastimil Babka
- Added early mem_alloc_profiling_enabled() checks in
alloc_tagging_slab_free_hook() and prepare_slab_obj_exts_hook(),
per Vlastimil Babka
- Moved rust krealloc() helper patch before krealloc() is redefined,
per Alice Ryhl
- Replaced printk(KERN_NOTICE...) with pr_notice(), per Vlastimil Babka
- Fixed codetag_{un}load_module() redefinition for CONFIG_MODULE=n,
per kernel test robot
- Updated documentation to describe new early boot parameter
- Rebased over mm-unstable
Usage:
kconfig options:
- CONFIG_MEM_ALLOC_PROFILING
- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- CONFIG_MEM_ALLOC_PROFILING_DEBUG
adds warnings for allocations that weren't accounted because of a
missing annotation
sysctl:
/proc/sys/vm/mem_profiling
Runtime info:
/proc/allocinfo
Notes:
[1]: Overhead
To measure the overhead we are comparing the following configurations:
(1) Baseline with CONFIG_MEMCG_KMEM=n
(2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n)
(3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y)
(4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
(5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
(6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
(7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
Performance overhead:
To evaluate performance we implemented an in-kernel test executing
multiple get_free_page/free_page and kmalloc/kfree calls with allocation
sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
affinity set to a specific CPU to minimize the noise. Below are results
from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
56 core Intel Xeon:
kmalloc pgalloc
(1 baseline) 6.764s 16.902s
(2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
(3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
(4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
(5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
(6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
(7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
Memory overhead:
Kernel size:
text data bss dec diff
(1) 26515311 18890222 17018880 62424413
(2) 26524728 19423818 16740352 62688898 264485
(3) 26524724 19423818 16740352 62688894 264481
(4) 26524728 19423818 16740352 62688898 264485
(5) 26541782 18964374 16957440 62463596 39183
Memory consumption on a 56 core Intel CPU with 125GB of memory:
Code tags: 192 kB
PageExts: 262144 kB (256MB)
SlabExts: 9876 kB (9.6MB)
PcpuExts: 512 kB (0.5MB)
Total overhead is 0.2% of total memory.
Benchmarks:
Hackbench tests run 100 times:
hackbench -s 512 -l 200 -g 15 -f 25 -P
baseline disabled profiling enabled profiling
avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
stdev 0.0137 0.0188 0.0077
hackbench -l 10000
baseline disabled profiling enabled profiling
avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
stdev 0.0933 0.0286 0.0489
stress-ng tests:
stress-ng --class memory --seq 4 -t 60
stress-ng --class cpu --seq 4 -t 60
Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
[2] https://lore.kernel.org/all/20240221194052.927623-1-surenb@google.com/
[3] https://lore.kernel.org/all/20240221194052.927623-7-surenb@google.com/
Kent Overstreet (13):
fix missing vmalloc.h includes
asm-generic/io.h: Kill vmalloc.h dependency
mm/slub: Mark slab_free_freelist_hook() __always_inline
scripts/kallysms: Always include __start and __stop symbols
fs: Convert alloc_inode_sb() to a macro
rust: Add a rust helper for krealloc()
mempool: Hook up to memory allocation profiling
mm: percpu: Introduce pcpuobj_ext
mm: percpu: Add codetag reference into pcpuobj_ext
mm: vmalloc: Enable memory allocation profiling
rhashtable: Plumb through alloc tag
MAINTAINERS: Add entries for code tagging and memory allocation
profiling
memprofiling: Documentation
Suren Baghdasaryan (24):
mm: introduce slabobj_ext to support slab object extensions
mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext
creation
mm/slab: introduce SLAB_NO_OBJ_EXT to avoid obj_ext creation
slab: objext: introduce objext_flags as extension to
page_memcg_data_flags
lib: code tagging framework
lib: code tagging module support
lib: prevent module unloading if memory is not freed
lib: add allocation tagging support for memory allocation profiling
lib: introduce support for page allocation tagging
lib: introduce early boot parameter to avoid page_ext memory overhead
mm: percpu: increase PERCPU_MODULE_RESERVE to accommodate allocation
tags
change alloc_pages name in dma_map_ops to avoid name conflicts
mm: enable page allocation tagging
mm: create new codetag references during page splitting
mm: fix non-compound multi-order memory accounting in __free_pages
mm/page_ext: enable early_page_ext when
CONFIG_MEM_ALLOC_PROFILING_DEBUG=y
lib: add codetag reference into slabobj_ext
mm/slab: add allocation accounting into slab allocation and free paths
mm/slab: enable slab allocation tagging for kmalloc and friends
mm: percpu: enable per-cpu allocation tagging
lib: add memory allocations report in show_mem()
codetag: debug: skip objext checking when it's for objext itself
codetag: debug: mark codetags for reserved pages as empty
codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext
allocations
Documentation/admin-guide/sysctl/vm.rst | 16 +
Documentation/filesystems/proc.rst | 29 ++
Documentation/mm/allocation-profiling.rst | 91 +++++
MAINTAINERS | 17 +
arch/alpha/kernel/pci_iommu.c | 2 +-
arch/alpha/lib/checksum.c | 1 +
arch/alpha/lib/fpreg.c | 1 +
arch/alpha/lib/memcpy.c | 1 +
arch/arm/kernel/irq.c | 1 +
arch/arm/kernel/traps.c | 1 +
arch/arm64/kernel/efi.c | 1 +
arch/loongarch/include/asm/kfence.h | 1 +
arch/mips/jazz/jazzdma.c | 2 +-
arch/powerpc/kernel/dma-iommu.c | 2 +-
arch/powerpc/kernel/iommu.c | 1 +
arch/powerpc/mm/mem.c | 1 +
arch/powerpc/platforms/ps3/system-bus.c | 4 +-
arch/powerpc/platforms/pseries/vio.c | 2 +-
arch/riscv/kernel/elf_kexec.c | 1 +
arch/riscv/kernel/probes/kprobes.c | 1 +
arch/s390/kernel/cert_store.c | 1 +
arch/s390/kernel/ipl.c | 1 +
arch/x86/include/asm/io.h | 1 +
arch/x86/kernel/amd_gart_64.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 1 +
arch/x86/kernel/irq_64.c | 1 +
arch/x86/mm/fault.c | 1 +
drivers/accel/ivpu/ivpu_mmu_context.c | 1 +
drivers/gpu/drm/gma500/mmu.c | 1 +
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 1 +
.../gpu/drm/i915/gem/selftests/mock_dmabuf.c | 1 +
drivers/gpu/drm/i915/gt/shmem_utils.c | 1 +
drivers/gpu/drm/i915/gvt/firmware.c | 1 +
drivers/gpu/drm/i915/gvt/gtt.c | 1 +
drivers/gpu/drm/i915/gvt/handlers.c | 1 +
drivers/gpu/drm/i915/gvt/mmio.c | 1 +
drivers/gpu/drm/i915/gvt/vgpu.c | 1 +
drivers/gpu/drm/i915/intel_gvt.c | 1 +
drivers/gpu/drm/imagination/pvr_vm_mips.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 1 +
drivers/gpu/drm/omapdrm/omap_gem.c | 1 +
drivers/gpu/drm/v3d/v3d_bo.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 1 +
drivers/iommu/dma-iommu.c | 2 +-
.../marvell/octeon_ep/octep_pfvf_mbox.c | 1 +
.../net/ethernet/microsoft/mana/hw_channel.c | 1 +
drivers/parisc/ccio-dma.c | 2 +-
drivers/parisc/sba_iommu.c | 2 +-
drivers/platform/x86/uv_sysfs.c | 1 +
drivers/scsi/mpi3mr/mpi3mr_transport.c | 2 +
drivers/staging/media/atomisp/pci/hmm/hmm.c | 2 +-
drivers/vfio/pci/pds/dirty.c | 1 +
drivers/virt/acrn/mm.c | 1 +
drivers/virtio/virtio_mem.c | 1 +
drivers/xen/grant-dma-ops.c | 2 +-
drivers/xen/swiotlb-xen.c | 2 +-
include/asm-generic/codetag.lds.h | 14 +
include/asm-generic/io.h | 1 -
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 205 +++++++++++
include/linux/codetag.h | 81 +++++
include/linux/dma-map-ops.h | 2 +-
include/linux/fortify-string.h | 5 +-
include/linux/fs.h | 6 +-
include/linux/gfp.h | 126 ++++---
include/linux/gfp_types.h | 11 +
include/linux/memcontrol.h | 56 ++-
include/linux/mempool.h | 73 ++--
include/linux/mm.h | 9 +
include/linux/mm_types.h | 4 +-
include/linux/page_ext.h | 1 -
include/linux/pagemap.h | 9 +-
include/linux/pds/pds_common.h | 2 +
include/linux/percpu.h | 27 +-
include/linux/pgalloc_tag.h | 134 +++++++
include/linux/rhashtable-types.h | 11 +-
include/linux/sched.h | 24 ++
include/linux/slab.h | 175 +++++-----
include/linux/string.h | 4 +-
include/linux/vmalloc.h | 60 +++-
include/rdma/rdmavt_qp.h | 1 +
init/Kconfig | 4 +
kernel/dma/mapping.c | 4 +-
kernel/kallsyms_selftest.c | 2 +-
kernel/module/main.c | 29 +-
lib/Kconfig.debug | 31 ++
lib/Makefile | 3 +
lib/alloc_tag.c | 243 +++++++++++++
lib/codetag.c | 283 +++++++++++++++
lib/rhashtable.c | 28 +-
mm/compaction.c | 7 +-
mm/debug_vm_pgtable.c | 1 +
mm/filemap.c | 6 +-
mm/huge_memory.c | 2 +
mm/kfence/core.c | 14 +-
mm/kfence/kfence.h | 4 +-
mm/memcontrol.c | 56 +--
mm/mempolicy.c | 52 +--
mm/mempool.c | 36 +-
mm/mm_init.c | 13 +-
mm/nommu.c | 64 ++--
mm/page_alloc.c | 77 +++--
mm/page_ext.c | 13 +
mm/page_owner.c | 2 +-
mm/percpu-internal.h | 26 +-
mm/percpu.c | 120 +++----
mm/show_mem.c | 26 ++
mm/slab.h | 51 ++-
mm/slab_common.c | 6 +-
mm/slub.c | 327 +++++++++++++++---
mm/util.c | 44 +--
mm/vmalloc.c | 88 ++---
rust/helpers.c | 8 +
scripts/kallsyms.c | 13 +
scripts/module.lds.S | 7 +
sound/pci/hda/cs35l41_hda.c | 1 +
123 files changed, 2305 insertions(+), 657 deletions(-)
create mode 100644 Documentation/mm/allocation-profiling.rst
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 include/linux/codetag.h
create mode 100644 include/linux/pgalloc_tag.h
create mode 100644 lib/alloc_tag.c
create mode 100644 lib/codetag.c
base-commit: b38c34939fe4735b8716511f0a98814be3865a1b
--
2.44.0.278.ge034bb2e1d-goog
^ permalink raw reply [relevance 3%]
* Re: [syzbot] [nilfs?] KMSAN: uninit-value in nilfs_add_checksums_on_logs (2)
2024-03-03 12:45 0% ` Ryusuke Konishi
@ 2024-03-06 7:12 0% ` xingwei lee
0 siblings, 0 replies; 200+ results
From: xingwei lee @ 2024-03-06 7:12 UTC (permalink / raw)
To: Ryusuke Konishi
Cc: syzbot+47a017c46edb25eff048, linux-fsdevel, linux-kernel,
linux-nilfs, syzkaller-bugs
Ryusuke Konishi <konishi.ryusuke@gmail.com> wrote on Sun, Mar 3, 2024 at 20:46:
>
> On Sun, Mar 3, 2024 at 2:46 PM xingwei lee wrote:
> >
> > Hello, I reproduced this bug.
> >
> > If you fix this issue, please add the following tag to the commit:
> > Reported-by: xingwei lee <xrivendell7@gmail.com>
> >
> > Notice: I use the same config with syzbot dashboard.
> > kernel version: e326df53af0021f48a481ce9d489efda636c2dc6
> > kernel config: https://syzkaller.appspot.com/x/.config?x=e0c7078a6b901aa3
> > with KMSAN enabled
> > compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> >
> > =====================================================
> > BUG: KMSAN: uninit-value in crc32_body lib/crc32.c:110 [inline]
> > BUG: KMSAN: uninit-value in crc32_le_generic lib/crc32.c:179 [inline]
> > BUG: KMSAN: uninit-value in crc32_le_base+0x475/0xe70 lib/crc32.c:197
> > crc32_body lib/crc32.c:110 [inline]
> > crc32_le_generic lib/crc32.c:179 [inline]
> > crc32_le_base+0x475/0xe70 lib/crc32.c:197
> > nilfs_segbuf_fill_in_data_crc fs/nilfs2/segbuf.c:224 [inline]
> > nilfs_add_checksums_on_logs+0xcb2/0x10a0 fs/nilfs2/segbuf.c:327
> > nilfs_segctor_do_construct+0xad1d/0xf640 fs/nilfs2/segment.c:2112
> > nilfs_segctor_construct+0x1fd/0xf30 fs/nilfs2/segment.c:2415
> > nilfs_segctor_thread_construct fs/nilfs2/segment.c:2523 [inline]
> > nilfs_segctor_thread+0x551/0x1350 fs/nilfs2/segment.c:2606
> > kthread+0x422/0x5a0 kernel/kthread.c:388
> > ret_from_fork+0x7f/0xa0 arch/x86/kernel/process.c:147
> > ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
> > Uninit was created at:
> > __alloc_pages+0x9a8/0xe00 mm/page_alloc.c:4591
> > alloc_pages_mpol+0x6b3/0xaa0 mm/mempolicy.c:2133
> > alloc_pages mm/mempolicy.c:2204 [inline]
> > folio_alloc+0x218/0x3f0 mm/mempolicy.c:2211
> > filemap_alloc_folio+0xb8/0x4b0 mm/filemap.c:974
> > __filemap_get_folio+0xa8a/0x1910 mm/filemap.c:1918
> > pagecache_get_page+0x56/0x1d0 mm/folio-compat.c:99
> > grab_cache_page_write_begin+0x61/0x80 mm/folio-compat.c:109
> > block_write_begin+0x5a/0x4a0 fs/buffer.c:2223
> > nilfs_write_begin+0x107/0x220 fs/nilfs2/inode.c:261
> > generic_perform_write+0x417/0xce0 mm/filemap.c:3927
> > __generic_file_write_iter+0x233/0x4b0 mm/filemap.c:4022
> > generic_file_write_iter+0x10e/0x600 mm/filemap.c:4048
> > __kernel_write_iter+0x365/0xa00 fs/read_write.c:523
> > dump_emit_page fs/coredump.c:888 [inline]
> > dump_user_range+0x5d7/0xe00 fs/coredump.c:915
> > elf_core_dump+0x5847/0x5fa0 fs/binfmt_elf.c:2077
> > do_coredump+0x3bb6/0x4e60 fs/coredump.c:764
> > get_signal+0x28f7/0x30b0 kernel/signal.c:2890
> > arch_do_signal_or_restart+0x5e/0xda0 arch/x86/kernel/signal.c:309
> > exit_to_user_mode_loop kernel/entry/common.c:105 [inline]
> > exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
> > irqentry_exit_to_user_mode+0xaa/0x160 kernel/entry/common.c:225
> > irqentry_exit+0x16/0x40 kernel/entry/common.c:328
> > exc_page_fault+0x246/0x6f0 arch/x86/mm/fault.c:1566
> > asm_exc_page_fault+0x2b/0x30 arch/x86/include/asm/idtentry.h:570
> > CPU: 1 PID: 11178 Comm: segctord Not tainted 6.7.0-00562-g9f8413c4a66f-dirty #2
> > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> > 1.16.2-debian-1.16.2-1 04/01/2014
> > =====================================================
> >
> > =* repro.c =*
> > #define _GNU_SOURCE
> >
> > #include <dirent.h>
> > #include <endian.h>
> > #include <errno.h>
> > #include <fcntl.h>
> > #include <sched.h>
> > #include <signal.h>
> > #include <stdarg.h>
> > #include <stdbool.h>
> > #include <stdint.h>
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <string.h>
> > #include <sys/mount.h>
> > #include <sys/prctl.h>
> > #include <sys/resource.h>
> > #include <sys/stat.h>
> > #include <sys/syscall.h>
> > #include <sys/time.h>
> > #include <sys/types.h>
> > #include <sys/wait.h>
> > #include <time.h>
> > #include <unistd.h>
> >
> > #include <linux/capability.h>
> >
> > static void sleep_ms(uint64_t ms)
> > {
> > usleep(ms * 1000);
> > }
> >
> > static uint64_t current_time_ms(void)
> > {
> > struct timespec ts;
> > if (clock_gettime(CLOCK_MONOTONIC, &ts))
> > exit(1);
> > return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
> > }
> >
> > static bool write_file(const char* file, const char* what, ...)
> > {
> > char buf[1024];
> > va_list args;
> > va_start(args, what);
> > vsnprintf(buf, sizeof(buf), what, args);
> > va_end(args);
> > buf[sizeof(buf) - 1] = 0;
> > int len = strlen(buf);
> > int fd = open(file, O_WRONLY | O_CLOEXEC);
> > if (fd == -1)
> > return false;
> > if (write(fd, buf, len) != len) {
> > int err = errno;
> > close(fd);
> > errno = err;
> > return false;
> > }
> > close(fd);
> > return true;
> > }
> >
> > #define MAX_FDS 30
> >
> > static void setup_common()
> > {
> > if (mount(0, "/sys/fs/fuse/connections", "fusectl", 0, 0)) {
> > }
> > }
> >
> > static void setup_binderfs()
> > {
> > if (mkdir("/dev/binderfs", 0777)) {
> > }
> > if (mount("binder", "/dev/binderfs", "binder", 0, NULL)) {
> > }
> > if (symlink("/dev/binderfs", "./binderfs")) {
> > }
> > }
> >
> > static void loop();
> >
> > static void sandbox_common()
> > {
> > prctl(PR_SET_PDEATHSIG, SIGKILL, 0, 0, 0);
> > setsid();
> > struct rlimit rlim;
> > rlim.rlim_cur = rlim.rlim_max = (200 << 20);
> > setrlimit(RLIMIT_AS, &rlim);
> > rlim.rlim_cur = rlim.rlim_max = 32 << 20;
> > setrlimit(RLIMIT_MEMLOCK, &rlim);
> > rlim.rlim_cur = rlim.rlim_max = 136 << 20;
> > setrlimit(RLIMIT_FSIZE, &rlim);
> > rlim.rlim_cur = rlim.rlim_max = 1 << 20;
> > setrlimit(RLIMIT_STACK, &rlim);
> > rlim.rlim_cur = rlim.rlim_max = 128 << 20;
> > setrlimit(RLIMIT_CORE, &rlim);
> > rlim.rlim_cur = rlim.rlim_max = 256;
> > setrlimit(RLIMIT_NOFILE, &rlim);
> > if (unshare(CLONE_NEWNS)) {
> > }
> > if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL)) {
> > }
> > if (unshare(CLONE_NEWIPC)) {
> > }
> > if (unshare(0x02000000)) {
> > }
> > if (unshare(CLONE_NEWUTS)) {
> > }
> > if (unshare(CLONE_SYSVSEM)) {
> > }
> > typedef struct {
> > const char* name;
> > const char* value;
> > } sysctl_t;
> > static const sysctl_t sysctls[] = {
> > {"/proc/sys/kernel/shmmax", "16777216"},
> > {"/proc/sys/kernel/shmall", "536870912"},
> > {"/proc/sys/kernel/shmmni", "1024"},
> > {"/proc/sys/kernel/msgmax", "8192"},
> > {"/proc/sys/kernel/msgmni", "1024"},
> > {"/proc/sys/kernel/msgmnb", "1024"},
> > {"/proc/sys/kernel/sem", "1024 1048576 500 1024"},
> > };
> > unsigned i;
> > for (i = 0; i < sizeof(sysctls) / sizeof(sysctls[0]); i++)
> > write_file(sysctls[i].name, sysctls[i].value);
> > }
> >
> > static int wait_for_loop(int pid)
> > {
> > if (pid < 0)
> > exit(1);
> > int status = 0;
> > while (waitpid(-1, &status, __WALL) != pid) {
> > }
> > return WEXITSTATUS(status);
> > }
> >
> > static void drop_caps(void)
> > {
> > struct __user_cap_header_struct cap_hdr = {};
> > struct __user_cap_data_struct cap_data[2] = {};
> > cap_hdr.version = _LINUX_CAPABILITY_VERSION_3;
> > cap_hdr.pid = getpid();
> > if (syscall(SYS_capget, &cap_hdr, &cap_data))
> > exit(1);
> > const int drop = (1 << CAP_SYS_PTRACE) | (1 << CAP_SYS_NICE);
> > cap_data[0].effective &= ~drop;
> > cap_data[0].permitted &= ~drop;
> > cap_data[0].inheritable &= ~drop;
> > if (syscall(SYS_capset, &cap_hdr, &cap_data))
> > exit(1);
> > }
> >
> > static int do_sandbox_none(void)
> > {
> > if (unshare(CLONE_NEWPID)) {
> > }
> > int pid = fork();
> > if (pid != 0)
> > return wait_for_loop(pid);
> > setup_common();
> > sandbox_common();
> > drop_caps();
> > if (unshare(CLONE_NEWNET)) {
> > }
> > write_file("/proc/sys/net/ipv4/ping_group_range", "0 65535");
> > setup_binderfs();
> > loop();
> > exit(1);
> > }
> >
> > static void kill_and_wait(int pid, int* status)
> > {
> > kill(-pid, SIGKILL);
> > kill(pid, SIGKILL);
> > for (int i = 0; i < 100; i++) {
> > if (waitpid(-1, status, WNOHANG | __WALL) == pid)
> > return;
> > usleep(1000);
> > }
> > DIR* dir = opendir("/sys/fs/fuse/connections");
> > if (dir) {
> > for (;;) {
> > struct dirent* ent = readdir(dir);
> > if (!ent)
> > break;
> > if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
> > continue;
> > char abort[300];
> > snprintf(abort, sizeof(abort), "/sys/fs/fuse/connections/%s/abort",
> > ent->d_name);
> > int fd = open(abort, O_WRONLY);
> > if (fd == -1) {
> > continue;
> > }
> > if (write(fd, abort, 1) < 0) {
> > }
> > close(fd);
> > }
> > closedir(dir);
> > } else {
> > }
> > while (waitpid(-1, status, __WALL) != pid) {
> > }
> > }
> >
> > static void setup_test()
> > {
> > prctl(PR_SET_PDEATHSIG, SIGKILL, 0, 0, 0);
> > setpgrp();
> > write_file("/proc/self/oom_score_adj", "1000");
> > }
> >
> > static void close_fds()
> > {
> > for (int fd = 3; fd < MAX_FDS; fd++)
> > close(fd);
> > }
> >
> > #define USLEEP_FORKED_CHILD (3 * 50 * 1000)
> >
> > static long handle_clone_ret(long ret)
> > {
> > if (ret != 0) {
> > return ret;
> > }
> > usleep(USLEEP_FORKED_CHILD);
> > syscall(__NR_exit, 0);
> > while (1) {
> > }
> > }
> >
> > static long syz_clone(volatile long flags, volatile long stack,
> > volatile long stack_len, volatile long ptid,
> > volatile long ctid, volatile long tls)
> > {
> > long sp = (stack + stack_len) & ~15;
> > long ret = (long)syscall(__NR_clone, flags & ~CLONE_VM, sp, ptid, ctid, tls);
> > return handle_clone_ret(ret);
> > }
> >
> > static void execute_one(void);
> >
> > #define WAIT_FLAGS __WALL
> >
> > static void loop(void)
> > {
> > int iter = 0;
> > for (;; iter++) {
> > int pid = fork();
> > if (pid < 0)
> > exit(1);
> > if (pid == 0) {
> > setup_test();
> > execute_one();
> > close_fds();
> > exit(0);
> > }
> > int status = 0;
> > uint64_t start = current_time_ms();
> > for (;;) {
> > if (waitpid(-1, &status, WNOHANG | WAIT_FLAGS) == pid)
> > break;
> > sleep_ms(1);
> > if (current_time_ms() - start < 5000)
> > continue;
> > kill_and_wait(pid, &status);
> > break;
> > }
> > }
> > }
> >
> > void execute_one(void)
> > {
> > syz_clone(/*flags=CLONE_IO*/ 0x80000000, /*stack=*/0x20000140,
> > /*stack_len=*/0, /*parentid=*/0, /*childtid=*/0, /*tls=*/0);
> > }
> > int main(void)
> > {
> > syscall(__NR_mmap, /*addr=*/0x1ffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
> > /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
> > /*offset=*/0ul);
> > syscall(__NR_mmap, /*addr=*/0x20000000ul, /*len=*/0x1000000ul,
> > /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
> > /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
> > /*offset=*/0ul);
> > syscall(__NR_mmap, /*addr=*/0x21000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
> > /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
> > /*offset=*/0ul);
> > do_sandbox_none();
> > return 0;
> > }
> >
> >
> > Remember to run this repro.txt with the command: syz-execprog -repeat
> > 0 ./repro.txt and wait for about 1 minute; the bug triggers very
> > reliably.
> >
> > =* repro.txt =*
> > syz_mount_image$nilfs2(&(0x7f0000000000),
> > &(0x7f0000000a80)='./file0\x00', 0x808, &(0x7f00000000c0)=ANY=[], 0x1,
> > 0xa4a, &(0x7f0000001540)="$eJzs3U2MW0cdAPDx7nrTfJQ4JaFLGtqEQls+uttslvARQVI1QiJqKm6VKi5RmpaINCBSCVr1kOTEjVZVuPIhTr1UgJDoBUU9calEI1VIPRUOHIiCVIkDFJJF8c547X9sPXuzWa/Xv580O543Y8887/Pz83tvZhIwtiaafxcWZmopXXrr9aP/eOjvm28uOdwq0Wj+nWpL1VNKtZyeCq/3weRSfP3DV052i2tpvvm3pNNT11rP3ZpSOp/2psupkXZfuvLaO/NPHr9w7OK+d984dPXOrD0AAIyXb18+tLDrr3++b8dHb95/JG1qLS/H542c3paP+4/kA/9y/D+ROtO1ttBuOpSbymEilJvsUq69nnooN9Wj/unwuvUe5TZV1D/ZtqzbesMoK9txI9UmZjvSExOzs0u/yVPzd/10bfbs6TPPnRtSQ4FV968HUkp7RygcXgdtWGFYXAdtGMlwZB20YYOGxe3D3gMBLInXC29xPp5ZuD2tV5vqr/5rj090fz6sgrXe/tU/WvX/+oI9Dqtno25NZb3K52hbTsfrCPH+pd6fv3ilo3NpvB5R77Odva4jjMr1hV7tnFzjdqxUr/bH7WKj+nqOy/vwjZDf/vmJ/9NR+R8D3f171M7/C8K4h7R6r7U45P0PsH7F++YWs5If7+uL+Zsq8u+qyN9ckb+lIn9rRT6Ms9+9+NP0am35d378TT/o+fBynu3uHH9swPbE85GD1h/v+x3U7dYf7yeG9ewPJ54+9ZVnn7mydP9/rbX938jbe/m50cifrcu5QDlfGM+rt+79b3TWM9Gj3D2hPXd3Kd98vLOzXG3n8uuktv3MLe2Y6Xze9l7l9nSWa4Rym3O4K7Q3Hp9sCc8rxx9lv1rer6mwvvWwHtOhHWW/siPHsR2wEmV77HX/f9k+Z1K99tzpM6cey+mynf5psr7p5vL9a9xu4Pb12/9nJnX2/9nWWl6faN8vbF9eXmvfLzTC8vkeyw/kdPme++7k5uby2ZPfP/Psaq88jLlzL738vRNnzpz6oQcrfvDN9dEMDzxYxQfD3jMBd9rciy/8YO7cSy8/evqFE8+fev7U2QMHDx6Ynz/41QMLc83j+rn2o3tgI1n+0h92SwAAAAAAAAAAAIB+/ejY0Svvvf3l95f6/y/3/yv9/8udv6X//09C///YT770gy/9AHd0yW+WCQOsTody9Rw+Htq7M9SzKzzvEzluzeOX+/+X6uK4rqU994blcfzeUi4MJ3DLeCnTYQySOF/gp3N8Mce/SjBEtc3dF+e4anzrsq2X8SmMSzGayv+tbA1lHJPS/7vruE5t/+wda9BGVt9adCcc9joC3f3T+N+CMLZhcbHXLB79zmADsDqGPf9nOe9Z4rN//NZdN0Mpdu3xzv1lHL8UBvGX9zrT633+SfVvrPk/W/Pf9b3/CzPmNVZW739+fvX9tmrT7n7rj+tfxoHeOVj9H+X6y9o8nPqrf/GXof54QahP/w31b+mz/lvWf8/K6v9frr+8bY882G/9Sy2uTXS2I543Ltf/4nnj4npY/zK258Drv8KJGm/k+mGcjco8s4MK8/+2DtpXPv9vdn515//tJd6H8aWcLjvCcp9DnO9k0PaX+yvK98Cu8Pq1iu838/+Otq/luOrzUOb/LdtjI3/lt6Wb72VJ17u8txt1XwOj6gPX/wRhzUNrnrght2NxcfHOntCqMNTKGfr7P+zfCcOuf9jvf5U4/288ho/z/8b8OP9vzI/z/8b8OL9ezI/z/8b3M87/G/PvDa8b5weeqcj/ZEX+7u75rZ/t91U8f09F/qcq8vdV5N9fkf9ARf49FfkPVuR/piL/sxX5D1XkP1KR/7mK/I2u9EcZ1/WHcRb75/n8w/go1396ff53VuQDo+tnb+5/4pnffqex1P9/unU+pFzHO5LT9fzb+cc5Ha97p7b0zby3c/pvIX+9n++AcRLHz4jf7w9X5AOjq9zn5
fMNY6jWfcSefset6nWcz2j5fI6/kOMv5vjRHM/meC7H+3M8v0bt48544je/P/Rqbfn3/vaQ3+/95LE/UMc4USmlA322J54fGPR+9jiO36But/4VdgcDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYmonm34WFmVpKl956/ejTx0/P3VxyuFWi0fw71Zaqt56X0mM5nszxL/KD6x++crI9vpHjWppPtVRrLU9PXWvVtDWldD7tTZdTI+2+dOW1d+afPH7h2MV9775x6OqdewcAAABg4/t/AAAA//+wuA6E")
> > r0 = open(&(0x7f0000000000)='./bus\x00', 0x0, 0x0) (async)
> > r1 = open(&(0x7f0000007f80)='./bus\x00', 0x145142, 0x0)
> > cachestat(r1, &(0x7f00000002c0)={0x6}, &(0x7f0000000300), 0x0) (async)
> > r2 = syz_open_procfs(0xffffffffffffffff,
> > &(0x7f0000000100)='mountinfo\x00') (async)
> > r3 = open(&(0x7f0000000a40)='./bus\x00', 0x141a42, 0x0)
> > r4 = openat$adsp1(0xffffffffffffff9c, &(0x7f0000000040), 0x20000, 0x0) (async)
> > ptrace(0x10, 0x0) (async)
> > r5 = syz_clone(0x80000000,
> > &(0x7f0000000140)="1d7f3ef3f0b0129f8d083226510ecc0713b2af6e7901a607532fa2a7176fefdd7e66e6402ef8b579a00dd83d555182afa044f65b0ac668c2063ac33b34bb48411c11d456d584ec4140aebe97e1950ad7c4bd2bffcef175625a27a11f559e8ddb031d27c2be3a2216a1e9f87f5d68b8b0b690e67bfcc8a8ec9af998c1a8eaef215c771e45eee015e8ce9b17015da79c48a7b87459c4a88781ffd9d1ec6870c4d7220ffc6a66f7828db1297aa12e00503dde7a5c",
> > 0xb3, &(0x7f0000000080), &(0x7f00000000c0),
> > &(0x7f0000000200)="994665d2b9d5239b789d65f6ec184c1ea67003ce8f474755e439f58560c42a241a31e540479e0752cad17884d9024cb854dc6798ada62550c8264b5488daff5387419b22f01fa57630317e8c24ac37d892d70e380b7164dfaa886b72a17f08df76c1057a2268b39aad4e0e759eef1abc6e5e664e7f3057c1d70d897ba5104664e96d92c1d8bd420f78368f522169f713ed03315d69de28d77af27ec8881f54633a5dd5d54635e74ad8c896918c")
> > fcntl$setown(r4, 0x8, r5) (async)
> > sendfile(r3, r2, 0x0, 0x100800001) (async)
> > sendfile(r0, r1, 0x0, 0x1000000201003)
> >
> >
> > and see also in
> > https://gist.github.com/xrivendell7/744812c87156085e12c7f617ef237875.
> > BTW, in my personal observation the syzlang reproducer triggers the
> > bug more reliably, so use syz-execprog -repeat 0 ./repro.txt to
> > trigger this bug.
> >
> > I hope it helps.
> > Best regards!
> > xingwei Lee
>
> Hi,
>
> Please let me know if you can test one.
>
> Does this issue still appear on 6.8-rc4 or later?
Hi, sorry for the delayed response.
I tested my reproducer on Linux 6.8-rc4 with the KMSAN kernel config for
about an hour; it didn't trigger any crash or KMSAN report, only noise
like the following:
[ 315.607028][ T37] audit: type=1804 audit(1709708422.469:31293):
pid=86478 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 315.608038][T86480] 884-0[86480]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14 likely on CPU 2 (core 2,
socke)
[ 315.611270][T86480] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 320.575680][ T37] kauditd_printk_skb: 1253 callbacks suppressed
[ 320.575689][ T37] audit: type=1804 audit(1709708427.439:32130):
pid=88573 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.576419][T88575] 884-0[88575]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14
[ 320.576695][ T37] audit: type=1804 audit(1709708427.439:32131):
pid=88574 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.579042][T88575] likely on CPU 0 (core 0, socket 0)
[ 320.584184][T88575] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 320.593832][ T37] audit: type=1804 audit(1709708427.459:32132):
pid=88578 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.594549][T88580] 884-0[88580]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14 likely on CPU 1 (core 1,
socke)
[ 320.596256][ T37] audit: type=1804 audit(1709708427.459:32133):
pid=88579 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.597901][T88580] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 320.610954][ T37] audit: type=1804 audit(1709708427.479:32134):
pid=88583 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.611700][T88585] 884-0[88585]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14 likely on CPU 2 (core 2,
socke)
[ 320.613455][ T37] audit: type=1804 audit(1709708427.479:32135):
pid=88584 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 320.615959][T88585] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 320.628571][ T37] audit: type=1804 audit(1709708427.489:32136):
pid=88588 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.582663][ T37] kauditd_printk_skb: 1280 callbacks suppressed
[ 325.582673][ T37] audit: type=1804 audit(1709708432.449:32990):
pid=90727 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.583320][T90729] 884-0[90729]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14
[ 325.583460][ T37] audit: type=1804 audit(1709708432.449:32991):
pid=90728 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.585838][T90729] likely on CPU 1 (core 1, socket 0)
[ 325.590985][T90729] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 325.599620][ T37] audit: type=1804 audit(1709708432.459:32992):
pid=90732 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.601818][T90734] 884-0[90734]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14
[ 325.601827][ T37] audit: type=1804 audit(1709708432.459:32993):
pid=90733 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.603945][T90734] likely on CPU 2 (core 2, socket 0)
[ 325.607037][T90734] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 325.617928][ T37] audit: type=1804 audit(1709708432.479:32994):
pid=90737 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.618862][T90739] 884-0[90739]: segfault at 5c7ade ip
00000000005c7ade sp 00000000200001f8 error 14
[ 325.620190][ T37] audit: type=1804 audit(1709708432.479:32995):
pid=90738 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
[ 325.623238][T90739] likely on CPU 0 (core 0, socket 0)
[ 325.623803][T90739] Code: Unable to access opcode bytes at 0x5c7ab4.
[ 325.632693][ T37] audit: type=1804 audit(1709708432.499:32996):
pid=90742 uid=0 auid=0 ses=1 subj=unconfined op=invalid_pcr cause=0
It seems this issue has been fixed.
>
> I'd like to isolate that the issue is still not fixed with the latest
> fixes, but I need to do some trial and error to reestablish a testable
> (bootable) KMSAN-enabled kernel config.
>
> Thanks,
> Ryusuke Konishi
^ permalink raw reply [relevance 0%]
* Re: [PATCH 17/21] filemap: add FGP_CREAT_ONLY
2024-02-28 13:28 0% ` Paolo Bonzini
@ 2024-03-04 2:55 0% ` Xu Yilun
0 siblings, 0 replies; 200+ results
From: Xu Yilun @ 2024-03-04 2:55 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Matthew Wilcox, Yosry Ahmed, Sean Christopherson, linux-kernel,
kvm, michael.roth, isaku.yamahata, thomas.lendacky
On Wed, Feb 28, 2024 at 02:28:45PM +0100, Paolo Bonzini wrote:
> On Wed, Feb 28, 2024 at 2:15 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Tue, Feb 27, 2024 at 06:17:34PM -0800, Yosry Ahmed wrote:
> > > On Tue, Feb 27, 2024 at 6:15 PM Sean Christopherson <seanjc@google.com> wrote:
> > > >
> > > > On Tue, Feb 27, 2024, Paolo Bonzini wrote:
> > > >
> > > > This needs a changelog, and also needs to be Cc'd to someone(s) that can give it
> > > > a thumbs up.
> > >
> > > +Matthew Wilcox
> >
> > If only there were an entry in MAINTAINERS for filemap.c ...
>
> Not CCing you (or mm in general) was intentional because I first
> wanted a review of the KVM APIs; of course I wouldn't have committed
> it without an Acked-by. But yeah, not writing the changelog yet was
> pure laziness.
>
> Since you're here: KVM would like to add a ioctl to encrypt and
> install a page into guest_memfd, in preparation for launching an
> encrypted guest. For this API we want to rule out the possibility of
> overwriting a page that is already in the guest_memfd's filemap,
> therefore this API would pass FGP_CREAT_ONLY|FGP_CREAT
> into __filemap_get_folio. Do you think this is bogus...
>
> > This looks bogus to me, and if it's not bogus, it's incomplete.
>
> ... or if not, what incompleteness can you spot?
>
> Thanks,
>
> Paolo
>
> > But it's hard to judge without a commit message that describes what it's
> > supposed to mean.
> >
> > > >
> > > > > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > > ---
> > > > > include/linux/pagemap.h | 2 ++
> > > > > mm/filemap.c | 4 ++++
> > > > > 2 files changed, 6 insertions(+)
> > > > >
> > > > > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > > > > index 2df35e65557d..e8ac0b32f84d 100644
> > > > > --- a/include/linux/pagemap.h
> > > > > +++ b/include/linux/pagemap.h
> > > > > @@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> > > > > * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> > > > > * added to the page cache and the VM's LRU list. The folio is
> > > > > * returned locked.
> > > > > + * * %FGP_CREAT_ONLY - Fail if a folio is not present
^
So should be: Fail if a folio is present.
Thanks,
Yilun
> > > > > * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> > > > > * folio is already in cache. If the folio was allocated, unlock it
> > > > > * before returning so the caller can do the same dance.
> > > > > @@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
> > > > > #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> > > > > #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> > > > > #define FGP_STABLE ((__force fgf_t)0x00000080)
> > > > > +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> > > > > #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
> > > > >
> > > > > #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> > > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > > index 750e779c23db..d5107bd0cd09 100644
> > > > > --- a/mm/filemap.c
> > > > > +++ b/mm/filemap.c
> > > > > @@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > > > > folio = NULL;
> > > > > if (!folio)
> > > > > goto no_page;
> > > > > + if (fgp_flags & FGP_CREAT_ONLY) {
> > > > > + folio_put(folio);
> > > > > + return ERR_PTR(-EEXIST);
> > > > > + }
> > > > >
> > > > > if (fgp_flags & FGP_LOCK) {
> > > > > if (fgp_flags & FGP_NOWAIT) {
> > > > > --
> > > > > 2.39.0
> > > > >
> > > > >
> > > >
> >
>
>
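[Editorial note: to summarize the semantics under discussion, FGP_CREAT allocates and inserts a folio on a cache miss, while the proposed FGP_CREAT_ONLY makes a cache *hit* fail with -EEXIST, so a caller can install a page only if none exists yet. A minimal userspace model of that flag logic follows; the names, constants, and array-backed "cache" are illustrative stand-ins, not the kernel's definitions.]

```c
#include <errno.h>

/* Illustrative stand-ins for the kernel's fgf_t flags; not the real values. */
#define FGP_CREAT      0x01u
#define FGP_CREAT_ONLY 0x02u

#define NSLOTS 8
int folio_cache[NSLOTS]; /* nonzero = a folio is present at this index */

/*
 * Model of __filemap_get_folio()'s create path: returns 1 when the
 * caller gets a folio, 0 with errno set when the lookup fails.
 */
int get_folio_model(unsigned int idx, unsigned int flags)
{
	if (idx >= NSLOTS) {
		errno = EINVAL;
		return 0;
	}
	if (folio_cache[idx]) {
		if (flags & FGP_CREAT_ONLY) {
			errno = EEXIST; /* refuse an already-present folio */
			return 0;
		}
		return 1; /* ordinary lookup hit */
	}
	if (flags & FGP_CREAT) {
		folio_cache[idx] = 1; /* "allocate" and insert */
		return 1;
	}
	errno = ENOENT; /* miss without FGP_CREAT */
	return 0;
}
```

A caller like the proposed KVM guest_memfd ioctl would pass FGP_CREAT | FGP_CREAT_ONLY, so the first population of an index succeeds and any repeat attempt fails instead of silently overwriting the encrypted page.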
^ permalink raw reply [relevance 0%]
* Re: [syzbot] [nilfs?] KMSAN: uninit-value in nilfs_add_checksums_on_logs (2)
2024-03-03 5:46 3% [syzbot] [nilfs?] KMSAN: uninit-value in nilfs_add_checksums_on_logs (2) xingwei lee
@ 2024-03-03 12:45 0% ` Ryusuke Konishi
2024-03-06 7:12 0% ` xingwei lee
0 siblings, 1 reply; 200+ results
From: Ryusuke Konishi @ 2024-03-03 12:45 UTC (permalink / raw)
To: xingwei lee
Cc: syzbot+47a017c46edb25eff048, linux-fsdevel, linux-kernel,
linux-nilfs, syzkaller-bugs
On Sun, Mar 3, 2024 at 2:46 PM xingwei lee wrote:
>
> Hello, I reproduced this bug.
>
> If you fix this issue, please add the following tag to the commit:
> Reported-by: xingwei lee <xrivendell7@gmail.com>
>
> Notice: I use the same config with syzbot dashboard.
> kernel version: e326df53af0021f48a481ce9d489efda636c2dc6
> kernel config: https://syzkaller.appspot.com/x/.config?x=e0c7078a6b901aa3
> with KMSAN enabled
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
>
> =====================================================
> BUG: KMSAN: uninit-value in crc32_body lib/crc32.c:110 [inline]
> BUG: KMSAN: uninit-value in crc32_le_generic lib/crc32.c:179 [inline]
> BUG: KMSAN: uninit-value in crc32_le_base+0x475/0xe70 lib/crc32.c:197
> crc32_body lib/crc32.c:110 [inline]
> crc32_le_generic lib/crc32.c:179 [inline]
> crc32_le_base+0x475/0xe70 lib/crc32.c:197
> nilfs_segbuf_fill_in_data_crc fs/nilfs2/segbuf.c:224 [inline]
> nilfs_add_checksums_on_logs+0xcb2/0x10a0 fs/nilfs2/segbuf.c:327
> nilfs_segctor_do_construct+0xad1d/0xf640 fs/nilfs2/segment.c:2112
> nilfs_segctor_construct+0x1fd/0xf30 fs/nilfs2/segment.c:2415
> nilfs_segctor_thread_construct fs/nilfs2/segment.c:2523 [inline]
> nilfs_segctor_thread+0x551/0x1350 fs/nilfs2/segment.c:2606
> kthread+0x422/0x5a0 kernel/kthread.c:388
> ret_from_fork+0x7f/0xa0 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
> Uninit was created at:
> __alloc_pages+0x9a8/0xe00 mm/page_alloc.c:4591
> alloc_pages_mpol+0x6b3/0xaa0 mm/mempolicy.c:2133
> alloc_pages mm/mempolicy.c:2204 [inline]
> folio_alloc+0x218/0x3f0 mm/mempolicy.c:2211
> filemap_alloc_folio+0xb8/0x4b0 mm/filemap.c:974
> __filemap_get_folio+0xa8a/0x1910 mm/filemap.c:1918
> pagecache_get_page+0x56/0x1d0 mm/folio-compat.c:99
> grab_cache_page_write_begin+0x61/0x80 mm/folio-compat.c:109
> block_write_begin+0x5a/0x4a0 fs/buffer.c:2223
> [... quoted KMSAN report, repro.c and repro.txt trimmed; they duplicate the original message archived below ...]
Hi,
Please let me know if you can test this.
Does this issue still appear on 6.8-rc4 or later?
I'd like to verify whether the issue is still unfixed with the latest
fixes, but I need to do some trial and error to reestablish a testable
(bootable) KMSAN-enabled kernel config.
Thanks,
Ryusuke Konishi
^ permalink raw reply [relevance 0%]
* Re: [syzbot] [nilfs?] KMSAN: uninit-value in nilfs_add_checksums_on_logs (2)
@ 2024-03-03 5:46 3% xingwei lee
2024-03-03 12:45 0% ` Ryusuke Konishi
0 siblings, 1 reply; 200+ results
From: xingwei lee @ 2024-03-03 5:46 UTC (permalink / raw)
To: syzbot+47a017c46edb25eff048
Cc: konishi.ryusuke, linux-fsdevel, linux-kernel, linux-nilfs,
syzkaller-bugs
Hello, I reproduced this bug.
If you fix this issue, please add the following tag to the commit:
Reported-by: xingwei lee <xrivendell7@gmail.com>
Notice: I use the same config as the syzbot dashboard.
kernel version: e326df53af0021f48a481ce9d489efda636c2dc6
kernel config: https://syzkaller.appspot.com/x/.config?x=e0c7078a6b901aa3
with KMSAN enabled
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
=====================================================
BUG: KMSAN: uninit-value in crc32_body lib/crc32.c:110 [inline]
BUG: KMSAN: uninit-value in crc32_le_generic lib/crc32.c:179 [inline]
BUG: KMSAN: uninit-value in crc32_le_base+0x475/0xe70 lib/crc32.c:197
crc32_body lib/crc32.c:110 [inline]
crc32_le_generic lib/crc32.c:179 [inline]
crc32_le_base+0x475/0xe70 lib/crc32.c:197
nilfs_segbuf_fill_in_data_crc fs/nilfs2/segbuf.c:224 [inline]
nilfs_add_checksums_on_logs+0xcb2/0x10a0 fs/nilfs2/segbuf.c:327
nilfs_segctor_do_construct+0xad1d/0xf640 fs/nilfs2/segment.c:2112
nilfs_segctor_construct+0x1fd/0xf30 fs/nilfs2/segment.c:2415
nilfs_segctor_thread_construct fs/nilfs2/segment.c:2523 [inline]
nilfs_segctor_thread+0x551/0x1350 fs/nilfs2/segment.c:2606
kthread+0x422/0x5a0 kernel/kthread.c:388
ret_from_fork+0x7f/0xa0 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
Uninit was created at:
__alloc_pages+0x9a8/0xe00 mm/page_alloc.c:4591
alloc_pages_mpol+0x6b3/0xaa0 mm/mempolicy.c:2133
alloc_pages mm/mempolicy.c:2204 [inline]
folio_alloc+0x218/0x3f0 mm/mempolicy.c:2211
filemap_alloc_folio+0xb8/0x4b0 mm/filemap.c:974
__filemap_get_folio+0xa8a/0x1910 mm/filemap.c:1918
pagecache_get_page+0x56/0x1d0 mm/folio-compat.c:99
grab_cache_page_write_begin+0x61/0x80 mm/folio-compat.c:109
block_write_begin+0x5a/0x4a0 fs/buffer.c:2223
nilfs_write_begin+0x107/0x220 fs/nilfs2/inode.c:261
generic_perform_write+0x417/0xce0 mm/filemap.c:3927
__generic_file_write_iter+0x233/0x4b0 mm/filemap.c:4022
generic_file_write_iter+0x10e/0x600 mm/filemap.c:4048
__kernel_write_iter+0x365/0xa00 fs/read_write.c:523
dump_emit_page fs/coredump.c:888 [inline]
dump_user_range+0x5d7/0xe00 fs/coredump.c:915
elf_core_dump+0x5847/0x5fa0 fs/binfmt_elf.c:2077
do_coredump+0x3bb6/0x4e60 fs/coredump.c:764
get_signal+0x28f7/0x30b0 kernel/signal.c:2890
arch_do_signal_or_restart+0x5e/0xda0 arch/x86/kernel/signal.c:309
exit_to_user_mode_loop kernel/entry/common.c:105 [inline]
exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
irqentry_exit_to_user_mode+0xaa/0x160 kernel/entry/common.c:225
irqentry_exit+0x16/0x40 kernel/entry/common.c:328
exc_page_fault+0x246/0x6f0 arch/x86/mm/fault.c:1566
asm_exc_page_fault+0x2b/0x30 arch/x86/include/asm/idtentry.h:570
CPU: 1 PID: 11178 Comm: segctord Not tainted 6.7.0-00562-g9f8413c4a66f-dirty #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.16.2-debian-1.16.2-1 04/01/2014
=====================================================
=* repro.c =*
#define _GNU_SOURCE
#include <dirent.h>
#include <endian.h>
#include <errno.h>
#include <fcntl.h>
#include <sched.h>
#include <signal.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mount.h>
#include <sys/prctl.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>
#include <linux/capability.h>
static void sleep_ms(uint64_t ms)
{
usleep(ms * 1000);
}
static uint64_t current_time_ms(void)
{
struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts))
exit(1);
return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
}
static bool write_file(const char* file, const char* what, ...)
{
char buf[1024];
va_list args;
va_start(args, what);
vsnprintf(buf, sizeof(buf), what, args);
va_end(args);
buf[sizeof(buf) - 1] = 0;
int len = strlen(buf);
int fd = open(file, O_WRONLY | O_CLOEXEC);
if (fd == -1)
return false;
if (write(fd, buf, len) != len) {
int err = errno;
close(fd);
errno = err;
return false;
}
close(fd);
return true;
}
#define MAX_FDS 30
static void setup_common()
{
if (mount(0, "/sys/fs/fuse/connections", "fusectl", 0, 0)) {
}
}
static void setup_binderfs()
{
if (mkdir("/dev/binderfs", 0777)) {
}
if (mount("binder", "/dev/binderfs", "binder", 0, NULL)) {
}
if (symlink("/dev/binderfs", "./binderfs")) {
}
}
static void loop();
static void sandbox_common()
{
prctl(PR_SET_PDEATHSIG, SIGKILL, 0, 0, 0);
setsid();
struct rlimit rlim;
rlim.rlim_cur = rlim.rlim_max = (200 << 20);
setrlimit(RLIMIT_AS, &rlim);
rlim.rlim_cur = rlim.rlim_max = 32 << 20;
setrlimit(RLIMIT_MEMLOCK, &rlim);
rlim.rlim_cur = rlim.rlim_max = 136 << 20;
setrlimit(RLIMIT_FSIZE, &rlim);
rlim.rlim_cur = rlim.rlim_max = 1 << 20;
setrlimit(RLIMIT_STACK, &rlim);
rlim.rlim_cur = rlim.rlim_max = 128 << 20;
setrlimit(RLIMIT_CORE, &rlim);
rlim.rlim_cur = rlim.rlim_max = 256;
setrlimit(RLIMIT_NOFILE, &rlim);
if (unshare(CLONE_NEWNS)) {
}
if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL)) {
}
if (unshare(CLONE_NEWIPC)) {
}
if (unshare(0x02000000)) {
}
if (unshare(CLONE_NEWUTS)) {
}
if (unshare(CLONE_SYSVSEM)) {
}
typedef struct {
const char* name;
const char* value;
} sysctl_t;
static const sysctl_t sysctls[] = {
{"/proc/sys/kernel/shmmax", "16777216"},
{"/proc/sys/kernel/shmall", "536870912"},
{"/proc/sys/kernel/shmmni", "1024"},
{"/proc/sys/kernel/msgmax", "8192"},
{"/proc/sys/kernel/msgmni", "1024"},
{"/proc/sys/kernel/msgmnb", "1024"},
{"/proc/sys/kernel/sem", "1024 1048576 500 1024"},
};
unsigned i;
for (i = 0; i < sizeof(sysctls) / sizeof(sysctls[0]); i++)
write_file(sysctls[i].name, sysctls[i].value);
}
static int wait_for_loop(int pid)
{
if (pid < 0)
exit(1);
int status = 0;
while (waitpid(-1, &status, __WALL) != pid) {
}
return WEXITSTATUS(status);
}
static void drop_caps(void)
{
struct __user_cap_header_struct cap_hdr = {};
struct __user_cap_data_struct cap_data[2] = {};
cap_hdr.version = _LINUX_CAPABILITY_VERSION_3;
cap_hdr.pid = getpid();
if (syscall(SYS_capget, &cap_hdr, &cap_data))
exit(1);
const int drop = (1 << CAP_SYS_PTRACE) | (1 << CAP_SYS_NICE);
cap_data[0].effective &= ~drop;
cap_data[0].permitted &= ~drop;
cap_data[0].inheritable &= ~drop;
if (syscall(SYS_capset, &cap_hdr, &cap_data))
exit(1);
}
static int do_sandbox_none(void)
{
if (unshare(CLONE_NEWPID)) {
}
int pid = fork();
if (pid != 0)
return wait_for_loop(pid);
setup_common();
sandbox_common();
drop_caps();
if (unshare(CLONE_NEWNET)) {
}
write_file("/proc/sys/net/ipv4/ping_group_range", "0 65535");
setup_binderfs();
loop();
exit(1);
}
static void kill_and_wait(int pid, int* status)
{
kill(-pid, SIGKILL);
kill(pid, SIGKILL);
for (int i = 0; i < 100; i++) {
if (waitpid(-1, status, WNOHANG | __WALL) == pid)
return;
usleep(1000);
}
DIR* dir = opendir("/sys/fs/fuse/connections");
if (dir) {
for (;;) {
struct dirent* ent = readdir(dir);
if (!ent)
break;
if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
continue;
char abort[300];
snprintf(abort, sizeof(abort), "/sys/fs/fuse/connections/%s/abort",
ent->d_name);
int fd = open(abort, O_WRONLY);
if (fd == -1) {
continue;
}
if (write(fd, abort, 1) < 0) {
}
close(fd);
}
closedir(dir);
} else {
}
while (waitpid(-1, status, __WALL) != pid) {
}
}
static void setup_test()
{
prctl(PR_SET_PDEATHSIG, SIGKILL, 0, 0, 0);
setpgrp();
write_file("/proc/self/oom_score_adj", "1000");
}
static void close_fds()
{
for (int fd = 3; fd < MAX_FDS; fd++)
close(fd);
}
#define USLEEP_FORKED_CHILD (3 * 50 * 1000)
static long handle_clone_ret(long ret)
{
if (ret != 0) {
return ret;
}
usleep(USLEEP_FORKED_CHILD);
syscall(__NR_exit, 0);
while (1) {
}
}
static long syz_clone(volatile long flags, volatile long stack,
volatile long stack_len, volatile long ptid,
volatile long ctid, volatile long tls)
{
long sp = (stack + stack_len) & ~15;
long ret = (long)syscall(__NR_clone, flags & ~CLONE_VM, sp, ptid, ctid, tls);
return handle_clone_ret(ret);
}
static void execute_one(void);
#define WAIT_FLAGS __WALL
static void loop(void)
{
int iter = 0;
for (;; iter++) {
int pid = fork();
if (pid < 0)
exit(1);
if (pid == 0) {
setup_test();
execute_one();
close_fds();
exit(0);
}
int status = 0;
uint64_t start = current_time_ms();
for (;;) {
if (waitpid(-1, &status, WNOHANG | WAIT_FLAGS) == pid)
break;
sleep_ms(1);
if (current_time_ms() - start < 5000)
continue;
kill_and_wait(pid, &status);
break;
}
}
}
void execute_one(void)
{
syz_clone(/*flags=CLONE_IO*/ 0x80000000, /*stack=*/0x20000140,
/*stack_len=*/0, /*parentid=*/0, /*childtid=*/0, /*tls=*/0);
}
int main(void)
{
syscall(__NR_mmap, /*addr=*/0x1ffff000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x20000000ul, /*len=*/0x1000000ul,
/*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
syscall(__NR_mmap, /*addr=*/0x21000000ul, /*len=*/0x1000ul, /*prot=*/0ul,
/*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/-1,
/*offset=*/0ul);
do_sandbox_none();
return 0;
}
Remember to run this repro.txt with the command: syz-execprog -repeat
0 ./repro.txt and wait for about 1 minute; the bug triggers very
reliably.
=* repro.txt =*
syz_mount_image$nilfs2(&(0x7f0000000000),
&(0x7f0000000a80)='./file0\x00', 0x808, &(0x7f00000000c0)=ANY=[], 0x1,
0xa4a, &(0x7f0000001540)="$eJzs3U2MW0cdAPDx7nrTfJQ4JaFLGtqEQls+uttslvARQVI1QiJqKm6VKi5RmpaINCBSCVr1kOTEjVZVuPIhTr1UgJDoBUU9calEI1VIPRUOHIiCVIkDFJJF8c547X9sPXuzWa/Xv580O543Y8887/Pz83tvZhIwtiaafxcWZmopXXrr9aP/eOjvm28uOdwq0Wj+nWpL1VNKtZyeCq/3weRSfP3DV052i2tpvvm3pNNT11rP3ZpSOp/2psupkXZfuvLaO/NPHr9w7OK+d984dPXOrD0AAIyXb18+tLDrr3++b8dHb95/JG1qLS/H542c3paP+4/kA/9y/D+ROtO1ttBuOpSbymEilJvsUq69nnooN9Wj/unwuvUe5TZV1D/ZtqzbesMoK9txI9UmZjvSExOzs0u/yVPzd/10bfbs6TPPnRtSQ4FV968HUkp7RygcXgdtWGFYXAdtGMlwZB20YYOGxe3D3gMBLInXC29xPp5ZuD2tV5vqr/5rj090fz6sgrXe/tU/WvX/+oI9Dqtno25NZb3K52hbTsfrCPH+pd6fv3ilo3NpvB5R77Odva4jjMr1hV7tnFzjdqxUr/bH7WKj+nqOy/vwjZDf/vmJ/9NR+R8D3f171M7/C8K4h7R6r7U45P0PsH7F++YWs5If7+uL+Zsq8u+qyN9ckb+lIn9rRT6Ms9+9+NP0am35d378TT/o+fBynu3uHH9swPbE85GD1h/v+x3U7dYf7yeG9ewPJ54+9ZVnn7mydP9/rbX938jbe/m50cifrcu5QDlfGM+rt+79b3TWM9Gj3D2hPXd3Kd98vLOzXG3n8uuktv3MLe2Y6Xze9l7l9nSWa4Rym3O4K7Q3Hp9sCc8rxx9lv1rer6mwvvWwHtOhHWW/siPHsR2wEmV77HX/f9k+Z1K99tzpM6cey+mynf5psr7p5vL9a9xu4Pb12/9nJnX2/9nWWl6faN8vbF9eXmvfLzTC8vkeyw/kdPme++7k5uby2ZPfP/Psaq88jLlzL738vRNnzpz6oQcrfvDN9dEMDzxYxQfD3jMBd9rciy/8YO7cSy8/evqFE8+fev7U2QMHDx6Ynz/41QMLc83j+rn2o3tgI1n+0h92SwAAAAAAAAAAAIB+/ejY0Svvvf3l95f6/y/3/yv9/8udv6X//09C///YT770gy/9AHd0yW+WCQOsTody9Rw+Htq7M9SzKzzvEzluzeOX+/+X6uK4rqU994blcfzeUi4MJ3DLeCnTYQySOF/gp3N8Mce/SjBEtc3dF+e4anzrsq2X8SmMSzGayv+tbA1lHJPS/7vruE5t/+wda9BGVt9adCcc9joC3f3T+N+CMLZhcbHXLB79zmADsDqGPf9nOe9Z4rN//NZdN0Mpdu3xzv1lHL8UBvGX9zrT633+SfVvrPk/W/Pf9b3/CzPmNVZW739+fvX9tmrT7n7rj+tfxoHeOVj9H+X6y9o8nPqrf/GXof54QahP/w31b+mz/lvWf8/K6v9frr+8bY882G/9Sy2uTXS2I543Ltf/4nnj4npY/zK258Drv8KJGm/k+mGcjco8s4MK8/+2DtpXPv9vdn515//tJd6H8aWcLjvCcp9DnO9k0PaX+yvK98Cu8Pq1iu838/+Otq/luOrzUOb/LdtjI3/lt6Wb72VJ17u8txt1XwOj6gPX/wRhzUNrnrght2NxcfHOntCqMNTKGfr7P+zfCcOuf9jvf5U4/288ho/z/8b8OP9vzI/z/8b8OL9ezI/z/8b3M87/G/PvDa8b5weeqcj/ZEX+7u75rZ/t91U8f09F/qcq8vdV5N9fkf9ARf49FfkPVuR/piL/sxX5D1XkP1KR/7mK/I2u9EcZ1/WHcRb75/n8w/go1396ff53VuQDo+tnb+5/4pnffqex1P9/unU+pFzHO5LT9fzb+cc5Ha97p7b0zby3c/pvIX+9n++AcRLHz4jf7w9X5AOjq9zn5fMNY
6jWfcSefset6nWcz2j5fI6/kOMv5vjRHM/meC7H+3M8v0bt48544je/P/Rqbfn3/vaQ3+/95LE/UMc4USmlA322J54fGPR+9jiO36But/4VdgcDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYmonm34WFmVpKl956/ejTx0/P3VxyuFWi0fw71Zaqt56X0mM5nszxL/KD6x++crI9vpHjWppPtVRrLU9PXWvVtDWldD7tTZdTI+2+dOW1d+afPH7h2MV9775x6OqdewcAAABg4/t/AAAA//+wuA6E")
r0 = open(&(0x7f0000000000)='./bus\x00', 0x0, 0x0) (async)
r1 = open(&(0x7f0000007f80)='./bus\x00', 0x145142, 0x0)
cachestat(r1, &(0x7f00000002c0)={0x6}, &(0x7f0000000300), 0x0) (async)
r2 = syz_open_procfs(0xffffffffffffffff,
&(0x7f0000000100)='mountinfo\x00') (async)
r3 = open(&(0x7f0000000a40)='./bus\x00', 0x141a42, 0x0)
r4 = openat$adsp1(0xffffffffffffff9c, &(0x7f0000000040), 0x20000, 0x0) (async)
ptrace(0x10, 0x0) (async)
r5 = syz_clone(0x80000000,
&(0x7f0000000140)="1d7f3ef3f0b0129f8d083226510ecc0713b2af6e7901a607532fa2a7176fefdd7e66e6402ef8b579a00dd83d555182afa044f65b0ac668c2063ac33b34bb48411c11d456d584ec4140aebe97e1950ad7c4bd2bffcef175625a27a11f559e8ddb031d27c2be3a2216a1e9f87f5d68b8b0b690e67bfcc8a8ec9af998c1a8eaef215c771e45eee015e8ce9b17015da79c48a7b87459c4a88781ffd9d1ec6870c4d7220ffc6a66f7828db1297aa12e00503dde7a5c",
0xb3, &(0x7f0000000080), &(0x7f00000000c0),
&(0x7f0000000200)="994665d2b9d5239b789d65f6ec184c1ea67003ce8f474755e439f58560c42a241a31e540479e0752cad17884d9024cb854dc6798ada62550c8264b5488daff5387419b22f01fa57630317e8c24ac37d892d70e380b7164dfaa886b72a17f08df76c1057a2268b39aad4e0e759eef1abc6e5e664e7f3057c1d70d897ba5104664e96d92c1d8bd420f78368f522169f713ed03315d69de28d77af27ec8881f54633a5dd5d54635e74ad8c896918c")
fcntl$setown(r4, 0x8, r5) (async)
sendfile(r3, r2, 0x0, 0x100800001) (async)
sendfile(r0, r1, 0x0, 0x1000000201003)
and see also in
https://gist.github.com/xrivendell7/744812c87156085e12c7f617ef237875.
BTW, from my personal observation, the syzlang reproducer can
trigger the bug more reliably, so try using syz-execprog -repeat 0
./repro.txt to trigger this bug.
I hope it helps.
Best regards!
xingwei Lee
^ permalink raw reply [relevance 3%]
* [PATCH v2 04/13] filemap: use mapping_min_order while allocating folios
2024-03-01 16:44 6% ` [PATCH v2 01/13] mm: Support order-1 folios in the page cache Pankaj Raghav (Samsung)
@ 2024-03-01 16:44 16% ` Pankaj Raghav (Samsung)
1 sibling, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-03-01 16:44 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: djwong, mcgrof, linux-mm, hare, david, akpm, gost.dev,
linux-kernel, chandan.babu, willy, Pankaj Raghav
From: Pankaj Raghav <p.raghav@samsung.com>
filemap_create_folio() and do_read_cache_folio() were always allocating
folios of order 0. __filemap_get_folio() was trying to allocate
higher-order folios when fgp_flags had a higher-order hint set, but it
would fall back to an order-0 folio if the higher-order memory
allocation failed.
As we bring in the notion of mapping_min_order, make sure these functions
allocate folios of at least mapping_min_order, as we need to guarantee
that in the page cache.
Add an additional VM_BUG_ON() in __filemap_add_folio() to catch errors
where we add folios that have an order less than min_order.
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Acked-by: Darrick J. Wong <djwong@kernel.org>
---
mm/filemap.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 96fe5c7fe094..3e621c6344f7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -849,6 +849,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
mapping_set_update(&xas, mapping);
if (!huge) {
@@ -1886,8 +1888,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
+ index = mapping_align_start_index(mapping, index);
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
gfp |= __GFP_WRITE;
@@ -1912,8 +1916,11 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
+ if (order < min_order)
+ order = min_order;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
+
folio = filemap_alloc_folio(alloc_gfp, order);
if (!folio)
continue;
@@ -1927,7 +1934,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2424,7 +2431,8 @@ static int filemap_create_folio(struct file *file,
unsigned int min_order = mapping_min_folio_order(mapping);
pgoff_t index;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ min_order);
if (!folio)
return -ENOMEM;
@@ -3666,7 +3674,8 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
err = filemap_add_folio(mapping, folio, index, gfp);
--
2.43.0
^ permalink raw reply related [relevance 16%]
* [PATCH v2 01/13] mm: Support order-1 folios in the page cache
@ 2024-03-01 16:44 6% ` Pankaj Raghav (Samsung)
2024-03-01 16:44 16% ` [PATCH v2 04/13] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
1 sibling, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-03-01 16:44 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: djwong, mcgrof, linux-mm, hare, david, akpm, gost.dev,
linux-kernel, chandan.babu, willy
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/huge_mm.h | 7 +++++--
mm/filemap.c | 2 --
mm/huge_memory.c | 23 ++++++++++++++++++-----
mm/internal.h | 4 +---
mm/readahead.c | 3 ---
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
bool can_split_folio(struct folio *folio, int *pextra_pins);
int split_huge_page_to_list(struct page *page, struct list_head *list);
static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
return 0;
}
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+ return folio;
+}
#define transparent_hugepage_flags 0UL
diff --git a/mm/filemap.c b/mm/filemap.c
index 750e779c23db..2b00442b9d19 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
- if (order == 1)
- order = 0;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
}
#endif
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
{
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
- INIT_LIST_HEAD(&folio->_deferred_list);
+ if (!folio || !folio_test_large(folio))
+ return folio;
+ if (folio_order(folio) > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
+
+ return folio;
}
static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
- if (!list_empty(&folio->_deferred_list)) {
+ if (folio_order(folio) > 1 &&
+ !list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
list_del(&folio->_deferred_list);
}
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
struct deferred_split *ds_queue;
unsigned long flags;
+ if (folio_order(folio) <= 1)
+ return;
+
/*
* At this point, there is no one trying to add the folio to
* deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
#endif
unsigned long flags;
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+ /*
+ * Order 1 folios have no space for a deferred list, but we also
+ * won't waste much memory by not adding them to the deferred list.
+ */
+ if (folio_order(folio) <= 1)
+ return;
/*
* The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_order(folio) > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ return folio_prep_large_rmappable(folio);
}
static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
/* Don't allocate pages past EOF */
while (index + (1UL << order) - 1 > limit)
order--;
- /* THP machinery does not support order-1 */
- if (order == 1)
- order = 0;
err = ra_alloc_folio(ractl, index, mark, order, gfp);
if (err)
break;
--
2.43.0
^ permalink raw reply related [relevance 6%]
* Re: [PATCH 2/2] mm/readahead: limit sync readahead while too many active refault
@ 2024-02-29 9:01 6% ` Liu Shixin
0 siblings, 0 replies; 200+ results
From: Liu Shixin @ 2024-02-29 9:01 UTC (permalink / raw)
To: Jan Kara
Cc: Alexander Viro, Christian Brauner, Matthew Wilcox, Andrew Morton,
linux-fsdevel, linux-kernel, linux-mm
On 2024/2/2 17:02, Liu Shixin wrote:
>
> On 2024/2/2 1:31, Jan Kara wrote:
>> On Thu 01-02-24 18:41:30, Liu Shixin wrote:
>>> On 2024/2/1 17:37, Jan Kara wrote:
>>>> On Thu 01-02-24 18:08:35, Liu Shixin wrote:
>>>>> When the pagefault is not for write and the refault distance is close,
>>>>> the page will be activated directly. If there are too many such pages in
>>>>> a file, that means the pages may be reclaimed immediately.
>>>>> In such a situation read-ahead has no positive effect, since it will
>>>>> only waste IO. So collect the number of such pages, and when the number
>>>>> is too large, stop bothering with read-ahead for a while until it
>>>>> decreases automatically.
>>>>>
>>>>> Define 'too large' as 10000 empirically, which solves the problem
>>>>> and is not affected by the occasional active refault.
>>>>>
>>>>> Signed-off-by: Liu Shixin <liushixin2@huawei.com>
>>>> So I'm not convinced this new logic is needed. We already have
>>>> ra->mmap_miss which gets incremented when a page fault has to read the page
>>>> (and decremented when a page fault finds the page already in cache). This
>>>> should already work to detect thrashing as well, shouldn't it? If it does
>>>> not, why?
>>>>
>>>> Honza
>>> ra->mmap_miss doesn't help: it is incremented only once in
>>> do_sync_mmap_readahead() and then decremented once for every page in
>>> filemap_map_pages(). So in this scenario it can't exceed MMAP_LOTSAMISS.
>> I see, OK. But that's a (longstanding) bug in how mmap_miss is handled. Can
>> you please test whether the attached patches fix the thrashing for you? At
>> least now I can see mmap_miss properly incrementing when we are hitting
>> uncached pages... Thanks!
>>
>> Honza
> The patch doesn't seem to have much effect. I will try to analyze why it doesn't work.
> The attached file is my testcase.
>
> Thanks,
I think I figured out why mmap_miss doesn't work. After do_sync_mmap_readahead(), there is a
__filemap_get_folio() to make sure the page is ready. Then it is ready in filemap_map_pages() too,
so mmap_miss will be decremented once. mmap_miss goes back to 0 and can't stop read-ahead.
Overall, I don't think mmap_miss can solve this problem.
^ permalink raw reply [relevance 6%]
* [dhowells-fs:cifs-netfs] [cifs] a05396635d: filebench.sum_operations/s -98.8% regression
@ 2024-02-29 7:27 3% kernel test robot
0 siblings, 0 replies; 200+ results
From: kernel test robot @ 2024-02-29 7:27 UTC (permalink / raw)
To: David Howells
Cc: oe-lkp, lkp, Steve French, Shyam Prasad N, Rohith Surabattula,
Jeff Layton, netfs, linux-fsdevel, linux-cifs, samba-technical,
ying.huang, feng.tang, fengwei.yin, oliver.sang
Hello,
kernel test robot noticed a -98.8% regression of filebench.sum_operations/s on:
commit: a05396635dc359f7047b4e35d2fdc66cd79bc3ee ("cifs: Cut over to using netfslib")
https://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git cifs-netfs
testcase: filebench
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
parameters:
disk: 1HDD
fs: btrfs
fs2: cifs
test: randomrw.f
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202402291526.7c11cd51-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240229/202402291526.7c11cd51-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/fs2/fs/kconfig/rootfs/tbox_group/test/testcase:
gcc-12/performance/1HDD/cifs/btrfs/x86_64-rhel-8.3/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp6/randomrw.f/filebench
commit:
f016508de3 ("cifs: Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c")
a05396635d ("cifs: Cut over to using netfslib")
f016508de30bff78 a05396635dc359f7047b4e35d2f
---------------- ---------------------------
%stddev %change %stddev
\ | \
366.67 ± 56% +327.0% 1565 ± 20% perf-c2c.HITM.local
3.27 ± 7% +0.8 4.10 mpstat.cpu.all.irq%
1.28 ± 29% -1.1 0.14 ± 8% mpstat.cpu.all.sys%
0.15 ± 14% -0.1 0.06 ± 2% mpstat.cpu.all.usr%
67201 -42.4% 38722 ± 2% vmstat.io.bo
6.43 -13.8% 5.55 ± 2% vmstat.memory.buff
2.04 ± 40% -80.5% 0.40 ± 9% vmstat.procs.r
770257 ± 81% -77.9% 170216 ± 44% numa-meminfo.node0.AnonPages
2815098 ± 48% -89.3% 301906 ± 33% numa-meminfo.node0.AnonPages.max
774717 ± 81% -77.0% 177831 ± 43% numa-meminfo.node0.Inactive(anon)
4906919 ± 30% -88.5% 562079 ±127% numa-meminfo.node1.Active
67755 ± 28% -27.0% 49485 ± 4% numa-meminfo.node1.Active(anon)
4839164 ± 31% -89.4% 512593 ±139% numa-meminfo.node1.Active(file)
2429235 ± 21% -90.9% 221192 ±157% numa-meminfo.node1.Dirty
102.17 ± 12% -41.8% 59.50 turbostat.Avg_MHz
1646 ± 6% -40.8% 975.00 ± 2% turbostat.Bzy_MHz
207440 ± 55% -71.9% 58371 ± 33% turbostat.C1
908208 ± 58% +149.2% 2263209 ± 7% turbostat.C1E
89634 ± 66% -85.5% 13012 ± 4% turbostat.POLL
177.44 -17.9% 145.73 turbostat.PkgWatt
196.62 -1.7% 193.26 turbostat.RAMWatt
6445 ± 13% -98.8% 75.82 ± 7% filebench.sum_bytes_mb/s
49504320 ± 13% -98.8% 582393 ± 7% filebench.sum_operations
825011 ± 13% -98.8% 9705 ± 7% filebench.sum_operations/s
446193 ± 15% -98.9% 4924 ± 7% filebench.sum_reads/s
0.00 +10050.0% 0.20 ± 5% filebench.sum_time_ms/op
378818 ± 13% -98.7% 4781 ± 7% filebench.sum_writes/s
36223497 ± 17% -100.0% 0.00 filebench.time.file_system_outputs
72.33 -95.4% 3.33 ± 14% filebench.time.percent_of_cpu_this_job_got
102.51 ± 3% -93.7% 6.43 ± 12% filebench.time.system_time
19.94 ± 18% -98.6% 0.28 ± 9% filebench.time.user_time
66626 ± 95% +778.0% 584989 ± 7% filebench.time.voluntary_context_switches
192593 ± 81% -77.9% 42554 ± 44% numa-vmstat.node0.nr_anon_pages
193708 ± 81% -77.0% 44457 ± 43% numa-vmstat.node0.nr_inactive_anon
193708 ± 81% -77.0% 44457 ± 43% numa-vmstat.node0.nr_zone_inactive_anon
16939 ± 28% -27.0% 12371 ± 4% numa-vmstat.node1.nr_active_anon
1209717 ± 31% -89.4% 128098 ±139% numa-vmstat.node1.nr_active_file
4365951 ± 26% -89.7% 451077 ±145% numa-vmstat.node1.nr_dirtied
607224 ± 21% -90.9% 55289 ±157% numa-vmstat.node1.nr_dirty
4364476 ± 26% -74.1% 1130525 ±127% numa-vmstat.node1.nr_written
16939 ± 28% -27.0% 12371 ± 4% numa-vmstat.node1.nr_zone_active_anon
1209717 ± 31% -89.4% 128098 ±139% numa-vmstat.node1.nr_zone_active_file
613248 ± 21% -90.7% 57212 ±156% numa-vmstat.node1.nr_zone_write_pending
8414844 -78.7% 1793875 ± 5% meminfo.Active
70870 ± 26% -25.0% 53122 meminfo.Active(anon)
8343973 -79.1% 1740752 ± 5% meminfo.Active(file)
1060561 ± 62% -65.4% 366619 meminfo.AnonPages
1462800 ± 46% -56.0% 643519 meminfo.Committed_AS
3999161 -76.2% 949812 meminfo.Dirty
2993310 ± 21% +195.5% 8844258 meminfo.Inactive
1084787 ± 60% -64.3% 387289 meminfo.Inactive(anon)
1908522 ± 7% +343.1% 8456969 meminfo.Inactive(file)
9353 ± 12% -34.9% 6085 meminfo.PageTables
98169 ± 19% -21.2% 77321 meminfo.Shmem
61703 ± 49% -51.3% 30022 ± 4% meminfo.Writeback
34412 ± 47% -87.4% 4346 ± 23% sched_debug.cfs_rq:/.avg_vruntime.avg
183836 ± 22% -86.3% 25138 ± 11% sched_debug.cfs_rq:/.avg_vruntime.max
6982 ± 92% -95.9% 287.46 ± 34% sched_debug.cfs_rq:/.avg_vruntime.min
27635 ± 19% -81.3% 5174 ± 17% sched_debug.cfs_rq:/.avg_vruntime.stddev
34412 ± 47% -87.4% 4346 ± 23% sched_debug.cfs_rq:/.min_vruntime.avg
183836 ± 22% -86.3% 25138 ± 11% sched_debug.cfs_rq:/.min_vruntime.max
6982 ± 92% -95.9% 287.46 ± 34% sched_debug.cfs_rq:/.min_vruntime.min
27635 ± 19% -81.3% 5174 ± 17% sched_debug.cfs_rq:/.min_vruntime.stddev
946.39 ± 4% -10.7% 844.94 ± 4% sched_debug.cfs_rq:/.runnable_avg.max
209.58 ± 5% -10.6% 187.32 ± 6% sched_debug.cfs_rq:/.runnable_avg.stddev
938.61 ± 4% -10.4% 840.78 ± 5% sched_debug.cfs_rq:/.util_avg.max
208.81 ± 5% -10.7% 186.45 ± 7% sched_debug.cfs_rq:/.util_avg.stddev
24.32 ± 22% -45.7% 13.22 ± 35% sched_debug.cfs_rq:/.util_est.avg
693.56 ± 2% -38.8% 424.28 ± 11% sched_debug.cfs_rq:/.util_est.max
111.18 ± 10% -44.9% 61.23 ± 23% sched_debug.cfs_rq:/.util_est.stddev
134421 ± 81% +203.9% 408479 ± 22% sched_debug.cpu.nr_switches.max
2172 ± 96% -80.2% 430.94 ± 4% sched_debug.cpu.nr_switches.min
18269 ± 72% +202.2% 55214 ± 12% sched_debug.cpu.nr_switches.stddev
73.17 ± 54% -67.4% 23.89 ± 26% sched_debug.cpu.nr_uninterruptible.max
-41.61 -55.1% -18.67 sched_debug.cpu.nr_uninterruptible.min
13.17 ± 38% -59.6% 5.32 ± 11% sched_debug.cpu.nr_uninterruptible.stddev
17719 ± 26% -25.0% 13281 proc-vmstat.nr_active_anon
2085401 -79.1% 435069 ± 5% proc-vmstat.nr_active_file
264806 ± 62% -65.4% 91659 proc-vmstat.nr_anon_pages
7404786 ± 11% -77.7% 1652087 ± 2% proc-vmstat.nr_dirtied
999649 -76.2% 237433 proc-vmstat.nr_dirty
270866 ± 60% -64.3% 96829 proc-vmstat.nr_inactive_anon
477101 ± 7% +343.1% 2113828 proc-vmstat.nr_inactive_file
2337 ± 12% -34.9% 1521 proc-vmstat.nr_page_table_pages
24546 ± 19% -21.2% 19332 proc-vmstat.nr_shmem
15460 ± 50% -51.5% 7503 ± 4% proc-vmstat.nr_writeback
7402470 ± 11% -52.3% 3528032 ± 2% proc-vmstat.nr_written
17719 ± 26% -25.0% 13281 proc-vmstat.nr_zone_active_anon
2085401 -79.1% 435069 ± 5% proc-vmstat.nr_zone_active_file
270866 ± 60% -64.3% 96829 proc-vmstat.nr_zone_inactive_anon
477101 ± 7% +343.1% 2113828 proc-vmstat.nr_zone_inactive_file
1015085 -75.9% 244924 proc-vmstat.nr_zone_write_pending
245536 ± 60% -94.9% 12642 ± 23% proc-vmstat.numa_hint_faults
183808 ± 69% -94.8% 9568 ± 16% proc-vmstat.numa_hint_faults_local
4814514 ± 16% -24.3% 3646436 proc-vmstat.numa_hit
4682055 ± 16% -24.4% 3539376 proc-vmstat.numa_local
75208 ± 73% -86.0% 10525 ± 82% proc-vmstat.numa_pages_migrated
460513 ± 21% -82.8% 79134 ± 10% proc-vmstat.numa_pte_updates
2639438 -78.1% 578107 ± 6% proc-vmstat.pgactivate
9375153 ± 14% -41.5% 5480137 proc-vmstat.pgalloc_normal
1492361 ± 25% -56.6% 647374 proc-vmstat.pgfault
8814351 ± 15% -38.2% 5449272 proc-vmstat.pgfree
75208 ± 73% -86.0% 10525 ± 82% proc-vmstat.pgmigrate_success
11513882 -42.5% 6621853 ± 2% proc-vmstat.pgpgout
58851 ± 63% -50.3% 29269 ± 3% proc-vmstat.pgreuse
789.83 ± 36% -95.4% 36.33 ± 2% proc-vmstat.thp_fault_alloc
7.17 ± 9% -85.0% 1.07 ± 13% perf-stat.i.MPKI
6.68e+08 ± 17% -55.5% 2.975e+08 ± 2% perf-stat.i.branch-instructions
7.59 ± 4% +2.3 9.92 ± 6% perf-stat.i.branch-miss-rate%
8.81 ± 5% -7.4 1.39 ± 20% perf-stat.i.cache-miss-rate%
34419687 ± 9% -92.2% 2684939 ± 18% perf-stat.i.cache-misses
2.303e+08 ± 6% -41.0% 1.359e+08 perf-stat.i.cache-references
4.67 ± 2% +15.8% 5.40 perf-stat.i.cpi
1.099e+10 ± 14% -50.9% 5.398e+09 perf-stat.i.cpu-cycles
664.70 ± 71% -78.3% 144.02 perf-stat.i.cpu-migrations
7387 ± 2% +75.2% 12941 ± 2% perf-stat.i.cycles-between-cache-misses
0.97 ± 2% +0.4 1.37 perf-stat.i.dTLB-load-miss-rate%
8.978e+08 ± 17% -55.6% 3.988e+08 ± 2% perf-stat.i.dTLB-loads
0.25 +0.1 0.31 ± 2% perf-stat.i.dTLB-store-miss-rate%
495053 ± 7% -17.3% 409523 ± 2% perf-stat.i.dTLB-store-misses
3.783e+08 ± 10% -46.5% 2.024e+08 ± 2% perf-stat.i.dTLB-stores
3.294e+09 ± 17% -55.2% 1.477e+09 ± 2% perf-stat.i.instructions
0.25 ± 5% -10.2% 0.23 ± 2% perf-stat.i.ipc
0.09 ± 14% -50.9% 0.04 perf-stat.i.metric.GHz
15.93 ± 16% -56.6% 6.91 ± 2% perf-stat.i.metric.M/sec
7478 ± 30% -67.2% 2456 perf-stat.i.minor-faults
4183111 ± 69% -94.2% 243090 ± 56% perf-stat.i.node-load-misses
3532527 ± 84% -94.6% 189686 ± 17% perf-stat.i.node-loads
3482820 ±105% -93.7% 219634 ± 82% perf-stat.i.node-store-misses
6918310 ± 62% -90.4% 662878 ± 10% perf-stat.i.node-stores
7478 ± 30% -67.2% 2456 perf-stat.i.page-faults
10.72 ± 15% -83.1% 1.82 ± 17% perf-stat.overall.MPKI
2.99 ± 14% +3.4 6.36 ± 7% perf-stat.overall.branch-miss-rate%
14.94 ± 5% -13.0 1.98 ± 18% perf-stat.overall.cache-miss-rate%
3.36 ± 5% +8.8% 3.66 ± 2% perf-stat.overall.cpi
320.58 ± 14% +545.6% 2069 ± 15% perf-stat.overall.cycles-between-cache-misses
0.42 ± 17% +0.5 0.90 perf-stat.overall.dTLB-load-miss-rate%
0.13 ± 6% +0.1 0.20 ± 2% perf-stat.overall.dTLB-store-miss-rate%
6.643e+08 ± 17% -55.5% 2.954e+08 ± 2% perf-stat.ps.branch-instructions
34259811 ± 9% -92.2% 2667623 ± 18% perf-stat.ps.cache-misses
2.29e+08 ± 6% -41.1% 1.35e+08 perf-stat.ps.cache-references
1.093e+10 ± 14% -50.9% 5.364e+09 perf-stat.ps.cpu-cycles
661.95 ± 71% -78.4% 143.11 perf-stat.ps.cpu-migrations
8.93e+08 ± 17% -55.6% 3.961e+08 ± 2% perf-stat.ps.dTLB-loads
492202 ± 7% -17.3% 407012 ± 2% perf-stat.ps.dTLB-store-misses
3.764e+08 ± 10% -46.6% 2.011e+08 ± 2% perf-stat.ps.dTLB-stores
3.276e+09 ± 17% -55.2% 1.467e+09 ± 2% perf-stat.ps.instructions
7438 ± 30% -67.2% 2439 perf-stat.ps.minor-faults
4166071 ± 69% -94.2% 241572 ± 57% perf-stat.ps.node-load-misses
3514062 ± 84% -94.6% 188439 ± 17% perf-stat.ps.node-loads
3469703 ±105% -93.7% 218414 ± 82% perf-stat.ps.node-store-misses
6883534 ± 61% -90.4% 658670 ± 11% perf-stat.ps.node-stores
7439 ± 30% -67.2% 2439 perf-stat.ps.page-faults
5.535e+11 ± 18% -55.3% 2.472e+11 ± 3% perf-stat.total.instructions
0.00 ± 20% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.01 ± 36% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
0.01 ± 21% +110.0% 0.02 ± 8% perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.09 ± 23% -100.0% 0.00 perf-sched.sch_delay.avg.ms.cleaner_kthread.kthread.ret_from_fork.ret_from_fork_asm
0.02 ± 59% -80.3% 0.00 ± 36% perf-sched.sch_delay.avg.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
0.00 ± 34% -100.0% 0.00 perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_commit_transaction
0.01 ± 29% +101.7% 0.02 ± 18% perf-sched.sch_delay.avg.ms.irq_thread.kthread.ret_from_fork.ret_from_fork_asm
0.01 ± 35% +129.2% 0.02 ± 4% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.01 ± 58% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
0.01 ± 79% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
0.01 ± 68% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
0.01 ± 29% +66.7% 0.02 ± 15% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.02 ± 42% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
0.01 ± 19% +75.7% 0.01 ± 42% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.01 ± 54% +805.4% 0.11 ± 32% perf-sched.sch_delay.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.02 ± 19% +63.9% 0.04 ± 4% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
0.30 ± 40% -100.0% 0.00 perf-sched.sch_delay.max.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
0.06 ±135% -100.0% 0.00 perf-sched.sch_delay.max.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
0.25 ± 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
0.02 ± 33% +106.9% 0.04 ± 62% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.09 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.cleaner_kthread.kthread.ret_from_fork.ret_from_fork_asm
4.45 ±143% -96.8% 0.14 ± 20% perf-sched.sch_delay.max.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
0.00 ± 34% -100.0% 0.00 perf-sched.sch_delay.max.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_commit_transaction
0.02 ± 33% +157.3% 0.05 ± 52% perf-sched.sch_delay.max.ms.irq_thread.kthread.ret_from_fork.ret_from_fork_asm
0.02 ± 73% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
0.02 ±136% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
0.01 ± 68% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
0.02 ± 42% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
41.21 ± 34% +265.4% 150.55 ± 5% perf-sched.total_wait_and_delay.average.ms
35294 ± 56% -80.1% 7009 ± 6% perf-sched.total_wait_and_delay.count.ms
3612 ± 9% +37.8% 4977 perf-sched.total_wait_and_delay.max.ms
41.19 ± 34% +265.4% 150.53 ± 5% perf-sched.total_wait_time.average.ms
3612 ± 9% +37.8% 4977 perf-sched.total_wait_time.max.ms
8.72 ± 18% +35.2% 11.78 perf-sched.wait_and_delay.avg.ms.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
0.01 ± 29% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.00 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
0.44 ± 99% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
0.18 ± 51% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
3.11 ± 45% -97.7% 0.07 ± 18% perf-sched.wait_and_delay.avg.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
29.45 ± 19% +165.1% 78.07 ± 24% perf-sched.wait_and_delay.avg.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
1.23 ± 23% +1435.4% 18.84 ± 24% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
999.94 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
688.75 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
19.70 ± 50% +187.1% 56.55 ± 5% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
67.85 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
0.13 ± 80% +12207.2% 16.47 ± 23% perf-sched.wait_and_delay.avg.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
75.06 ± 22% +407.1% 380.59 ± 8% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
4648 ± 92% -100.0% 0.00 perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
681.17 ±110% -100.0% 0.00 perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
1762 ±117% -100.0% 0.00 perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
784.33 ± 34% -100.0% 0.00 perf-sched.wait_and_delay.count.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
10623 ± 82% -97.8% 238.00 ± 32% perf-sched.wait_and_delay.count.futex_wait_queue.__futex_wait.futex_wait.do_futex
27.17 ± 16% -22.7% 21.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
4396 ± 24% -88.8% 493.67 ± 31% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
7.00 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
3.50 ± 85% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
1.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.transaction_kthread.kthread.ret_from_fork
1380 ± 64% -82.5% 241.83 ± 31% perf-sched.wait_and_delay.count.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
4996 ± 47% -87.4% 631.50 ± 5% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
2757 ± 32% +80.5% 4977 perf-sched.wait_and_delay.max.ms.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
1.72 ± 53% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.07 ±129% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
304.09 ±102% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
2.77 ± 20% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
1000 -99.9% 0.93 ± 22% perf-sched.wait_and_delay.max.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
329.46 ± 19% +55.6% 512.76 ± 8% perf-sched.wait_and_delay.max.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
1576 ± 16% -36.5% 1000 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
499.97 +211.0% 1554 ± 14% perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
999.69 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
67.85 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
69.35 ±117% +2142.1% 1554 ± 14% perf-sched.wait_and_delay.max.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
0.01 ± 60% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__filemap_get_folio.pagecache_get_page.cifs_write_begin.generic_perform_write
8.72 ± 18% +35.2% 11.78 perf-sched.wait_time.avg.ms.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
0.01 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.generic_perform_write.cifs_strict_writev.vfs_write.__x64_sys_pwrite64
0.01 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.00 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
0.44 ± 99% -100.0% 0.00 perf-sched.wait_time.avg.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
0.17 ± 52% -100.0% 0.00 perf-sched.wait_time.avg.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
5.96 ±107% +2687.1% 166.05 ± 31% perf-sched.wait_time.avg.ms.btrfs_start_ordered_extent.lock_and_cleanup_extent_if_need.btrfs_buffered_write.btrfs_do_write_iter
16.05 ±114% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.09 ± 45% -97.8% 0.07 ± 18% perf-sched.wait_time.avg.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
6.95 ± 42% -100.0% 0.00 perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_commit_transaction
29.44 ± 19% +164.2% 77.78 ± 24% perf-sched.wait_time.avg.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
3.38 ± 6% +31.2% 4.43 ± 7% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
1.22 ± 23% +1440.1% 18.83 ± 24% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
16.73 ±105% +1142.3% 207.81 ± 58% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.__btrfs_tree_read_lock
999.93 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
688.74 ± 37% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
1.37 ±105% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
23.93 ± 19% +136.2% 56.52 ± 5% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
67.83 ± 15% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
0.13 ± 84% +12930.6% 16.46 ± 23% perf-sched.wait_time.avg.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
5.58 ±215% -100.0% 0.00 perf-sched.wait_time.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
75.03 ± 22% +407.2% 380.55 ± 8% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
0.02 ± 76% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__filemap_get_folio.pagecache_get_page.cifs_write_begin.generic_perform_write
2757 ± 32% +80.5% 4977 perf-sched.wait_time.max.ms.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
0.03 ± 43% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.generic_perform_write.cifs_strict_writev.vfs_write.__x64_sys_pwrite64
1.70 ± 54% -100.0% 0.00 perf-sched.wait_time.max.ms.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
0.06 ±135% -100.0% 0.00 perf-sched.wait_time.max.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
304.04 ±102% -100.0% 0.00 perf-sched.wait_time.max.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
2.74 ± 20% -100.0% 0.00 perf-sched.wait_time.max.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
833.98 ± 84% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
1000 -99.9% 0.84 ± 25% perf-sched.wait_time.max.ms.futex_wait_queue.__futex_wait.futex_wait.do_futex
6.95 ± 42% -100.0% 0.00 perf-sched.wait_time.max.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_commit_transaction
329.45 ± 19% +55.6% 512.75 ± 8% perf-sched.wait_time.max.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
1576 ± 16% -36.5% 1000 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
499.95 +211.0% 1554 ± 14% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
520.29 ± 61% +335.4% 2265 ± 19% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.__btrfs_tree_read_lock
1000 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.do_madvise
999.67 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.exit_mm
1.37 ±105% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
67.83 ± 15% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.transaction_kthread.kthread.ret_from_fork
69.26 ±117% +2144.9% 1554 ± 14% perf-sched.wait_time.max.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
60.14 ±220% -100.0% 0.00 perf-sched.wait_time.max.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.95 ± 6% -0.1 0.82 ± 7% perf-profile.calltrace.cycles-pp.hrtimer_next_event_without.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle
0.81 ± 4% -0.1 0.70 ± 6% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call
0.88 ± 10% +0.2 1.06 ± 7% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.48 ± 45% +0.2 0.67 ± 3% perf-profile.calltrace.cycles-pp.arch_cpu_idle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.47 ± 46% +0.2 0.69 ± 13% perf-profile.calltrace.cycles-pp.irqtime_account_process_tick.update_process_times.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues
0.93 ± 4% +0.3 1.20 ± 6% perf-profile.calltrace.cycles-pp.run_posix_cpu_timers.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues.hrtimer_interrupt
1.34 ± 5% +0.3 1.65 ± 3% perf-profile.calltrace.cycles-pp.timerqueue_del.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.73 ± 4% +0.3 1.04 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_stop_idle.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.56 ± 5% +0.4 0.93 ± 6% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
3.14 ± 2% +0.4 3.50 ± 13% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.36 ± 71% +0.4 0.75 ± 9% perf-profile.calltrace.cycles-pp.irqentry_enter.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.58 ± 5% +0.4 0.98 ± 5% perf-profile.calltrace.cycles-pp.tick_check_oneshot_broadcast_this_cpu.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.09 ±223% +0.5 0.55 ± 7% perf-profile.calltrace.cycles-pp.rb_erase.timerqueue_del.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
1.28 ± 4% +0.5 1.79 ± 17% perf-profile.calltrace.cycles-pp.arch_scale_freq_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_nohz_highres_handler
0.00 +0.5 0.55 ± 5% perf-profile.calltrace.cycles-pp.nr_iowait_cpu.tick_nohz_stop_idle.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt
0.71 ± 15% +0.6 1.34 ± 19% perf-profile.calltrace.cycles-pp.check_cpu_stall.rcu_pending.rcu_sched_clock_irq.update_process_times.tick_sched_handle
0.00 +0.7 0.67 ± 7% perf-profile.calltrace.cycles-pp.rb_next.timerqueue_del.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
1.47 ± 9% +0.7 2.14 ± 10% perf-profile.calltrace.cycles-pp.rcu_pending.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_nohz_highres_handler
1.71 ± 8% +0.7 2.42 ± 8% perf-profile.calltrace.cycles-pp.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues
5.17 ± 2% +0.8 5.97 ± 4% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues
2.76 ± 4% +0.9 3.68 ± 2% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
2.62 ± 5% +1.0 3.58 ± 2% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
9.43 ± 2% +2.3 11.70 ± 4% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues.hrtimer_interrupt
10.57 ± 2% +2.5 13.04 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
11.85 ± 2% +2.8 14.66 ± 5% perf-profile.calltrace.cycles-pp.tick_nohz_highres_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
95.06 +3.0 98.02 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
93.90 +3.1 96.97 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
93.90 +3.1 96.97 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
93.76 +3.1 96.86 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
92.07 +3.1 95.20 perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
19.21 +3.1 22.36 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
81.45 +3.2 84.68 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
22.86 +3.2 26.09 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
23.16 +3.3 26.50 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
82.52 +3.6 86.13 perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
33.90 +4.2 38.10 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
39.33 +4.5 43.82 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
3.58 ± 41% -3.1 0.45 ± 12% perf-profile.children.cycles-pp.kthread
3.58 ± 41% -3.1 0.47 ± 12% perf-profile.children.cycles-pp.ret_from_fork
3.58 ± 41% -3.1 0.47 ± 12% perf-profile.children.cycles-pp.ret_from_fork_asm
2.16 ± 19% -2.0 0.19 ± 28% perf-profile.children.cycles-pp.worker_thread
2.14 ± 20% -2.0 0.17 ± 35% perf-profile.children.cycles-pp.process_one_work
4.74 ± 3% -0.6 4.17 ± 4% perf-profile.children.cycles-pp.__do_softirq
5.47 ± 4% -0.5 4.93 ± 3% perf-profile.children.cycles-pp.irq_exit_rcu
1.12 ± 5% -0.2 0.91 ± 7% perf-profile.children.cycles-pp.native_irq_return_iret
1.81 ± 4% -0.2 1.64 ± 5% perf-profile.children.cycles-pp.irqtime_account_irq
1.29 ± 3% -0.2 1.13 ± 5% perf-profile.children.cycles-pp.native_sched_clock
1.24 ± 5% -0.1 1.09 ± 5% perf-profile.children.cycles-pp.read_tsc
0.65 ± 9% -0.1 0.52 ± 5% perf-profile.children.cycles-pp.rb_insert_color
0.98 ± 5% -0.1 0.86 ± 8% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.84 ± 5% -0.1 0.72 ± 6% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.33 ± 14% -0.1 0.22 ± 17% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
1.06 ± 5% -0.1 0.98 ± 5% perf-profile.children.cycles-pp.sched_clock
0.61 ± 7% -0.1 0.53 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.18 ± 11% -0.1 0.11 ± 15% perf-profile.children.cycles-pp.hrtimer_forward
0.08 ± 24% -0.1 0.03 ±102% perf-profile.children.cycles-pp.tick_nohz_idle_got_tick
0.15 ± 18% -0.0 0.11 ± 15% perf-profile.children.cycles-pp.cpu_util
0.14 ± 12% -0.0 0.10 ± 12% perf-profile.children.cycles-pp.sched_idle_set_state
0.09 ± 23% +0.0 0.13 ± 8% perf-profile.children.cycles-pp.__memcpy
0.24 ± 8% +0.1 0.30 ± 8% perf-profile.children.cycles-pp.rcu_nocb_flush_deferred_wakeup
0.16 ± 17% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.ct_irq_enter
0.20 ± 17% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.__schedule
0.18 ± 21% +0.1 0.26 ± 8% perf-profile.children.cycles-pp.check_tsc_unstable
0.25 ± 12% +0.1 0.35 ± 13% perf-profile.children.cycles-pp.ct_nmi_enter
0.22 ± 9% +0.1 0.32 ± 9% perf-profile.children.cycles-pp.account_process_tick
0.22 ± 18% +0.1 0.32 ± 19% perf-profile.children.cycles-pp._raw_spin_trylock
0.56 ± 9% +0.1 0.67 ± 4% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.58 ± 8% +0.1 0.69 ± 3% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.21 ± 13% +0.1 0.33 ± 10% perf-profile.children.cycles-pp.native_apic_mem_eoi
0.56 ± 11% +0.1 0.70 ± 13% perf-profile.children.cycles-pp.irqtime_account_process_tick
0.46 ± 5% +0.1 0.60 ± 4% perf-profile.children.cycles-pp.nr_iowait_cpu
0.65 ± 9% +0.2 0.82 ± 6% perf-profile.children.cycles-pp.rb_next
2.40 ± 3% +0.2 2.60 ± 2% perf-profile.children.cycles-pp.perf_rotate_context
0.59 ± 12% +0.2 0.84 ± 7% perf-profile.children.cycles-pp.irqentry_enter
0.94 ± 5% +0.3 1.22 ± 6% perf-profile.children.cycles-pp.run_posix_cpu_timers
1.39 ± 5% +0.3 1.69 ± 2% perf-profile.children.cycles-pp.timerqueue_del
0.75 ± 4% +0.3 1.07 ± 5% perf-profile.children.cycles-pp.tick_nohz_stop_idle
0.60 ± 5% +0.3 0.95 ± 6% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
3.22 ± 2% +0.3 3.56 ± 13% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.64 ± 3% +0.4 1.07 ± 4% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
1.29 ± 4% +0.5 1.81 ± 17% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.72 ± 15% +0.6 1.35 ± 18% perf-profile.children.cycles-pp.check_cpu_stall
1.50 ± 10% +0.7 2.17 ± 10% perf-profile.children.cycles-pp.rcu_pending
1.74 ± 9% +0.7 2.45 ± 8% perf-profile.children.cycles-pp.rcu_sched_clock_irq
5.32 ± 2% +0.8 6.12 ± 4% perf-profile.children.cycles-pp.scheduler_tick
2.79 ± 5% +0.9 3.72 ± 2% perf-profile.children.cycles-pp.irq_enter_rcu
2.74 ± 4% +0.9 3.68 ± 2% perf-profile.children.cycles-pp.tick_irq_enter
9.66 ± 2% +2.3 11.94 ± 4% perf-profile.children.cycles-pp.update_process_times
10.68 ± 2% +2.5 13.18 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
12.02 ± 2% +2.8 14.84 ± 5% perf-profile.children.cycles-pp.tick_nohz_highres_handler
95.06 +3.0 98.02 perf-profile.children.cycles-pp.cpu_startup_entry
95.06 +3.0 98.02 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
95.06 +3.0 98.02 perf-profile.children.cycles-pp.do_idle
93.41 +3.0 96.42 perf-profile.children.cycles-pp.cpuidle_idle_call
93.90 +3.1 96.97 perf-profile.children.cycles-pp.start_secondary
19.49 +3.2 22.65 perf-profile.children.cycles-pp.__hrtimer_run_queues
23.14 +3.2 26.38 perf-profile.children.cycles-pp.hrtimer_interrupt
23.40 +3.4 26.76 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
83.59 +3.5 87.13 perf-profile.children.cycles-pp.cpuidle_enter
83.12 +3.6 86.71 perf-profile.children.cycles-pp.cpuidle_enter_state
34.32 +4.3 38.58 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
37.37 +4.5 41.84 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.00 ± 8% -0.2 0.76 ± 7% perf-profile.self.cycles-pp.__hrtimer_run_queues
1.12 ± 5% -0.2 0.91 ± 7% perf-profile.self.cycles-pp.native_irq_return_iret
1.25 ± 3% -0.1 1.10 ± 5% perf-profile.self.cycles-pp.native_sched_clock
1.20 ± 5% -0.1 1.06 ± 5% perf-profile.self.cycles-pp.read_tsc
0.62 ± 9% -0.1 0.50 ± 6% perf-profile.self.cycles-pp.rb_insert_color
0.60 ± 9% -0.1 0.51 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.38 ± 12% -0.1 0.31 ± 7% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.15 ± 12% -0.1 0.10 ± 19% perf-profile.self.cycles-pp.hrtimer_forward
0.32 ± 6% -0.0 0.28 ± 5% perf-profile.self.cycles-pp.update_sd_lb_stats
0.10 ± 13% -0.0 0.07 ± 23% perf-profile.self.cycles-pp.sched_idle_set_state
0.09 ± 24% +0.0 0.13 ± 7% perf-profile.self.cycles-pp.__memcpy
0.22 ± 9% +0.1 0.28 ± 10% perf-profile.self.cycles-pp.rcu_nocb_flush_deferred_wakeup
0.20 ± 15% +0.1 0.26 ± 11% perf-profile.self.cycles-pp.local_clock_noinstr
0.10 ± 28% +0.1 0.16 ± 11% perf-profile.self.cycles-pp.ct_irq_enter
0.15 ± 22% +0.1 0.22 ± 11% perf-profile.self.cycles-pp.check_tsc_unstable
0.20 ± 16% +0.1 0.28 ± 10% perf-profile.self.cycles-pp.sched_clock
0.37 ± 11% +0.1 0.45 ± 3% perf-profile.self.cycles-pp.irq_work_tick
0.24 ± 21% +0.1 0.32 ± 8% perf-profile.self.cycles-pp.irqentry_enter
0.25 ± 14% +0.1 0.35 ± 12% perf-profile.self.cycles-pp.ct_nmi_enter
0.21 ± 11% +0.1 0.31 ± 8% perf-profile.self.cycles-pp.account_process_tick
0.22 ± 18% +0.1 0.32 ± 20% perf-profile.self.cycles-pp._raw_spin_trylock
0.20 ± 13% +0.1 0.33 ± 10% perf-profile.self.cycles-pp.native_apic_mem_eoi
0.56 ± 11% +0.1 0.70 ± 13% perf-profile.self.cycles-pp.irqtime_account_process_tick
0.44 ± 5% +0.2 0.59 ± 5% perf-profile.self.cycles-pp.nr_iowait_cpu
0.32 ± 4% +0.2 0.48 ± 9% perf-profile.self.cycles-pp.tick_nohz_stop_idle
0.64 ± 8% +0.2 0.82 ± 6% perf-profile.self.cycles-pp.rb_next
0.94 ± 9% +0.2 1.12 ± 6% perf-profile.self.cycles-pp.perf_rotate_context
0.94 ± 5% +0.3 1.21 ± 6% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.39 ± 5% +0.4 0.78 ± 8% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.62 ± 4% +0.4 1.05 ± 4% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
1.28 ± 4% +0.5 1.80 ± 17% perf-profile.self.cycles-pp.arch_scale_freq_tick
1.06 ± 6% +0.5 1.60 ± 55% perf-profile.self.cycles-pp.ktime_get
0.72 ± 15% +0.6 1.34 ± 19% perf-profile.self.cycles-pp.check_cpu_stall
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [relevance 3%]
* Re: [PATCH 17/21] filemap: add FGP_CREAT_ONLY
2024-02-28 13:15 0% ` Matthew Wilcox
@ 2024-02-28 13:28 0% ` Paolo Bonzini
2024-03-04 2:55 0% ` Xu Yilun
0 siblings, 1 reply; 200+ results
From: Paolo Bonzini @ 2024-02-28 13:28 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Yosry Ahmed, Sean Christopherson, linux-kernel, kvm,
michael.roth, isaku.yamahata, thomas.lendacky
On Wed, Feb 28, 2024 at 2:15 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Feb 27, 2024 at 06:17:34PM -0800, Yosry Ahmed wrote:
> > On Tue, Feb 27, 2024 at 6:15 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Tue, Feb 27, 2024, Paolo Bonzini wrote:
> > >
> > > This needs a changelog, and also needs to be Cc'd to someone(s) that can give it
> > > a thumbs up.
> >
> > +Matthew Wilcox
>
> If only there were an entry in MAINTAINERS for filemap.c ...
Not CCing you (or mm in general) was intentional because I first
wanted a review of the KVM APIs; of course I wouldn't have committed
it without an Acked-by. But yeah, not writing the changelog yet was
pure laziness.
Since you're here: KVM would like to add an ioctl to encrypt and
install a page into guest_memfd, in preparation for launching an
encrypted guest. For this API we want to rule out the possibility of
overwriting a page that is already in the guest_memfd's filemap,
therefore this API would pass FGP_CREAT_ONLY|FGP_CREAT
into __filemap_get_folio(). Do you think this is bogus...
> This looks bogus to me, and if it's not bogus, it's incomplete.
... or if not, what incompleteness can you spot?
Thanks,
Paolo
> But it's hard to judge without a commit message that describes what it's
> supposed to mean.
>
> > >
> > > > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > ---
> > > > include/linux/pagemap.h | 2 ++
> > > > mm/filemap.c | 4 ++++
> > > > 2 files changed, 6 insertions(+)
> > > >
> > > > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > > > index 2df35e65557d..e8ac0b32f84d 100644
> > > > --- a/include/linux/pagemap.h
> > > > +++ b/include/linux/pagemap.h
> > > > @@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> > > > * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> > > > * added to the page cache and the VM's LRU list. The folio is
> > > > * returned locked.
> > > > + * * %FGP_CREAT_ONLY - Fail if a folio is not present
> > > > * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> > > > * folio is already in cache. If the folio was allocated, unlock it
> > > > * before returning so the caller can do the same dance.
> > > > @@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
> > > > #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> > > > #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> > > > #define FGP_STABLE ((__force fgf_t)0x00000080)
> > > > +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> > > > #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
> > > >
> > > > #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index 750e779c23db..d5107bd0cd09 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > > > folio = NULL;
> > > > if (!folio)
> > > > goto no_page;
> > > > + if (fgp_flags & FGP_CREAT_ONLY) {
> > > > + folio_put(folio);
> > > > + return ERR_PTR(-EEXIST);
> > > > + }
> > > >
> > > > if (fgp_flags & FGP_LOCK) {
> > > > if (fgp_flags & FGP_NOWAIT) {
> > > > --
> > > > 2.39.0
> > > >
> > > >
> > >
>
* Re: [PATCH 17/21] filemap: add FGP_CREAT_ONLY
2024-02-28 2:17 0% ` Yosry Ahmed
@ 2024-02-28 13:15 0% ` Matthew Wilcox
2024-02-28 13:28 0% ` Paolo Bonzini
0 siblings, 1 reply; 200+ results
From: Matthew Wilcox @ 2024-02-28 13:15 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Sean Christopherson, Paolo Bonzini, linux-kernel, kvm,
michael.roth, isaku.yamahata, thomas.lendacky
On Tue, Feb 27, 2024 at 06:17:34PM -0800, Yosry Ahmed wrote:
> On Tue, Feb 27, 2024 at 6:15 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Tue, Feb 27, 2024, Paolo Bonzini wrote:
> >
> > This needs a changelog, and also needs to be Cc'd to someone(s) that can give it
> > a thumbs up.
>
> +Matthew Wilcox
If only there were an entry in MAINTAINERS for filemap.c ...
This looks bogus to me, and if it's not bogus, it's incomplete.
But it's hard to judge without a commit message that describes what it's
supposed to mean.
> >
> > > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > > ---
> > > include/linux/pagemap.h | 2 ++
> > > mm/filemap.c | 4 ++++
> > > 2 files changed, 6 insertions(+)
> > >
> > > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > > index 2df35e65557d..e8ac0b32f84d 100644
> > > --- a/include/linux/pagemap.h
> > > +++ b/include/linux/pagemap.h
> > > @@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> > > * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> > > * added to the page cache and the VM's LRU list. The folio is
> > > * returned locked.
> > > + * * %FGP_CREAT_ONLY - Fail if a folio is not present
> > > * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> > > * folio is already in cache. If the folio was allocated, unlock it
> > > * before returning so the caller can do the same dance.
> > > @@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
> > > #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> > > #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> > > #define FGP_STABLE ((__force fgf_t)0x00000080)
> > > +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> > > #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
> > >
> > > #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > index 750e779c23db..d5107bd0cd09 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > > folio = NULL;
> > > if (!folio)
> > > goto no_page;
> > > + if (fgp_flags & FGP_CREAT_ONLY) {
> > > + folio_put(folio);
> > > + return ERR_PTR(-EEXIST);
> > > + }
> > >
> > > if (fgp_flags & FGP_LOCK) {
> > > if (fgp_flags & FGP_NOWAIT) {
> > > --
> > > 2.39.0
> > >
> > >
> >
* Re: [PATCH 17/21] filemap: add FGP_CREAT_ONLY
2024-02-28 2:14 0% ` Sean Christopherson
@ 2024-02-28 2:17 0% ` Yosry Ahmed
2024-02-28 13:15 0% ` Matthew Wilcox
0 siblings, 1 reply; 200+ results
From: Yosry Ahmed @ 2024-02-28 2:17 UTC (permalink / raw)
To: Sean Christopherson, Matthew Wilcox
Cc: Paolo Bonzini, linux-kernel, kvm, michael.roth, isaku.yamahata,
thomas.lendacky
On Tue, Feb 27, 2024 at 6:15 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Feb 27, 2024, Paolo Bonzini wrote:
>
> This needs a changelog, and also needs to be Cc'd to someone(s) that can give it
> a thumbs up.
+Matthew Wilcox
>
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> > include/linux/pagemap.h | 2 ++
> > mm/filemap.c | 4 ++++
> > 2 files changed, 6 insertions(+)
> >
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index 2df35e65557d..e8ac0b32f84d 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> > * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> > * added to the page cache and the VM's LRU list. The folio is
> > * returned locked.
> > + * * %FGP_CREAT_ONLY - Fail if a folio is not present
> > * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> > * folio is already in cache. If the folio was allocated, unlock it
> > * before returning so the caller can do the same dance.
> > @@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
> > #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> > #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> > #define FGP_STABLE ((__force fgf_t)0x00000080)
> > +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> > #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
> >
> > #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 750e779c23db..d5107bd0cd09 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > folio = NULL;
> > if (!folio)
> > goto no_page;
> > + if (fgp_flags & FGP_CREAT_ONLY) {
> > + folio_put(folio);
> > + return ERR_PTR(-EEXIST);
> > + }
> >
> > if (fgp_flags & FGP_LOCK) {
> > if (fgp_flags & FGP_NOWAIT) {
> > --
> > 2.39.0
> >
> >
>
* Re: [PATCH 17/21] filemap: add FGP_CREAT_ONLY
2024-02-27 23:20 7% ` [PATCH 17/21] filemap: add FGP_CREAT_ONLY Paolo Bonzini
@ 2024-02-28 2:14 0% ` Sean Christopherson
2024-02-28 2:17 0% ` Yosry Ahmed
0 siblings, 1 reply; 200+ results
From: Sean Christopherson @ 2024-02-28 2:14 UTC (permalink / raw)
To: Paolo Bonzini
Cc: linux-kernel, kvm, michael.roth, isaku.yamahata, thomas.lendacky
On Tue, Feb 27, 2024, Paolo Bonzini wrote:
This needs a changelog, and also needs to be Cc'd to someone(s) that can give it
a thumbs up.
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> include/linux/pagemap.h | 2 ++
> mm/filemap.c | 4 ++++
> 2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 2df35e65557d..e8ac0b32f84d 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> * * %FGP_CREAT - If no folio is present then a new folio is allocated,
> * added to the page cache and the VM's LRU list. The folio is
> * returned locked.
> + * * %FGP_CREAT_ONLY - Fail if a folio is not present
> * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
> * folio is already in cache. If the folio was allocated, unlock it
> * before returning so the caller can do the same dance.
> @@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
> #define FGP_NOWAIT ((__force fgf_t)0x00000020)
> #define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
> #define FGP_STABLE ((__force fgf_t)0x00000080)
> +#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
> #define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
>
> #define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 750e779c23db..d5107bd0cd09 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio = NULL;
> if (!folio)
> goto no_page;
> + if (fgp_flags & FGP_CREAT_ONLY) {
> + folio_put(folio);
> + return ERR_PTR(-EEXIST);
> + }
>
> if (fgp_flags & FGP_LOCK) {
> if (fgp_flags & FGP_NOWAIT) {
> --
> 2.39.0
>
>
* [PATCH 17/21] filemap: add FGP_CREAT_ONLY
@ 2024-02-27 23:20 7% ` Paolo Bonzini
2024-02-28 2:14 0% ` Sean Christopherson
2024-02-27 23:20 4% ` [PATCH 18/21] KVM: x86: Add gmem hook for initializing memory Paolo Bonzini
1 sibling, 1 reply; 200+ results
From: Paolo Bonzini @ 2024-02-27 23:20 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc, michael.roth, isaku.yamahata, thomas.lendacky
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/linux/pagemap.h | 2 ++
mm/filemap.c | 4 ++++
2 files changed, 6 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..e8ac0b32f84d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -586,6 +586,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
* * %FGP_CREAT - If no folio is present then a new folio is allocated,
* added to the page cache and the VM's LRU list. The folio is
* returned locked.
+ * * %FGP_CREAT_ONLY - Fail if a folio is not present
* * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
* folio is already in cache. If the folio was allocated, unlock it
* before returning so the caller can do the same dance.
@@ -606,6 +607,7 @@ typedef unsigned int __bitwise fgf_t;
#define FGP_NOWAIT ((__force fgf_t)0x00000020)
#define FGP_FOR_MMAP ((__force fgf_t)0x00000040)
#define FGP_STABLE ((__force fgf_t)0x00000080)
+#define FGP_CREAT_ONLY ((__force fgf_t)0x00000100)
#define FGF_GET_ORDER(fgf) (((__force unsigned)fgf) >> 26) /* top 6 bits */
#define FGP_WRITEBEGIN (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
diff --git a/mm/filemap.c b/mm/filemap.c
index 750e779c23db..d5107bd0cd09 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1854,6 +1854,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio = NULL;
if (!folio)
goto no_page;
+ if (fgp_flags & FGP_CREAT_ONLY) {
+ folio_put(folio);
+ return ERR_PTR(-EEXIST);
+ }
if (fgp_flags & FGP_LOCK) {
if (fgp_flags & FGP_NOWAIT) {
--
2.39.0
* [PATCH 18/21] KVM: x86: Add gmem hook for initializing memory
2024-02-27 23:20 7% ` [PATCH 17/21] filemap: add FGP_CREAT_ONLY Paolo Bonzini
@ 2024-02-27 23:20 4% ` Paolo Bonzini
1 sibling, 0 replies; 200+ results
From: Paolo Bonzini @ 2024-02-27 23:20 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc, michael.roth, isaku.yamahata, thomas.lendacky
guest_memfd pages are generally expected to be in some arch-defined
initial state prior to using them for guest memory. For SEV-SNP this
initial state is 'private', or 'guest-owned', and requires additional
operations to move these pages into a 'private' state by updating the
corresponding entries in the RMP table.
Allow for an arch-defined hook to handle updates of this sort, and go
ahead and implement one for x86 so KVM implementations like AMD SVM can
register a kvm_x86_ops callback to handle these updates for SEV-SNP
guests.
The preparation callback is always called when allocating/grabbing
folios via gmem, and it is up to the architecture to keep track of
whether or not the pages are already in the expected state (e.g. the RMP
table in the case of SEV-SNP).
In some cases, it is necessary to defer the preparation of the pages to
handle things like in-place encryption of initial guest memory payloads
before marking these pages as 'private'/'guest-owned', so also add a
helper that performs the same function as kvm_gmem_get_pfn(), but allows
for the preparation callback to be bypassed to allow for pages to be
accessed beforehand.
Link: https://lore.kernel.org/lkml/ZLqVdvsF11Ddo7Dq@google.com/
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20231230172351.574091-5-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 6 +++
include/linux/kvm_host.h | 14 ++++++
virt/kvm/Kconfig | 4 ++
virt/kvm/guest_memfd.c | 72 +++++++++++++++++++++++++++---
6 files changed, 92 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index ac8b7614e79d..adfaad15e7e6 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -139,6 +139,7 @@ KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
KVM_X86_OP_OPTIONAL(get_untagged_addr)
+KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
#undef KVM_X86_OP
#undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7de8a3f2a118..6d873d08f739 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1804,6 +1804,7 @@ struct kvm_x86_ops {
unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
+ int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
};
struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f10a5a617120..eff532ea59c9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13598,6 +13598,12 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order)
+{
+ return static_call(kvm_x86_gmem_prepare)(kvm, pfn, gfn, max_order);
+}
+#endif
int kvm_spec_ctrl_test_value(u64 value)
{
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 97afe4519772..03bf616b7308 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2434,6 +2434,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
#ifdef CONFIG_KVM_PRIVATE_MEM
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+int kvm_gmem_get_uninit_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
#else
static inline int kvm_gmem_get_pfn(struct kvm *kvm,
struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2442,6 +2444,18 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
KVM_BUG_ON(1, kvm);
return -EIO;
}
+
+static inline int kvm_gmem_get_uninit_pfn(struct kvm *kvm,
+ struct kvm_memory_slot *slot, gfn_t gfn,
+ kvm_pfn_t *pfn, int *max_order)
+{
+ KVM_BUG_ON(1, kvm);
+ return -EIO;
+}
#endif /* CONFIG_KVM_PRIVATE_MEM */
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
+#endif
+
#endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index a11e9c80fac9..dcce0c3b5b13 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -111,3 +111,7 @@ config KVM_GENERIC_PRIVATE_MEM
select KVM_GENERIC_MEMORY_ATTRIBUTES
select KVM_PRIVATE_MEM
bool
+
+config HAVE_KVM_GMEM_PREPARE
+ bool
+ depends on KVM_PRIVATE_MEM
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index de0d5a5c210c..7ec7afafc960 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -13,12 +13,50 @@ struct kvm_gmem {
struct list_head entry;
};
-static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
+static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+{
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+ struct list_head *gmem_list = &inode->i_mapping->i_private_list;
+ struct kvm_gmem *gmem;
+
+ list_for_each_entry(gmem, gmem_list, entry) {
+ struct kvm_memory_slot *slot;
+ struct kvm *kvm = gmem->kvm;
+ struct page *page;
+ kvm_pfn_t pfn;
+ gfn_t gfn;
+ int rc;
+
+ slot = xa_load(&gmem->bindings, index);
+ if (!slot)
+ continue;
+
+ page = folio_file_page(folio, index);
+ pfn = page_to_pfn(page);
+ gfn = slot->base_gfn + index - slot->gmem.pgoff;
+ rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, compound_order(compound_head(page)));
+ if (rc) {
+ pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx, error %d.\n",
+ index, rc);
+ return rc;
+ }
+ }
+
+#endif
+ return 0;
+}
+
+static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
{
struct folio *folio;
+ fgf_t fgp_flags = FGP_LOCK | FGP_ACCESSED | FGP_CREAT;
+
+ if (!prepare)
+ fgp_flags |= FGP_CREAT_ONLY;
/* TODO: Support huge pages. */
- folio = filemap_grab_folio(inode->i_mapping, index);
+ folio = __filemap_get_folio(inode->i_mapping, index, fgp_flags,
+ mapping_gfp_mask(inode->i_mapping));
if (IS_ERR_OR_NULL(folio))
return folio;
@@ -41,6 +79,15 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
folio_mark_uptodate(folio);
}
+ if (prepare) {
+ int r = kvm_gmem_prepare_folio(inode, index, folio);
+ if (r < 0) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return ERR_PTR(r);
+ }
+ }
+
/*
* Ignore accessed, referenced, and dirty flags. The memory is
* unevictable and there is no storage to write back to.
@@ -145,7 +192,7 @@ static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
break;
}
- folio = kvm_gmem_get_folio(inode, index);
+ folio = kvm_gmem_get_folio(inode, index, true);
if (IS_ERR_OR_NULL(folio)) {
r = folio ? PTR_ERR(folio) : -ENOMEM;
break;
@@ -482,8 +529,8 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
fput(file);
}
-int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
- gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+static int __kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare)
{
pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
struct kvm_gmem *gmem;
@@ -503,7 +550,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
goto out_fput;
}
- folio = kvm_gmem_get_folio(file_inode(file), index);
+ folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
if (!folio) {
r = -ENOMEM;
goto out_fput;
@@ -529,4 +576,17 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
return r;
}
+
+int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+{
+ return __kvm_gmem_get_pfn(kvm, slot, gfn, pfn, max_order, true);
+}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
+
+int kvm_gmem_get_uninit_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+{
+ return __kvm_gmem_get_pfn(kvm, slot, gfn, pfn, max_order, false);
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_get_uninit_pfn);
--
2.39.0
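The index/gfn translation used in the patch above (gfn = slot->base_gfn + index - slot->gmem.pgoff in kvm_gmem_prepare_folio(), and its inverse at the top of __kvm_gmem_get_pfn()) can be modeled standalone. A minimal sketch — the struct below is hypothetical and mirrors only the two slot fields the arithmetic uses:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the kvm_memory_slot fields used by the patch:
 * a slot maps guest frames starting at base_gfn onto guest_memfd file
 * pages starting at pgoff. */
struct gmem_slot {
	uint64_t base_gfn;
	uint64_t pgoff;
};

/* Same arithmetic as kvm_gmem_prepare_folio():
 * gfn = slot->base_gfn + index - slot->gmem.pgoff */
static uint64_t index_to_gfn(const struct gmem_slot *s, uint64_t index)
{
	return s->base_gfn + index - s->pgoff;
}

/* Inverse, as computed at the top of __kvm_gmem_get_pfn():
 * index = gfn - slot->base_gfn + slot->gmem.pgoff */
static uint64_t gfn_to_index(const struct gmem_slot *s, uint64_t gfn)
{
	return gfn - s->base_gfn + s->pgoff;
}
```

The two functions are exact inverses, which is why the prepare path can walk file indices while the fault path walks gfns and both land on the same folio.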
^ permalink raw reply related [relevance 4%]
* Re: [PATCH v4 00/36] Memory allocation profiling
2024-02-27 13:36 0% ` [PATCH v4 00/36] Memory allocation profiling Vlastimil Babka
@ 2024-02-27 16:10 0% ` Suren Baghdasaryan
0 siblings, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-02-27 16:10 UTC (permalink / raw)
To: Vlastimil Babka
Cc: akpm, kent.overstreet, mhocko, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, tj, muchun.song, rppt, paulmck, pasha.tatashin,
yosryahmed, yuzhao, dhowells, hughd, andreyknvl, keescook,
ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, bristot, vschneid, cl,
penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver, dvyukov,
shakeelb, songmuchun, jbaron, rientjes, minchan, kaleshsingh,
kernel-team, linux-doc, linux-kernel, iommu, linux-arch,
linux-fsdevel, linux-mm, linux-modules, kasan-dev, cgroups
On Tue, Feb 27, 2024 at 5:35 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 2/21/24 20:40, Suren Baghdasaryan wrote:
> > Overview:
> > Low overhead [1] per-callsite memory allocation profiling. Not just for
> > debug kernels, overhead low enough to be deployed in production.
> >
> > Example output:
> > root@moria-kvm:~# sort -rn /proc/allocinfo
> > 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> > 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> > 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> > 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> > 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> > 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> > 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> > 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> > 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> > 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> > 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
> > ...
> >
> > Since v3:
> > - Dropped patch changing string_get_size() [2] as not needed
> > - Dropped patch modifying xfs allocators [3] as not needed,
> > per Dave Chinner
> > - Added Reviewed-by, per Kees Cook
> > - Moved prepare_slab_obj_exts_hook() and alloc_slab_obj_exts() where they
> > are used, per Vlastimil Babka
> > - Fixed SLAB_NO_OBJ_EXT definition to use unused bit, per Vlastimil Babka
> > - Refactored patch [4] into other patches, per Vlastimil Babka
> > - Replaced snprintf() with seq_buf_printf(), per Kees Cook
> > - Changed output to report bytes, per Andrew Morton and Pasha Tatashin
> > - Changed output to report [module] only for loadable modules,
> > per Vlastimil Babka
> > - Moved mem_alloc_profiling_enabled() check earlier, per Vlastimil Babka
> > - Changed the code to handle page splitting to be more understandable,
> > per Vlastimil Babka
> > - Moved alloc_tagging_slab_free_hook(), mark_objexts_empty(),
> > mark_failed_objexts_alloc() and handle_failed_objexts_alloc(),
> > per Vlastimil Babka
> > - Fixed loss of __alloc_size(1, 2) in kvmalloc functions,
> > per Vlastimil Babka
> > - Refactored the code in show_mem() to avoid memory allocations,
> > per Michal Hocko
> > - Changed to trylock in show_mem() to avoid blocking in atomic context,
> > per Tetsuo Handa
> > - Added mm mailing list into MAINTAINERS, per Kees Cook
> > - Added base commit SHA, per Andy Shevchenko
> > - Added a patch with documentation, per Jani Nikula
> > - Fixed 0day bugs
> > - Added benchmark results [5], per Steven Rostedt
> > - Rebased over Linux 6.8-rc5
> >
> > Items not yet addressed:
> > - An early_boot option to prevent pageext overhead. We are looking into
> > ways to use the same sysctl instead of adding an additional early boot
> > parameter.
>
> I have reviewed the parts that integrate the tracking with page and slab
> allocators, and besides some details to improve it seems ok to me. The
> early boot option seems to be coming, so that might eventually be suitable for
> build-time enablement in a distro kernel.
Thanks for reviewing, Vlastimil!
>
> The macros (and their potential spread to upper layers to keep the
> information useful enough) are of course ugly, but I guess it can't be
> currently helped and I'm unable to decide whether it's worth it or not.
> That's up to those providing their success stories I guess. If there's
> at least a path ahead to replace that part with compiler support in the
> future, great. So I'm not against merging this. BTW, do we know Linus's
> opinion on the macros approach?
We haven't run it by Linus specifically but hopefully we will see a
comment from him on the mailing list at some point.
>
* Re: [PATCH v4 00/36] Memory allocation profiling
2024-02-21 19:40 3% [PATCH v4 00/36] Memory allocation profiling Suren Baghdasaryan
2024-02-21 19:40 3% ` [PATCH v4 14/36] lib: add allocation tagging support for memory " Suren Baghdasaryan
2024-02-21 19:40 5% ` [PATCH v4 36/36] memprofiling: Documentation Suren Baghdasaryan
@ 2024-02-27 13:36 0% ` Vlastimil Babka
2024-02-27 16:10 0% ` Suren Baghdasaryan
2 siblings, 1 reply; 200+ results
From: Vlastimil Babka @ 2024-02-27 13:36 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: kent.overstreet, mhocko, hannes, roman.gushchin, mgorman, dave,
willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, tj, muchun.song, rppt, paulmck, pasha.tatashin,
yosryahmed, yuzhao, dhowells, hughd, andreyknvl, keescook,
ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, bristot, vschneid, cl,
penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver, dvyukov,
shakeelb, songmuchun, jbaron, rientjes, minchan, kaleshsingh,
kernel-team, linux-doc, linux-kernel, iommu, linux-arch,
linux-fsdevel, linux-mm, linux-modules, kasan-dev, cgroups
On 2/21/24 20:40, Suren Baghdasaryan wrote:
> Overview:
> Low overhead [1] per-callsite memory allocation profiling. Not just for
> debug kernels, overhead low enough to be deployed in production.
>
> Example output:
> root@moria-kvm:~# sort -rn /proc/allocinfo
> 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
> 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
> 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
> 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
> 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
> 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
> 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
> 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
> 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
> 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
> 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
> ...
>
> Since v3:
> - Dropped patch changing string_get_size() [2] as not needed
> - Dropped patch modifying xfs allocators [3] as not needed,
> per Dave Chinner
> - Added Reviewed-by, per Kees Cook
> - Moved prepare_slab_obj_exts_hook() and alloc_slab_obj_exts() where they
> are used, per Vlastimil Babka
> - Fixed SLAB_NO_OBJ_EXT definition to use unused bit, per Vlastimil Babka
> - Refactored patch [4] into other patches, per Vlastimil Babka
> - Replaced snprintf() with seq_buf_printf(), per Kees Cook
> - Changed output to report bytes, per Andrew Morton and Pasha Tatashin
> - Changed output to report [module] only for loadable modules,
> per Vlastimil Babka
> - Moved mem_alloc_profiling_enabled() check earlier, per Vlastimil Babka
> - Changed the code to handle page splitting to be more understandable,
> per Vlastimil Babka
> - Moved alloc_tagging_slab_free_hook(), mark_objexts_empty(),
> mark_failed_objexts_alloc() and handle_failed_objexts_alloc(),
> per Vlastimil Babka
> - Fixed loss of __alloc_size(1, 2) in kvmalloc functions,
> per Vlastimil Babka
> - Refactored the code in show_mem() to avoid memory allocations,
> per Michal Hocko
> - Changed to trylock in show_mem() to avoid blocking in atomic context,
> per Tetsuo Handa
> - Added mm mailing list into MAINTAINERS, per Kees Cook
> - Added base commit SHA, per Andy Shevchenko
> - Added a patch with documentation, per Jani Nikula
> - Fixed 0day bugs
> - Added benchmark results [5], per Steven Rostedt
> - Rebased over Linux 6.8-rc5
>
> Items not yet addressed:
> - An early_boot option to prevent pageext overhead. We are looking into
> ways to use the same sysctl instead of adding an additional early boot
> parameter.
I have reviewed the parts that integrate the tracking with page and slab
allocators, and besides some details to improve it seems ok to me. The
early boot option seems to be coming, so that might eventually be suitable for
build-time enablement in a distro kernel.
The macros (and their potential spread to upper layers to keep the
information useful enough) are of course ugly, but I guess it can't be
currently helped and I'm unable to decide whether it's worth it or not.
That's up to those providing their success stories I guess. If there's
at least a path ahead to replace that part with compiler support in the
future, great. So I'm not against merging this. BTW, do we know Linus's
opinion on the macros approach?
* Re: [PATCH 04/13] filemap: use mapping_min_order while allocating folios
2024-02-26 14:47 0% ` Matthew Wilcox
@ 2024-02-27 12:09 0% ` Pankaj Raghav (Samsung)
0 siblings, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-27 12:09 UTC (permalink / raw)
To: Matthew Wilcox
Cc: linux-xfs, linux-fsdevel, linux-kernel, david, chandan.babu,
akpm, mcgrof, ziy, hare, djwong, gost.dev, linux-mm,
Pankaj Raghav
On Mon, Feb 26, 2024 at 02:47:33PM +0000, Matthew Wilcox wrote:
> On Mon, Feb 26, 2024 at 10:49:27AM +0100, Pankaj Raghav (Samsung) wrote:
> > Add some additional VM_BUG_ON() in page_cache_delete[batch] and
> > __filemap_add_folio to catch errors where we delete or add folios that
> > have an order less than min_order.
>
> I don't understand why we need these checks in the deletion path. The
> add path, yes, absolutely. But the delete path?
I think we initially added it to check if some split happened which
might mess up the page cache with min order support. But I think it is
not super critical anymore because of the changes in the split_folio
path. I will remove the checks.
>
> > @@ -896,6 +900,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
> > }
> > }
> >
> > + VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> > + folio);
>
> But I don't understand why you put it here, while we're holding the
> xa_lock. That seems designed to cause maximum disruption. Why not put
> it at the beginning of the function with all the other VM_BUG_ON_FOLIO?
Yeah. That makes sense as the folio itself is not changing.
>
> > @@ -1847,6 +1853,9 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > fgf_t fgp_flags, gfp_t gfp)
> > {
> > struct folio *folio;
> > + unsigned int min_order = mapping_min_folio_order(mapping);
> > +
> > + index = mapping_align_start_index(mapping, index);
>
> I would not do this here.
>
> > repeat:
> > folio = filemap_get_entry(mapping, index);
> > @@ -1886,7 +1895,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > folio_wait_stable(folio);
> > no_page:
> > if (!folio && (fgp_flags & FGP_CREAT)) {
> > - unsigned order = FGF_GET_ORDER(fgp_flags);
> > + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> > int err;
>
> Put it here instead.
>
> > if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> > @@ -1912,8 +1921,13 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > gfp_t alloc_gfp = gfp;
> >
> > err = -ENOMEM;
> > + if (order < min_order)
> > + order = min_order;
> > if (order > 0)
> > alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
> > +
> > + VM_BUG_ON(index & ((1UL << order) - 1));
>
> Then you don't need this BUG_ON because it's obvious you just did it.
> And the one in filemap_add_folio() would catch it anyway.
I agree. I will change it in the next revision.
* [PATCH v5 2/8] mm: Support order-1 folios in the page cache
@ 2024-02-26 20:55 6% ` Zi Yan
0 siblings, 0 replies; 200+ results
From: Zi Yan @ 2024-02-26 20:55 UTC (permalink / raw)
To: Pankaj Raghav (Samsung), linux-mm
Cc: Zi Yan, Matthew Wilcox (Oracle),
David Hildenbrand, Yang Shi, Yu Zhao, Kirill A . Shutemov,
Ryan Roberts, Michal Koutný,
Roman Gushchin, Zach O'Keefe, Hugh Dickins, Luis Chamberlain,
Andrew Morton, linux-kernel, cgroups, linux-fsdevel,
linux-kselftest
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.
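The constraint described above can be sketched outside the kernel. A minimal model with hypothetical helper names (not actual kernel functions): the _deferred_list head lives in tail-page space that only folios with order > 1 have, so the patch replaces the old VM_BUG_ON(order < 2) with a silent skip.

```c
#include <assert.h>
#include <stdbool.h>

/* An order-n folio spans 2^n pages; _deferred_list is stored in tail-page
 * space that an order-1 folio (one head page, one tail page) lacks. */
static bool folio_has_deferred_list_space(unsigned int order)
{
	return order > 1;
}

/* Guard mirroring the patch: callers simply do nothing for folios that
 * cannot hold the list, instead of BUG-ing on order < 2. */
static bool deferred_split_would_queue(unsigned int order)
{
	return folio_has_deferred_list_space(order);
}
```

This is why the page cache can safely allocate order-1 folios: file-backed folios never reach deferred_split_folio(), and the few core-MM paths that touch the list now check the order first.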
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/filemap.c | 2 --
mm/huge_memory.c | 19 +++++++++++++++----
mm/internal.h | 3 +--
mm/readahead.c | 3 ---
4 files changed, 16 insertions(+), 11 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index b7a21551fbc7..b4858d89f1b1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
- if (order == 1)
- order = 0;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b20e535e874c..9840f312c08f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -790,8 +790,10 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
void folio_prep_large_rmappable(struct folio *folio)
{
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
- INIT_LIST_HEAD(&folio->_deferred_list);
+ if (!folio || !folio_test_large(folio))
+ return;
+ if (folio_order(folio) > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
}
@@ -3114,7 +3116,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
- if (!list_empty(&folio->_deferred_list)) {
+ if (folio_order(folio) > 1 &&
+ !list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
list_del(&folio->_deferred_list);
}
@@ -3165,6 +3168,9 @@ void folio_undo_large_rmappable(struct folio *folio)
struct deferred_split *ds_queue;
unsigned long flags;
+ if (folio_order(folio) <= 1)
+ return;
+
/*
* At this point, there is no one trying to add the folio to
* deferred_list. If folio is not in deferred_list, it's safe
@@ -3190,7 +3196,12 @@ void deferred_split_folio(struct folio *folio)
#endif
unsigned long flags;
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+ /*
+ * Order 1 folios have no space for a deferred list, but we also
+ * won't waste much memory by not adding them to the deferred list.
+ */
+ if (folio_order(folio) <= 1)
+ return;
/*
* The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index 2b7efffbe4d7..c4853ebfa030 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -420,8 +420,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_order(folio) > 1)
- folio_prep_large_rmappable(folio);
+ folio_prep_large_rmappable(folio);
return folio;
}
diff --git a/mm/readahead.c b/mm/readahead.c
index 1e74455f908e..130c0e7df99f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -514,9 +514,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
/* Don't allocate pages past EOF */
while (index + (1UL << order) - 1 > limit)
order--;
- /* THP machinery does not support order-1 */
- if (order == 1)
- order = 0;
err = ra_alloc_folio(ractl, index, mark, order, gfp);
if (err)
break;
--
2.43.0
* Re: [PATCH 04/13] filemap: use mapping_min_order while allocating folios
2024-02-26 9:49 16% ` [PATCH 04/13] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
@ 2024-02-26 14:47 0% ` Matthew Wilcox
2024-02-27 12:09 0% ` Pankaj Raghav (Samsung)
0 siblings, 1 reply; 200+ results
From: Matthew Wilcox @ 2024-02-26 14:47 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: linux-xfs, linux-fsdevel, linux-kernel, david, chandan.babu,
akpm, mcgrof, ziy, hare, djwong, gost.dev, linux-mm,
Pankaj Raghav
On Mon, Feb 26, 2024 at 10:49:27AM +0100, Pankaj Raghav (Samsung) wrote:
> Add some additional VM_BUG_ON() in page_cache_delete[batch] and
> __filemap_add_folio to catch errors where we delete or add folios that
> have an order less than min_order.
I don't understand why we need these checks in the deletion path. The
add path, yes, absolutely. But the delete path?
> @@ -896,6 +900,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
> }
> }
>
> + VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> + folio);
But I don't understand why you put it here, while we're holding the
xa_lock. That seems designed to cause maximum disruption. Why not put
it at the beginning of the function with all the other VM_BUG_ON_FOLIO?
> @@ -1847,6 +1853,9 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> fgf_t fgp_flags, gfp_t gfp)
> {
> struct folio *folio;
> + unsigned int min_order = mapping_min_folio_order(mapping);
> +
> + index = mapping_align_start_index(mapping, index);
I would not do this here.
> repeat:
> folio = filemap_get_entry(mapping, index);
> @@ -1886,7 +1895,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio_wait_stable(folio);
> no_page:
> if (!folio && (fgp_flags & FGP_CREAT)) {
> - unsigned order = FGF_GET_ORDER(fgp_flags);
> + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> int err;
Put it here instead.
> if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> @@ -1912,8 +1921,13 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> gfp_t alloc_gfp = gfp;
>
> err = -ENOMEM;
> + if (order < min_order)
> + order = min_order;
> if (order > 0)
> alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
> +
> + VM_BUG_ON(index & ((1UL << order) - 1));
Then you don't need this BUG_ON because it's obvious you just did it.
And the one in filemap_add_folio() would catch it anyway.
* [PATCH 04/13] filemap: use mapping_min_order while allocating folios
2024-02-26 9:49 6% ` [PATCH 01/13] mm: Support order-1 folios in the page cache Pankaj Raghav (Samsung)
@ 2024-02-26 9:49 16% ` Pankaj Raghav (Samsung)
2024-02-26 14:47 0% ` Matthew Wilcox
1 sibling, 1 reply; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-26 9:49 UTC (permalink / raw)
To: linux-xfs, linux-fsdevel
Cc: linux-kernel, david, chandan.babu, akpm, mcgrof, ziy, hare,
djwong, gost.dev, linux-mm, willy, Pankaj Raghav
From: Pankaj Raghav <p.raghav@samsung.com>
filemap_create_folio() and do_read_cache_folio() always allocated a
folio of order 0. __filemap_get_folio() tried to allocate higher-order
folios when fgp_flags had a higher-order hint set, but it would fall
back to an order-0 folio if the higher-order memory allocation failed.
As we introduce the notion of mapping_min_order, make sure these
functions allocate a folio of at least mapping_min_order, as we need to
guarantee that in the page cache.
Add some additional VM_BUG_ON() in page_cache_delete[batch] and
__filemap_add_folio to catch errors where we delete or add folios that
have an order less than min_order.
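The alignment rule this patch enforces can be sketched standalone. The helper names below are hypothetical, mirroring mapping_align_start_index() and the new VM_BUG_ON() in __filemap_get_folio(): indices are rounded down to the minimum folio boundary, and any order-n allocation must use an order-n-aligned index.

```c
#include <assert.h>

/* Hypothetical model of mapping_align_start_index(): round a page-cache
 * index down to the boundary of a folio of the mapping's minimum order. */
static unsigned long align_start_index(unsigned long index,
				       unsigned int min_order)
{
	return index & ~((1UL << min_order) - 1);
}

/* Mirror of the VM_BUG_ON() added by the patch: an index passed to an
 * order-n folio allocation must be order-n aligned. */
static int index_is_aligned(unsigned long index, unsigned int order)
{
	return (index & ((1UL << order) - 1)) == 0;
}
```

Aligning first and then allocating at order >= min_order keeps every folio in the cache both large enough and naturally aligned, which is the invariant the min_order work depends on.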
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Acked-by: Darrick J. Wong <djwong@kernel.org>
---
mm/filemap.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index bdf4f65f597c..4b144479c4cb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -135,6 +135,8 @@ static void page_cache_delete(struct address_space *mapping,
xas_set_order(&xas, folio->index, folio_order(folio));
nr = folio_nr_pages(folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
xas_store(&xas, shadow);
@@ -305,6 +307,8 @@ static void page_cache_delete_batch(struct address_space *mapping,
WARN_ON_ONCE(!folio_test_locked(folio));
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
folio->mapping = NULL;
/* Leave folio->index set: truncation lookup relies on it */
@@ -896,6 +900,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
}
}
+ VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+ folio);
xas_store(&xas, folio);
if (xas_error(&xas))
goto unlock;
@@ -1847,6 +1853,9 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
fgf_t fgp_flags, gfp_t gfp)
{
struct folio *folio;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+
+ index = mapping_align_start_index(mapping, index);
repeat:
folio = filemap_get_entry(mapping, index);
@@ -1886,7 +1895,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
@@ -1912,8 +1921,13 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
+ if (order < min_order)
+ order = min_order;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
+
+ VM_BUG_ON(index & ((1UL << order) - 1));
+
folio = filemap_alloc_folio(alloc_gfp, order);
if (!folio)
continue;
@@ -1927,7 +1941,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2422,7 +2436,8 @@ static int filemap_create_folio(struct file *file,
struct folio *folio;
int error;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ mapping_min_folio_order(mapping));
if (!folio)
return -ENOMEM;
@@ -3666,7 +3681,8 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
err = filemap_add_folio(mapping, folio, index, gfp);
--
2.43.0
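The fallback loop this patch changes — `} while (order-- > min_order);` — tries the requested order first, then successively lower orders, but now never drops below the mapping's minimum. A standalone sketch of that control flow; `highest_ok` is a hypothetical stand-in for "the largest order the allocator can currently satisfy":

```c
#include <assert.h>

/* Model of the __filemap_get_folio() retry loop after this patch.
 * Returns the order actually used, or -1 (-ENOMEM in the kernel) if
 * even min_order cannot be satisfied. */
static int alloc_with_fallback(unsigned int order, unsigned int min_order,
			       unsigned int highest_ok)
{
	if (order < min_order)
		order = min_order;	/* the "if (order < min_order)" bump */
	do {
		if (order <= highest_ok)
			return (int)order;	/* allocation succeeded */
	} while (order-- > min_order);	/* retry smaller, stop at min_order */
	return -1;
}
```

Note the post-decrement: when order reaches min_order the condition is false, so min_order itself is the last order attempted and nothing smaller is ever tried.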
* [PATCH 01/13] mm: Support order-1 folios in the page cache
@ 2024-02-26 9:49 6% ` Pankaj Raghav (Samsung)
2024-02-26 9:49 16% ` [PATCH 04/13] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
1 sibling, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-26 9:49 UTC (permalink / raw)
To: linux-xfs, linux-fsdevel
Cc: linux-kernel, david, chandan.babu, akpm, mcgrof, ziy, hare,
djwong, gost.dev, linux-mm, willy
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/huge_mm.h | 7 +++++--
mm/filemap.c | 2 --
mm/huge_memory.c | 23 ++++++++++++++++++-----
mm/internal.h | 4 +---
mm/readahead.c | 3 ---
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
bool can_split_folio(struct folio *folio, int *pextra_pins);
int split_huge_page_to_list(struct page *page, struct list_head *list);
static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
return 0;
}
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+ return folio;
+}
#define transparent_hugepage_flags 0UL
diff --git a/mm/filemap.c b/mm/filemap.c
index 750e779c23db..2b00442b9d19 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
- if (order == 1)
- order = 0;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
}
#endif
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
{
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
- INIT_LIST_HEAD(&folio->_deferred_list);
+ if (!folio || !folio_test_large(folio))
+ return folio;
+ if (folio_order(folio) > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
+
+ return folio;
}
static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
- if (!list_empty(&folio->_deferred_list)) {
+ if (folio_order(folio) > 1 &&
+ !list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
list_del(&folio->_deferred_list);
}
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
struct deferred_split *ds_queue;
unsigned long flags;
+ if (folio_order(folio) <= 1)
+ return;
+
/*
* At this point, there is no one trying to add the folio to
* deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
#endif
unsigned long flags;
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+ /*
+ * Order 1 folios have no space for a deferred list, but we also
+ * won't waste much memory by not adding them to the deferred list.
+ */
+ if (folio_order(folio) <= 1)
+ return;
/*
* The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_order(folio) > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ return folio_prep_large_rmappable(folio);
}
static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
/* Don't allocate pages past EOF */
while (index + (1UL << order) - 1 > limit)
order--;
- /* THP machinery does not support order-1 */
- if (order == 1)
- order = 0;
err = ra_alloc_folio(ractl, index, mark, order, gfp);
if (err)
break;
--
2.43.0
* [PATCH v12 7/8] udmabuf: Pin the pages using memfd_pin_folios() API
2024-02-25 7:56 4% [PATCH v12 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-02-25 7:57 4% ` [PATCH v12 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
2024-02-25 7:57 4% ` [PATCH v12 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
@ 2024-02-25 7:57 5% ` Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-02-25 7:57 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Mike Kravetz, Hugh Dickins, Peter Xu,
Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim, Junxiao Chang
Using memfd_pin_folios() ensures that the pages are pinned correctly
using FOLL_PIN. It also ensures that we don't accidentally break
features such as memory hotunplug, as it does not allow pinning pages
in the movable zone.
Using this new API also simplifies the code, as we no longer have to
deal with extracting individual pages from their mappings or handle
the shmem and hugetlb cases separately.
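The unpin bookkeeping introduced below follows a simple record-then-release pattern: every pinned folio is added to a per-buffer list at creation time, and release_udmabuf() walks that list once to unpin everything. A standalone sketch of the pattern in plain C with hypothetical names (not the kernel's list_head API):

```c
#include <stdlib.h>

struct pin_entry {
	void *obj;		/* stand-in for struct folio * */
	struct pin_entry *next;
};

/* Mirrors add_to_unpin_list(): 0 on success, -1 on allocation failure. */
static int track_pin(struct pin_entry **head, void *obj)
{
	struct pin_entry *e = malloc(sizeof(*e));

	if (!e)
		return -1;
	e->obj = obj;
	e->next = *head;
	*head = e;
	return 0;
}

/* Mirrors unpin_all_folios(): walk the list, unpin and free each entry.
 * Returns the number of entries released. */
static unsigned int unpin_all(struct pin_entry **head, void (*unpin)(void *))
{
	unsigned int n = 0;

	while (*head) {
		struct pin_entry *e = *head;

		*head = e->next;
		if (unpin)
			unpin(e->obj);
		free(e);
		n++;
	}
	return n;
}
```

Tracking pins on a side list means the release path no longer needs to know how each folio was obtained (shmem vs. hugetlb), which is exactly the simplification the commit message claims.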
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 153 +++++++++++++++++++-------------------
1 file changed, 78 insertions(+), 75 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index a8f3af61f7f2..afa8bfd2a2a9 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
+ struct list_head unpin_list;
+};
+
+struct udmabuf_folio {
+ struct folio *folio;
+ struct list_head list;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -153,17 +159,43 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
return put_sg_table(at->dev, sg, direction);
}
+static void unpin_all_folios(struct list_head *unpin_list)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ while (!list_empty(unpin_list)) {
+ ubuf_folio = list_first_entry(unpin_list,
+ struct udmabuf_folio, list);
+ unpin_folio(ubuf_folio->folio);
+
+ list_del(&ubuf_folio->list);
+ kfree(ubuf_folio);
+ }
+}
+
+static int add_to_unpin_list(struct list_head *unpin_list,
+ struct folio *folio)
+{
+ struct udmabuf_folio *ubuf_folio;
+
+ ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+ if (!ubuf_folio)
+ return -ENOMEM;
+
+ ubuf_folio->folio = folio;
+ list_add_tail(&ubuf_folio->list, unpin_list);
+ return 0;
+}
+
static void release_udmabuf(struct dma_buf *buf)
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
- pgoff_t pg;
if (ubuf->sg)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
- for (pg = 0; pg < ubuf->pagecount; pg++)
- folio_put(ubuf->folios[pg]);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
@@ -218,64 +250,6 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- struct hstate *hpstate = hstate_file(memfd);
- pgoff_t mapidx = offset >> huge_page_shift(hpstate);
- pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
- pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct folio *folio = NULL;
- pgoff_t pgidx;
-
- mapidx <<= huge_page_order(hpstate);
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!folio) {
- folio = __filemap_get_folio(memfd->f_mapping,
- mapidx,
- FGP_ACCESSED, 0);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
- }
-
- folio_get(folio);
- ubuf->folios[*pgbuf] = folio;
- ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
- (*pgbuf)++;
- if (++subpgoff == maxsubpgs) {
- folio_put(folio);
- folio = NULL;
- subpgoff = 0;
- mapidx += pages_per_huge_page(hpstate);
- }
- }
-
- if (folio)
- folio_put(folio);
-
- return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
- pgoff_t offset, pgoff_t pgcnt,
- pgoff_t *pgbuf)
-{
- pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct folio *folio = NULL;
-
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
- if (IS_ERR(folio))
- return PTR_ERR(folio);
-
- ubuf->folios[*pgbuf] = folio;
- (*pgbuf)++;
- }
-
- return 0;
-}
-
static int check_memfd_seals(struct file *memfd)
{
int seals;
@@ -321,16 +295,19 @@ static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- pgoff_t pgcnt, pgbuf = 0, pglimit;
+ pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+ long nr_folios, ret = -EINVAL;
struct file *memfd = NULL;
+ struct folio **folios;
struct udmabuf *ubuf;
- int ret = -EINVAL;
- u32 i, flags;
+ u32 i, j, k, flags;
+ loff_t end;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
if (!ubuf)
return -ENOMEM;
+ INIT_LIST_HEAD(&ubuf->unpin_list);
pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
@@ -366,17 +343,44 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
- if (is_file_hugepages(memfd))
- ret = handle_hugetlb_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- else
- ret = handle_shmem_pages(ubuf, memfd,
- list[i].offset,
- pgcnt, &pgbuf);
- if (ret < 0)
+ folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
goto err;
+ }
+ end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+
+ nr_folios = ret;
+ pgoff >>= PAGE_SHIFT;
+ for (j = 0, k = 0; j < pgcnt; j++) {
+ ubuf->folios[pgbuf] = folios[k];
+ ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+
+ if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+ ret = add_to_unpin_list(&ubuf->unpin_list,
+ folios[k]);
+ if (ret < 0) {
+ kfree(folios);
+ goto err;
+ }
+ }
+
+ pgbuf++;
+ if (++pgoff == folio_nr_pages(folios[k])) {
+ pgoff = 0;
+ if (++k == nr_folios)
+ break;
+ }
+ }
+
+ kfree(folios);
fput(memfd);
}
@@ -388,10 +392,9 @@ static long udmabuf_create(struct miscdevice *device,
return ret;
err:
- while (pgbuf > 0)
- folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
kfree(ubuf->offsets);
kfree(ubuf->folios);
kfree(ubuf);
--
2.43.0
* [PATCH v12 6/8] udmabuf: Convert udmabuf driver to use folios
2024-02-25 7:56 4% [PATCH v12 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
2024-02-25 7:57 4% ` [PATCH v12 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
@ 2024-02-25 7:57 4% ` Vivek Kasireddy
2024-02-25 7:57 5% ` [PATCH v12 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-02-25 7:57 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Daniel Vetter, Mike Kravetz, Hugh Dickins, Peter Xu,
Jason Gunthorpe, Gerd Hoffmann, Dongwon Kim, Junxiao Chang
This is mainly a preparatory patch for using the memfd_pin_folios()
API to pin folios. Using folios instead of pages makes sense because
the udmabuf driver needs to handle both the shmem and hugetlb cases.
However, vmap_udmabuf() still needs a list of pages, so in that case
we collect all the head pages into a local array.
Other changes in this patch include the addition of helpers for
checking the memfd seals and exporting the dmabuf. Moving code from
udmabuf_create() into these helpers improves readability, given
that udmabuf_create() is fairly long.
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 140 ++++++++++++++++++++++----------------
1 file changed, 83 insertions(+), 57 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 274defd3fa3e..a8f3af61f7f2 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -26,7 +26,7 @@ MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is
struct udmabuf {
pgoff_t pagecount;
- struct page **pages;
+ struct folio **folios;
struct sg_table *sg;
struct miscdevice *device;
pgoff_t *offsets;
@@ -42,7 +42,7 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
- pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn = folio_pfn(ubuf->folios[pgoff]);
pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
return vmf_insert_pfn(vma, vmf->address, pfn);
@@ -68,11 +68,21 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
{
struct udmabuf *ubuf = buf->priv;
+ struct page **pages;
void *vaddr;
+ pgoff_t pg;
dma_resv_assert_held(buf->resv);
- vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
+ pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ for (pg = 0; pg < ubuf->pagecount; pg++)
+ pages[pg] = &ubuf->folios[pg]->page;
+
+ vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+ kfree(pages);
if (!vaddr)
return -EINVAL;
@@ -107,7 +117,8 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
goto err_alloc;
for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
- sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+ sg_set_folio(sgl, ubuf->folios[i], PAGE_SIZE,
+ ubuf->offsets[i]);
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
@@ -152,9 +163,9 @@ static void release_udmabuf(struct dma_buf *buf)
put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
for (pg = 0; pg < ubuf->pagecount; pg++)
- put_page(ubuf->pages[pg]);
+ folio_put(ubuf->folios[pg]);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
}
@@ -215,36 +226,33 @@ static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t mapidx = offset >> huge_page_shift(hpstate);
pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
- struct page *hpage = NULL;
- struct folio *folio;
+ struct folio *folio = NULL;
pgoff_t pgidx;
mapidx <<= huge_page_order(hpstate);
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- if (!hpage) {
+ if (!folio) {
folio = __filemap_get_folio(memfd->f_mapping,
mapidx,
FGP_ACCESSED, 0);
if (IS_ERR(folio))
return PTR_ERR(folio);
-
- hpage = &folio->page;
}
- get_page(hpage);
- ubuf->pages[*pgbuf] = hpage;
+ folio_get(folio);
+ ubuf->folios[*pgbuf] = folio;
ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
(*pgbuf)++;
if (++subpgoff == maxsubpgs) {
- put_page(hpage);
- hpage = NULL;
+ folio_put(folio);
+ folio = NULL;
subpgoff = 0;
mapidx += pages_per_huge_page(hpstate);
}
}
- if (hpage)
- put_page(hpage);
+ if (folio)
+ folio_put(folio);
return 0;
}
@@ -254,31 +262,69 @@ static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
pgoff_t *pgbuf)
{
pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio = NULL;
for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(memfd->f_mapping,
- pgoff + pgidx);
- if (IS_ERR(page))
- return PTR_ERR(page);
+ folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
- ubuf->pages[*pgbuf] = page;
+ ubuf->folios[*pgbuf] = folio;
(*pgbuf)++;
}
return 0;
}
+static int check_memfd_seals(struct file *memfd)
+{
+ int seals;
+
+ if (!memfd)
+ return -EBADFD;
+
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+ seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
+ if (seals == -EINVAL)
+ return -EBADFD;
+
+ if ((seals & SEALS_WANTED) != SEALS_WANTED ||
+ (seals & SEALS_DENIED) != 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int export_udmabuf(struct udmabuf *ubuf,
+ struct miscdevice *device,
+ u32 flags)
+{
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+ exp_info.size = ubuf->pagecount << PAGE_SHIFT;
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+ buf = dma_buf_export(&exp_info);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
+ return dma_buf_fd(buf, flags);
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
struct file *memfd = NULL;
struct udmabuf *ubuf;
- struct dma_buf *buf;
- pgoff_t pgcnt, pgbuf = 0, pglimit;
- int seals, ret = -EINVAL;
+ int ret = -EINVAL;
u32 i, flags;
ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
@@ -299,9 +345,9 @@ static long udmabuf_create(struct miscdevice *device,
if (!ubuf->pagecount)
goto err;
- ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
GFP_KERNEL);
- if (!ubuf->pages) {
+ if (!ubuf->folios) {
ret = -ENOMEM;
goto err;
}
@@ -314,18 +360,9 @@ static long udmabuf_create(struct miscdevice *device,
pgbuf = 0;
for (i = 0; i < head->count; i++) {
- ret = -EBADFD;
memfd = fget(list[i].memfd);
- if (!memfd)
- goto err;
- if (!shmem_file(memfd) && !is_file_hugepages(memfd))
- goto err;
- seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
- if (seals == -EINVAL)
- goto err;
- ret = -EINVAL;
- if ((seals & SEALS_WANTED) != SEALS_WANTED ||
- (seals & SEALS_DENIED) != 0)
+ ret = check_memfd_seals(memfd);
+ if (ret < 0)
goto err;
pgcnt = list[i].size >> PAGE_SHIFT;
@@ -341,33 +378,22 @@ static long udmabuf_create(struct miscdevice *device,
goto err;
fput(memfd);
- memfd = NULL;
}
- exp_info.ops = &udmabuf_ops;
- exp_info.size = ubuf->pagecount << PAGE_SHIFT;
- exp_info.priv = ubuf;
- exp_info.flags = O_RDWR;
-
- ubuf->device = device;
- buf = dma_buf_export(&exp_info);
- if (IS_ERR(buf)) {
- ret = PTR_ERR(buf);
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+ ret = export_udmabuf(ubuf, device, flags);
+ if (ret < 0)
goto err;
- }
- flags = 0;
- if (head->flags & UDMABUF_FLAGS_CLOEXEC)
- flags |= O_CLOEXEC;
- return dma_buf_fd(buf, flags);
+ return ret;
err:
while (pgbuf > 0)
- put_page(ubuf->pages[--pgbuf]);
+ folio_put(ubuf->folios[--pgbuf]);
if (memfd)
fput(memfd);
kfree(ubuf->offsets);
- kfree(ubuf->pages);
+ kfree(ubuf->folios);
kfree(ubuf);
return ret;
}
--
2.43.0
* [PATCH v12 5/8] udmabuf: Add back support for mapping hugetlb pages
2024-02-25 7:56 4% [PATCH v12 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios Vivek Kasireddy
@ 2024-02-25 7:57 4% ` Vivek Kasireddy
2024-02-25 7:57 4% ` [PATCH v12 6/8] udmabuf: Convert udmabuf driver to use folios Vivek Kasireddy
2024-02-25 7:57 5% ` [PATCH v12 7/8] udmabuf: Pin the pages using memfd_pin_folios() API Vivek Kasireddy
2 siblings, 0 replies; 200+ results
From: Vivek Kasireddy @ 2024-02-25 7:57 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Daniel Vetter, Mike Kravetz,
Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
Dongwon Kim, Junxiao Chang
A user or admin can configure a VMM (Qemu) Guest's memory to be
backed by hugetlb pages for various reasons. However, a Guest OS
would still allocate (and pin) buffers that are backed by regular
4K-sized pages. In order to map these buffers and create dma-bufs
for them on the Host, we first need to find the hugetlb pages where
the buffer allocations are located, then determine the offsets of
the individual chunks (within those pages), and finally use this
information to populate a scatterlist.
Testcase: default_hugepagesz=2M hugepagesz=2M hugepages=2500 options
were passed to the Host kernel and Qemu was launched with these
relevant options: qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-display gtk,gl=on
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
Replacing -display gtk,gl=on with -display gtk,gl=off above would
exercise the mmap handler.
Cc: David Hildenbrand <david@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com> (v2)
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
drivers/dma-buf/udmabuf.c | 122 +++++++++++++++++++++++++++++++-------
1 file changed, 101 insertions(+), 21 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 820c993c8659..274defd3fa3e 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -10,6 +10,7 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/shmem_fs.h>
+#include <linux/hugetlb.h>
#include <linux/slab.h>
#include <linux/udmabuf.h>
#include <linux/vmalloc.h>
@@ -28,6 +29,7 @@ struct udmabuf {
struct page **pages;
struct sg_table *sg;
struct miscdevice *device;
+ pgoff_t *offsets;
};
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -41,6 +43,8 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
pfn = page_to_pfn(ubuf->pages[pgoff]);
+ pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
+
return vmf_insert_pfn(vma, vmf->address, pfn);
}
@@ -90,23 +94,29 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
{
struct udmabuf *ubuf = buf->priv;
struct sg_table *sg;
+ struct scatterlist *sgl;
+ unsigned int i = 0;
int ret;
sg = kzalloc(sizeof(*sg), GFP_KERNEL);
if (!sg)
return ERR_PTR(-ENOMEM);
- ret = sg_alloc_table_from_pages(sg, ubuf->pages, ubuf->pagecount,
- 0, ubuf->pagecount << PAGE_SHIFT,
- GFP_KERNEL);
+
+ ret = sg_alloc_table(sg, ubuf->pagecount, GFP_KERNEL);
if (ret < 0)
- goto err;
+ goto err_alloc;
+
+ for_each_sg(sg->sgl, sgl, ubuf->pagecount, i)
+ sg_set_page(sgl, ubuf->pages[i], PAGE_SIZE, ubuf->offsets[i]);
+
ret = dma_map_sgtable(dev, sg, direction, 0);
if (ret < 0)
- goto err;
+ goto err_map;
return sg;
-err:
+err_map:
sg_free_table(sg);
+err_alloc:
kfree(sg);
return ERR_PTR(ret);
}
@@ -143,6 +153,7 @@ static void release_udmabuf(struct dma_buf *buf)
for (pg = 0; pg < ubuf->pagecount; pg++)
put_page(ubuf->pages[pg]);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
}
@@ -196,17 +207,77 @@ static const struct dma_buf_ops udmabuf_ops = {
#define SEALS_WANTED (F_SEAL_SHRINK)
#define SEALS_DENIED (F_SEAL_WRITE)
+static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ struct hstate *hpstate = hstate_file(memfd);
+ pgoff_t mapidx = offset >> huge_page_shift(hpstate);
+ pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
+ pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
+ struct page *hpage = NULL;
+ struct folio *folio;
+ pgoff_t pgidx;
+
+ mapidx <<= huge_page_order(hpstate);
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ if (!hpage) {
+ folio = __filemap_get_folio(memfd->f_mapping,
+ mapidx,
+ FGP_ACCESSED, 0);
+ if (IS_ERR(folio))
+ return PTR_ERR(folio);
+
+ hpage = &folio->page;
+ }
+
+ get_page(hpage);
+ ubuf->pages[*pgbuf] = hpage;
+ ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
+ (*pgbuf)++;
+ if (++subpgoff == maxsubpgs) {
+ put_page(hpage);
+ hpage = NULL;
+ subpgoff = 0;
+ mapidx += pages_per_huge_page(hpstate);
+ }
+ }
+
+ if (hpage)
+ put_page(hpage);
+
+ return 0;
+}
+
+static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
+ pgoff_t offset, pgoff_t pgcnt,
+ pgoff_t *pgbuf)
+{
+ pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
+ struct page *page;
+
+ for (pgidx = 0; pgidx < pgcnt; pgidx++) {
+ page = shmem_read_mapping_page(memfd->f_mapping,
+ pgoff + pgidx);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ ubuf->pages[*pgbuf] = page;
+ (*pgbuf)++;
+ }
+
+ return 0;
+}
+
static long udmabuf_create(struct miscdevice *device,
struct udmabuf_create_list *head,
struct udmabuf_create_item *list)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct file *memfd = NULL;
- struct address_space *mapping = NULL;
struct udmabuf *ubuf;
struct dma_buf *buf;
- pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
- struct page *page;
+ pgoff_t pgcnt, pgbuf = 0, pglimit;
int seals, ret = -EINVAL;
u32 i, flags;
@@ -234,6 +305,12 @@ static long udmabuf_create(struct miscdevice *device,
ret = -ENOMEM;
goto err;
}
+ ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
pgbuf = 0;
for (i = 0; i < head->count; i++) {
@@ -241,8 +318,7 @@ static long udmabuf_create(struct miscdevice *device,
memfd = fget(list[i].memfd);
if (!memfd)
goto err;
- mapping = memfd->f_mapping;
- if (!shmem_mapping(mapping))
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
goto err;
seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
if (seals == -EINVAL)
@@ -251,16 +327,19 @@ static long udmabuf_create(struct miscdevice *device,
if ((seals & SEALS_WANTED) != SEALS_WANTED ||
(seals & SEALS_DENIED) != 0)
goto err;
- pgoff = list[i].offset >> PAGE_SHIFT;
- pgcnt = list[i].size >> PAGE_SHIFT;
- for (pgidx = 0; pgidx < pgcnt; pgidx++) {
- page = shmem_read_mapping_page(mapping, pgoff + pgidx);
- if (IS_ERR(page)) {
- ret = PTR_ERR(page);
- goto err;
- }
- ubuf->pages[pgbuf++] = page;
- }
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+ if (is_file_hugepages(memfd))
+ ret = handle_hugetlb_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ else
+ ret = handle_shmem_pages(ubuf, memfd,
+ list[i].offset,
+ pgcnt, &pgbuf);
+ if (ret < 0)
+ goto err;
+
fput(memfd);
memfd = NULL;
}
@@ -287,6 +366,7 @@ static long udmabuf_create(struct miscdevice *device,
put_page(ubuf->pages[--pgbuf]);
if (memfd)
fput(memfd);
+ kfree(ubuf->offsets);
kfree(ubuf->pages);
kfree(ubuf);
return ret;
--
2.43.0
* [PATCH v12 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
@ 2024-02-25 7:56 4% Vivek Kasireddy
2024-02-25 7:57 4% ` [PATCH v12 5/8] udmabuf: Add back support for mapping hugetlb pages Vivek Kasireddy
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Vivek Kasireddy @ 2024-02-25 7:56 UTC (permalink / raw)
To: dri-devel, linux-mm
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox,
Christoph Hellwig, Andrew Morton, Daniel Vetter, Mike Kravetz,
Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
Dongwon Kim, Junxiao Chang
Currently, some drivers (e.g., Udmabuf) that want to longterm-pin
the pages/folios associated with a memfd do so by simply taking a
reference on them. This is not desirable because the pages/folios
may reside in the Movable zone or a CMA block.
Having drivers use the memfd_pin_folios() API instead ensures that
the folios are appropriately pinned via FOLL_PIN for longterm DMA.
This patchset also introduces a few helpers and converts the Udmabuf
driver to use folios and the memfd_pin_folios() API to longterm-pin
the folios for DMA. Two new Udmabuf selftests are also included to
test the driver and the new API.
---
Patchset overview:
Patch 1-2: GUP helpers to migrate and unpin one or more folios
Patch 3: Introduce memfd_pin_folios() API
Patch 4-5: Udmabuf driver bug fixes for Qemu + hugetlb=on, blob=true case
Patch 6-8: Convert Udmabuf to use memfd_pin_folios() and add selftests
This series is tested using the following methods:
- Run the subtests added in Patch 8
- Run Qemu (master) with the following options and a few additional
patches to Spice:
qemu-system-x86_64 -m 4096m....
-device virtio-gpu-pci,max_outputs=1,blob=true,xres=1920,yres=1080
-spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264
-object memory-backend-memfd,hugetlb=on,id=mem1,size=4096M
-machine memory-backend=mem1
- Run ./run_vmtests.sh -t gup_test -a to check for GUP regressions
Changelog:
v11 -> v12:
- Rebased and tested on mm-unstable
v10 -> v11:
- Remove the version string from the patch subject (Andrew)
- Move the changelog from the patches into the cover letter
- Rearrange the patchset to have GUP patches at the beginning
v9 -> v10:
- Introduce and use unpin_folio(), unpin_folios() and
check_and_migrate_movable_folios() helpers
- Use a list to track the folios that need to be unpinned in udmabuf
v8 -> v9: (suggestions from Matthew)
- Drop the extern while declaring memfd_alloc_folio()
- Fix memfd_alloc_folio() declaration to have it return struct folio *
instead of struct page * when CONFIG_MEMFD_CREATE is not defined
- Use folio_pfn() on the folio instead of page_to_pfn() on head page
in udmabuf
- Don't split the arguments to shmem_read_folio() on multiple lines
in udmabuf
v7 -> v8: (suggestions from David)
- Have caller pass [start, end], max_folios instead of start, nr_pages
- Replace offsets array with just offset into the first page
- Add comments explaining the need for next_idx
- Pin (and return) the folio (via FOLL_PIN) only once
v6 -> v7:
- Rename this API to memfd_pin_folios() and make it return folios
and offsets instead of pages (David)
- Don't continue processing the folios in the batch returned by
filemap_get_folios_contig() if they do not have correct next_idx
- Add the R-b tag from Christoph
v5 -> v6: (suggestions from Christoph)
- Rename this API to memfd_pin_user_pages() to make it clear that it
is intended for memfds
- Move the memfd page allocation helper from gup.c to memfd.c
- Fix indentation errors in memfd_pin_user_pages()
- For contiguous ranges of folios, use a helper such as
filemap_get_folios_contig() to lookup the page cache in batches
- Split the processing of hugetlb or shmem pages into helpers to
simplify the code in udmabuf_create()
v4 -> v5: (suggestions from David)
- For hugetlb case, ensure that we only obtain head pages from the
mapping by using __filemap_get_folio() instead of find_get_page_flags()
- Handle -EEXIST when two or more potential users try to simultaneously
add a huge page to the mapping by forcing them to retry on failure
v3 -> v4:
- Remove the local variable "page" and instead use 3 return statements
in alloc_file_page() (David)
- Add the R-b tag from David
v2 -> v3: (suggestions from David)
- Enclose the huge page allocation code with #ifdef CONFIG_HUGETLB_PAGE
(Build error reported by kernel test robot <lkp@intel.com>)
- Don't forget memalloc_pin_restore() on non-migration related errors
- Improve the readability of the cleanup code associated with
non-migration related errors
- Augment the comments by describing FOLL_LONGTERM like behavior
- Include the R-b tag from Jason
v1 -> v2:
- Drop gup_flags and improve comments and commit message (David)
- Allocate a page if we cannot find in page cache for the hugetlbfs
case as well (David)
- Don't unpin pages if there is a migration related failure (David)
- Drop the unnecessary nr_pages <= 0 check (Jason)
- Have the caller of the API pass in file * instead of fd (Jason)
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Junxiao Chang <junxiao.chang@intel.com>
Vivek Kasireddy (8):
mm/gup: Introduce unpin_folio/unpin_folios helpers
mm/gup: Introduce check_and_migrate_movable_folios()
mm/gup: Introduce memfd_pin_folios() for pinning memfd folios
udmabuf: Use vmf_insert_pfn and VM_PFNMAP for handling mmap
udmabuf: Add back support for mapping hugetlb pages
udmabuf: Convert udmabuf driver to use folios
udmabuf: Pin the pages using memfd_pin_folios() API
selftests/udmabuf: Add tests to verify data after page migration
drivers/dma-buf/udmabuf.c | 231 +++++++++---
include/linux/memfd.h | 5 +
include/linux/mm.h | 5 +
mm/gup.c | 346 +++++++++++++++---
mm/memfd.c | 34 ++
.../selftests/drivers/dma-buf/udmabuf.c | 151 +++++++-
6 files changed, 662 insertions(+), 110 deletions(-)
--
2.43.0
* Re: [syzbot] [mm] KMSAN: uninit-value in virtqueue_add (4)
@ 2024-02-25 0:52 5% ` syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-02-25 0:52 UTC (permalink / raw)
To: eadavis, linux-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KMSAN: uninit-value in virtqueue_add
=====================================================
BUG: KMSAN: uninit-value in vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
BUG: KMSAN: uninit-value in virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
BUG: KMSAN: uninit-value in virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
virtqueue_add_sgs+0x186/0x1a0 drivers/virtio/virtio_ring.c:2244
__virtscsi_add_cmd drivers/scsi/virtio_scsi.c:467 [inline]
virtscsi_add_cmd+0x817/0xa90 drivers/scsi/virtio_scsi.c:501
virtscsi_queuecommand+0x896/0xa60 drivers/scsi/virtio_scsi.c:598
scsi_dispatch_cmd drivers/scsi/scsi_lib.c:1516 [inline]
scsi_queue_rq+0x4874/0x5790 drivers/scsi/scsi_lib.c:1758
blk_mq_dispatch_rq_list+0x13f8/0x3600 block/blk-mq.c:2049
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
__blk_mq_sched_dispatch_requests+0x10af/0x2500 block/blk-mq-sched.c:309
blk_mq_sched_dispatch_requests+0x160/0x2d0 block/blk-mq-sched.c:333
blk_mq_run_work_fn+0xd0/0x280 block/blk-mq.c:2434
process_one_work kernel/workqueue.c:2627 [inline]
process_scheduled_works+0x104e/0x1e70 kernel/workqueue.c:2700
worker_thread+0xf45/0x1490 kernel/workqueue.c:2781
kthread+0x3ed/0x540 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
Uninit was created at:
__alloc_pages+0x9a4/0xe00 mm/page_alloc.c:4591
alloc_pages_mpol+0x62b/0x9d0 mm/mempolicy.c:2133
alloc_pages mm/mempolicy.c:2204 [inline]
folio_alloc+0x1da/0x380 mm/mempolicy.c:2211
filemap_alloc_folio+0xa5/0x430 mm/filemap.c:974
__filemap_get_folio+0xa5a/0x1760 mm/filemap.c:1918
ext4_da_write_begin+0x7f8/0xec0 fs/ext4/inode.c:2891
generic_perform_write+0x3f5/0xc40 mm/filemap.c:3927
ext4_buffered_write_iter+0x564/0xaa0 fs/ext4/file.c:299
ext4_file_write_iter+0x20f/0x3460
__kernel_write_iter+0x329/0x930 fs/read_write.c:517
dump_emit_page fs/coredump.c:888 [inline]
dump_user_range+0x593/0xcd0 fs/coredump.c:915
elf_core_dump+0x528d/0x5a40 fs/binfmt_elf.c:2077
do_coredump+0x32c9/0x4920 fs/coredump.c:764
get_signal+0x2185/0x2d10 kernel/signal.c:2890
arch_do_signal_or_restart+0x53/0xca0 arch/x86/kernel/signal.c:309
exit_to_user_mode_loop+0xe8/0x320 kernel/entry/common.c:168
exit_to_user_mode_prepare+0x163/0x220 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0xd/0x30 kernel/entry/common.c:309
irqentry_exit+0x16/0x40 kernel/entry/common.c:412
exc_page_fault+0x246/0x6f0 arch/x86/mm/fault.c:1566
asm_exc_page_fault+0x2b/0x30 arch/x86/include/asm/idtentry.h:570
Bytes 0-1023 of 1024 are uninitialized
Memory access of size 1024 starts at ffff88801e7d9c00
CPU: 0 PID: 52 Comm: kworker/0:1H Not tainted 6.7.0-syzkaller-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: kblockd blk_mq_run_work_fn
=====================================================
Tested on:
commit: 0dd3ee31 Linux 6.7
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.7
console output: https://syzkaller.appspot.com/x/log.txt?x=15dee522180000
kernel config: https://syzkaller.appspot.com/x/.config?x=373206b1ae2fe3d4
dashboard link: https://syzkaller.appspot.com/bug?extid=d7521c1e3841ed075a42
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=1524ca02180000
* Re: [syzbot] [mm] KMSAN: uninit-value in virtqueue_add (4)
@ 2024-02-25 0:21 5% ` syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-02-25 0:21 UTC (permalink / raw)
To: linux-kernel, penguin-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KMSAN: uninit-value in virtqueue_add
=====================================================
BUG: KMSAN: uninit-value in vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
BUG: KMSAN: uninit-value in virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
BUG: KMSAN: uninit-value in virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
virtqueue_add_sgs+0x186/0x1a0 drivers/virtio/virtio_ring.c:2244
__virtscsi_add_cmd drivers/scsi/virtio_scsi.c:467 [inline]
virtscsi_add_cmd+0x838/0xad0 drivers/scsi/virtio_scsi.c:501
virtscsi_queuecommand+0x896/0xa60 drivers/scsi/virtio_scsi.c:598
scsi_dispatch_cmd drivers/scsi/scsi_lib.c:1516 [inline]
scsi_queue_rq+0x4874/0x5790 drivers/scsi/scsi_lib.c:1758
blk_mq_dispatch_rq_list+0x13f8/0x3600 block/blk-mq.c:2049
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
__blk_mq_sched_dispatch_requests+0x10af/0x2500 block/blk-mq-sched.c:309
blk_mq_sched_dispatch_requests+0x160/0x2d0 block/blk-mq-sched.c:333
blk_mq_run_work_fn+0xd0/0x280 block/blk-mq.c:2434
process_one_work kernel/workqueue.c:2627 [inline]
process_scheduled_works+0x104e/0x1e70 kernel/workqueue.c:2700
worker_thread+0xf45/0x1490 kernel/workqueue.c:2781
kthread+0x3ed/0x540 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
Uninit was created at:
__alloc_pages+0x9a4/0xe00 mm/page_alloc.c:4591
alloc_pages_mpol+0x62b/0x9d0 mm/mempolicy.c:2133
alloc_pages mm/mempolicy.c:2204 [inline]
folio_alloc+0x1da/0x380 mm/mempolicy.c:2211
filemap_alloc_folio+0xa5/0x430 mm/filemap.c:974
__filemap_get_folio+0xa5a/0x1760 mm/filemap.c:1918
ext4_da_write_begin+0x7f8/0xec0 fs/ext4/inode.c:2891
generic_perform_write+0x3f5/0xc40 mm/filemap.c:3927
ext4_buffered_write_iter+0x564/0xaa0 fs/ext4/file.c:299
ext4_file_write_iter+0x20f/0x3460
__kernel_write_iter+0x329/0x930 fs/read_write.c:517
dump_emit_page fs/coredump.c:888 [inline]
dump_user_range+0x593/0xcd0 fs/coredump.c:915
elf_core_dump+0x528d/0x5a40 fs/binfmt_elf.c:2077
do_coredump+0x32c9/0x4920 fs/coredump.c:764
get_signal+0x2185/0x2d10 kernel/signal.c:2890
arch_do_signal_or_restart+0x53/0xca0 arch/x86/kernel/signal.c:309
exit_to_user_mode_loop+0xe8/0x320 kernel/entry/common.c:168
exit_to_user_mode_prepare+0x163/0x220 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0xd/0x30 kernel/entry/common.c:309
irqentry_exit+0x16/0x40 kernel/entry/common.c:412
exc_page_fault+0x246/0x6f0 arch/x86/mm/fault.c:1566
asm_exc_page_fault+0x2b/0x30 arch/x86/include/asm/idtentry.h:570
Bytes 0-4095 of 4096 are uninitialized
Memory access of size 4096 starts at ffff888037212000
CPU: 1 PID: 51 Comm: kworker/1:1H Not tainted 6.7.0-syzkaller-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: kblockd blk_mq_run_work_fn
=====================================================
Tested on:
commit: 0dd3ee31 Linux 6.7
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.7
console output: https://syzkaller.appspot.com/x/log.txt?x=1462a106180000
kernel config: https://syzkaller.appspot.com/x/.config?x=373206b1ae2fe3d4
dashboard link: https://syzkaller.appspot.com/bug?extid=d7521c1e3841ed075a42
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=1455d9d8180000
* Re: [syzbot] [mm] KMSAN: uninit-value in virtqueue_add (4)
@ 2024-02-24 14:24 5% ` syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-02-24 14:24 UTC (permalink / raw)
To: linux-kernel, penguin-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KMSAN: uninit-value in virtqueue_add
=====================================================
BUG: KMSAN: uninit-value in vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
BUG: KMSAN: uninit-value in virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
BUG: KMSAN: uninit-value in virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
vring_map_one_sg drivers/virtio/virtio_ring.c:380 [inline]
virtqueue_add_split drivers/virtio/virtio_ring.c:614 [inline]
virtqueue_add+0x21c6/0x6530 drivers/virtio/virtio_ring.c:2210
virtqueue_add_sgs+0x186/0x1a0 drivers/virtio/virtio_ring.c:2244
__virtscsi_add_cmd drivers/scsi/virtio_scsi.c:467 [inline]
virtscsi_add_cmd+0x838/0xad0 drivers/scsi/virtio_scsi.c:501
virtscsi_queuecommand+0x896/0xa60 drivers/scsi/virtio_scsi.c:598
scsi_dispatch_cmd drivers/scsi/scsi_lib.c:1516 [inline]
scsi_queue_rq+0x4874/0x5790 drivers/scsi/scsi_lib.c:1758
blk_mq_dispatch_rq_list+0x13f8/0x3600 block/blk-mq.c:2049
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
__blk_mq_sched_dispatch_requests+0x10af/0x2500 block/blk-mq-sched.c:309
blk_mq_sched_dispatch_requests+0x160/0x2d0 block/blk-mq-sched.c:333
blk_mq_run_work_fn+0xd0/0x280 block/blk-mq.c:2434
process_one_work kernel/workqueue.c:2627 [inline]
process_scheduled_works+0x104e/0x1e70 kernel/workqueue.c:2700
worker_thread+0xf45/0x1490 kernel/workqueue.c:2781
kthread+0x3ed/0x540 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
Uninit was created at:
__alloc_pages+0x9a4/0xe00 mm/page_alloc.c:4591
alloc_pages_mpol+0x62b/0x9d0 mm/mempolicy.c:2133
alloc_pages mm/mempolicy.c:2204 [inline]
folio_alloc+0x1da/0x380 mm/mempolicy.c:2211
filemap_alloc_folio+0xa5/0x430 mm/filemap.c:974
__filemap_get_folio+0xa5a/0x1760 mm/filemap.c:1918
ext4_da_write_begin+0x7f8/0xec0 fs/ext4/inode.c:2891
generic_perform_write+0x3f5/0xc40 mm/filemap.c:3927
ext4_buffered_write_iter+0x564/0xaa0 fs/ext4/file.c:299
ext4_file_write_iter+0x20f/0x3460
__kernel_write_iter+0x329/0x930 fs/read_write.c:517
dump_emit_page fs/coredump.c:888 [inline]
dump_user_range+0x593/0xcd0 fs/coredump.c:915
elf_core_dump+0x528d/0x5a40 fs/binfmt_elf.c:2077
do_coredump+0x32c9/0x4920 fs/coredump.c:764
get_signal+0x2185/0x2d10 kernel/signal.c:2890
arch_do_signal_or_restart+0x53/0xca0 arch/x86/kernel/signal.c:309
exit_to_user_mode_loop+0xe8/0x320 kernel/entry/common.c:168
exit_to_user_mode_prepare+0x163/0x220 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0xd/0x30 kernel/entry/common.c:309
irqentry_exit+0x16/0x40 kernel/entry/common.c:412
exc_page_fault+0x246/0x6f0 arch/x86/mm/fault.c:1566
asm_exc_page_fault+0x2b/0x30 arch/x86/include/asm/idtentry.h:570
Bytes 0-4095 of 4096 are uninitialized
Memory access of size 4096 starts at ffff88803438f000
CPU: 0 PID: 51 Comm: kworker/0:1H Not tainted 6.7.0-syzkaller-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: kblockd blk_mq_run_work_fn
=====================================================
Tested on:
commit: 0dd3ee31 Linux 6.7
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.7
console output: https://syzkaller.appspot.com/x/log.txt?x=147162c4180000
kernel config: https://syzkaller.appspot.com/x/.config?x=373206b1ae2fe3d4
dashboard link: https://syzkaller.appspot.com/bug?extid=d7521c1e3841ed075a42
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=12a294c4180000
* [merged mm-stable] mmpage_owner-display-all-stacks-and-their-count.patch removed from -mm tree
@ 2024-02-24 1:49 5% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-24 1:49 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The quilt patch titled
Subject: mm,page_owner: display all stacks and their count
has been removed from the -mm tree. Its filename was
mmpage_owner-display-all-stacks-and-their-count.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: mm,page_owner: display all stacks and their count
Date: Thu, 15 Feb 2024 22:59:05 +0100
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it. Reading from
that file shows all stacks that were added by page_owner, each followed by
its count, giving us a clear overview of the stack <-> count
relationship.
E.g.:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Link: https://lkml.kernel.org/r/20240215215907.20121-6-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 93 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
--- a/mm/page_owner.c~mmpage_owner-display-all-stacks-and-their-count
+++ a/mm/page_owner.c
@@ -171,7 +171,13 @@ static void add_stack_record_to_list(str
spin_lock_irqsave(&stack_list_lock, flags);
stack->next = stack_list;
- stack_list = stack;
+ /*
+ * This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -805,8 +811,90 @@ static const struct file_operations proc
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ int i, stack_count;
+ struct stack *stack = v;
+ unsigned long *entries;
+ unsigned long nr_entries;
+ struct stack_record *stack_record = stack->stack_record;
+
+ nr_entries = stack_record->size;
+ entries = stack_record->entries;
+ stack_count = refcount_read(&stack_record->count) - 1;
+
+ if (!nr_entries || nr_entries < 0 || stack_count < 1)
+ return 0;
+
+ for (i = 0; i < nr_entries; i++)
+ seq_printf(m, " %pS\n", (void *)entries[i]);
+ seq_printf(m, "stack_count: %d\n\n", stack_count);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -814,6 +902,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
_
Patches currently in -mm which might be from osalvador@suse.de are
* [merged mm-stable] lib-stackdepot-fix-first-entry-having-a-0-handle.patch removed from -mm tree
@ 2024-02-24 1:49 6% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-24 1:49 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The quilt patch titled
Subject: lib/stackdepot: fix first entry having a 0-handle
has been removed from the -mm tree. Its filename was
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: lib/stackdepot: fix first entry having a 0-handle
Date: Thu, 15 Feb 2024 22:59:01 +0100
Patch series "page_owner: print stacks and their outstanding allocations",
v10.
page_owner is a great debugging tool that lets us know about all
pages that have been allocated/freed and their specific stacktrace. This
comes in very handy when debugging memory leaks, since with some scripting
we can see the outstanding allocations, which might point to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship; with tons of allocation/free
operations it becomes a daunting and slow process most of the time.
This patchset aims to ease that by adding new functionality to
page_owner. This functionality creates a new directory called
'page_owner_stacks' under '/sys/kernel/debug' with a read-only file called
'show_stacks', which prints out all the stacks followed by their
outstanding number of allocations (that is, the number of times the
stacktrace has allocated but not yet freed). This gives us a clear and
quick overview of the stack <-> outstanding-allocations relationship.
We take advantage of the new refcount field that the stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner() (free
operation) call.
Unfortunately, we cannot use the new stackdepot API STACK_DEPOT_FLAG_GET
because it does not fulfill page_owner's needs, meaning we would have to
special-case things, at which point it makes more sense for page_owner to
do its own {inc,dec}rementing of the stacks. E.g.: with
STACK_DEPOT_FLAG_PUT, once the refcount reaches 0, the stack gets
evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within
the 'page_owner_stacks' directory; by writing a value to it, stacks
whose refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
This patch (of 7):
The very first entry of stack_record gets a handle of 0, but this is wrong
because stackdepot treats a 0-handle as an invalid one. E.g.: see the
check in stack_depot_fetch().
Fix this by adding an offset of 1.
This bug has been lurking since the very beginning of stackdepot, but it
seems no one really cared. Because of that I am not adding a Fixes
tag.
Link: https://lkml.kernel.org/r/20240215215907.20121-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20240215215907.20121-2-osalvador@suse.de
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
lib/stackdepot.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
--- a/lib/stackdepot.c~lib-stackdepot-fix-first-entry-having-a-0-handle
+++ a/lib/stackdepot.c
@@ -45,15 +45,16 @@
#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
+/* The pool_index is offset by 1 so the first record does not have a 0 handle. */
#define DEPOT_MAX_POOLS \
- (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
- (1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
+ (((1LL << (DEPOT_POOL_INDEX_BITS)) - 1 < DEPOT_POOLS_CAP) ? \
+ (1LL << (DEPOT_POOL_INDEX_BITS)) - 1 : DEPOT_POOLS_CAP)
/* Compact structure that stores a reference to a stack. */
union handle_parts {
depot_stack_handle_t handle;
struct {
- u32 pool_index : DEPOT_POOL_INDEX_BITS;
+ u32 pool_index : DEPOT_POOL_INDEX_BITS; /* pool_index is offset by 1 */
u32 offset : DEPOT_OFFSET_BITS;
u32 extra : STACK_DEPOT_EXTRA_BITS;
};
@@ -372,7 +373,7 @@ static struct stack_record *depot_pop_fr
stack = current_pool + pool_offset;
/* Pre-initialize handle once. */
- stack->handle.pool_index = pool_index;
+ stack->handle.pool_index = pool_index + 1;
stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
stack->handle.extra = 0;
INIT_LIST_HEAD(&stack->hash_list);
@@ -483,18 +484,19 @@ static struct stack_record *depot_fetch_
const int pools_num_cached = READ_ONCE(pools_num);
union handle_parts parts = { .handle = handle };
void *pool;
+ u32 pool_index = parts.pool_index - 1;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;
lockdep_assert_not_held(&pool_lock);
- if (parts.pool_index > pools_num_cached) {
+ if (pool_index > pools_num_cached) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pools_num_cached, handle);
+ pool_index, pools_num_cached, handle);
return NULL;
}
- pool = stack_pools[parts.pool_index];
+ pool = stack_pools[pool_index];
if (WARN_ON(!pool))
return NULL;
_
Patches currently in -mm which might be from osalvador@suse.de are
* Re: [PATCH v3 00/11] Mitigate a vmap lock contention v3
@ 2024-02-22 8:35 8% ` Uladzislau Rezki
0 siblings, 0 replies; 200+ results
From: Uladzislau Rezki @ 2024-02-22 8:35 UTC (permalink / raw)
To: Matthew Wilcox, Mel Gorman, kirill.shutemov, Vishal Moola
Cc: Andrew Morton, LKML, Baoquan He, Lorenzo Stoakes,
Christoph Hellwig, Liam R . Howlett, Dave Chinner,
Paul E . McKenney, Joel Fernandes, Oleksiy Avramchenko, linux-mm
Hello, Folk!
> This is v3. It is based on the 6.7.0-rc8.
>
> 1. Motivation
>
> - Offload global vmap locks, making them scale with the number of CPUs;
> - If possible and there is agreement, we can remove the "Per cpu kva allocator"
> to make the vmap code simpler;
> - There were complaints from XFS folks that a vmalloc might be contended
> on their workloads.
>
> 2. Design(high level overview)
>
> We introduce an effective vmap node logic. A node behaves as an independent
> entity serving an allocation request directly (if possible) from its pool.
> That way it bypasses the global vmap space that is protected by its own lock.
>
> Access to pools is serialized by CPUs. The number of nodes is equal to the
> number of CPUs in a system. Please note the upper threshold is bound to
> 128 nodes.
>
> Pools are size segregated and populated based on system demand. The maximum
> alloc request that can be stored in a segregated storage is 256 pages. The
> lazy drain path decays a pool by 25% as a first step and, as a second, populates
> it with freshly freed VAs for reuse instead of returning them to the global space.
>
> When a VA is obtained (alloc path), it is stored in a separate node. A va->va_start
> address is converted into the correct node where it should be placed and reside.
> Doing so we balance VAs across the nodes; as a result, access becomes scalable.
> The addr_to_node() function does a proper address conversion to a correct node.
>
> The vmap space is divided into fixed-size segments of 16 pages. That way
> any address can be associated with a segment number. The number of segments is
> equal to num_possible_cpus() but not greater than 128. The numbering starts
> from 0. See below how it is converted:
>
> static inline unsigned int
> addr_to_node_id(unsigned long addr)
> {
> return (addr / zone_size) % nr_nodes;
> }
>
> On the free path, a VA can easily be found by converting its "va_start" address
> to the node it resides in. It is moved from the "busy" to the "lazy" data structure.
> Later on, as noted earlier, the lazy kworker decays each node pool and repopulates it
> with freshly freed VAs. Please note, a VA is returned to the node that did the alloc
> request.
>
> 3. Test on AMD Ryzen Threadripper 3970X 32-Core Processor
>
> sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64
>
> <default perf>
> 94.41% 0.89% [kernel] [k] _raw_spin_lock
> 93.35% 93.07% [kernel] [k] native_queued_spin_lock_slowpath
> 76.13% 0.28% [kernel] [k] __vmalloc_node_range
> 72.96% 0.81% [kernel] [k] alloc_vmap_area
> 56.94% 0.00% [kernel] [k] __get_vm_area_node
> 41.95% 0.00% [kernel] [k] vmalloc
> 37.15% 0.01% [test_vmalloc] [k] full_fit_alloc_test
> 35.17% 0.00% [kernel] [k] ret_from_fork_asm
> 35.17% 0.00% [kernel] [k] ret_from_fork
> 35.17% 0.00% [kernel] [k] kthread
> 35.08% 0.00% [test_vmalloc] [k] test_func
> 34.45% 0.00% [test_vmalloc] [k] fix_size_alloc_test
> 28.09% 0.01% [test_vmalloc] [k] long_busy_list_alloc_test
> 23.53% 0.25% [kernel] [k] vfree.part.0
> 21.72% 0.00% [kernel] [k] remove_vm_area
> 20.08% 0.21% [kernel] [k] find_unlink_vmap_area
> 2.34% 0.61% [kernel] [k] free_vmap_area_noflush
> <default perf>
> vs
> <patch-series perf>
> 82.32% 0.22% [test_vmalloc] [k] long_busy_list_alloc_test
> 63.36% 0.02% [kernel] [k] vmalloc
> 63.34% 2.64% [kernel] [k] __vmalloc_node_range
> 30.42% 4.46% [kernel] [k] vfree.part.0
> 28.98% 2.51% [kernel] [k] __alloc_pages_bulk
> 27.28% 0.19% [kernel] [k] __get_vm_area_node
> 26.13% 1.50% [kernel] [k] alloc_vmap_area
> 21.72% 21.67% [kernel] [k] clear_page_rep
> 19.51% 2.43% [kernel] [k] _raw_spin_lock
> 16.61% 16.51% [kernel] [k] native_queued_spin_lock_slowpath
> 13.40% 2.07% [kernel] [k] free_unref_page
> 10.62% 0.01% [kernel] [k] remove_vm_area
> 9.02% 8.73% [kernel] [k] insert_vmap_area
> 8.94% 0.00% [kernel] [k] ret_from_fork_asm
> 8.94% 0.00% [kernel] [k] ret_from_fork
> 8.94% 0.00% [kernel] [k] kthread
> 8.29% 0.00% [test_vmalloc] [k] test_func
> 7.81% 0.05% [test_vmalloc] [k] full_fit_alloc_test
> 5.30% 4.73% [kernel] [k] purge_vmap_node
> 4.47% 2.65% [kernel] [k] free_vmap_area_noflush
> <patch-series perf>
>
> confirms that native_queued_spin_lock_slowpath goes down to
> 16.51% from 93.07%.
>
> The throughput is ~12x higher:
>
> urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64
> Run the test with following parameters: run_test_mask=7 nr_threads=64
> Done.
> Check the kernel ring buffer to see the summary.
>
> real 10m51.271s
> user 0m0.013s
> sys 0m0.187s
> urezki@pc638:~$
>
> urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64
> Run the test with following parameters: run_test_mask=7 nr_threads=64
> Done.
> Check the kernel ring buffer to see the summary.
>
> real 0m51.301s
> user 0m0.015s
> sys 0m0.040s
> urezki@pc638:~$
>
> 4. Changelog
>
> v1: https://lore.kernel.org/linux-mm/ZIAqojPKjChJTssg@pc636/T/
> v2: https://lore.kernel.org/lkml/20230829081142.3619-1-urezki@gmail.com/
>
> Delta v2 -> v3:
> - fix comments from v2 feedback;
> - switch from pre-fetch chunk logic to a less complex size based pools.
>
> Baoquan He (1):
> mm/vmalloc: remove vmap_area_list
>
> Uladzislau Rezki (Sony) (10):
> mm: vmalloc: Add va_alloc() helper
> mm: vmalloc: Rename adjust_va_to_fit_type() function
> mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
> mm: vmalloc: Remove global vmap_area_root rb-tree
> mm: vmalloc: Remove global purge_vmap_area_root rb-tree
> mm: vmalloc: Offload free_vmap_area_lock lock
> mm: vmalloc: Support multiple nodes in vread_iter
> mm: vmalloc: Support multiple nodes in vmallocinfo
> mm: vmalloc: Set nr_nodes based on CPUs in a system
> mm: vmalloc: Add a shrinker to drain vmap pools
>
> .../admin-guide/kdump/vmcoreinfo.rst | 8 +-
> arch/arm64/kernel/crash_core.c | 1 -
> arch/riscv/kernel/crash_core.c | 1 -
> include/linux/vmalloc.h | 1 -
> kernel/crash_core.c | 4 +-
> kernel/kallsyms_selftest.c | 1 -
> mm/nommu.c | 2 -
> mm/vmalloc.c | 1049 ++++++++++++-----
> 8 files changed, 786 insertions(+), 281 deletions(-)
>
> --
> 2.39.2
>
There is one thing I have to clarify, which is still open for me.
Test machine:
qemu x86_64 system
64 CPUs
64G of memory
test suite:
test_vmalloc.sh
environment:
mm-unstable, branch: next-20240220, where this series
is located. On top of it I locally added Suren Baghdasaryan's
Memory allocation profiling v3 for a better understanding of memory
usage.
Before running the tests, the condition is as below:
urezki@pc638:~$ sort -h /proc/allocinfo
27.2MiB 6970 mm/memory.c:1122 module:memory func:folio_prealloc
79.1MiB 20245 mm/readahead.c:247 module:readahead func:page_cache_ra_unbounded
112MiB 8689 mm/slub.c:2202 module:slub func:alloc_slab_page
122MiB 31168 mm/page_ext.c:270 module:page_ext func:alloc_page_ext
urezki@pc638:~$ free -m
total used free shared buff/cache available
Mem: 64172 936 63618 0 134 63236
Swap: 0 0 0
urezki@pc638:~$
The test suite stresses the vmap/vmalloc layer by creating workers which
do alloc/free in a tight loop, i.e. it is considered extreme. Below, three
identical tests were done with only one difference: 64, 128 and 256 kworkers:
1) sudo tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=64
urezki@pc638:~$ sort -h /proc/allocinfo
80.1MiB 20518 mm/readahead.c:247 module:readahead func:page_cache_ra_unbounded
122MiB 31168 mm/page_ext.c:270 module:page_ext func:alloc_page_ext
153MiB 39048 mm/filemap.c:1919 module:filemap func:__filemap_get_folio
178MiB 13259 mm/slub.c:2202 module:slub func:alloc_slab_page
350MiB 89656 include/linux/mm.h:2848 module:memory func:pagetable_alloc
urezki@pc638:~$ free -m
total used free shared buff/cache available
Mem: 64172 1417 63054 0 298 62755
Swap: 0 0 0
urezki@pc638:~$
2) sudo tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=128
urezki@pc638:~$ sort -h /proc/allocinfo
122MiB 31168 mm/page_ext.c:270 module:page_ext func:alloc_page_ext
154MiB 39440 mm/filemap.c:1919 module:filemap func:__filemap_get_folio
196MiB 14038 mm/slub.c:2202 module:slub func:alloc_slab_page
1.20GiB 315655 include/linux/mm.h:2848 module:memory func:pagetable_alloc
urezki@pc638:~$ free -m
total used free shared buff/cache available
Mem: 64172 2556 61914 0 302 61616
Swap: 0 0 0
urezki@pc638:~$
3) sudo tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
urezki@pc638:~$ sort -h /proc/allocinfo
127MiB 32565 mm/readahead.c:247 module:readahead func:page_cache_ra_unbounded
197MiB 50506 mm/filemap.c:1919 module:filemap func:__filemap_get_folio
278MiB 18519 mm/slub.c:2202 module:slub func:alloc_slab_page
5.36GiB 1405072 include/linux/mm.h:2848 module:memory func:pagetable_alloc
urezki@pc638:~$ free -m
total used free shared buff/cache available
Mem: 64172 6741 57652 0 394 57431
Swap: 0 0 0
urezki@pc638:~$
pagetable_alloc - increases as soon as higher pressure is applied by
increasing the number of workers. Running the same number of jobs on a
subsequent run does not increase it; it stays at the same level as before.
/**
* pagetable_alloc - Allocate pagetables
* @gfp: GFP flags
* @order: desired pagetable order
*
* pagetable_alloc allocates memory for page tables as well as a page table
* descriptor to describe that memory.
*
* Return: The ptdesc describing the allocated page tables.
*/
static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
{
struct page *page = alloc_pages(gfp | __GFP_COMP, order);
return page_ptdesc(page);
}
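One way to watch this is to diff the pagetable_alloc line between two /proc/allocinfo snapshots; a self-contained sketch, using the counts from the runs above as sample data in place of the live file:

```shell
#!/bin/sh
# Compare the pagetable_alloc object count between two allocinfo
# snapshots. The sample lines stand in for `grep pagetable_alloc
# /proc/allocinfo` taken before and after a test run.
before="350MiB 89656 include/linux/mm.h:2848 module:memory func:pagetable_alloc"
after="1.20GiB 315655 include/linux/mm.h:2848 module:memory func:pagetable_alloc"

# Second field of an allocinfo line is the outstanding object count.
count() { echo "$1" | awk '{print $2}'; }

delta=$(( $(count "$after") - $(count "$before") ))
echo "pagetable_alloc grew by $delta objects"
```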
Could you please comment on it? Or do you have any thoughts? Is it expected?
Do page tables ever shrink?
/proc/slabinfo does not show any high "active" or "number" of objects in
use by any cache.
/proc/meminfo - "VmallocUsed" stays low after those 3 tests.
I have checked it with KASAN and KMEMLEAK, and I do not see any issues.
Thank you for the help!
--
Uladzislau Rezki
* [PATCH v4 36/36] memprofiling: Documentation
2024-02-21 19:40 3% [PATCH v4 00/36] Memory allocation profiling Suren Baghdasaryan
2024-02-21 19:40 3% ` [PATCH v4 14/36] lib: add allocation tagging support for memory " Suren Baghdasaryan
@ 2024-02-21 19:40 5% ` Suren Baghdasaryan
2024-02-27 13:36 0% ` [PATCH v4 00/36] Memory allocation profiling Vlastimil Babka
2 siblings, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-02-21 19:40 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, tj, muchun.song, rppt, paulmck, pasha.tatashin,
yosryahmed, yuzhao, dhowells, hughd, andreyknvl, keescook,
ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, bristot, vschneid, cl,
penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver, dvyukov,
shakeelb, songmuchun, jbaron, rientjes, minchan, kaleshsingh,
surenb, kernel-team, linux-doc, linux-kernel, iommu, linux-arch,
linux-fsdevel, linux-mm, linux-modules, kasan-dev, cgroups
From: Kent Overstreet <kent.overstreet@linux.dev>
Provide documentation for memory allocation profiling.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Documentation/mm/allocation-profiling.rst | 86 +++++++++++++++++++++++
1 file changed, 86 insertions(+)
create mode 100644 Documentation/mm/allocation-profiling.rst
diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
new file mode 100644
index 000000000000..2bcbd9e51fe4
--- /dev/null
+++ b/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,86 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+ - CONFIG_MEM_ALLOC_PROFILING
+ - CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ - CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling=1
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output:
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+===================
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at runtime
+- i.e. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+ - declares a code tag
+ - stashes a pointer to it in task_struct
+ - calls the real allocation function
+ - and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+ - switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+ - rename it to foo_noprof()
+ - define a macro version of foo() like so:
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+ - Hook your data structure's init function, like any other allocation function
+ - Within your init function, use the convenience macro alloc_tag_record() to
+ record alloc tag in your data structure.
+ - Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
--
2.44.0.rc0.258.g7320e95886-goog
^ permalink raw reply related [relevance 5%]
* [PATCH v4 14/36] lib: add allocation tagging support for memory allocation profiling
2024-02-21 19:40 3% [PATCH v4 00/36] Memory allocation profiling Suren Baghdasaryan
@ 2024-02-21 19:40 3% ` Suren Baghdasaryan
2024-02-21 19:40 5% ` [PATCH v4 36/36] memprofiling: Documentation Suren Baghdasaryan
2024-02-27 13:36 0% ` [PATCH v4 00/36] Memory allocation profiling Vlastimil Babka
2 siblings, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-02-21 19:40 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, tj, muchun.song, rppt, paulmck, pasha.tatashin,
yosryahmed, yuzhao, dhowells, hughd, andreyknvl, keescook,
ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, bristot, vschneid, cl,
penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver, dvyukov,
shakeelb, songmuchun, jbaron, rientjes, minchan, kaleshsingh,
surenb, kernel-team, linux-doc, linux-kernel, iommu, linux-arch,
linux-fsdevel, linux-mm, linux-modules, kasan-dev, cgroups
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
instrument memory allocators. It registers an "alloc_tags" codetag type
with /proc/allocinfo interface to output allocation tag information when
the feature is enabled.
CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
allocation profiling instrumentation.
Memory allocation profiling can be enabled or disabled at runtime using
/proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
profiling by default.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
Documentation/admin-guide/sysctl/vm.rst | 16 +++
Documentation/filesystems/proc.rst | 29 +++++
include/asm-generic/codetag.lds.h | 14 +++
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 144 +++++++++++++++++++++++
include/linux/sched.h | 24 ++++
lib/Kconfig.debug | 25 ++++
lib/Makefile | 2 +
lib/alloc_tag.c | 149 ++++++++++++++++++++++++
scripts/module.lds.S | 7 ++
10 files changed, 413 insertions(+)
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 lib/alloc_tag.c
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index c59889de122b..e86c968a7a0e 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
+- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
@@ -425,6 +426,21 @@ e.g., up to one or two maps per allocation.
The default value is 65530.
+mem_profiling
+==============
+
+Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
+
+1: Enable memory profiling.
+
+0: Disable memory profiling.
+
+Enabling memory profiling introduces a small performance overhead for all
+memory allocations.
+
+The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.
+
+
memory_failure_early_kill:
==========================
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 104c6d047d9b..8150dc3d689c 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -688,6 +688,7 @@ files are there, and which are missing.
============ ===============================================================
File Content
============ ===============================================================
+ allocinfo Memory allocations profiling information
apm Advanced power management info
bootconfig Kernel command line obtained from boot config,
and, if there were kernel parameters from the
@@ -953,6 +954,34 @@ also be allocatable although a lot of filesystem metadata may have to be
reclaimed to achieve this.
+allocinfo
+~~~~~~~~~
+
+Provides information about memory allocations at all locations in the code
+base. Each allocation in the code is identified by its source file, line
+number, module (if it originates from a loadable module) and the function calling
+the allocation. The number of bytes allocated and number of calls at each
+location are reported.
+
+Example output.
+
+::
+
+ > sort -rn /proc/allocinfo
+ 127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
+ 56373248 4737 mm/slub.c:2259 func:alloc_slab_page
+ 14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
+ 9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
+ 4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 3940352 962 mm/memory.c:4214 func:alloc_anon_folio
+ 2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ ...
+
+
meminfo
~~~~~~~
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h
new file mode 100644
index 000000000000..64f536b80380
--- /dev/null
+++ b/include/asm-generic/codetag.lds.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_GENERIC_CODETAG_LDS_H
+#define __ASM_GENERIC_CODETAG_LDS_H
+
+#define SECTION_WITH_BOUNDARIES(_name) \
+ . = ALIGN(8); \
+ __start_##_name = .; \
+ KEEP(*(_name)) \
+ __stop_##_name = .;
+
+#define CODETAG_SECTIONS() \
+ SECTION_WITH_BOUNDARIES(alloc_tags)
+
+#endif /* __ASM_GENERIC_CODETAG_LDS_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 5dd3a61d673d..c9997dc50c50 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -50,6 +50,8 @@
* [__nosave_begin, __nosave_end] for the nosave data
*/
+#include <asm-generic/codetag.lds.h>
+
#ifndef LOAD_OFFSET
#define LOAD_OFFSET 0
#endif
@@ -366,6 +368,7 @@
. = ALIGN(8); \
BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \
BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \
+ CODETAG_SECTIONS() \
LIKELY_PROFILE() \
BRANCH_PROFILE() \
TRACE_PRINTKS() \
diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
new file mode 100644
index 000000000000..be3ba955846c
--- /dev/null
+++ b/include/linux/alloc_tag.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * allocation tagging
+ */
+#ifndef _LINUX_ALLOC_TAG_H
+#define _LINUX_ALLOC_TAG_H
+
+#include <linux/bug.h>
+#include <linux/codetag.h>
+#include <linux/container_of.h>
+#include <linux/preempt.h>
+#include <asm/percpu.h>
+#include <linux/cpumask.h>
+#include <linux/static_key.h>
+
+struct alloc_tag_counters {
+ u64 bytes;
+ u64 calls;
+};
+
+/*
+ * An instance of this structure is created in a special ELF section at every
+ * allocation callsite. At runtime, the special section is treated as
+ * an array of these. The embedded codetag uses the codetag framework.
+ */
+struct alloc_tag {
+ struct codetag ct;
+ struct alloc_tag_counters __percpu *counters;
+} __aligned(8);
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+
+static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct)
+{
+ return container_of(ct, struct alloc_tag, ct);
+}
+
+#ifdef ARCH_NEEDS_WEAK_PER_CPU
+/*
+ * When percpu variables are required to be defined as weak, static percpu
+ * variables can't be used inside a function (see comments for DECLARE_PER_CPU_SECTION).
+ */
+#error "Memory allocation profiling is incompatible with ARCH_NEEDS_WEAK_PER_CPU"
+#endif
+
+#define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+ __section("alloc_tags") = { \
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
+DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static inline bool mem_alloc_profiling_enabled(void)
+{
+ return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ &mem_alloc_profiling_key);
+}
+
+static inline struct alloc_tag_counters alloc_tag_read(struct alloc_tag *tag)
+{
+ struct alloc_tag_counters v = { 0, 0 };
+ struct alloc_tag_counters *counter;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ counter = per_cpu_ptr(tag->counters, cpu);
+ v.bytes += counter->bytes;
+ v.calls += counter->calls;
+ }
+
+ return v;
+}
+
+static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ struct alloc_tag *tag;
+
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n");
+#endif
+ if (!ref || !ref->ct)
+ return;
+
+ tag = ct_to_alloc_tag(ref->ct);
+
+ this_cpu_sub(tag->counters->bytes, bytes);
+ this_cpu_dec(tag->counters->calls);
+
+ ref->ct = NULL;
+}
+
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes)
+{
+ __alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes)
+{
+ __alloc_tag_sub(ref, bytes);
+}
+
+static inline void alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN_ONCE(ref && ref->ct,
+ "alloc_tag was not cleared (got tag for %s:%u)\n",\
+ ref->ct->filename, ref->ct->lineno);
+
+ WARN_ONCE(!tag, "current->alloc_tag not set");
+#endif
+ if (!ref || !tag)
+ return;
+
+ ref->ct = &tag->ct;
+ /*
+ * We need to increment the call counter every time we have a new
+ * allocation or when we split a large allocation into smaller ones.
+ * Each new reference for every sub-allocation needs to increment the call
+ * counter because when we free each part the counter will be decremented.
+ */
+ this_cpu_inc(tag->counters->calls);
+}
+
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
+{
+ alloc_tag_ref_set(ref, tag);
+ this_cpu_add(tag->counters->bytes, bytes);
+}
+
+#else /* CONFIG_MEM_ALLOC_PROFILING */
+
+#define DEFINE_ALLOC_TAG(_alloc_tag)
+static inline bool mem_alloc_profiling_enabled(void) { return false; }
+static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {}
+static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
+ size_t bytes) {}
+
+#endif /* CONFIG_MEM_ALLOC_PROFILING */
+
+#endif /* _LINUX_ALLOC_TAG_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffe8f618ab86..eede1f92bcc6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -770,6 +770,10 @@ struct task_struct {
unsigned int flags;
unsigned int ptrace;
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+ struct alloc_tag *alloc_tag;
+#endif
+
#ifdef CONFIG_SMP
int on_cpu;
struct __call_single_node wake_entry;
@@ -810,6 +814,7 @@ struct task_struct {
struct task_group *sched_task_group;
#endif
+
#ifdef CONFIG_UCLAMP_TASK
/*
* Clamp values requested for a scheduling entity.
@@ -2183,4 +2188,23 @@ static inline int sched_core_idle_cpu(int cpu) { return idle_cpu(cpu); }
extern void sched_set_stop_task(int cpu, struct task_struct *stop);
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
+{
+ swap(current->alloc_tag, tag);
+ return tag;
+}
+
+static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old)
+{
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n");
+#endif
+ current->alloc_tag = old;
+}
+#else
+#define alloc_tag_save(_tag) NULL
+#define alloc_tag_restore(_tag, _old) do {} while (0)
+#endif
+
#endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 0be2d00c3696..78d258ca508f 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -972,6 +972,31 @@ config CODE_TAGGING
bool
select KALLSYMS
+config MEM_ALLOC_PROFILING
+ bool "Enable memory allocation profiling"
+ default n
+ depends on PROC_FS
+ depends on !DEBUG_FORCE_WEAK_PER_CPU
+ select CODE_TAGGING
+ help
+ Track allocation source code and record total allocation size
+ initiated at that code location. The mechanism can be used to track
+ memory leaks with a low performance and memory impact.
+
+config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ bool "Enable memory allocation profiling by default"
+ default y
+ depends on MEM_ALLOC_PROFILING
+
+config MEM_ALLOC_PROFILING_DEBUG
+ bool "Memory allocation profiler debugging"
+ default n
+ depends on MEM_ALLOC_PROFILING
+ select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+ help
+ Adds warnings with helpful error messages for memory allocation
+ profiling.
+
source "lib/Kconfig.kasan"
source "lib/Kconfig.kfence"
source "lib/Kconfig.kmsan"
diff --git a/lib/Makefile b/lib/Makefile
index 6b48b22fdfac..859112f09bf5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -236,6 +236,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_CODE_TAGGING) += codetag.o
+obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o
+
lib-$(CONFIG_GENERIC_BUG) += bug.o
obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
new file mode 100644
index 000000000000..f09c8a422bc2
--- /dev/null
+++ b/lib/alloc_tag.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/alloc_tag.h>
+#include <linux/fs.h>
+#include <linux/gfp.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_buf.h>
+#include <linux/seq_file.h>
+
+static struct codetag_type *alloc_tag_cttype;
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+ mem_alloc_profiling_key);
+
+static void *allocinfo_start(struct seq_file *m, loff_t *pos)
+{
+ struct codetag_iterator *iter;
+ struct codetag *ct;
+ loff_t node = *pos;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ m->private = iter;
+ if (!iter)
+ return NULL;
+
+ codetag_lock_module_list(alloc_tag_cttype, true);
+ *iter = codetag_get_ct_iter(alloc_tag_cttype);
+ while ((ct = codetag_next_ct(iter)) != NULL && node)
+ node--;
+
+ return ct ? iter : NULL;
+}
+
+static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ struct codetag *ct = codetag_next_ct(iter);
+
+ (*pos)++;
+ if (!ct)
+ return NULL;
+
+ return iter;
+}
+
+static void allocinfo_stop(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)m->private;
+
+ if (iter) {
+ codetag_lock_module_list(alloc_tag_cttype, false);
+ kfree(iter);
+ }
+}
+
+static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
+{
+ struct alloc_tag *tag = ct_to_alloc_tag(ct);
+ struct alloc_tag_counters counter = alloc_tag_read(tag);
+ s64 bytes = counter.bytes;
+
+ seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
+ codetag_to_text(out, ct);
+ seq_buf_putc(out, ' ');
+ seq_buf_putc(out, '\n');
+}
+
+static int allocinfo_show(struct seq_file *m, void *arg)
+{
+ struct codetag_iterator *iter = (struct codetag_iterator *)arg;
+ char *bufp;
+ size_t n = seq_get_buf(m, &bufp);
+ struct seq_buf buf;
+
+ seq_buf_init(&buf, bufp, n);
+ alloc_tag_to_text(&buf, iter->ct);
+ seq_commit(m, seq_buf_used(&buf));
+ return 0;
+}
+
+static const struct seq_operations allocinfo_seq_op = {
+ .start = allocinfo_start,
+ .next = allocinfo_next,
+ .stop = allocinfo_stop,
+ .show = allocinfo_show,
+};
+
+static void __init procfs_init(void)
+{
+ proc_create_seq("allocinfo", 0444, NULL, &allocinfo_seq_op);
+}
+
+static bool alloc_tag_module_unload(struct codetag_type *cttype,
+ struct codetag_module *cmod)
+{
+ struct codetag_iterator iter = codetag_get_ct_iter(cttype);
+ struct alloc_tag_counters counter;
+ bool module_unused = true;
+ struct alloc_tag *tag;
+ struct codetag *ct;
+
+ for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) {
+ if (iter.cmod != cmod)
+ continue;
+
+ tag = ct_to_alloc_tag(ct);
+ counter = alloc_tag_read(tag);
+
+ if (WARN(counter.bytes,
+ "%s:%u module %s func:%s has %llu allocated at module unload",
+ ct->filename, ct->lineno, ct->modname, ct->function, counter.bytes))
+ module_unused = false;
+ }
+
+ return module_unused;
+}
+
+static struct ctl_table memory_allocation_profiling_sysctls[] = {
+ {
+ .procname = "mem_profiling",
+ .data = &mem_alloc_profiling_key,
+#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ .mode = 0444,
+#else
+ .mode = 0644,
+#endif
+ .proc_handler = proc_do_static_key,
+ },
+ { }
+};
+
+static int __init alloc_tag_init(void)
+{
+ const struct codetag_type_desc desc = {
+ .section = "alloc_tags",
+ .tag_size = sizeof(struct alloc_tag),
+ .module_unload = alloc_tag_module_unload,
+ };
+
+ alloc_tag_cttype = codetag_register_type(&desc);
+ if (IS_ERR_OR_NULL(alloc_tag_cttype))
+ return PTR_ERR(alloc_tag_cttype);
+
+ register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+ procfs_init();
+
+ return 0;
+}
+module_init(alloc_tag_init);
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index bf5bcf2836d8..45c67a0994f3 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -9,6 +9,8 @@
#define DISCARD_EH_FRAME *(.eh_frame)
#endif
+#include <asm-generic/codetag.lds.h>
+
SECTIONS {
/DISCARD/ : {
*(.discard)
@@ -47,12 +49,17 @@ SECTIONS {
.data : {
*(.data .data.[0-9a-zA-Z_]*)
*(.data..L*)
+ CODETAG_SECTIONS()
}
.rodata : {
*(.rodata .rodata.[0-9a-zA-Z_]*)
*(.rodata..L*)
}
+#else
+ .data : {
+ CODETAG_SECTIONS()
+ }
#endif
}
--
2.44.0.rc0.258.g7320e95886-goog
^ permalink raw reply related [relevance 3%]
* [PATCH v4 00/36] Memory allocation profiling
@ 2024-02-21 19:40 3% Suren Baghdasaryan
2024-02-21 19:40 3% ` [PATCH v4 14/36] lib: add allocation tagging support for memory " Suren Baghdasaryan
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Suren Baghdasaryan @ 2024-02-21 19:40 UTC (permalink / raw)
To: akpm
Cc: kent.overstreet, mhocko, vbabka, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, penguin-kernel, corbet, void, peterz,
juri.lelli, catalin.marinas, will, arnd, tglx, mingo,
dave.hansen, x86, peterx, david, axboe, mcgrof, masahiroy,
nathan, dennis, tj, muchun.song, rppt, paulmck, pasha.tatashin,
yosryahmed, yuzhao, dhowells, hughd, andreyknvl, keescook,
ndesaulniers, vvvvvv, gregkh, ebiggers, ytcoode, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, bristot, vschneid, cl,
penberg, iamjoonsoo.kim, 42.hyeyoo, glider, elver, dvyukov,
shakeelb, songmuchun, jbaron, rientjes, minchan, kaleshsingh,
surenb, kernel-team, linux-doc, linux-kernel, iommu, linux-arch,
linux-fsdevel, linux-mm, linux-modules, kasan-dev, cgroups
Overview:
Low overhead [1] per-callsite memory allocation profiling. Not just for
debug kernels, overhead low enough to be deployed in production.
Example output:
root@moria-kvm:~# sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Since v3:
- Dropped patch changing string_get_size() [2] as not needed
- Dropped patch modifying xfs allocators [3] as not needed,
per Dave Chinner
- Added Reviewed-by, per Kees Cook
- Moved prepare_slab_obj_exts_hook() and alloc_slab_obj_exts() where they
are used, per Vlastimil Babka
- Fixed SLAB_NO_OBJ_EXT definition to use unused bit, per Vlastimil Babka
- Refactored patch [4] into other patches, per Vlastimil Babka
- Replaced snprintf() with seq_buf_printf(), per Kees Cook
- Changed output to report bytes, per Andrew Morton and Pasha Tatashin
- Changed output to report [module] only for loadable modules,
per Vlastimil Babka
- Moved mem_alloc_profiling_enabled() check earlier, per Vlastimil Babka
- Changed the code to handle page splitting to be more understandable,
per Vlastimil Babka
- Moved alloc_tagging_slab_free_hook(), mark_objexts_empty(),
mark_failed_objexts_alloc() and handle_failed_objexts_alloc(),
per Vlastimil Babka
- Fixed loss of __alloc_size(1, 2) in kvmalloc functions,
per Vlastimil Babka
- Refactored the code in show_mem() to avoid memory allocations,
per Michal Hocko
- Changed to trylock in show_mem() to avoid blocking in atomic context,
per Tetsuo Handa
- Added mm mailing list into MAINTAINERS, per Kees Cook
- Added base commit SHA, per Andy Shevchenko
- Added a patch with documentation, per Jani Nikula
- Fixed 0day bugs
- Added benchmark results [5], per Steven Rostedt
- Rebased over Linux 6.8-rc5
Items not yet addressed:
- An early_boot option to prevent pageext overhead. We are looking into
ways of using the same sysctl instead of adding an additional early boot
parameter.
Usage:
kconfig options:
- CONFIG_MEM_ALLOC_PROFILING
- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
- CONFIG_MEM_ALLOC_PROFILING_DEBUG
adds warnings for allocations that weren't accounted because of a
missing annotation
sysctl:
/proc/sys/vm/mem_profiling
Runtime info:
/proc/allocinfo
Notes:
[1]: Overhead
To measure the overhead we are comparing the following configurations:
(1) Baseline with CONFIG_MEMCG_KMEM=n
(2) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n)
(3) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y)
(4) Enabled at runtime (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n && /proc/sys/vm/mem_profiling=1)
(5) Baseline with CONFIG_MEMCG_KMEM=y && allocating with __GFP_ACCOUNT
(6) Disabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n) && CONFIG_MEMCG_KMEM=y
(7) Enabled by default (CONFIG_MEM_ALLOC_PROFILING=y &&
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y) && CONFIG_MEMCG_KMEM=y
Performance overhead:
To evaluate performance we implemented an in-kernel test executing
multiple get_free_page/free_page and kmalloc/kfree calls with allocation
sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
affinity set to a specific CPU to minimize the noise. Below are results
from running the test on Ubuntu 22.04.2 LTS with 6.8.0-rc1 kernel on
56 core Intel Xeon:
kmalloc pgalloc
(1 baseline) 6.764s 16.902s
(2 default disabled) 6.793s (+0.43%) 17.007s (+0.62%)
(3 default enabled) 7.197s (+6.40%) 23.666s (+40.02%)
(4 runtime enabled) 7.405s (+9.48%) 23.901s (+41.41%)
(5 memcg) 13.388s (+97.94%) 48.460s (+186.71%)
(6 def disabled+memcg) 13.332s (+97.10%) 48.105s (+184.61%)
(7 def enabled+memcg) 13.446s (+98.78%) 54.963s (+225.18%)
Memory overhead:
Kernel size:
text data bss dec diff
(1) 26515311 18890222 17018880 62424413
(2) 26524728 19423818 16740352 62688898 264485
(3) 26524724 19423818 16740352 62688894 264481
(4) 26524728 19423818 16740352 62688898 264485
(5) 26541782 18964374 16957440 62463596 39183
Memory consumption on a 56 core Intel CPU with 125GB of memory:
Code tags: 192 kB
PageExts: 262144 kB (256MB)
SlabExts: 9876 kB (9.6MB)
PcpuExts: 512 kB (0.5MB)
Total overhead is 0.2% of total memory.
[2] https://lore.kernel.org/all/20240212213922.783301-2-surenb@google.com/
[3] https://lore.kernel.org/all/20240212213922.783301-26-surenb@google.com/
[4] https://lore.kernel.org/all/20240212213922.783301-9-surenb@google.com/
[5] Benchmarks:
Hackbench tests run 100 times:
hackbench -s 512 -l 200 -g 15 -f 25 -P
baseline disabled profiling enabled profiling
avg 0.3543 0.3559 (+0.0016) 0.3566 (+0.0023)
stdev 0.0137 0.0188 0.0077
hackbench -l 10000
baseline disabled profiling enabled profiling
avg 6.4218 6.4306 (+0.0088) 6.5077 (+0.0859)
stdev 0.0933 0.0286 0.0489
stress-ng tests:
stress-ng --class memory --seq 4 -t 60
stress-ng --class cpu --seq 4 -t 60
Results posted at: https://evilpiepirate.org/~kent/memalloc_prof_v4_stress-ng/
Kent Overstreet (13):
fix missing vmalloc.h includes
asm-generic/io.h: Kill vmalloc.h dependency
mm/slub: Mark slab_free_freelist_hook() __always_inline
scripts/kallsyms: Always include __start and __stop symbols
fs: Convert alloc_inode_sb() to a macro
rust: Add a rust helper for krealloc()
mempool: Hook up to memory allocation profiling
mm: percpu: Introduce pcpuobj_ext
mm: percpu: Add codetag reference into pcpuobj_ext
mm: vmalloc: Enable memory allocation profiling
rhashtable: Plumb through alloc tag
MAINTAINERS: Add entries for code tagging and memory allocation
profiling
memprofiling: Documentation
Suren Baghdasaryan (23):
mm: enumerate all gfp flags
mm: introduce slabobj_ext to support slab object extensions
mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext
creation
mm/slab: introduce SLAB_NO_OBJ_EXT to avoid obj_ext creation
slab: objext: introduce objext_flags as extension to
page_memcg_data_flags
lib: code tagging framework
lib: code tagging module support
lib: prevent module unloading if memory is not freed
lib: add allocation tagging support for memory allocation profiling
lib: introduce support for page allocation tagging
mm: percpu: increase PERCPU_MODULE_RESERVE to accommodate allocation
tags
change alloc_pages name in dma_map_ops to avoid name conflicts
mm: enable page allocation tagging
mm: create new codetag references during page splitting
mm/page_ext: enable early_page_ext when
CONFIG_MEM_ALLOC_PROFILING_DEBUG=y
lib: add codetag reference into slabobj_ext
mm/slab: add allocation accounting into slab allocation and free paths
mm/slab: enable slab allocation tagging for kmalloc and friends
mm: percpu: enable per-cpu allocation tagging
lib: add memory allocations report in show_mem()
codetag: debug: skip objext checking when it's for objext itself
codetag: debug: mark codetags for reserved pages as empty
codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext
allocations
Documentation/admin-guide/sysctl/vm.rst | 16 +
Documentation/filesystems/proc.rst | 29 ++
Documentation/mm/allocation-profiling.rst | 86 ++++++
MAINTAINERS | 17 ++
arch/alpha/kernel/pci_iommu.c | 2 +-
arch/alpha/lib/checksum.c | 1 +
arch/alpha/lib/fpreg.c | 1 +
arch/alpha/lib/memcpy.c | 1 +
arch/arm/kernel/irq.c | 1 +
arch/arm/kernel/traps.c | 1 +
arch/arm64/kernel/efi.c | 1 +
arch/loongarch/include/asm/kfence.h | 1 +
arch/mips/jazz/jazzdma.c | 2 +-
arch/powerpc/kernel/dma-iommu.c | 2 +-
arch/powerpc/kernel/iommu.c | 1 +
arch/powerpc/mm/mem.c | 1 +
arch/powerpc/platforms/ps3/system-bus.c | 4 +-
arch/powerpc/platforms/pseries/vio.c | 2 +-
arch/riscv/kernel/elf_kexec.c | 1 +
arch/riscv/kernel/probes/kprobes.c | 1 +
arch/s390/kernel/cert_store.c | 1 +
arch/s390/kernel/ipl.c | 1 +
arch/x86/include/asm/io.h | 1 +
arch/x86/kernel/amd_gart_64.c | 2 +-
arch/x86/kernel/cpu/sgx/main.c | 1 +
arch/x86/kernel/irq_64.c | 1 +
arch/x86/mm/fault.c | 1 +
drivers/accel/ivpu/ivpu_mmu_context.c | 1 +
drivers/gpu/drm/gma500/mmu.c | 1 +
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 1 +
.../gpu/drm/i915/gem/selftests/mock_dmabuf.c | 1 +
drivers/gpu/drm/i915/gt/shmem_utils.c | 1 +
drivers/gpu/drm/i915/gvt/firmware.c | 1 +
drivers/gpu/drm/i915/gvt/gtt.c | 1 +
drivers/gpu/drm/i915/gvt/handlers.c | 1 +
drivers/gpu/drm/i915/gvt/mmio.c | 1 +
drivers/gpu/drm/i915/gvt/vgpu.c | 1 +
drivers/gpu/drm/i915/intel_gvt.c | 1 +
drivers/gpu/drm/imagination/pvr_vm_mips.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 1 +
drivers/gpu/drm/omapdrm/omap_gem.c | 1 +
drivers/gpu/drm/v3d/v3d_bo.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_devcaps.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 1 +
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 1 +
drivers/hwtracing/coresight/coresight-trbe.c | 1 +
drivers/iommu/dma-iommu.c | 2 +-
.../marvell/octeon_ep/octep_pfvf_mbox.c | 1 +
.../net/ethernet/microsoft/mana/hw_channel.c | 1 +
drivers/parisc/ccio-dma.c | 2 +-
drivers/parisc/sba_iommu.c | 2 +-
drivers/platform/x86/uv_sysfs.c | 1 +
drivers/scsi/mpi3mr/mpi3mr_transport.c | 2 +
drivers/staging/media/atomisp/pci/hmm/hmm.c | 2 +-
drivers/vfio/pci/pds/dirty.c | 1 +
drivers/virt/acrn/mm.c | 1 +
drivers/virtio/virtio_mem.c | 1 +
drivers/xen/grant-dma-ops.c | 2 +-
drivers/xen/swiotlb-xen.c | 2 +-
include/asm-generic/codetag.lds.h | 14 +
include/asm-generic/io.h | 1 -
include/asm-generic/vmlinux.lds.h | 3 +
include/linux/alloc_tag.h | 195 ++++++++++++
include/linux/codetag.h | 81 +++++
include/linux/dma-map-ops.h | 2 +-
include/linux/fortify-string.h | 5 +-
include/linux/fs.h | 6 +-
include/linux/gfp.h | 126 +++++---
include/linux/gfp_types.h | 101 +++++--
include/linux/memcontrol.h | 56 +++-
include/linux/mempool.h | 73 +++--
include/linux/mm.h | 9 +
include/linux/mm_types.h | 4 +-
include/linux/page_ext.h | 1 -
include/linux/pagemap.h | 9 +-
include/linux/pds/pds_common.h | 2 +
include/linux/percpu.h | 27 +-
include/linux/pgalloc_tag.h | 110 +++++++
include/linux/rhashtable-types.h | 11 +-
include/linux/sched.h | 24 ++
include/linux/slab.h | 175 +++++------
include/linux/string.h | 4 +-
include/linux/vmalloc.h | 60 +++-
include/rdma/rdmavt_qp.h | 1 +
init/Kconfig | 4 +
kernel/dma/mapping.c | 4 +-
kernel/kallsyms_selftest.c | 2 +-
kernel/module/main.c | 25 +-
lib/Kconfig.debug | 31 ++
lib/Makefile | 3 +
lib/alloc_tag.c | 204 +++++++++++++
lib/codetag.c | 283 ++++++++++++++++++
lib/rhashtable.c | 28 +-
mm/compaction.c | 7 +-
mm/debug_vm_pgtable.c | 1 +
mm/filemap.c | 6 +-
mm/huge_memory.c | 2 +
mm/kfence/core.c | 14 +-
mm/kfence/kfence.h | 4 +-
mm/memcontrol.c | 56 +---
mm/mempolicy.c | 52 ++--
mm/mempool.c | 36 +--
mm/mm_init.c | 13 +-
mm/nommu.c | 64 ++--
mm/page_alloc.c | 66 ++--
mm/page_ext.c | 13 +
mm/page_owner.c | 2 +-
mm/percpu-internal.h | 26 +-
mm/percpu.c | 120 +++-----
mm/show_mem.c | 26 ++
mm/slab.h | 126 ++++++--
mm/slab_common.c | 6 +-
mm/slub.c | 244 +++++++++++----
mm/util.c | 44 +--
mm/vmalloc.c | 88 +++---
rust/helpers.c | 8 +
scripts/kallsyms.c | 13 +
scripts/module.lds.S | 7 +
sound/pci/hda/cs35l41_hda.c | 1 +
123 files changed, 2269 insertions(+), 682 deletions(-)
create mode 100644 Documentation/mm/allocation-profiling.rst
create mode 100644 include/asm-generic/codetag.lds.h
create mode 100644 include/linux/alloc_tag.h
create mode 100644 include/linux/codetag.h
create mode 100644 include/linux/pgalloc_tag.h
create mode 100644 lib/alloc_tag.c
create mode 100644 lib/codetag.c
base-commit: 39133352cbed6626956d38ed72012f49b0421e7b
--
2.44.0.rc0.258.g7320e95886-goog
* Re: [PATCH 05/20] shmem: export shmem_get_folio
@ 2024-02-19 6:25 Christoph Hellwig
From: Christoph Hellwig @ 2024-02-19 6:25 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christoph Hellwig, Chandan Babu R, Darrick J. Wong, Hugh Dickins,
Andrew Morton, linux-xfs, linux-mm
On Fri, Feb 16, 2024 at 01:53:09PM +0000, Matthew Wilcox wrote:
> I know I gave an R-b on this earlier, but Hugh made me look again, and
> this comment clearly does not reflect what the function does.
> Presumably it returns an errno and sets foliop if it returns 0?
Almost. With SGP_READ it can set *foliop to NULL and still return 0.
> Also, should this function be called shmem_lock_folio() to mirror
> filemap_lock_folio()?
shmem_get_folio can also allocate (and sometimes zero) a new folio.
Except for the different calling conventions, the closest filemap
equivalent is __filemap_get_folio. For now I'd like to avoid the
bikeshedding on the name and just get the work done.
* [linus:master] [mm] 9cee7e8ef3: netperf.Throughput_Mbps 4.0% improvement
@ 2024-02-18 13:16 kernel test robot
From: kernel test robot @ 2024-02-18 13:16 UTC (permalink / raw)
To: Yosry Ahmed
Cc: oe-lkp, lkp, linux-kernel, Andrew Morton, kernel test robot,
Shakeel Butt, Johannes Weiner, Michal Hocko, Muchun Song,
Roman Gushchin, Greg Thelen, cgroups, linux-mm, ying.huang,
feng.tang, fengwei.yin
Hi, Yosry Ahmed,
we shared the performance impact of this commit earlier in
https://lore.kernel.org/lkml/ZbDJsfsZt2ITyo61@xsang-OptiPlex-9020/
Now that the commit has been merged into mainline, we have also observed
improvements in other performance tests, such as netperf and stress-ng.
The vm-scalability and will-it-scale results are included as well, FYI.
Hello,
kernel test robot noticed a 4.0% improvement of netperf.Throughput_Mbps on:
commit: 9cee7e8ef3e31ca25b40ca52b8585dc6935deff2 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
testcase: netperf
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:
ip: ipv4
runtime: 300s
nr_threads: 200%
cluster: cs-localhost
send_size: 10K
test: TCP_SENDFILE
cpufreq_governor: performance
In addition to that, the commit also has significant impact on the following tests:
+------------------+----------------------------------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.metamix.ops_per_sec 4.1% improvement |
| test machine | 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=xfs |
| | nr_threads=10% |
| | test=metamix |
| | testtime=60s |
+------------------+----------------------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 42.0% improvement |
| test machine | 224 threads 4 sockets Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz (Cooper Lake) with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=128G |
| | test=truncate |
+------------------+----------------------------------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops 54.9% improvement |
| test machine | 224 threads 4 sockets Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz (Cooper Lake) with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=thread |
| | nr_task=50% |
| | test=fallocate1 |
+------------------+----------------------------------------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240218/202402182000.f21279e1-oliver.sang@intel.com
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/send_size/tbox_group/test/testcase:
cs-localhost/gcc-12/performance/ipv4/x86_64-rhel-8.3/200%/debian-11.1-x86_64-20220510.cgz/300s/10K/lkp-icl-2sp2/TCP_SENDFILE/netperf
commit:
67b8bcbaed ("nilfs2: fix data corruption in dsync block recovery for small block sizes")
9cee7e8ef3 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
67b8bcbaed477787 9cee7e8ef3e31ca25b40ca52b85
---------------- ---------------------------
%stddev %change %stddev
\ | \
140392 ± 5% +9.2% 153362 ± 4% meminfo.DirectMap4k
772.17 ± 2% -19.0% 625.33 ± 4% perf-c2c.DRAM.remote
894.17 ± 3% -19.1% 723.17 ± 4% perf-c2c.HITM.local
-12.69 +55.8% -19.78 sched_debug.cpu.nr_uninterruptible.min
4.96 ± 8% +16.3% 5.77 ± 8% sched_debug.cpu.nr_uninterruptible.stddev
0.94 ± 2% -0.0 0.90 turbostat.C1%
34.22 -4.4% 32.70 ± 2% turbostat.RAMWatt
4939 +17.1% 5785 ± 6% perf-sched.total_wait_time.max.ms
1511 ± 32% -66.8% 502.34 ± 99% perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
4725 ± 7% +36.7% 6459 ± 25% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
1506 ± 32% -66.6% 502.68 ± 99% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
4725 ± 7% +22.1% 5771 ± 6% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
9040 +4.0% 9402 netperf.ThroughputBoth_Mbps
2314243 +4.0% 2406983 netperf.ThroughputBoth_total_Mbps
9040 +4.0% 9402 netperf.Throughput_Mbps
2314243 +4.0% 2406983 netperf.Throughput_total_Mbps
1056 +3.9% 1098 netperf.time.user_time
15571304 +5.9% 16491497 netperf.time.voluntary_context_switches
8.475e+09 +4.0% 8.815e+09 netperf.workload
0.65 ± 2% -33.0% 0.44 ± 7% perf-stat.i.MPKI
4.977e+10 +3.2% 5.138e+10 perf-stat.i.branch-instructions
0.71 -0.0 0.68 perf-stat.i.branch-miss-rate%
20.77 ± 2% -3.3 17.49 ± 6% perf-stat.i.cache-miss-rate%
1.708e+08 ± 2% -30.9% 1.181e+08 ± 6% perf-stat.i.cache-misses
8.234e+08 -17.7% 6.776e+08 perf-stat.i.cache-references
1.25 -3.1% 1.21 perf-stat.i.cpi
1908 ± 2% +45.7% 2779 ± 7% perf-stat.i.cycles-between-cache-misses
7.258e+10 +3.1% 7.482e+10 perf-stat.i.dTLB-loads
4.018e+10 +3.2% 4.145e+10 perf-stat.i.dTLB-stores
2.608e+11 +3.2% 2.692e+11 perf-stat.i.instructions
0.80 +3.2% 0.83 perf-stat.i.ipc
1276 +3.0% 1315 perf-stat.i.metric.M/sec
15636176 ± 2% -19.5% 12582173 ± 5% perf-stat.i.node-load-misses
951084 ± 7% -39.3% 577496 ± 14% perf-stat.i.node-loads
48.91 ± 2% +5.6 54.54 ± 2% perf-stat.i.node-store-miss-rate%
0.66 ± 2% -33.0% 0.44 ± 7% perf-stat.overall.MPKI
0.70 -0.0 0.68 perf-stat.overall.branch-miss-rate%
20.75 ± 2% -3.3 17.43 ± 6% perf-stat.overall.cache-miss-rate%
1.25 -3.1% 1.21 perf-stat.overall.cpi
1903 ± 2% +45.3% 2766 ± 7% perf-stat.overall.cycles-between-cache-misses
0.80 +3.2% 0.83 perf-stat.overall.ipc
47.72 ± 3% +5.6 53.30 ± 3% perf-stat.overall.node-store-miss-rate%
4.961e+10 +3.2% 5.122e+10 perf-stat.ps.branch-instructions
1.703e+08 ± 2% -30.9% 1.177e+08 ± 6% perf-stat.ps.cache-misses
8.207e+08 -17.7% 6.754e+08 perf-stat.ps.cache-references
7.233e+10 +3.1% 7.457e+10 perf-stat.ps.dTLB-loads
4.005e+10 +3.2% 4.131e+10 perf-stat.ps.dTLB-stores
2.6e+11 +3.2% 2.683e+11 perf-stat.ps.instructions
15585093 ± 2% -19.5% 12543422 ± 5% perf-stat.ps.node-load-misses
947879 ± 7% -39.3% 575590 ± 14% perf-stat.ps.node-loads
7.848e+13 +3.1% 8.093e+13 perf-stat.total.instructions
3.80 ± 3% -2.1 1.71 ± 5% perf-profile.calltrace.cycles-pp.__mod_memcg_state.mem_cgroup_charge_skmem.__sk_mem_raise_allocated.__sk_mem_schedule.tcp_wmem_schedule
6.27 -2.0 4.24 ± 2% perf-profile.calltrace.cycles-pp.mem_cgroup_charge_skmem.__sk_mem_raise_allocated.__sk_mem_schedule.tcp_wmem_schedule.tcp_sendmsg_locked
46.14 -1.5 44.60 perf-profile.calltrace.cycles-pp.sock_sendmsg.splice_to_socket.direct_splice_actor.splice_direct_to_actor.do_splice_direct
44.24 -1.5 42.75 perf-profile.calltrace.cycles-pp.tcp_sendmsg.sock_sendmsg.splice_to_socket.direct_splice_actor.splice_direct_to_actor
11.92 -1.5 10.44 perf-profile.calltrace.cycles-pp.__sk_mem_raise_allocated.__sk_mem_schedule.tcp_wmem_schedule.tcp_sendmsg_locked.tcp_sendmsg
12.15 -1.5 10.68 perf-profile.calltrace.cycles-pp.tcp_wmem_schedule.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.splice_to_socket
12.02 -1.5 10.56 perf-profile.calltrace.cycles-pp.__sk_mem_schedule.tcp_wmem_schedule.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
50.08 -1.3 48.81 perf-profile.calltrace.cycles-pp.splice_to_socket.direct_splice_actor.splice_direct_to_actor.do_splice_direct.do_sendfile
50.52 -1.3 49.26 perf-profile.calltrace.cycles-pp.direct_splice_actor.splice_direct_to_actor.do_splice_direct.do_sendfile.__x64_sys_sendfile64
39.04 -1.2 37.88 perf-profile.calltrace.cycles-pp.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.splice_to_socket.direct_splice_actor
60.56 -0.8 59.77 perf-profile.calltrace.cycles-pp.splice_direct_to_actor.do_splice_direct.do_sendfile.__x64_sys_sendfile64.do_syscall_64
60.81 -0.8 60.02 perf-profile.calltrace.cycles-pp.do_splice_direct.do_sendfile.__x64_sys_sendfile64.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.84 -0.7 3.14 perf-profile.calltrace.cycles-pp.tcp_try_rmem_schedule.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
3.72 ± 5% -0.6 3.11 perf-profile.calltrace.cycles-pp.__sk_mem_schedule.tcp_try_rmem_schedule.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv
64.97 -0.6 64.37 perf-profile.calltrace.cycles-pp.do_sendfile.__x64_sys_sendfile64.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendfile
11.66 -0.5 11.15 perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
11.61 -0.5 11.10 perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.__netif_receive_skb_one_core.process_backlog.__napi_poll
10.02 -0.5 9.52 perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.__netif_receive_skb_one_core
11.34 -0.5 10.85 perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.__netif_receive_skb_one_core.process_backlog
9.51 -0.5 9.02 perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
12.61 -0.5 12.12 perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__do_softirq
13.00 -0.5 12.53 perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__do_softirq.do_softirq
13.06 -0.5 12.60 perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__do_softirq.do_softirq.__local_bh_enable_ip
14.22 -0.5 13.76 perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.__dev_queue_xmit.ip_finish_output2.__ip_queue_xmit.__tcp_transmit_skb
14.11 -0.5 13.65 perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.__dev_queue_xmit.ip_finish_output2.__ip_queue_xmit
13.30 -0.4 12.86 perf-profile.calltrace.cycles-pp.net_rx_action.__do_softirq.do_softirq.__local_bh_enable_ip.__dev_queue_xmit
13.96 -0.4 13.52 perf-profile.calltrace.cycles-pp.__do_softirq.do_softirq.__local_bh_enable_ip.__dev_queue_xmit.ip_finish_output2
1.88 -0.4 1.49 perf-profile.calltrace.cycles-pp.__sk_mem_reduce_allocated.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_recvmsg
3.45 ± 5% -0.4 3.09 perf-profile.calltrace.cycles-pp.__sk_mem_raise_allocated.__sk_mem_schedule.tcp_try_rmem_schedule.tcp_data_queue.tcp_rcv_established
6.74 -0.3 6.39 perf-profile.calltrace.cycles-pp.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
3.53 -0.3 3.18 ± 2% perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg.splice_to_socket
4.11 -0.3 3.78 ± 2% perf-profile.calltrace.cycles-pp.release_sock.tcp_sendmsg.sock_sendmsg.splice_to_socket.direct_splice_actor
4.51 ± 2% -0.3 4.21 perf-profile.calltrace.cycles-pp.ip_finish_output2.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
2.43 ± 3% -0.3 2.15 perf-profile.calltrace.cycles-pp.mem_cgroup_charge_skmem.__sk_mem_raise_allocated.__sk_mem_schedule.tcp_try_rmem_schedule.tcp_data_queue
12.72 -0.3 12.46 perf-profile.calltrace.cycles-pp.__dev_queue_xmit.ip_finish_output2.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit
2.28 -0.2 2.04 ± 2% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg
2.32 -0.2 2.08 ± 2% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg
3.50 ± 2% -0.2 3.32 perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg
3.22 ± 2% -0.2 3.03 perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked
1.01 -0.2 0.83 perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock
0.84 -0.2 0.67 perf-profile.calltrace.cycles-pp.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.__release_sock
4.10 -0.2 3.94 perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_recvmsg
1.91 ± 2% -0.2 1.76 ± 3% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv.__release_sock
1.85 -0.2 1.70 ± 2% perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
3.84 -0.2 3.69 perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
1.92 ± 2% -0.2 1.77 ± 3% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock
3.59 -0.2 3.44 perf-profile.calltrace.cycles-pp.ip_finish_output2.__ip_queue_xmit.__tcp_transmit_skb.tcp_recvmsg_locked.tcp_recvmsg
3.48 -0.1 3.34 perf-profile.calltrace.cycles-pp.__dev_queue_xmit.ip_finish_output2.__ip_queue_xmit.__tcp_transmit_skb.tcp_recvmsg_locked
1.72 ± 2% -0.1 1.58 ± 3% perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv
1.58 ± 2% -0.1 1.45 ± 3% perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established
0.55 -0.1 0.43 ± 44% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_sync_key.sock_def_readable
1.83 -0.1 1.71 perf-profile.calltrace.cycles-pp.tcp_stream_alloc_skb.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.splice_to_socket
0.58 -0.0 0.55 perf-profile.calltrace.cycles-pp.schedule.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendfile
0.55 -0.0 0.53 ± 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.59 -0.0 0.57 perf-profile.calltrace.cycles-pp.lock_sock_nested.tcp_sendmsg.sock_sendmsg.splice_to_socket.direct_splice_actor
0.64 +0.0 0.68 perf-profile.calltrace.cycles-pp.tcp_event_new_data_sent.tcp_write_xmit.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
0.78 +0.0 0.81 perf-profile.calltrace.cycles-pp._copy_from_user.__x64_sys_sendfile64.do_syscall_64.entry_SYSCALL_64_after_hwframe.sendfile
1.11 +0.0 1.14 perf-profile.calltrace.cycles-pp.tcp_send_mss.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.splice_to_socket
0.67 +0.0 0.70 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.splice_direct_to_actor.do_splice_direct.do_sendfile
0.82 +0.0 0.85 perf-profile.calltrace.cycles-pp.touch_atime.splice_direct_to_actor.do_splice_direct.do_sendfile.__x64_sys_sendfile64
0.66 +0.0 0.69 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.sendfile.sendfile_tcp_stream.main.__libc_start_main
0.94 +0.0 0.98 perf-profile.calltrace.cycles-pp.__alloc_skb.tcp_stream_alloc_skb.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
0.52 +0.0 0.56 perf-profile.calltrace.cycles-pp.aa_sk_perm.security_socket_sendmsg.sock_sendmsg.splice_to_socket.direct_splice_actor
1.11 +0.0 1.16 perf-profile.calltrace.cycles-pp.rw_verify_area.do_sendfile.__x64_sys_sendfile64.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.70 +0.0 0.74 perf-profile.calltrace.cycles-pp.iov_iter_advance.iov_iter_extract_pages.skb_splice_from_iter.tcp_sendmsg_locked.tcp_sendmsg
0.95 ± 2% +0.0 1.00 ± 2% perf-profile.calltrace.cycles-pp.page_cache_pipe_buf_release.splice_to_socket.direct_splice_actor.splice_direct_to_actor.do_splice_direct
0.58 +0.1 0.64 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.filemap_splice_read.splice_direct_to_actor.do_splice_direct
1.21 +0.1 1.28 perf-profile.calltrace.cycles-pp.__fsnotify_parent.do_sendfile.__x64_sys_sendfile64.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.56 +0.1 0.62 perf-profile.calltrace.cycles-pp.netperf_sendfile.sendfile_tcp_stream.main.__libc_start_main
0.68 +0.1 0.75 perf-profile.calltrace.cycles-pp.touch_atime.filemap_splice_read.splice_direct_to_actor.do_splice_direct.do_sendfile
1.16 +0.1 1.24 perf-profile.calltrace.cycles-pp.release_pages.__folio_batch_release.filemap_splice_read.splice_direct_to_actor.do_splice_direct
1.62 +0.1 1.70 perf-profile.calltrace.cycles-pp.splice_folio_into_pipe.filemap_splice_read.splice_direct_to_actor.do_splice_direct.do_sendfile
1.57 +0.1 1.66 perf-profile.calltrace.cycles-pp.skb_append_pagefrags.skb_splice_from_iter.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
1.46 +0.1 1.54 perf-profile.calltrace.cycles-pp.__folio_batch_release.filemap_splice_read.splice_direct_to_actor.do_splice_direct.do_sendfile
2.42 +0.1 2.53 perf-profile.calltrace.cycles-pp.filemap_get_read_batch.filemap_get_pages.filemap_splice_read.splice_direct_to_actor.do_splice_direct
1.83 +0.1 1.95 perf-profile.calltrace.cycles-pp.iov_iter_extract_pages.skb_splice_from_iter.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
2.06 +0.1 2.19 perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
0.72 ± 14% +0.1 0.85 perf-profile.calltrace.cycles-pp.skb_release_data.skb_attempt_defer_free.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
0.62 ± 12% +0.1 0.75 ± 4% perf-profile.calltrace.cycles-pp.drain_stock.refill_stock.__sk_mem_reduce_allocated.tcp_recvmsg_locked.tcp_recvmsg
2.76 +0.1 2.89 perf-profile.calltrace.cycles-pp.filemap_get_pages.filemap_splice_read.splice_direct_to_actor.do_splice_direct.do_sendfile
0.76 ± 14% +0.1 0.90 perf-profile.calltrace.cycles-pp.skb_attempt_defer_free.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_recvmsg
0.74 ± 13% +0.1 0.88 ± 3% perf-profile.calltrace.cycles-pp.refill_stock.__sk_mem_reduce_allocated.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
0.66 ± 13% +0.1 0.80 ± 3% perf-profile.calltrace.cycles-pp.refill_stock.__sk_mem_reduce_allocated.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established
24.66 +0.2 24.83 perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
24.30 +0.2 24.47 perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_recvmsg.__sys_recvfrom
24.76 +0.2 24.94 perf-profile.calltrace.cycles-pp.inet_recvmsg.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
24.90 +0.2 25.07 perf-profile.calltrace.cycles-pp.sock_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.86 +0.2 26.03 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.recv.process_requests.spawn_child.accept_connection
1.60 ± 11% +0.2 1.78 perf-profile.calltrace.cycles-pp.__check_object_size.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked
25.82 +0.2 26.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.recv.process_requests.spawn_child
1.01 ± 16% +0.2 1.20 perf-profile.calltrace.cycles-pp.check_heap_object.__check_object_size.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
26.12 +0.2 26.30 perf-profile.calltrace.cycles-pp.recv.process_requests.spawn_child.accept_connection.accept_connections
0.35 ± 70% +0.2 0.55 ± 2% perf-profile.calltrace.cycles-pp.__virt_addr_valid.check_heap_object.__check_object_size.simple_copy_to_iter.__skb_datagram_iter
25.41 +0.2 25.61 perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe.recv.process_requests
25.36 +0.2 25.56 perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe.recv
0.52 ± 46% +0.2 0.73 ± 3% perf-profile.calltrace.cycles-pp.page_counter_uncharge.drain_stock.refill_stock.__sk_mem_reduce_allocated.tcp_clean_rtx_queue
0.52 ± 45% +0.2 0.74 ± 4% perf-profile.calltrace.cycles-pp.page_counter_uncharge.drain_stock.refill_stock.__sk_mem_reduce_allocated.tcp_recvmsg_locked
0.52 ± 46% +0.2 0.74 ± 4% perf-profile.calltrace.cycles-pp.drain_stock.refill_stock.__sk_mem_reduce_allocated.tcp_clean_rtx_queue.tcp_ack
5.10 +0.3 5.40 perf-profile.calltrace.cycles-pp.skb_splice_from_iter.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.splice_to_socket
10.48 +0.4 10.92 perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
8.16 +0.4 8.60 perf-profile.calltrace.cycles-pp.filemap_splice_read.splice_direct_to_actor.do_splice_direct.do_sendfile.__x64_sys_sendfile64
13.73 +0.6 14.36 perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
13.84 +0.7 14.50 perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_recvmsg
23.63 ± 16% +2.9 26.56 perf-profile.calltrace.cycles-pp.accept_connection.accept_connections.main.__libc_start_main
23.63 ± 16% +2.9 26.56 perf-profile.calltrace.cycles-pp.accept_connections.main.__libc_start_main
23.63 ± 16% +2.9 26.56 perf-profile.calltrace.cycles-pp.process_requests.spawn_child.accept_connection.accept_connections.main
23.63 ± 16% +2.9 26.56 perf-profile.calltrace.cycles-pp.spawn_child.accept_connection.accept_connections.main.__libc_start_main
6.60 ± 3% -3.6 3.00 ± 4% perf-profile.children.cycles-pp.__mod_memcg_state
9.75 -2.6 7.11 perf-profile.children.cycles-pp.mem_cgroup_charge_skmem
16.64 -1.8 14.81 perf-profile.children.cycles-pp.__sk_mem_raise_allocated
16.77 -1.8 14.94 perf-profile.children.cycles-pp.__sk_mem_schedule
12.25 -1.4 10.82 perf-profile.children.cycles-pp.tcp_wmem_schedule
46.67 -1.4 45.29 perf-profile.children.cycles-pp.sock_sendmsg
44.82 -1.3 43.49 perf-profile.children.cycles-pp.tcp_sendmsg
50.77 -1.2 49.59 perf-profile.children.cycles-pp.splice_to_socket
51.05 -1.2 49.88 perf-profile.children.cycles-pp.direct_splice_actor
39.54 -1.0 38.53 perf-profile.children.cycles-pp.tcp_sendmsg_locked
1.62 ± 3% -0.8 0.80 ± 3% perf-profile.children.cycles-pp.mem_cgroup_uncharge_skmem
12.44 -0.7 11.73 perf-profile.children.cycles-pp.tcp_v4_do_rcv
11.91 -0.7 11.21 perf-profile.children.cycles-pp.tcp_rcv_established
3.49 -0.7 2.81 perf-profile.children.cycles-pp.__sk_mem_reduce_allocated
61.00 -0.6 60.40 perf-profile.children.cycles-pp.splice_direct_to_actor
61.22 -0.6 60.62 perf-profile.children.cycles-pp.do_splice_direct
11.68 -0.5 11.21 perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
11.72 -0.5 11.25 perf-profile.children.cycles-pp.ip_local_deliver_finish
12.68 -0.5 12.22 perf-profile.children.cycles-pp.__netif_receive_skb_one_core
11.45 -0.5 10.99 perf-profile.children.cycles-pp.tcp_v4_rcv
13.08 -0.4 12.64 perf-profile.children.cycles-pp.process_backlog
13.14 -0.4 12.71 perf-profile.children.cycles-pp.__napi_poll
14.19 -0.4 13.78 perf-profile.children.cycles-pp.do_softirq
14.53 -0.4 14.12 perf-profile.children.cycles-pp.__local_bh_enable_ip
13.38 -0.4 12.98 perf-profile.children.cycles-pp.net_rx_action
14.06 -0.4 13.67 perf-profile.children.cycles-pp.__do_softirq
65.51 -0.4 65.12 perf-profile.children.cycles-pp.do_sendfile
16.79 -0.4 16.41 perf-profile.children.cycles-pp.ip_finish_output2
19.45 -0.4 19.08 perf-profile.children.cycles-pp.__tcp_transmit_skb
16.36 -0.4 16.00 perf-profile.children.cycles-pp.__dev_queue_xmit
17.89 -0.4 17.53 perf-profile.children.cycles-pp.__ip_queue_xmit
4.16 -0.3 3.83 perf-profile.children.cycles-pp.tcp_try_rmem_schedule
3.67 -0.3 3.33 ± 2% perf-profile.children.cycles-pp.__release_sock
4.47 -0.3 4.14 ± 2% perf-profile.children.cycles-pp.release_sock
67.14 -0.3 66.81 perf-profile.children.cycles-pp.__x64_sys_sendfile64
6.84 -0.3 6.52 perf-profile.children.cycles-pp.tcp_data_queue
3.20 -0.3 2.88 perf-profile.children.cycles-pp.tcp_ack
2.60 -0.3 2.29 perf-profile.children.cycles-pp.tcp_clean_rtx_queue
7.57 -0.3 7.31 perf-profile.children.cycles-pp.__tcp_push_pending_frames
95.15 -0.1 95.00 perf-profile.children.cycles-pp.do_syscall_64
95.61 -0.1 95.48 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.94 -0.1 1.84 perf-profile.children.cycles-pp.tcp_stream_alloc_skb
1.10 -0.1 1.04 perf-profile.children.cycles-pp.ttwu_do_activate
0.15 ± 2% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.apparmor_socket_sendmsg
0.83 -0.1 0.78 perf-profile.children.cycles-pp.enqueue_task_fair
0.86 -0.1 0.81 perf-profile.children.cycles-pp.activate_task
0.43 -0.0 0.39 perf-profile.children.cycles-pp.enqueue_entity
0.11 -0.0 0.09 ± 5% perf-profile.children.cycles-pp.iov_iter_bvec
0.27 ± 3% -0.0 0.25 perf-profile.children.cycles-pp.pick_eevdf
0.35 ± 2% -0.0 0.33 perf-profile.children.cycles-pp.prepare_task_switch
0.16 ± 4% -0.0 0.14 ± 4% perf-profile.children.cycles-pp.check_preempt_wakeup_fair
0.74 -0.0 0.72 perf-profile.children.cycles-pp.dequeue_task_fair
0.09 -0.0 0.08 perf-profile.children.cycles-pp.rb_first
0.07 +0.0 0.08 perf-profile.children.cycles-pp.security_socket_recvmsg
0.08 +0.0 0.09 perf-profile.children.cycles-pp.tcp_event_data_recv
0.12 ± 3% +0.0 0.13 perf-profile.children.cycles-pp.tcp_rearm_rto
0.23 ± 2% +0.0 0.25 perf-profile.children.cycles-pp.tcp_rcv_space_adjust
0.16 ± 3% +0.0 0.18 perf-profile.children.cycles-pp.lock_timer_base
0.24 ± 2% +0.0 0.26 perf-profile.children.cycles-pp.validate_xmit_skb
0.23 ± 2% +0.0 0.24 perf-profile.children.cycles-pp.__slab_free
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.nf_hook_slow
0.16 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.do_splice_read
0.12 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.netif_skb_features
0.68 +0.0 0.70 perf-profile.children.cycles-pp.sk_reset_timer
0.38 +0.0 0.40 perf-profile.children.cycles-pp.__netif_rx
0.40 +0.0 0.42 perf-profile.children.cycles-pp.tcp_mstamp_refresh
0.36 +0.0 0.38 perf-profile.children.cycles-pp.netif_rx_internal
0.58 +0.0 0.60 perf-profile.children.cycles-pp.xas_load
0.30 ± 2% +0.0 0.32 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.17 ± 2% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.tcp_queue_rcv
0.56 +0.0 0.58 perf-profile.children.cycles-pp.kmem_cache_free
0.36 +0.0 0.38 perf-profile.children.cycles-pp.page_cache_pipe_buf_confirm
0.24 +0.0 0.26 ± 2% perf-profile.children.cycles-pp.ip_output
0.18 ± 2% +0.0 0.21 perf-profile.children.cycles-pp.ip_rcv_core
0.20 +0.0 0.22 ± 2% perf-profile.children.cycles-pp.is_vmalloc_addr
1.04 +0.0 1.06 perf-profile.children.cycles-pp.dev_hard_start_xmit
0.50 +0.0 0.52 perf-profile.children.cycles-pp.__put_user_8
0.95 +0.0 0.98 perf-profile.children.cycles-pp.loopback_xmit
0.66 +0.0 0.68 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.19 ± 2% +0.0 0.22 ± 2% perf-profile.children.cycles-pp.sockfd_lookup_light
0.72 +0.0 0.75 perf-profile.children.cycles-pp.read_tsc
0.82 +0.0 0.85 perf-profile.children.cycles-pp.tcp_event_new_data_sent
0.86 +0.0 0.89 perf-profile.children.cycles-pp._copy_from_user
0.90 +0.0 0.94 perf-profile.children.cycles-pp.security_file_permission
0.57 +0.0 0.61 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.71 +0.0 0.75 perf-profile.children.cycles-pp.netperf_sendfile
0.77 +0.0 0.81 perf-profile.children.cycles-pp.entry_SYSCALL_64
1.15 +0.0 1.19 perf-profile.children.cycles-pp.tcp_send_mss
0.90 +0.0 0.95 perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.89 +0.0 0.94 perf-profile.children.cycles-pp.__fdget
0.60 +0.0 0.65 perf-profile.children.cycles-pp.aa_sk_perm
0.57 +0.0 0.62 perf-profile.children.cycles-pp.current_time
1.22 +0.0 1.27 perf-profile.children.cycles-pp.skb_release_data
1.25 +0.1 1.30 perf-profile.children.cycles-pp.__alloc_skb
0.74 +0.1 0.80 perf-profile.children.cycles-pp.iov_iter_advance
0.86 +0.1 0.91 perf-profile.children.cycles-pp.skb_attempt_defer_free
1.18 +0.1 1.23 perf-profile.children.cycles-pp.rw_verify_area
0.99 +0.1 1.05 perf-profile.children.cycles-pp.page_cache_pipe_buf_release
1.10 +0.1 1.16 ± 2% perf-profile.children.cycles-pp.ktime_get
1.25 +0.1 1.32 perf-profile.children.cycles-pp.__fsnotify_parent
1.24 +0.1 1.31 perf-profile.children.cycles-pp.check_heap_object
1.21 +0.1 1.29 perf-profile.children.cycles-pp.release_pages
1.67 +0.1 1.76 perf-profile.children.cycles-pp.splice_folio_into_pipe
1.63 +0.1 1.72 perf-profile.children.cycles-pp.skb_append_pagefrags
1.52 +0.1 1.62 perf-profile.children.cycles-pp.__folio_batch_release
1.37 +0.1 1.47 perf-profile.children.cycles-pp.atime_needs_update
1.56 +0.1 1.68 perf-profile.children.cycles-pp.touch_atime
2.48 +0.1 2.60 perf-profile.children.cycles-pp.filemap_get_read_batch
1.98 +0.1 2.11 perf-profile.children.cycles-pp.__check_object_size
1.96 +0.1 2.08 perf-profile.children.cycles-pp.iov_iter_extract_pages
2.13 +0.1 2.26 perf-profile.children.cycles-pp.simple_copy_to_iter
2.80 +0.1 2.95 perf-profile.children.cycles-pp.filemap_get_pages
25.18 +0.2 25.35 perf-profile.children.cycles-pp.inet_recvmsg
24.74 +0.2 24.91 perf-profile.children.cycles-pp.tcp_recvmsg_locked
25.32 +0.2 25.49 perf-profile.children.cycles-pp.sock_recvmsg
25.10 +0.2 25.27 perf-profile.children.cycles-pp.tcp_recvmsg
26.38 +0.2 26.56 perf-profile.children.cycles-pp.accept_connection
26.38 +0.2 26.56 perf-profile.children.cycles-pp.accept_connections
26.38 +0.2 26.56 perf-profile.children.cycles-pp.process_requests
26.38 +0.2 26.56 perf-profile.children.cycles-pp.spawn_child
27.00 +0.2 27.18 perf-profile.children.cycles-pp.recv
25.83 +0.2 26.03 perf-profile.children.cycles-pp.__x64_sys_recvfrom
25.78 +0.2 25.98 perf-profile.children.cycles-pp.__sys_recvfrom
5.31 +0.3 5.62 perf-profile.children.cycles-pp.skb_splice_from_iter
10.52 +0.4 10.96 perf-profile.children.cycles-pp._copy_to_iter
8.40 +0.5 8.89 perf-profile.children.cycles-pp.filemap_splice_read
13.82 +0.6 14.47 perf-profile.children.cycles-pp.__skb_datagram_iter
13.85 +0.7 14.50 perf-profile.children.cycles-pp.skb_copy_datagram_iter
6.34 ± 3% -3.6 2.71 ± 5% perf-profile.self.cycles-pp.__mod_memcg_state
0.12 ± 4% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.apparmor_socket_sendmsg
0.10 ± 4% -0.0 0.07 perf-profile.self.cycles-pp.iov_iter_bvec
0.80 -0.0 0.78 perf-profile.self.cycles-pp.sock_sendmsg
0.20 -0.0 0.18 ± 2% perf-profile.self.cycles-pp.pick_eevdf
0.11 ± 3% -0.0 0.09 perf-profile.self.cycles-pp.enqueue_task_fair
0.12 -0.0 0.10 ± 3% perf-profile.self.cycles-pp.sk_wait_data
0.20 ± 2% -0.0 0.18 ± 2% perf-profile.self.cycles-pp.release_sock
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.enqueue_entity
0.15 ± 5% -0.0 0.13 ± 2% perf-profile.self.cycles-pp.do_softirq
0.26 -0.0 0.24 perf-profile.self.cycles-pp.refill_stock
0.06 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.rb_erase
0.12 -0.0 0.11 perf-profile.self.cycles-pp.avg_vruntime
0.61 +0.0 0.63 perf-profile.self.cycles-pp.mem_cgroup_charge_skmem
0.14 ± 2% +0.0 0.16 ± 3% perf-profile.self.cycles-pp.tcp_data_queue
0.18 ± 2% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.simple_copy_to_iter
0.22 +0.0 0.24 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.28 +0.0 0.29 perf-profile.self.cycles-pp.direct_splice_actor
0.21 ± 2% +0.0 0.23 ± 2% perf-profile.self.cycles-pp.__slab_free
0.17 ± 2% +0.0 0.19 ± 3% perf-profile.self.cycles-pp.tcp_send_mss
0.14 ± 2% +0.0 0.16 ± 3% perf-profile.self.cycles-pp.do_splice_read
0.22 ± 2% +0.0 0.24 ± 2% perf-profile.self.cycles-pp.net_rx_action
0.27 +0.0 0.29 perf-profile.self.cycles-pp.rw_verify_area
0.25 +0.0 0.27 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.20 +0.0 0.22 ± 2% perf-profile.self.cycles-pp.process_backlog
0.21 +0.0 0.23 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
0.12 ± 4% +0.0 0.14 ± 3% perf-profile.self.cycles-pp.lock_sock_nested
0.52 +0.0 0.54 perf-profile.self.cycles-pp.__virt_addr_valid
0.44 +0.0 0.46 perf-profile.self.cycles-pp.__schedule
0.48 +0.0 0.50 perf-profile.self.cycles-pp.check_heap_object
0.33 +0.0 0.35 perf-profile.self.cycles-pp.filemap_get_pages
0.16 ± 2% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.is_vmalloc_addr
0.32 ± 2% +0.0 0.34 perf-profile.self.cycles-pp.page_cache_pipe_buf_confirm
0.56 +0.0 0.59 perf-profile.self.cycles-pp.sendfile
0.39 +0.0 0.42 perf-profile.self.cycles-pp.tcp_recvmsg_locked
0.47 +0.0 0.50 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.48 +0.0 0.51 perf-profile.self.cycles-pp.__put_user_8
0.18 ± 4% +0.0 0.20 perf-profile.self.cycles-pp.ip_rcv_core
0.24 ± 3% +0.0 0.26 perf-profile.self.cycles-pp.__sk_mem_reduce_allocated
0.40 +0.0 0.43 perf-profile.self.cycles-pp.current_time
0.69 +0.0 0.72 perf-profile.self.cycles-pp.sendfile_tcp_stream
0.68 +0.0 0.72 perf-profile.self.cycles-pp.read_tsc
0.47 +0.0 0.50 perf-profile.self.cycles-pp.aa_sk_perm
0.95 +0.0 0.98 perf-profile.self.cycles-pp.skb_release_data
0.84 +0.0 0.87 perf-profile.self.cycles-pp._copy_from_user
0.55 +0.0 0.59 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.64 +0.0 0.67 perf-profile.self.cycles-pp.netperf_sendfile
0.61 +0.0 0.65 perf-profile.self.cycles-pp.__check_object_size
1.00 +0.0 1.04 perf-profile.self.cycles-pp.tcp_write_xmit
0.82 +0.0 0.87 perf-profile.self.cycles-pp.__fdget
0.62 +0.0 0.67 perf-profile.self.cycles-pp.atime_needs_update
0.68 +0.0 0.73 perf-profile.self.cycles-pp.iov_iter_advance
0.00 +0.1 0.05 perf-profile.self.cycles-pp.free_unref_page_list
0.98 +0.1 1.03 perf-profile.self.cycles-pp.__skb_datagram_iter
0.95 +0.1 1.00 perf-profile.self.cycles-pp.page_cache_pipe_buf_release
1.22 +0.1 1.28 perf-profile.self.cycles-pp.__fsnotify_parent
1.06 +0.1 1.13 perf-profile.self.cycles-pp.release_pages
1.24 +0.1 1.31 perf-profile.self.cycles-pp.tcp_sendmsg_locked
1.47 +0.1 1.54 perf-profile.self.cycles-pp.filemap_splice_read
1.19 +0.1 1.27 perf-profile.self.cycles-pp.iov_iter_extract_pages
1.58 +0.1 1.67 perf-profile.self.cycles-pp.splice_folio_into_pipe
1.54 +0.1 1.62 perf-profile.self.cycles-pp.skb_append_pagefrags
1.85 +0.1 1.95 perf-profile.self.cycles-pp.skb_splice_from_iter
1.90 +0.1 2.00 perf-profile.self.cycles-pp.filemap_get_read_batch
2.59 +0.1 2.72 perf-profile.self.cycles-pp.splice_to_socket
1.17 ± 5% +0.1 1.32 ± 3% perf-profile.self.cycles-pp.page_counter_uncharge
10.43 +0.4 10.87 perf-profile.self.cycles-pp._copy_to_iter
7.04 +0.7 7.76 ± 2% perf-profile.self.cycles-pp.__sk_mem_raise_allocated
***************************************************************************************************
lkp-icl-2sp8: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
gcc-12/performance/1HDD/xfs/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp8/metamix/stress-ng/60s
commit:
67b8bcbaed ("nilfs2: fix data corruption in dsync block recovery for small block sizes")
9cee7e8ef3 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
67b8bcbaed477787 9cee7e8ef3e31ca25b40ca52b85
---------------- ---------------------------
%stddev %change %stddev
\ | \
1356 ± 8% +21.8% 1652 ± 16% sched_debug.cfs_rq:/.util_est.max
20.30 ± 8% +3.9 24.17 ± 9% turbostat.PKG_%
3152098 +4.1% 3281361 stress-ng.metamix.ops
52508 +4.1% 54686 stress-ng.metamix.ops_per_sec
15793876 +4.1% 16439912 stress-ng.time.minor_page_faults
218.90 +1.9% 223.16 stress-ng.time.user_time
7.965e+08 +4.3% 8.306e+08 proc-vmstat.numa_hit
7.967e+08 +4.3% 8.307e+08 proc-vmstat.numa_local
7.935e+08 +4.3% 8.276e+08 proc-vmstat.pgalloc_normal
16118636 +4.3% 16808308 proc-vmstat.pgfault
7.933e+08 +4.3% 8.274e+08 proc-vmstat.pgfree
7.913e+08 +4.3% 8.253e+08 proc-vmstat.unevictable_pgs_culled
0.04 ± 18% +69.1% 0.06 ± 16% perf-sched.sch_delay.avg.ms.__cond_resched.__alloc_pages.alloc_pages_mpol.folio_alloc.__filemap_get_folio
0.03 ± 86% +268.9% 0.09 ± 34% perf-sched.sch_delay.avg.ms.__cond_resched.down_write.generic_file_write_iter.vfs_write.ksys_write
0.03 ±118% +454.7% 0.18 ± 44% perf-sched.sch_delay.avg.ms.__cond_resched.dput.open_last_lookups.path_openat.do_filp_open
0.04 ± 9% +44.9% 0.06 ± 17% perf-sched.sch_delay.avg.ms.__cond_resched.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
0.07 ± 10% -51.1% 0.04 ± 8% perf-sched.sch_delay.avg.ms.__cond_resched.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink
0.01 ± 63% +438.3% 0.04 ± 18% perf-sched.sch_delay.avg.ms.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.02 ± 17% +49.1% 0.03 ± 17% perf-sched.sch_delay.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
0.02 ± 13% +56.5% 0.03 ± 17% perf-sched.sch_delay.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
5.62 ± 37% -31.7% 3.84 ± 22% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
9.83 ± 52% +128.3% 22.45 ± 42% perf-sched.wait_time.max.ms.__cond_resched.dput.path_put.user_statfs.__do_sys_statfs
11.49 ± 49% +144.8% 28.13 ± 44% perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_lru.alloc_inode.new_inode.ramfs_get_inode
1.04 -4.9% 0.99 perf-stat.i.MPKI
2.484e+10 +3.4% 2.569e+10 perf-stat.i.branch-instructions
1.098e+08 +2.7% 1.127e+08 perf-stat.i.branch-misses
11.78 -0.5 11.32 perf-stat.i.cache-miss-rate%
1.123e+09 +2.3% 1.149e+09 perf-stat.i.cache-references
1.78 -3.4% 1.72 perf-stat.i.cpi
0.00 ± 4% -0.0 0.00 ± 7% perf-stat.i.dTLB-load-miss-rate%
943921 ± 4% -10.4% 845743 ± 7% perf-stat.i.dTLB-load-misses
3.127e+10 +3.4% 3.232e+10 perf-stat.i.dTLB-loads
2.362e+10 +4.2% 2.46e+10 perf-stat.i.dTLB-stores
1.265e+11 +3.5% 1.31e+11 perf-stat.i.instructions
0.56 +3.5% 0.58 perf-stat.i.ipc
1262 +3.6% 1308 perf-stat.i.metric.M/sec
38.08 -1.7 36.37 ± 2% perf-stat.i.node-load-miss-rate%
3411848 ± 2% -6.2% 3199316 ± 2% perf-stat.i.node-load-misses
3111347 ± 2% +3.7% 3226199 perf-stat.i.node-store-misses
1.05 -4.9% 0.99 perf-stat.overall.MPKI
11.79 -0.5 11.34 perf-stat.overall.cache-miss-rate%
1.78 -3.4% 1.72 perf-stat.overall.cpi
0.00 ± 4% -0.0 0.00 ± 7% perf-stat.overall.dTLB-load-miss-rate%
0.56 +3.5% 0.58 perf-stat.overall.ipc
37.14 ± 2% -1.7 35.49 ± 2% perf-stat.overall.node-load-miss-rate%
2.442e+10 +3.4% 2.525e+10 perf-stat.ps.branch-instructions
1.079e+08 +2.7% 1.108e+08 perf-stat.ps.branch-misses
1.104e+09 +2.3% 1.13e+09 perf-stat.ps.cache-references
935750 ± 4% -10.1% 841448 ± 7% perf-stat.ps.dTLB-load-misses
3.075e+10 +3.3% 3.178e+10 perf-stat.ps.dTLB-loads
2.323e+10 +4.1% 2.419e+10 perf-stat.ps.dTLB-stores
1.244e+11 +3.5% 1.288e+11 perf-stat.ps.instructions
3354652 ± 2% -6.2% 3145099 ± 2% perf-stat.ps.node-load-misses
3060027 ± 2% +3.7% 3172859 perf-stat.ps.node-store-misses
7.565e+12 +3.0% 7.793e+12 perf-stat.total.instructions
32.38 -0.6 31.75 perf-profile.calltrace.cycles-pp.filemap_add_folio.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
26.28 -0.6 25.67 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release
28.78 -0.6 28.18 perf-profile.calltrace.cycles-pp.release_pages.__folio_batch_release.truncate_inode_pages_range.evict.do_unlinkat
26.40 -0.6 25.80 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.truncate_inode_pages_range.evict
26.36 -0.6 25.77 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.truncate_inode_pages_range
28.82 -0.6 28.22 perf-profile.calltrace.cycles-pp.__folio_batch_release.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink
34.84 -0.6 34.30 perf-profile.calltrace.cycles-pp.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write
32.18 -0.5 31.68 perf-profile.calltrace.cycles-pp.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
32.12 -0.5 31.63 perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64
32.40 -0.5 31.92 perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
32.41 -0.5 31.93 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
32.41 -0.5 31.93 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
32.39 -0.5 31.91 perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
32.42 -0.5 31.94 perf-profile.calltrace.cycles-pp.unlink
40.74 -0.4 40.32 perf-profile.calltrace.cycles-pp.simple_write_begin.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write
24.93 -0.4 24.52 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru
25.05 -0.4 24.65 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.filemap_add_folio.__filemap_get_folio
25.02 -0.4 24.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.filemap_add_folio
27.07 -0.4 26.68 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_add_lru.filemap_add_folio.__filemap_get_folio.simple_write_begin
27.25 -0.4 26.87 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.__filemap_get_folio.simple_write_begin.generic_perform_write
45.36 -0.4 45.01 perf-profile.calltrace.cycles-pp.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.32 -0.3 42.98 perf-profile.calltrace.cycles-pp.generic_perform_write.generic_file_write_iter.vfs_write.ksys_write.do_syscall_64
47.13 -0.3 46.84 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
47.48 -0.3 47.20 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write.stress_metamix
47.82 -0.3 47.55 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write.stress_metamix
47.95 -0.3 47.68 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write.stress_metamix
4.88 -0.3 4.62 perf-profile.calltrace.cycles-pp.__filemap_add_folio.filemap_add_folio.__filemap_get_folio.simple_write_begin.generic_perform_write
48.84 -0.2 48.62 perf-profile.calltrace.cycles-pp.write.stress_metamix
1.62 -0.1 1.49 perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.__filemap_get_folio.simple_write_begin
1.46 -0.1 1.34 ± 2% perf-profile.calltrace.cycles-pp.__lruvec_stat_mod_folio.__filemap_add_folio.filemap_add_folio.__filemap_get_folio.simple_write_begin
1.52 -0.0 1.50 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu
1.64 -0.0 1.62 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.populate_vma_page_range.__mm_populate
1.53 -0.0 1.51 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.populate_vma_page_range
0.68 -0.0 0.65 perf-profile.calltrace.cycles-pp.__file_remove_privs.__generic_file_write_iter.generic_file_write_iter.vfs_write.ksys_write
0.57 +0.0 0.59 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.53 +0.0 0.56 perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages.alloc_pages_mpol.folio_alloc
0.78 +0.0 0.81 perf-profile.calltrace.cycles-pp.__fsnotify_parent.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.54 +0.0 0.57 perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.__folio_batch_release.truncate_inode_pages_range.evict
0.73 +0.0 0.76 perf-profile.calltrace.cycles-pp.xas_store.delete_from_page_cache_batch.truncate_inode_pages_range.evict.do_unlinkat
0.86 +0.0 0.90 perf-profile.calltrace.cycles-pp.find_lock_entries.truncate_inode_pages_range.evict.do_unlinkat.__x64_sys_unlink
0.84 +0.0 0.88 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.alloc_pages_mpol.folio_alloc.__filemap_get_folio
0.80 +0.0 0.84 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.llseek.stress_metamix
1.44 +0.0 1.48 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.llseek.stress_metamix
1.29 +0.0 1.34 perf-profile.calltrace.cycles-pp.__alloc_pages.alloc_pages_mpol.folio_alloc.__filemap_get_folio.simple_write_begin
1.16 +0.1 1.21 ± 2% perf-profile.calltrace.cycles-pp._copy_to_iter.copy_page_to_iter.filemap_read.vfs_read.ksys_read
1.35 +0.1 1.40 perf-profile.calltrace.cycles-pp.filemap_get_read_batch.filemap_get_pages.filemap_read.vfs_read.ksys_read
1.60 +0.1 1.65 perf-profile.calltrace.cycles-pp.alloc_pages_mpol.folio_alloc.__filemap_get_folio.simple_write_begin.generic_perform_write
0.66 ± 2% +0.1 0.71 perf-profile.calltrace.cycles-pp.rw_verify_area.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.26 +0.1 1.32 ± 2% perf-profile.calltrace.cycles-pp.copy_page_to_iter.filemap_read.vfs_read.ksys_read.do_syscall_64
1.54 +0.1 1.60 perf-profile.calltrace.cycles-pp.filemap_get_pages.filemap_read.vfs_read.ksys_read.do_syscall_64
1.68 +0.1 1.74 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.llseek.stress_metamix
1.72 +0.1 1.78 perf-profile.calltrace.cycles-pp.folio_alloc.__filemap_get_folio.simple_write_begin.generic_perform_write.generic_file_write_iter
3.99 +0.2 4.15 perf-profile.calltrace.cycles-pp.llseek.stress_metamix
4.40 +0.2 4.58 perf-profile.calltrace.cycles-pp.filemap_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.68 +0.3 6.99 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
7.01 +0.3 7.33 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.stress_metamix
7.36 +0.3 7.70 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.stress_metamix
7.48 +0.3 7.83 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read.stress_metamix
8.40 +0.4 8.80 perf-profile.calltrace.cycles-pp.read.stress_metamix
62.98 +0.4 63.37 perf-profile.calltrace.cycles-pp.stress_metamix
53.00 -1.0 51.98 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
52.86 -1.0 51.84 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
52.97 -1.0 51.95 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
32.40 -0.6 31.78 perf-profile.children.cycles-pp.filemap_add_folio
28.82 -0.6 28.22 perf-profile.children.cycles-pp.__folio_batch_release
29.09 -0.6 28.51 perf-profile.children.cycles-pp.release_pages
34.90 -0.5 34.35 perf-profile.children.cycles-pp.__filemap_get_folio
32.18 -0.5 31.68 perf-profile.children.cycles-pp.evict
32.14 -0.5 31.65 perf-profile.children.cycles-pp.truncate_inode_pages_range
32.40 -0.5 31.92 perf-profile.children.cycles-pp.__x64_sys_unlink
32.42 -0.5 31.94 perf-profile.children.cycles-pp.unlink
32.39 -0.5 31.91 perf-profile.children.cycles-pp.do_unlinkat
40.76 -0.4 40.34 perf-profile.children.cycles-pp.simple_write_begin
28.75 -0.4 28.34 perf-profile.children.cycles-pp.folio_batch_move_lru
27.27 -0.4 26.89 perf-profile.children.cycles-pp.folio_add_lru
45.41 -0.4 45.06 perf-profile.children.cycles-pp.generic_file_write_iter
43.41 -0.3 43.08 perf-profile.children.cycles-pp.generic_perform_write
93.64 -0.3 93.35 perf-profile.children.cycles-pp.do_syscall_64
47.22 -0.3 46.93 perf-profile.children.cycles-pp.vfs_write
47.54 -0.3 47.26 perf-profile.children.cycles-pp.ksys_write
94.03 -0.3 93.75 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
5.05 -0.2 4.80 perf-profile.children.cycles-pp.__filemap_add_folio
1.24 ± 2% -0.2 1.00 ± 2% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
49.16 -0.2 48.95 perf-profile.children.cycles-pp.write
0.49 -0.2 0.30 ± 3% perf-profile.children.cycles-pp.__count_memcg_events
0.63 -0.2 0.47 ± 2% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
2.12 -0.1 1.98 ± 2% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
1.68 -0.1 1.54 perf-profile.children.cycles-pp.__mem_cgroup_charge
0.44 -0.0 0.42 perf-profile.children.cycles-pp.security_inode_need_killpriv
0.11 ± 3% -0.0 0.10 ± 5% perf-profile.children.cycles-pp.xattr_resolve_name
0.23 +0.0 0.24 perf-profile.children.cycles-pp.free_unref_page_prepare
0.45 +0.0 0.47 perf-profile.children.cycles-pp.fault_in_readable
0.38 +0.0 0.40 perf-profile.children.cycles-pp.stress_hash_jenkin
0.52 +0.0 0.54 perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.39 ± 2% +0.0 0.42 ± 2% perf-profile.children.cycles-pp.truncate_cleanup_folio
0.31 +0.0 0.33 perf-profile.children.cycles-pp.try_charge_memcg
0.51 +0.0 0.54 ± 2% perf-profile.children.cycles-pp.do_vmi_munmap
0.22 ± 2% +0.0 0.25 ± 3% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.42 +0.0 0.44 perf-profile.children.cycles-pp.atime_needs_update
0.66 +0.0 0.68 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.58 +0.0 0.60 perf-profile.children.cycles-pp.mmap_region
0.56 +0.0 0.59 perf-profile.children.cycles-pp.rmqueue
0.26 ± 4% +0.0 0.29 ± 3% perf-profile.children.cycles-pp.run_ksoftirqd
0.56 +0.0 0.59 perf-profile.children.cycles-pp.free_unref_page_list
0.49 +0.0 0.52 perf-profile.children.cycles-pp.touch_atime
0.33 ± 3% +0.0 0.36 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.29 ± 5% +0.0 0.32 ± 3% perf-profile.children.cycles-pp.kthread
0.29 ± 5% +0.0 0.32 ± 3% perf-profile.children.cycles-pp.ret_from_fork
0.29 ± 5% +0.0 0.32 ± 3% perf-profile.children.cycles-pp.ret_from_fork_asm
0.41 +0.0 0.44 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.42 +0.0 1.45 perf-profile.children.cycles-pp.xas_store
0.27 ± 3% +0.0 0.30 ± 3% perf-profile.children.cycles-pp.smpboot_thread_fn
0.88 +0.0 0.91 perf-profile.children.cycles-pp.get_page_from_freelist
0.61 ± 2% +0.0 0.64 perf-profile.children.cycles-pp.stress_metamix_file
0.37 ± 4% +0.0 0.40 ± 3% perf-profile.children.cycles-pp.rcu_do_batch
0.87 +0.0 0.91 perf-profile.children.cycles-pp.find_lock_entries
0.89 +0.0 0.93 perf-profile.children.cycles-pp.simple_write_end
0.38 ± 4% +0.0 0.41 ± 4% perf-profile.children.cycles-pp.rcu_core
0.40 ± 4% +0.0 0.43 ± 4% perf-profile.children.cycles-pp.__do_softirq
1.19 +0.0 1.24 perf-profile.children.cycles-pp.__fsnotify_parent
1.38 +0.0 1.42 perf-profile.children.cycles-pp.filemap_get_read_batch
1.17 +0.0 1.22 ± 2% perf-profile.children.cycles-pp._copy_to_iter
1.34 +0.1 1.39 perf-profile.children.cycles-pp.__alloc_pages
1.29 +0.1 1.34 perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
1.63 +0.1 1.68 perf-profile.children.cycles-pp.alloc_pages_mpol
1.28 +0.1 1.34 ± 2% perf-profile.children.cycles-pp.copy_page_to_iter
1.75 +0.1 1.81 perf-profile.children.cycles-pp.folio_alloc
1.56 +0.1 1.62 perf-profile.children.cycles-pp.filemap_get_pages
1.65 +0.1 1.72 perf-profile.children.cycles-pp.entry_SYSCALL_64
4.03 +0.2 4.19 perf-profile.children.cycles-pp.llseek
4.45 +0.2 4.64 perf-profile.children.cycles-pp.filemap_read
6.72 +0.3 7.03 perf-profile.children.cycles-pp.vfs_read
7.05 +0.3 7.37 perf-profile.children.cycles-pp.ksys_read
62.98 +0.4 63.37 perf-profile.children.cycles-pp.stress_metamix
8.72 +0.4 9.12 perf-profile.children.cycles-pp.read
52.86 -1.0 51.84 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.07 ± 2% -0.3 0.81 ± 2% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.44 -0.2 0.24 ± 3% perf-profile.self.cycles-pp.__count_memcg_events
0.09 +0.0 0.10 perf-profile.self.cycles-pp.get_pfnblock_flags_mask
0.25 ± 2% +0.0 0.26 perf-profile.self.cycles-pp.__filemap_get_folio
0.20 +0.0 0.22 ± 2% perf-profile.self.cycles-pp.delete_from_page_cache_batch
0.33 +0.0 0.35 perf-profile.self.cycles-pp.stress_hash_jenkin
0.21 +0.0 0.23 ± 2% perf-profile.self.cycles-pp.try_charge_memcg
0.49 +0.0 0.51 perf-profile.self.cycles-pp.xas_descend
0.21 ± 3% +0.0 0.23 ± 2% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.29 ± 3% +0.0 0.32 perf-profile.self.cycles-pp.rw_verify_area
0.65 +0.0 0.68 perf-profile.self.cycles-pp.llseek
0.51 +0.0 0.54 perf-profile.self.cycles-pp.stress_metamix_file
0.75 +0.0 0.78 perf-profile.self.cycles-pp.vfs_read
0.81 +0.0 0.84 perf-profile.self.cycles-pp.filemap_read
0.71 +0.0 0.74 perf-profile.self.cycles-pp.find_lock_entries
0.40 +0.0 0.43 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.72 +0.0 0.75 perf-profile.self.cycles-pp.do_syscall_64
0.77 +0.0 0.80 perf-profile.self.cycles-pp.release_pages
0.88 +0.0 0.92 perf-profile.self.cycles-pp.lru_add_fn
0.80 +0.0 0.84 perf-profile.self.cycles-pp.vfs_write
1.00 +0.0 1.04 perf-profile.self.cycles-pp.filemap_get_read_batch
1.14 +0.0 1.19 perf-profile.self.cycles-pp.__fsnotify_parent
1.15 +0.0 1.20 ± 2% perf-profile.self.cycles-pp._copy_to_iter
1.25 +0.1 1.30 perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
***************************************************************************************************
lkp-cpl-4sp2: 224 threads 4 sockets Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz (Cooper Lake) with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/debian-11.1-x86_64-20220510.cgz/300s/128G/lkp-cpl-4sp2/truncate/vm-scalability
commit:
67b8bcbaed ("nilfs2: fix data corruption in dsync block recovery for small block sizes")
9cee7e8ef3 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
67b8bcbaed477787 9cee7e8ef3e31ca25b40ca52b85
---------------- ---------------------------
%stddev %change %stddev
\ | \
5.129e+08 ± 2% +42.0% 7.286e+08 ± 2% vm-scalability.median
5.129e+08 ± 2% +42.0% 7.286e+08 ± 2% vm-scalability.throughput
3842 ± 9% -23.4% 2943 ± 2% vm-scalability.time.involuntary_context_switches
251.17 ± 3% -20.2% 200.50 ± 3% vm-scalability.time.percent_of_cpu_this_job_got
544.92 ± 2% -20.3% 434.06 ± 4% vm-scalability.time.system_time
1.17 ± 2% -0.2 0.94 ± 4% mpstat.cpu.all.sys%
55.67 ± 10% -21.6% 43.67 ± 11% perf-c2c.DRAM.remote
4.50 +10.4% 4.97 ± 7% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
4.50 +10.4% 4.97 ± 7% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
80.83 -10.1% 72.67 ± 2% turbostat.Avg_MHz
2.13 -0.2 1.91 turbostat.Busy%
18971204 ± 24% +52.8% 28988952 ± 23% numa-meminfo.node0.MemFree
16667888 ± 47% +77.3% 29546741 ± 14% numa-meminfo.node2.Inactive
16530544 ± 47% +77.9% 29415298 ± 15% numa-meminfo.node2.Inactive(file)
32514972 ± 26% -40.4% 19367348 ± 21% numa-meminfo.node3.FilePages
31946066 ± 27% -39.6% 19280221 ± 22% numa-meminfo.node3.Inactive
31785044 ± 27% -39.8% 19134640 ± 22% numa-meminfo.node3.Inactive(file)
16347998 ± 52% +80.4% 29486790 ± 14% numa-meminfo.node3.MemFree
33131649 ± 26% -39.7% 19992857 ± 20% numa-meminfo.node3.MemUsed
359118 ± 41% +115.0% 772100 ± 51% numa-numastat.node1.local_node
431596 ± 35% +101.6% 869942 ± 44% numa-numastat.node1.numa_hit
906620 ± 16% -42.5% 521019 ± 58% numa-numastat.node1.numa_miss
977834 ± 15% -36.7% 619153 ± 50% numa-numastat.node1.other_node
836149 ± 40% -70.2% 248916 ± 56% numa-numastat.node3.local_node
1689066 ± 62% -86.6% 225607 ±118% numa-numastat.node3.numa_foreign
942394 ± 36% -62.7% 351650 ± 40% numa-numastat.node3.numa_hit
415036 ± 83% +113.8% 887345 ± 15% numa-numastat.node3.numa_miss
521278 ± 65% +90.1% 990792 ± 14% numa-numastat.node3.other_node
0.69 ± 53% +0.5 1.15 ± 18% perf-profile.calltrace.cycles-pp.trigger_load_balance.update_process_times.tick_sched_handle.tick_nohz_highres_handler.__hrtimer_run_queues
0.08 ± 16% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.irqtime_account_process_tick
0.13 ± 13% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.get_cpu_device
0.26 ± 15% +0.1 0.33 ± 8% perf-profile.children.cycles-pp.rcu_core
0.21 ± 11% +0.1 0.29 ± 7% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.76 ± 35% +0.4 1.16 ± 17% perf-profile.children.cycles-pp.trigger_load_balance
0.08 ± 16% +0.0 0.10 ± 8% perf-profile.self.cycles-pp.irqtime_account_process_tick
0.13 ± 13% +0.1 0.18 ± 17% perf-profile.self.cycles-pp.get_cpu_device
0.75 ± 35% +0.4 1.15 ± 18% perf-profile.self.cycles-pp.trigger_load_balance
1.512e+10 -8.3% 1.387e+10 ± 3% perf-stat.i.cpu-cycles
2609319 -2.8% 2535671 perf-stat.i.iTLB-loads
0.07 -8.1% 0.06 ± 3% perf-stat.i.metric.GHz
4.78 -7.9% 4.40 ± 3% perf-stat.overall.cpi
168.84 -8.4% 154.73 ± 3% perf-stat.overall.cycles-between-cache-misses
0.21 +8.6% 0.23 ± 3% perf-stat.overall.ipc
5.544e+08 -1.1% 5.484e+08 perf-stat.ps.branch-instructions
1.51e+10 -8.6% 1.381e+10 ± 3% perf-stat.ps.cpu-cycles
2596703 -2.8% 2523269 perf-stat.ps.iTLB-loads
4744339 ± 24% +52.7% 7243893 ± 24% numa-vmstat.node0.nr_free_pages
431693 ± 35% +101.6% 870245 ± 44% numa-vmstat.node1.numa_hit
359215 ± 41% +115.0% 772404 ± 51% numa-vmstat.node1.numa_local
906620 ± 16% -42.5% 521065 ± 58% numa-vmstat.node1.numa_miss
977834 ± 15% -36.7% 619199 ± 50% numa-vmstat.node1.numa_other
4134651 ± 47% +78.1% 7362301 ± 14% numa-vmstat.node2.nr_inactive_file
4134668 ± 47% +78.1% 7362321 ± 14% numa-vmstat.node2.nr_zone_inactive_file
8128688 ± 26% -40.4% 4844440 ± 21% numa-vmstat.node3.nr_file_pages
4087062 ± 52% +80.3% 7369033 ± 13% numa-vmstat.node3.nr_free_pages
7946196 ± 27% -39.8% 4786206 ± 22% numa-vmstat.node3.nr_inactive_file
7946213 ± 27% -39.8% 4786223 ± 22% numa-vmstat.node3.nr_zone_inactive_file
1689066 ± 62% -86.6% 225607 ±118% numa-vmstat.node3.numa_foreign
942361 ± 36% -62.6% 352113 ± 40% numa-vmstat.node3.numa_hit
836116 ± 40% -70.2% 249379 ± 56% numa-vmstat.node3.numa_local
415036 ± 83% +113.9% 887836 ± 15% numa-vmstat.node3.numa_miss
521278 ± 65% +90.2% 991283 ± 14% numa-vmstat.node3.numa_other
72.96 ± 68% -72.8% 19.85 ± 66% numa-vmstat.node3.workingset_nodes
***************************************************************************************************
lkp-cpl-4sp2: 224 threads 4 sockets Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz (Cooper Lake) with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/thread/50%/debian-11.1-x86_64-20220510.cgz/lkp-cpl-4sp2/fallocate1/will-it-scale
commit:
67b8bcbaed ("nilfs2: fix data corruption in dsync block recovery for small block sizes")
9cee7e8ef3 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
67b8bcbaed477787 9cee7e8ef3e31ca25b40ca52b85
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.07 ± 2% +0.0 0.09 ± 2% mpstat.cpu.all.usr%
2980 ± 8% +320.7% 12537 ±105% numa-meminfo.node0.Mapped
3605 +25.4% 4522 vmstat.system.cs
276.17 ± 16% -44.2% 154.00 ± 15% perf-c2c.DRAM.local
3338 ± 3% -31.1% 2300 ± 3% perf-c2c.DRAM.remote
0.02 +50.0% 0.03 turbostat.IPC
9174 ± 22% -62.1% 3476 ± 26% turbostat.POLL
19.05 -2.6% 18.56 turbostat.RAMWatt
2492160 +54.9% 3861385 will-it-scale.112.threads
22251 +54.9% 34476 will-it-scale.per_thread_ops
2492160 +54.9% 3861385 will-it-scale.workload
5794888 ± 5% -14.7% 4940830 ± 5% sched_debug.cfs_rq:/.avg_vruntime.stddev
5794888 ± 5% -14.7% 4940829 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
14.21 ± 5% +34.6% 19.12 ± 12% sched_debug.cpu.clock.stddev
3677 +14.2% 4198 ± 5% sched_debug.cpu.nr_switches.avg
41405 +7.3% 44427 proc-vmstat.nr_slab_reclaimable
1.499e+09 +55.1% 2.325e+09 proc-vmstat.numa_hit
1.498e+09 +55.1% 2.324e+09 proc-vmstat.numa_local
100185 -3.4% 96743 ± 2% proc-vmstat.pgactivate
1.499e+09 +55.0% 2.324e+09 proc-vmstat.pgalloc_normal
1.499e+09 +55.0% 2.324e+09 proc-vmstat.pgfree
3.466e+08 ± 2% +40.2% 4.861e+08 ± 14% numa-numastat.node0.local_node
3.468e+08 ± 2% +40.2% 4.863e+08 ± 14% numa-numastat.node0.numa_hit
3.825e+08 ± 2% +60.6% 6.142e+08 ± 2% numa-numastat.node1.local_node
3.827e+08 ± 2% +60.5% 6.144e+08 ± 2% numa-numastat.node1.numa_hit
3.831e+08 ± 2% +62.1% 6.21e+08 ± 2% numa-numastat.node2.local_node
3.832e+08 ± 2% +62.1% 6.212e+08 ± 2% numa-numastat.node2.numa_hit
3.858e+08 ± 2% +56.2% 6.026e+08 ± 11% numa-numastat.node3.local_node
3.86e+08 ± 2% +56.2% 6.027e+08 ± 11% numa-numastat.node3.numa_hit
3.468e+08 ± 2% +40.2% 4.863e+08 ± 14% numa-vmstat.node0.numa_hit
3.467e+08 ± 2% +40.2% 4.86e+08 ± 14% numa-vmstat.node0.numa_local
3.828e+08 ± 2% +60.5% 6.144e+08 ± 2% numa-vmstat.node1.numa_hit
3.826e+08 ± 2% +60.5% 6.142e+08 ± 2% numa-vmstat.node1.numa_local
3.833e+08 ± 2% +62.1% 6.212e+08 ± 2% numa-vmstat.node2.numa_hit
3.832e+08 ± 2% +62.1% 6.21e+08 ± 2% numa-vmstat.node2.numa_local
3.861e+08 ± 2% +56.1% 6.027e+08 ± 11% numa-vmstat.node3.numa_hit
3.858e+08 ± 2% +56.2% 6.026e+08 ± 11% numa-vmstat.node3.numa_local
0.02 ± 57% +149.5% 0.04 ± 58% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.01 ± 6% +29.7% 0.01 perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.01 ± 17% +56.2% 0.01 ± 8% perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
0.01 ± 26% +91.4% 0.02 ± 28% perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.common_nsleep.__x64_sys_clock_nanosleep
0.02 ± 19% +78.2% 0.03 ± 16% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.02 ± 25% +87.9% 0.03 ± 22% perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
168.11 -21.1% 132.56 ± 3% perf-sched.total_wait_and_delay.average.ms
13857 ± 4% +29.3% 17912 ± 2% perf-sched.total_wait_and_delay.count.ms
167.97 -21.2% 132.44 ± 3% perf-sched.total_wait_time.average.ms
65.81 ± 18% +35.1% 88.88 ± 15% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
423.14 -37.1% 266.13 ± 4% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
1078 ± 7% +31.1% 1413 ± 10% perf-sched.wait_and_delay.count.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
1012 ± 8% +149.3% 2523 ± 7% perf-sched.wait_and_delay.count.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
1361 ± 8% +23.1% 1675 ± 7% perf-sched.wait_and_delay.count.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
3600 ± 4% +61.1% 5799 ± 4% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.20 ± 13% +78.4% 0.36 ± 7% perf-sched.wait_and_delay.max.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.20 ± 15% +102.4% 0.41 ± 21% perf-sched.wait_and_delay.max.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.23 ± 25% +58.8% 0.36 ± 7% perf-sched.wait_and_delay.max.ms.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
29.38 ± 8% +562.6% 194.68 ±185% perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
3.95 ± 8% +17.7% 4.65 ± 6% perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
1.55 ± 5% +15.4% 1.79 ± 3% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
423.12 -37.1% 266.12 ± 4% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
1.57 ± 7% +16.2% 1.82 ± 4% perf-sched.wait_time.avg.ms.syslog_print.do_syslog.kmsg_read.vfs_read
0.20 ± 13% +78.4% 0.36 ± 7% perf-sched.wait_time.max.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.20 ± 15% +102.4% 0.41 ± 21% perf-sched.wait_time.max.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.23 ± 25% +58.8% 0.36 ± 7% perf-sched.wait_time.max.ms.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
3.11 ± 5% +15.4% 3.59 ± 3% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
29.38 ± 8% +562.7% 194.68 ±185% perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
3.14 ± 7% +16.1% 3.65 ± 4% perf-sched.wait_time.max.ms.syslog_print.do_syslog.kmsg_read.vfs_read
5.05 ± 3% -47.5% 2.65 perf-stat.i.MPKI
6.255e+09 +33.9% 8.375e+09 perf-stat.i.branch-instructions
0.37 ± 2% +0.0 0.39 perf-stat.i.branch-miss-rate%
23190779 +41.9% 32908628 perf-stat.i.branch-misses
32.92 -4.9 28.01 perf-stat.i.cache-miss-rate%
1.497e+08 ± 2% -28.1% 1.076e+08 perf-stat.i.cache-misses
4.548e+08 -15.6% 3.837e+08 perf-stat.i.cache-references
3482 +26.6% 4408 perf-stat.i.context-switches
14.39 -27.0% 10.50 perf-stat.i.cpi
262.60 +0.9% 265.06 perf-stat.i.cpu-migrations
2854 ± 2% +39.0% 3968 perf-stat.i.cycles-between-cache-misses
7.783e+09 +36.5% 1.062e+10 perf-stat.i.dTLB-loads
3.804e+09 +51.9% 5.779e+09 perf-stat.i.dTLB-stores
80.49 +4.8 85.30 perf-stat.i.iTLB-load-miss-rate%
10924802 +41.4% 15443362 perf-stat.i.iTLB-load-misses
2.972e+10 +36.5% 4.057e+10 perf-stat.i.instructions
2749 ± 2% -4.1% 2636 perf-stat.i.instructions-per-iTLB-miss
0.07 +37.2% 0.10 perf-stat.i.ipc
120.11 -21.3% 94.54 ± 5% perf-stat.i.metric.K/sec
81.63 +37.5% 112.27 perf-stat.i.metric.M/sec
20471399 -31.0% 14134700 ± 2% perf-stat.i.node-load-misses
1500875 ± 19% -44.9% 827495 ± 10% perf-stat.i.node-loads
2312406 +29.6% 2997675 perf-stat.i.node-store-misses
5.04 ± 3% -47.3% 2.65 perf-stat.overall.MPKI
0.37 ± 2% +0.0 0.39 perf-stat.overall.branch-miss-rate%
32.90 -4.9 28.04 perf-stat.overall.cache-miss-rate%
14.37 -26.9% 10.50 perf-stat.overall.cpi
2854 ± 2% +38.7% 3958 perf-stat.overall.cycles-between-cache-misses
0.00 ± 12% -0.0 0.00 ± 11% perf-stat.overall.dTLB-store-miss-rate%
80.63 +4.8 85.46 perf-stat.overall.iTLB-load-miss-rate%
0.07 +36.8% 0.10 perf-stat.overall.ipc
3580231 -11.7% 3162678 perf-stat.overall.path-length
6.232e+09 +33.9% 8.346e+09 perf-stat.ps.branch-instructions
23162804 +41.7% 32833133 perf-stat.ps.branch-misses
1.491e+08 ± 2% -28.1% 1.072e+08 perf-stat.ps.cache-misses
4.532e+08 -15.6% 3.825e+08 perf-stat.ps.cache-references
3470 +26.6% 4393 perf-stat.ps.context-switches
7.754e+09 +36.5% 1.059e+10 perf-stat.ps.dTLB-loads
3.789e+09 +52.0% 5.758e+09 perf-stat.ps.dTLB-stores
10884687 +41.4% 15387479 perf-stat.ps.iTLB-load-misses
2.962e+10 +36.5% 4.043e+10 perf-stat.ps.instructions
20394582 -30.9% 14085516 ± 2% perf-stat.ps.node-load-misses
1497500 ± 19% -44.8% 827353 ± 10% perf-stat.ps.node-loads
2303483 +29.7% 2986868 perf-stat.ps.node-store-misses
8.923e+12 +36.9% 1.221e+13 perf-stat.total.instructions
21.45 ± 4% -7.9 13.52 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release
21.47 ± 4% -7.9 13.55 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.shmem_undo_range
21.49 ± 4% -7.9 13.58 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr
21.70 ± 8% -6.8 14.87 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru
21.72 ± 8% -6.8 14.90 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio
21.74 ± 8% -6.8 14.93 ± 5% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp
23.54 ± 7% -6.7 16.85 ± 4% perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
23.59 ± 7% -6.7 16.90 ± 4% perf-profile.calltrace.cycles-pp.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
30.62 ± 2% -5.5 25.14 perf-profile.calltrace.cycles-pp.shmem_undo_range.shmem_setattr.notify_change.do_truncate.do_sys_ftruncate
30.63 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.ftruncate64
30.63 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.do_sys_ftruncate.do_syscall_64.entry_SYSCALL_64_after_hwframe.ftruncate64
30.63 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.ftruncate64
30.62 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.do_truncate.do_sys_ftruncate.do_syscall_64.entry_SYSCALL_64_after_hwframe.ftruncate64
30.62 ± 2% -5.5 25.15 perf-profile.calltrace.cycles-pp.shmem_setattr.notify_change.do_truncate.do_sys_ftruncate.do_syscall_64
30.63 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.ftruncate64
30.62 ± 2% -5.5 25.16 perf-profile.calltrace.cycles-pp.notify_change.do_truncate.do_sys_ftruncate.do_syscall_64.entry_SYSCALL_64_after_hwframe
26.52 ± 2% -5.2 21.36 perf-profile.calltrace.cycles-pp.__folio_batch_release.shmem_undo_range.shmem_setattr.notify_change.do_truncate
25.28 ± 2% -4.8 20.46 perf-profile.calltrace.cycles-pp.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr.notify_change
4.03 ± 4% -1.0 3.02 ± 5% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__lruvec_stat_mod_folio.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp
1.56 ± 5% -0.7 0.82 ± 9% perf-profile.calltrace.cycles-pp.__count_memcg_events.mem_cgroup_commit_charge.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp
4.99 ± 3% -0.7 4.32 ± 3% perf-profile.calltrace.cycles-pp.__lruvec_stat_mod_folio.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
3.02 -0.6 2.37 ± 3% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__lruvec_stat_mod_folio.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio
5.21 ± 3% -0.6 4.65 ± 3% perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
3.64 ± 2% -0.5 3.09 ± 5% perf-profile.calltrace.cycles-pp.__lruvec_stat_mod_folio.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio
3.64 ± 2% -0.5 3.10 ± 5% perf-profile.calltrace.cycles-pp.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio.shmem_undo_range
3.77 ± 2% -0.5 3.31 ± 4% perf-profile.calltrace.cycles-pp.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio.shmem_undo_range.shmem_setattr
3.86 -0.4 3.43 ± 4% perf-profile.calltrace.cycles-pp.filemap_remove_folio.truncate_inode_folio.shmem_undo_range.shmem_setattr.notify_change
3.94 -0.4 3.56 ± 4% perf-profile.calltrace.cycles-pp.truncate_inode_folio.shmem_undo_range.shmem_setattr.notify_change.do_truncate
1.19 ± 3% -0.3 0.86 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release
1.19 ± 3% -0.3 0.86 ± 2% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range
1.19 ± 3% -0.3 0.86 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu
1.21 ± 3% -0.3 0.88 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range.shmem_setattr.notify_change
1.21 ± 3% -0.3 0.88 ± 2% perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range.shmem_setattr
0.93 -0.2 0.68 perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr
1.42 -0.2 1.26 ± 2% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.lru_add_fn.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio
0.00 +0.6 0.57 ± 3% perf-profile.calltrace.cycles-pp.page_counter_uncharge.uncharge_batch.__mem_cgroup_uncharge_list.release_pages.__folio_batch_release
1.02 ± 6% +0.8 1.80 perf-profile.calltrace.cycles-pp.uncharge_batch.__mem_cgroup_uncharge_list.release_pages.__folio_batch_release.shmem_undo_range
1.62 ± 12% +2.5 4.07 ± 4% perf-profile.calltrace.cycles-pp.uncharge_folio.__mem_cgroup_uncharge_list.release_pages.__folio_batch_release.shmem_undo_range
2.64 ± 9% +3.2 5.87 ± 3% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge_list.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr
6.40 ± 10% +3.4 9.84 ± 3% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
48.46 +4.9 53.38 perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate
48.53 +5.0 53.48 perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
48.72 +5.1 53.78 perf-profile.calltrace.cycles-pp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.77 +5.1 53.84 perf-profile.calltrace.cycles-pp.vfs_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe.fallocate64
48.81 +5.1 53.91 perf-profile.calltrace.cycles-pp.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe.fallocate64
48.83 +5.1 53.94 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.fallocate64
48.84 +5.1 53.96 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.fallocate64
48.90 +5.1 54.05 perf-profile.calltrace.cycles-pp.fallocate64
6.97 ± 11% +6.2 13.17 ± 4% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
19.24 ± 11% +12.0 31.20 ± 4% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
44.41 ± 5% -15.1 29.28 ± 3% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
44.43 ± 5% -15.1 29.34 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
44.47 ± 5% -15.1 29.39 ± 3% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
24.81 ± 7% -7.1 17.76 ± 4% perf-profile.children.cycles-pp.folio_batch_move_lru
23.62 ± 7% -6.7 16.92 ± 4% perf-profile.children.cycles-pp.folio_add_lru
30.63 ± 2% -5.5 25.16 perf-profile.children.cycles-pp.do_sys_ftruncate
30.62 ± 2% -5.5 25.15 perf-profile.children.cycles-pp.shmem_undo_range
30.62 ± 2% -5.5 25.16 perf-profile.children.cycles-pp.do_truncate
30.62 ± 2% -5.5 25.15 perf-profile.children.cycles-pp.shmem_setattr
30.62 ± 2% -5.5 25.16 perf-profile.children.cycles-pp.notify_change
30.63 ± 2% -5.5 25.16 perf-profile.children.cycles-pp.ftruncate64
26.52 ± 2% -5.2 21.36 perf-profile.children.cycles-pp.__folio_batch_release
25.35 ± 2% -4.8 20.54 perf-profile.children.cycles-pp.release_pages
9.44 -2.1 7.38 ± 3% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
8.64 ± 2% -1.2 7.42 ± 3% perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
1.76 ± 5% -0.7 1.05 ± 7% perf-profile.children.cycles-pp.__count_memcg_events
5.23 ± 3% -0.6 4.67 ± 3% perf-profile.children.cycles-pp.shmem_add_to_page_cache
3.65 ± 2% -0.5 3.10 ± 5% perf-profile.children.cycles-pp.filemap_unaccount_folio
3.78 ± 2% -0.5 3.31 ± 4% perf-profile.children.cycles-pp.__filemap_remove_folio
3.86 ± 2% -0.4 3.44 ± 4% perf-profile.children.cycles-pp.filemap_remove_folio
3.94 -0.4 3.56 ± 4% perf-profile.children.cycles-pp.truncate_inode_folio
1.22 ± 3% -0.3 0.88 ± 2% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.06 ± 13% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.xas_alloc
0.06 ± 9% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
0.05 ± 7% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.kmem_cache_free
0.06 +0.0 0.09 ± 5% perf-profile.children.cycles-pp.xas_load
0.05 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.truncate_cleanup_folio
0.06 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.cgroup_rstat_updated
0.07 ± 11% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.xas_create
0.05 ± 8% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__do_softirq
0.06 ± 7% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.rmqueue
0.07 ± 7% +0.0 0.10 perf-profile.children.cycles-pp.__dquot_alloc_space
0.06 ± 7% +0.0 0.10 ± 3% perf-profile.children.cycles-pp.free_unref_page_list
0.05 ± 7% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.rcu_core
0.05 ± 7% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.rcu_do_batch
0.03 ±100% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__memcg_slab_pre_alloc_hook
0.09 ± 6% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.page_counter_try_charge
0.02 ± 99% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.__memcg_slab_free_hook
0.10 ± 4% +0.0 0.15 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__cond_resched
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__folio_cancel_dirty
0.00 +0.1 0.05 perf-profile.children.cycles-pp.shmem_recalc_inode
0.00 +0.1 0.05 perf-profile.children.cycles-pp.xas_init_marks
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.memcg_check_events
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.01 ±223% +0.1 0.06 ± 7% perf-profile.children.cycles-pp.obj_cgroup_charge
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.filemap_get_entry
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.xas_descend
0.13 ± 4% +0.1 0.19 ± 2% perf-profile.children.cycles-pp.find_lock_entries
0.10 ± 4% +0.1 0.16 ± 5% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.folio_unlock
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.xas_clear_mark
0.22 ± 5% +0.1 0.28 ± 5% perf-profile.children.cycles-pp.propagate_protected_usage
0.14 ± 2% +0.1 0.22 perf-profile.children.cycles-pp.shmem_inode_acct_blocks
0.15 ± 3% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.__alloc_pages
0.10 ± 5% +0.1 0.19 ± 5% perf-profile.children.cycles-pp.__mod_node_page_state
0.17 ± 2% +0.1 0.27 ± 2% perf-profile.children.cycles-pp.xas_store
0.18 ± 7% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.try_charge_memcg
0.18 ± 2% +0.1 0.29 ± 3% perf-profile.children.cycles-pp.alloc_pages_mpol
0.13 ± 3% +0.1 0.23 ± 4% perf-profile.children.cycles-pp.__mod_lruvec_state
0.20 ± 2% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.shmem_alloc_folio
0.41 ± 4% +0.2 0.57 ± 3% perf-profile.children.cycles-pp.page_counter_uncharge
1.02 ± 6% +0.8 1.80 perf-profile.children.cycles-pp.uncharge_batch
1.62 ± 12% +2.5 4.07 ± 4% perf-profile.children.cycles-pp.uncharge_folio
2.64 ± 9% +3.2 5.87 ± 3% perf-profile.children.cycles-pp.__mem_cgroup_uncharge_list
6.42 ± 10% +3.4 9.85 ± 3% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
48.51 +4.9 53.42 perf-profile.children.cycles-pp.shmem_alloc_and_add_folio
48.58 +4.9 53.53 perf-profile.children.cycles-pp.shmem_get_folio_gfp
48.72 +5.1 53.78 perf-profile.children.cycles-pp.shmem_fallocate
48.77 +5.1 53.85 perf-profile.children.cycles-pp.vfs_fallocate
48.81 +5.1 53.91 perf-profile.children.cycles-pp.__x64_sys_fallocate
48.93 +5.2 54.08 perf-profile.children.cycles-pp.fallocate64
6.98 ± 11% +6.2 13.18 ± 4% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
19.26 ± 11% +12.0 31.22 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
44.41 ± 5% -15.1 29.28 ± 3% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
9.40 -2.1 7.32 ± 3% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
1.75 ± 5% -0.7 1.04 ± 7% perf-profile.self.cycles-pp.__count_memcg_events
0.05 +0.0 0.07 ± 5% perf-profile.self.cycles-pp.cgroup_rstat_updated
0.06 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.shmem_fallocate
0.06 ± 7% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.page_counter_try_charge
0.06 ± 6% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.shmem_add_to_page_cache
0.06 ± 6% +0.0 0.10 perf-profile.self.cycles-pp.xas_store
0.02 ± 99% +0.0 0.06 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.11 ± 5% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.find_lock_entries
0.01 ±223% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.obj_cgroup_charge
0.10 ± 3% +0.1 0.15 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.1 0.05 perf-profile.self.cycles-pp.fallocate64
0.00 +0.1 0.05 ± 7% perf-profile.self.cycles-pp.__dquot_alloc_space
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.xas_descend
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.folio_unlock
0.08 ± 5% +0.1 0.14 ± 3% perf-profile.self.cycles-pp.try_charge_memcg
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__alloc_pages
0.00 +0.1 0.06 perf-profile.self.cycles-pp.xas_clear_mark
0.01 ±223% +0.1 0.07 ± 10% perf-profile.self.cycles-pp.get_page_from_freelist
0.13 +0.1 0.19 ± 3% perf-profile.self.cycles-pp.release_pages
0.00 +0.1 0.07 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.21 ± 5% +0.1 0.28 ± 4% perf-profile.self.cycles-pp.propagate_protected_usage
0.10 ± 5% +0.1 0.18 ± 6% perf-profile.self.cycles-pp.lru_add_fn
0.09 ± 4% +0.1 0.18 ± 4% perf-profile.self.cycles-pp.__mod_node_page_state
0.22 ± 4% +0.1 0.33 ± 5% perf-profile.self.cycles-pp.page_counter_uncharge
0.18 ± 2% +0.1 0.31 ± 3% perf-profile.self.cycles-pp.folio_batch_move_lru
0.08 ± 36% +0.1 0.22 ± 9% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
1.48 ± 4% +0.4 1.86 ± 9% perf-profile.self.cycles-pp.__lruvec_stat_mod_folio
0.40 ± 11% +0.6 0.96 ± 3% perf-profile.self.cycles-pp.uncharge_batch
5.68 ± 12% +2.2 7.89 ± 4% perf-profile.self.cycles-pp.__mem_cgroup_charge
1.61 ± 12% +2.4 4.06 ± 4% perf-profile.self.cycles-pp.uncharge_folio
4.82 ± 12% +4.1 8.97 ± 4% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
6.95 ± 11% +6.2 13.14 ± 4% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v3 13/35] lib: add allocation tagging support for memory allocation profiling
@ 2024-02-18 2:21 5% ` Suren Baghdasaryan
0 siblings, 0 replies; 200+ results
From: Suren Baghdasaryan @ 2024-02-18 2:21 UTC (permalink / raw)
To: Vlastimil Babka
Cc: akpm, kent.overstreet, mhocko, hannes, roman.gushchin, mgorman,
dave, willy, liam.howlett, corbet, void, peterz, juri.lelli,
catalin.marinas, will, arnd, tglx, mingo, dave.hansen, x86,
peterx, david, axboe, mcgrof, masahiroy, nathan, dennis, tj,
muchun.song, rppt, paulmck, pasha.tatashin, yosryahmed, yuzhao,
dhowells, hughd, andreyknvl, keescook, ndesaulniers, vvvvvv,
gregkh, ebiggers, ytcoode, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, bristot, vschneid, cl, penberg, iamjoonsoo.kim,
42.hyeyoo, glider, elver, dvyukov, shakeelb, songmuchun, jbaron,
rientjes, minchan, kaleshsingh, kernel-team, linux-doc,
linux-kernel, iommu, linux-arch, linux-fsdevel, linux-mm,
linux-modules, kasan-dev, cgroups
On Fri, Feb 16, 2024 at 8:57 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 2/12/24 22:38, Suren Baghdasaryan wrote:
> > Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily
> > instrument memory allocators. It registers an "alloc_tags" codetag type
> > with /proc/allocinfo interface to output allocation tag information when
> > the feature is enabled.
> > CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory
> > allocation profiling instrumentation.
> > Memory allocation profiling can be enabled or disabled at runtime using
> > /proc/sys/vm/mem_profiling sysctl when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n.
> > CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT enables memory allocation
> > profiling by default.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> > ---
> > Documentation/admin-guide/sysctl/vm.rst | 16 +++
> > Documentation/filesystems/proc.rst | 28 +++++
> > include/asm-generic/codetag.lds.h | 14 +++
> > include/asm-generic/vmlinux.lds.h | 3 +
> > include/linux/alloc_tag.h | 133 ++++++++++++++++++++
> > include/linux/sched.h | 24 ++++
> > lib/Kconfig.debug | 25 ++++
> > lib/Makefile | 2 +
> > lib/alloc_tag.c | 158 ++++++++++++++++++++++++
> > scripts/module.lds.S | 7 ++
> > 10 files changed, 410 insertions(+)
> > create mode 100644 include/asm-generic/codetag.lds.h
> > create mode 100644 include/linux/alloc_tag.h
> > create mode 100644 lib/alloc_tag.c
> >
> > diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> > index c59889de122b..a214719492ea 100644
> > --- a/Documentation/admin-guide/sysctl/vm.rst
> > +++ b/Documentation/admin-guide/sysctl/vm.rst
> > @@ -43,6 +43,7 @@ Currently, these files are in /proc/sys/vm:
> > - legacy_va_layout
> > - lowmem_reserve_ratio
> > - max_map_count
> > +- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
> > - memory_failure_early_kill
> > - memory_failure_recovery
> > - min_free_kbytes
> > @@ -425,6 +426,21 @@ e.g., up to one or two maps per allocation.
> > The default value is 65530.
> >
> >
> > +mem_profiling
> > +==============
> > +
> > +Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
> > +
> > +1: Enable memory profiling.
> > +
> > +0: Disabld memory profiling.
>
> Disable
Ack.
>
> ...
>
> > +allocinfo
> > +~~~~~~~
> > +
> > +Provides information about memory allocations at all locations in the code
> > +base. Each allocation in the code is identified by its source file, line
> > +number, module and the function calling the allocation. The number of bytes
> > +allocated at each location is reported.
>
> See, it even says "number of bytes" :)
Yes, we are changing the output to bytes.
>
> > +
> > +Example output.
> > +
> > +::
> > +
> > + > cat /proc/allocinfo
> > +
> > + 153MiB mm/slub.c:1826 module:slub func:alloc_slab_page
>
> Is "module" meant in the usual kernel module sense? In that case IIRC is
> more common to annotate things e.g. [xfs] in case it's really a module, and
> nothing if it's built it, such as slub. Is that "slub" simply derived from
> "mm/slub.c"? Then it's just redundant?
Sounds good. The new example would look like this:
> sort -rn /proc/allocinfo
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
56373248 4737 mm/slub.c:2259 func:alloc_slab_page
14880768 3633 mm/readahead.c:247 func:page_cache_ra_unbounded
14417920 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
13377536 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
11718656 2861 mm/filemap.c:1919 func:__filemap_get_folio
9192960 2800 kernel/fork.c:307 func:alloc_thread_stack_node
4206592 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
3940352 962 mm/memory.c:4214 func:alloc_anon_folio
2894464 22613 fs/kernfs/dir.c:615 func:__kernfs_new_node
...
Note that [ctagmod] is the only allocation from a module in this example.
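As an aside, the per-line format shown above lends itself to scripting. Here is a hedged sketch of a parser; the column meanings (total bytes, then call count, then file:line, an optional [module], then func:) are assumptions read off the example output, not an official format specification:

```python
# Hypothetical parser for the /proc/allocinfo format sketched above.
# Assumed columns: <bytes> <calls> <file>:<line> [module] func:<name>,
# where "[module]" appears only for allocations made from a module.
import re

LINE_RE = re.compile(
    r"^\s*(?P<bytes>\d+)\s+(?P<calls>\d+)\s+"
    r"(?P<loc>\S+:\d+)\s+(?:\[(?P<module>\S+)\]\s+)?func:(?P<func>\S+)\s*$"
)

def parse_allocinfo(text):
    """Return one dict per allocation site, largest total first."""
    entries = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue  # skip headers and malformed lines
        d = m.groupdict()
        d["bytes"] = int(d["bytes"])
        d["calls"] = int(d["calls"])
        entries.append(d)
    return sorted(entries, key=lambda e: e["bytes"], reverse=True)

sample = """\
127664128 31168 mm/page_ext.c:270 func:alloc_page_ext
4136960 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
"""
top = parse_allocinfo(sample)
print(top[0]["func"], top[0]["bytes"])
```

In practice one would feed it `open("/proc/allocinfo").read()` instead of the inline sample.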
>
> > + 6.08MiB mm/slab_common.c:950 module:slab_common func:_kmalloc_order
> > + 5.09MiB mm/memcontrol.c:2814 module:memcontrol func:alloc_slab_obj_exts
> > + 4.54MiB mm/page_alloc.c:5777 module:page_alloc func:alloc_pages_exact
> > + 1.32MiB include/asm-generic/pgalloc.h:63 module:pgtable func:__pte_alloc_one
> > + 1.16MiB fs/xfs/xfs_log_priv.h:700 module:xfs func:xlog_kvmalloc
> > + 1.00MiB mm/swap_cgroup.c:48 module:swap_cgroup func:swap_cgroup_prepare
> > + 734KiB fs/xfs/kmem.c:20 module:xfs func:kmem_alloc
> > + 640KiB kernel/rcu/tree.c:3184 module:tree func:fill_page_cache_func
> > + 640KiB drivers/char/virtio_console.c:452 module:virtio_console func:alloc_buf
> > + ...
> > +
> > +
> > meminfo
>
> ...
>
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 0be2d00c3696..78d258ca508f 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -972,6 +972,31 @@ config CODE_TAGGING
> > bool
> > select KALLSYMS
> >
> > +config MEM_ALLOC_PROFILING
> > + bool "Enable memory allocation profiling"
> > + default n
> > + depends on PROC_FS
> > + depends on !DEBUG_FORCE_WEAK_PER_CPU
> > + select CODE_TAGGING
> > + help
> > + Track allocation source code and record total allocation size
> > + initiated at that code location. The mechanism can be used to track
> > + memory leaks with a low performance and memory impact.
> > +
> > +config MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> > + bool "Enable memory allocation profiling by default"
> > + default y
>
> I'd go with default n as that I'd select for a general distro.
Well, we have MEM_ALLOC_PROFILING=n by default, so if it was switched
on manually, that is a strong sign that the user wants it enabled IMO.
So, enabling this switch by default seems logical to me. If a distro
wants to have the feature compiled in but disabled by default, that is
perfectly doable; they just need to set both options appropriately.
Does my logic make sense?
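For what it's worth, that combination would look like the following hypothetical distro config fragment (option names are from the patch; the chosen values are only an illustration of the "compiled in but off by default" case):

```
# Feature compiled in, but off until /proc/sys/vm/mem_profiling is written:
CONFIG_MEM_ALLOC_PROFILING=y
# CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT is not set
```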
>
> > + depends on MEM_ALLOC_PROFILING
> > +
> > +config MEM_ALLOC_PROFILING_DEBUG
> > + bool "Memory allocation profiler debugging"
> > + default n
> > + depends on MEM_ALLOC_PROFILING
> > + select MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
> > + help
> > + Adds warnings with helpful error messages for memory allocation
> > + profiling.
> > +
>
* Re: [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations
2024-02-15 21:59 6% [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations Oscar Salvador
2024-02-15 21:59 5% ` [PATCH v10 5/7] mm,page_owner: Display all stacks and their count Oscar Salvador
@ 2024-02-15 23:37 0% ` Andrey Konovalov
1 sibling, 0 replies; 200+ results
From: Andrey Konovalov @ 2024-02-15 23:37 UTC (permalink / raw)
To: Oscar Salvador
Cc: Andrew Morton, linux-kernel, linux-mm, Michal Hocko,
Vlastimil Babka, Marco Elver, Alexander Potapenko
On Thu, Feb 15, 2024 at 10:57 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> Changes v9 -> v10
> - Fix unwanted change in patch#2
> - Collect Acked-by and Reviewed-by from Marco and Vlastimil
> for the missing patches
> - Fix stack_record count by subtracting 1 in stack_print, by Vlastimil
>
> Changes v8 -> v9
> - Fix handle-0 for the very first stack_record entry
> - Collect Acked-by and Reviewed-by from Marco and Vlastimil
> - Addressed feedback from Marco and Vlastimil
> - stack_print() no longer allocates a memory buffer, prints directly
> using seq_printf: by Vlastimil
> - Added two static struct stack for dummy_handle and failure_handle
> - add_stack_record_to_list() now filters out the gfp_mask the same way
> stackdepot does, for consistency
> - Rename set_threshold to count_threshold
>
> Changes v7 -> v8
> - Rebased on top of -next
> - page_owner maintains its own stack_records list now
> - Kill auxiliary stackdepot function to traverse buckets
> - page_owner_stacks is now a directory with 'show_stacks'
> and 'set_threshold'
> - Update Documentation/mm/page_owner.rst
> - Addressed feedback from Marco
>
> Changes v6 -> v7:
> - Rebased on top of Andrey Konovalov's libstackdepot patchset
> - Reformulated the changelogs
>
> Changes v5 -> v6:
> - Rebase on top of v6.7-rc1
> - Move stack_record struct to the header
> - Addressed feedback from Vlastimil
> (some code tweaks and changelogs suggestions)
>
> Changes v4 -> v5:
> - Addressed feedback from Alexander Potapenko
>
> Changes v3 -> v4:
> - Rebase (long time has passed)
> - Use boolean instead of enum for action by Alexander Potapenko
> - (I left some feedback untouched because it's been long and
> would like to discuss it here now instead of re-vamping
> an old thread)
>
> Changes v2 -> v3:
> - Replace interface in favor of seq operations
> (suggested by Vlastimil)
> - Use debugfs interface to store/read values (suggested by Ammar)
>
>
> page_owner is a great debugging tool that lets us know
> about all pages that have been allocated/freed and their specific
> stacktrace.
> This comes in very handy when debugging memory leaks, since with
> some scripting we can see the outstanding allocations, which might point
> to a memory leak.
>
> In my experience, that is one of the most useful cases, but it can get
> really tedious to screen through all pages and try to reconstruct the
> stack <-> allocated/freed relationship, becoming most of the time a
> daunting and slow process when we have tons of allocation/free operations.
>
> This patchset aims to ease that by adding a new functionality into
> page_owner.
> This functionality creates a new directory called 'page_owner_stacks'
> under '/sys/kernel/debug' with a read-only file called 'show_stacks',
> which prints out all the stacks followed by their outstanding number
> of allocations (that being the number of times the stacktrace has
> allocated but not yet freed).
> This gives us a clear and quick overview of stacks <-> allocated/free.
>
> We take advantage of the new refcount_t field that stack_record struct
> gained, and increment/decrement the stack refcount on every
> __set_page_owner() (alloc operation) and __reset_page_owner (free operation)
> call.
>
> Unfortunately, we cannot use the new stackdepot api
> STACK_DEPOT_FLAG_GET because it does not fulfill page_owner needs,
> meaning we would have to special-case things, at which point it
> makes more sense for page_owner to do its own {dec,inc}rementing
> of the stacks.
> E.g: Using STACK_DEPOT_FLAG_PUT, once the refcount reaches 0,
> such stack gets evicted, so page_owner would lose information.
>
> This patchset also creates a new file called 'set_threshold' within the
> 'page_owner_stacks' directory, and by writing a value to it, the stacks
> whose refcount is below that value will be filtered out.
>
> A PoC can be found below:
>
> # cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
> # head -40 page_owner_full_stacks.txt
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> page_cache_ra_unbounded+0x96/0x180
> filemap_get_pages+0xfd/0x590
> filemap_read+0xcc/0x330
> blkdev_read_iter+0xb8/0x150
> vfs_read+0x285/0x320
> ksys_read+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 521
>
>
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_write+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 4609
> ...
> ...
>
> # echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
> # cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
> # head -40 page_owner_full_stacks_5000.txt
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_pwrite64+0x75/0x90
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 6781
>
>
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> pcpu_populate_chunk+0xec/0x350
> pcpu_balance_workfn+0x2d1/0x4a0
> process_scheduled_works+0x84/0x380
> worker_thread+0x12a/0x2a0
> kthread+0xe3/0x110
> ret_from_fork+0x30/0x50
> ret_from_fork_asm+0x1b/0x30
> stack_count: 8641
>
> Oscar Salvador (7):
> lib/stackdepot: Fix first entry having a 0-handle
> lib/stackdepot: Move stack_record struct definition into the header
> mm,page_owner: Maintain own list of stack_records structs
> mm,page_owner: Implement the tracking of the stacks count
> mm,page_owner: Display all stacks and their count
> mm,page_owner: Filter out stacks by a threshold
> mm,page_owner: Update Documentation regarding page_owner_stacks
>
> Documentation/mm/page_owner.rst | 45 +++++++
> include/linux/stackdepot.h | 58 +++++++++
> lib/stackdepot.c | 65 +++--------
> mm/page_owner.c | 200 +++++++++++++++++++++++++++++++-
> 4 files changed, 318 insertions(+), 50 deletions(-)
>
> --
> 2.43.0
>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
^ permalink raw reply [relevance 0%]
* + mmpage_owner-display-all-stacks-and-their-count.patch added to mm-unstable branch
@ 2024-02-15 23:34 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-15 23:34 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: mm,page_owner: display all stacks and their count
has been added to the -mm mm-unstable branch. Its filename is
mmpage_owner-display-all-stacks-and-their-count.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mmpage_owner-display-all-stacks-and-their-count.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: mm,page_owner: display all stacks and their count
Date: Thu, 15 Feb 2024 22:59:05 +0100
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it. Reading from
that file will show all stacks that were added by page_owner followed by
their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Link: https://lkml.kernel.org/r/20240215215907.20121-6-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 93 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
--- a/mm/page_owner.c~mmpage_owner-display-all-stacks-and-their-count
+++ a/mm/page_owner.c
@@ -171,7 +171,13 @@ static void add_stack_record_to_list(str
spin_lock_irqsave(&stack_list_lock, flags);
stack->next = stack_list;
- stack_list = stack;
+ /*
+ * This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -805,8 +811,90 @@ static const struct file_operations proc
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ int i, stack_count;
+ struct stack *stack = v;
+ unsigned long *entries;
+ unsigned long nr_entries;
+ struct stack_record *stack_record = stack->stack_record;
+
+ nr_entries = stack_record->size;
+ entries = stack_record->entries;
+ stack_count = refcount_read(&stack_record->count) - 1;
+
+ if (!nr_entries || nr_entries < 0 || stack_count < 1)
+ return 0;
+
+ for (i = 0; i < nr_entries; i++)
+ seq_printf(m, " %pS\n", (void *)entries[i]);
+ seq_printf(m, "stack_count: %d\n\n", stack_count);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -814,6 +902,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
_
Patches currently in -mm which might be from osalvador@suse.de are
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
mmpage_owner-maintain-own-list-of-stack_records-structs.patch
mmpage_owner-implement-the-tracking-of-the-stacks-count.patch
mmpage_owner-display-all-stacks-and-their-count.patch
mmpage_owner-filter-out-stacks-by-a-threshold.patch
mmpage_owner-update-documentation-regarding-page_owner_stacks.patch
^ permalink raw reply [relevance 4%]
* + lib-stackdepot-fix-first-entry-having-a-0-handle.patch added to mm-unstable branch
@ 2024-02-15 23:34 6% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-15 23:34 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: lib/stackdepot: fix first entry having a 0-handle
has been added to the -mm mm-unstable branch. Its filename is
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-stackdepot-fix-first-entry-having-a-0-handle.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: lib/stackdepot: fix first entry having a 0-handle
Date: Thu, 15 Feb 2024 22:59:01 +0100
Patch series "page_owner: print stacks and their outstanding allocations",
v10.
page_owner is a great debugging tool that lets us know about all
pages that have been allocated/freed and their specific stacktrace. This
comes in very handy when debugging memory leaks, since with some scripting we
can see the outstanding allocations, which might point to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship, becoming most of the time a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding a new functionality into
page_owner. This functionality creates a new directory called
'page_owner_stacks' under '/sys/kernel/debug' with a read-only file called
'show_stacks', which prints out all the stacks followed by their
outstanding number of allocations (that being the number of times the
stacktrace has allocated but not yet freed). This gives us a clear and
quick overview of stacks <-> allocated/free.
We take advantage of the new refcount_t field that stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner (free
operation) call.
Unfortunately, we cannot use the new stackdepot api STACK_DEPOT_FLAG_GET
because it does not fulfill page_owner needs, meaning we would have to
special-case things, at which point it makes more sense for page_owner to do
its own {dec,inc}rementing of the stacks. E.g: Using
STACK_DEPOT_FLAG_PUT, once the refcount reaches 0, such stack gets
evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within
'page_owner_stacks' directory, and by writing a value to it, the stacks
whose refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
This patch (of 7):
The very first entry of stack_record gets a handle of 0, but this is wrong
because stackdepot treats a 0-handle as an invalid one; e.g. see the
check in stack_depot_fetch().
Fix this by adding an offset of 1.
This bug has been lurking since the very beginning of stackdepot, but it
seems no one really cared. Because of that I am not adding a Fixes tag.
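The fix described above (encoding pool_index + 1 into the handle so that the very first record no longer produces an all-zero handle) can be sketched in a few lines. This is an illustrative userspace sketch: the field widths and helper names are made up and are not stackdepot's actual layout.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch of the offset-by-1 handle encoding: the record at
 * pool 0, offset 0 would otherwise encode to handle 0, which readers
 * treat as invalid. The 16-bit field split is invented for the example.
 */
#define OFFSET_BITS 16

static uint32_t encode_handle(uint32_t pool_index, uint32_t offset)
{
	/* Store pool_index + 1 so a valid handle is never all zeroes. */
	return ((pool_index + 1) << OFFSET_BITS) | offset;
}

static uint32_t decode_pool_index(uint32_t handle)
{
	/* Undo the offset on the fetch side. */
	return (handle >> OFFSET_BITS) - 1;
}

static uint32_t decode_offset(uint32_t handle)
{
	return handle & ((1u << OFFSET_BITS) - 1);
}
```

With this, the first record's handle is non-zero while decoding still recovers pool 0, offset 0, which mirrors the intent of the real patch.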
Link: https://lkml.kernel.org/r/20240215215907.20121-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20240215215907.20121-2-osalvador@suse.de
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
lib/stackdepot.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
--- a/lib/stackdepot.c~lib-stackdepot-fix-first-entry-having-a-0-handle
+++ a/lib/stackdepot.c
@@ -45,15 +45,16 @@
#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
+/* The pool_index is offset by 1 so the first record does not have a 0 handle. */
#define DEPOT_MAX_POOLS \
- (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
- (1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
+ (((1LL << (DEPOT_POOL_INDEX_BITS)) - 1 < DEPOT_POOLS_CAP) ? \
+ (1LL << (DEPOT_POOL_INDEX_BITS)) - 1 : DEPOT_POOLS_CAP)
/* Compact structure that stores a reference to a stack. */
union handle_parts {
depot_stack_handle_t handle;
struct {
- u32 pool_index : DEPOT_POOL_INDEX_BITS;
+ u32 pool_index : DEPOT_POOL_INDEX_BITS; /* pool_index is offset by 1 */
u32 offset : DEPOT_OFFSET_BITS;
u32 extra : STACK_DEPOT_EXTRA_BITS;
};
@@ -372,7 +373,7 @@ static struct stack_record *depot_pop_fr
stack = current_pool + pool_offset;
/* Pre-initialize handle once. */
- stack->handle.pool_index = pool_index;
+ stack->handle.pool_index = pool_index + 1;
stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
stack->handle.extra = 0;
INIT_LIST_HEAD(&stack->hash_list);
@@ -483,18 +484,19 @@ static struct stack_record *depot_fetch_
const int pools_num_cached = READ_ONCE(pools_num);
union handle_parts parts = { .handle = handle };
void *pool;
+ u32 pool_index = parts.pool_index - 1;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;
lockdep_assert_not_held(&pool_lock);
- if (parts.pool_index > pools_num_cached) {
+ if (pool_index > pools_num_cached) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pools_num_cached, handle);
+ pool_index, pools_num_cached, handle);
return NULL;
}
- pool = stack_pools[parts.pool_index];
+ pool = stack_pools[pool_index];
if (WARN_ON(!pool))
return NULL;
_
^ permalink raw reply [relevance 6%]
* [PATCH v10 5/7] mm,page_owner: Display all stacks and their count
2024-02-15 21:59 6% [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations Oscar Salvador
@ 2024-02-15 21:59 5% ` Oscar Salvador
2024-02-15 23:37 0% ` [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations Andrey Konovalov
1 sibling, 0 replies; 200+ results
From: Oscar Salvador @ 2024-02-15 21:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it.
Reading from that file will show all stacks that were added by page_owner
followed by their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
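As a rough userspace sketch of that iteration contract (start resumes from the node remembered in the private pointer; next advances and marks the end of the sequence by setting the position to -1), the struct names and the simplified seq_state below are hypothetical stand-ins for the kernel's struct seq_file plumbing:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct stack and the seq_file private state. */
struct stack {
	struct stack *next;
	int count;
};

struct seq_state {
	struct stack *private;	/* last node handed out, like m->private */
};

static struct stack *stack_list;	/* list head, as in the patch */

static struct stack *sketch_start(struct seq_state *m, unsigned long *ppos)
{
	struct stack *stack;

	if (*ppos == (unsigned long)-1)
		return NULL;		/* iteration already finished */

	if (!*ppos)
		stack = stack_list;	/* first call: start at the head */
	else
		stack = m->private->next; /* resume after the saved node */

	m->private = stack;
	return stack;
}

static struct stack *sketch_next(struct seq_state *m, struct stack *v,
				 unsigned long *ppos)
{
	struct stack *stack = v->next;

	/* A NULL next pointer ends the sequence via the -1 sentinel. */
	*ppos = stack ? *ppos + 1 : (unsigned long)-1;
	m->private = stack;
	return stack;
}

/* Drive one full pass, the way the seq machinery would. */
static int visit_all(struct seq_state *m)
{
	unsigned long pos = 0;
	int n = 0;
	struct stack *v = sketch_start(m, &pos);

	while (v) {
		n++;
		v = sketch_next(m, v, &pos);
	}
	return n;
}
```

Every node is visited exactly once and the -1 position sentinel stops any further start calls, matching the behaviour described for stack_start()/stack_next().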
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/page_owner.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index df6a923af5de..e99fbf822dd6 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -171,7 +171,13 @@ static void add_stack_record_to_list(struct stack_record *stack_record,
spin_lock_irqsave(&stack_list_lock, flags);
stack->next = stack_list;
- stack_list = stack;
+ /*
+ * This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -805,8 +811,90 @@ static const struct file_operations proc_page_owner_operations = {
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ int i, stack_count;
+ struct stack *stack = v;
+ unsigned long *entries;
+ unsigned long nr_entries;
+ struct stack_record *stack_record = stack->stack_record;
+
+ nr_entries = stack_record->size;
+ entries = stack_record->entries;
+ stack_count = refcount_read(&stack_record->count) - 1;
+
+ if (!nr_entries || nr_entries < 0 || stack_count < 1)
+ return 0;
+
+ for (i = 0; i < nr_entries; i++)
+ seq_printf(m, " %pS\n", (void *)entries[i]);
+ seq_printf(m, "stack_count: %d\n\n", stack_count);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -814,6 +902,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
--
2.43.0
^ permalink raw reply related [relevance 5%]
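The smp_store_release()/smp_load_acquire() pairing in the patch above can be illustrated with the analogous C11 release/acquire publication pattern. This is a hedged userspace sketch with invented names; unlike the kernel code, which serializes writers with stack_list_lock, the single-threaded demo below elides writer-side locking.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/*
 * Publication pattern: the producer fully initializes a node, then
 * publishes it with a release store of the list head; a reader that
 * observes the new head via an acquire load is guaranteed to also see
 * the node's initialized fields.
 */
struct node {
	struct node *next;
	int value;
};

static _Atomic(struct node *) head;

static void publish(struct node *n, int value)
{
	n->value = value;
	n->next = atomic_load_explicit(&head, memory_order_relaxed);
	/*
	 * Pairs with the acquire load in sum_all(): everything written
	 * to *n above is visible once the new head is observed.
	 */
	atomic_store_explicit(&head, n, memory_order_release);
}

static int sum_all(void)
{
	int sum = 0;
	struct node *n = atomic_load_explicit(&head, memory_order_acquire);

	for (; n; n = n->next)
		sum += n->value;
	return sum;
}
```

This mirrors why add_stack_record_to_list() can keep growing the list while a reader walks it: readers never see a head pointing at a half-initialized stack record.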
* [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations
@ 2024-02-15 21:59 6% Oscar Salvador
2024-02-15 21:59 5% ` [PATCH v10 5/7] mm,page_owner: Display all stacks and their count Oscar Salvador
2024-02-15 23:37 0% ` [PATCH v10 0/7] page_owner: print stacks and their outstanding allocations Andrey Konovalov
0 siblings, 2 replies; 200+ results
From: Oscar Salvador @ 2024-02-15 21:59 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
Changes v9 -> v10
- Fix unwanted change in patch#2
- Collect Acked-by and Reviewed-by from Marco and Vlastimil
for the missing patches
- Fix stack_record count by subtracting 1 in stack_print by Vlastimil
Changes v8 -> v9
- Fix handle-0 for the very first stack_record entry
- Collect Acked-by and Reviewed-by from Marco and Vlastimil
- Addressed feedback from Marco and Vlastimil
- stack_print() no longer allocates a memory buffer, prints directly
using seq_printf: by Vlastimil
- Added two static struct stack for dummy_handle and failure_handle
- add_stack_record_to_list() now filters out the gfp_mask the same way
stackdepot does, for consistency
- Rename set_threshold to count_threshold
Changes v7 -> v8
- Rebased on top of -next
- page_owner maintains its own stack_records list now
- Kill auxiliary stackdepot function to traverse buckets
- page_owner_stacks is now a directory with 'show_stacks'
and 'set_threshold'
- Update Documentation/mm/page_owner.rst
- Addressed feedback from Marco
Changes v6 -> v7:
- Rebased on top of Andrey Konovalov's libstackdepot patchset
- Reformulated the changelogs
Changes v5 -> v6:
- Rebase on top of v6.7-rc1
- Move stack_record struct to the header
- Addressed feedback from Vlastimil
(some code tweaks and changelogs suggestions)
Changes v4 -> v5:
- Addressed feedback from Alexander Potapenko
Changes v3 -> v4:
- Rebase (long time has passed)
- Use boolean instead of enum for action by Alexander Potapenko
- (I left some feedback untouched because it's been long and
would like to discuss it here now instead of re-vamping
an old thread)
Changes v2 -> v3:
- Replace interface in favor of seq operations
(suggested by Vlastimil)
- Use debugfs interface to store/read values (suggested by Ammar)
page_owner is a great debugging tool that lets us know
about all pages that have been allocated/freed and their specific
stacktrace.
This comes in very handy when debugging memory leaks, since with
some scripting we can see the outstanding allocations, which might point
to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship, becoming most of the time a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding a new functionality into
page_owner.
This functionality creates a new directory called 'page_owner_stacks'
under '/sys/kernel/debug' with a read-only file called 'show_stacks',
which prints out all the stacks followed by their outstanding number
of allocations (that being the number of times the stacktrace has allocated
but not yet freed).
This gives us a clear and quick overview of stacks <-> allocated/free.
We take advantage of the new refcount_t field that stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner (free operation)
call.
Unfortunately, we cannot use the new stackdepot api
STACK_DEPOT_FLAG_GET because it does not fulfill page_owner needs,
meaning we would have to special-case things, at which point it
makes more sense for page_owner to do its own {dec,inc}rementing
of the stacks.
E.g: Using STACK_DEPOT_FLAG_PUT, once the refcount reaches 0,
such stack gets evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within the
'page_owner_stacks' directory, and by writing a value to it, the stacks
whose refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
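The 'set_threshold' behaviour shown in the PoC (after writing 5000, only stacks with a count of at least 5000 are printed) boils down to a simple filter. The sketch below is a hypothetical userspace illustration, not the patch's code; the function name and array-based interface are invented.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative threshold filter: keep only the stack counts that meet
 * the configured threshold, the way writing to set_threshold prunes
 * the show_stacks output.
 */
static int filter_stacks(const int *counts, int n, int threshold,
			 int *out)
{
	int kept = 0;

	for (int i = 0; i < n; i++)
		if (counts[i] >= threshold)
			out[kept++] = counts[i];
	return kept;
}
```

Applied to the PoC's counts {521, 4609, 6781, 8641} with threshold 5000, only the last two stacks survive, matching the filtered output above.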
Oscar Salvador (7):
lib/stackdepot: Fix first entry having a 0-handle
lib/stackdepot: Move stack_record struct definition into the header
mm,page_owner: Maintain own list of stack_records structs
mm,page_owner: Implement the tracking of the stacks count
mm,page_owner: Display all stacks and their count
mm,page_owner: Filter out stacks by a threshold
mm,page_owner: Update Documentation regarding page_owner_stacks
Documentation/mm/page_owner.rst | 45 +++++++
include/linux/stackdepot.h | 58 +++++++++
lib/stackdepot.c | 65 +++--------
mm/page_owner.c | 200 +++++++++++++++++++++++++++++++-
4 files changed, 318 insertions(+), 50 deletions(-)
--
2.43.0
^ permalink raw reply [relevance 6%]
* Re: [PATCH v9 5/7] mm,page_owner: Display all stacks and their count
2024-02-14 17:01 5% ` [PATCH v9 5/7] mm,page_owner: Display all stacks and their count Oscar Salvador
@ 2024-02-15 11:10 0% ` Vlastimil Babka
0 siblings, 0 replies; 200+ results
From: Vlastimil Babka @ 2024-02-15 11:10 UTC (permalink / raw)
To: Oscar Salvador, Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Marco Elver,
Andrey Konovalov, Alexander Potapenko
On 2/14/24 18:01, Oscar Salvador wrote:
> This patch adds a new directory called 'page_owner_stacks' under
> /sys/kernel/debug/, with a file called 'show_stacks' in it.
> Reading from that file will show all stacks that were added by page_owner
> followed by their counting, giving us a clear overview of stack <-> count
> relationship.
>
> E.g:
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_write+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 4578
>
> The seq stack_{start,next} functions will iterate through the list
> stack_list in order to print all stacks.
>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Marco Elver <elver@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
...
> +static int stack_print(struct seq_file *m, void *v)
> +{
> + int i;
> + struct stack *stack = v;
> + unsigned long *entries;
> + unsigned long nr_entries;
> + struct stack_record *stack_record = stack->stack_record;
> +
> + nr_entries = stack_record->size;
> + entries = stack_record->entries;
> +
> + if (!nr_entries || nr_entries < 0 ||
> + refcount_read(&stack_record->count) < 2)
> + return 0;
> +
> + for (i = 0; i < nr_entries; i++)
> + seq_printf(m, " %pS\n", (void *)entries[i]);
> + seq_printf(m, "stack_count: %d\n\n", refcount_read(&stack_record->count));
So count - 1 here to report actual usage, as explained in reply to 4/7?
^ permalink raw reply [relevance 0%]
* + mmpage_owner-display-all-stacks-and-their-count.patch added to mm-unstable branch
@ 2024-02-14 18:32 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-14 18:32 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: mm,page_owner: display all stacks and their count
has been added to the -mm mm-unstable branch. Its filename is
mmpage_owner-display-all-stacks-and-their-count.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mmpage_owner-display-all-stacks-and-their-count.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: mm,page_owner: display all stacks and their count
Date: Wed, 14 Feb 2024 18:01:55 +0100
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it. Reading from
that file will show all stacks that were added by page_owner, followed by
their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Link: https://lkml.kernel.org/r/20240214170157.17530-6-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 93 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
--- a/mm/page_owner.c~mmpage_owner-display-all-stacks-and-their-count
+++ a/mm/page_owner.c
@@ -171,7 +171,13 @@ static void add_stack_record_to_list(str
spin_lock_irqsave(&stack_list_lock, flags);
stack->next = stack_list;
- stack_list = stack;
+ /*
+ * This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -805,8 +811,90 @@ static const struct file_operations proc
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ int i;
+ struct stack *stack = v;
+ unsigned long *entries;
+ unsigned long nr_entries;
+ struct stack_record *stack_record = stack->stack_record;
+
+ nr_entries = stack_record->size;
+ entries = stack_record->entries;
+
+ if (!nr_entries || nr_entries < 0 ||
+ refcount_read(&stack_record->count) < 2)
+ return 0;
+
+ for (i = 0; i < nr_entries; i++)
+ seq_printf(m, " %pS\n", (void *)entries[i]);
+ seq_printf(m, "stack_count: %d\n\n", refcount_read(&stack_record->count));
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -814,6 +902,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
_
Patches currently in -mm which might be from osalvador@suse.de are
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
mmpage_owner-maintain-own-list-of-stack_records-structs.patch
mmpage_owner-implement-the-tracking-of-the-stacks-count.patch
mmpage_owner-display-all-stacks-and-their-count.patch
mmpage_owner-filter-out-stacks-by-a-threshold.patch
mmpage_owner-update-documentation-regarding-page_owner_stacks.patch
^ permalink raw reply [relevance 4%]
* + lib-stackdepot-fix-first-entry-having-a-0-handle.patch added to mm-unstable branch
@ 2024-02-14 18:32 6% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-14 18:32 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: lib/stackdepot: fix first entry having a 0-handle
has been added to the -mm mm-unstable branch. Its filename is
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-stackdepot-fix-first-entry-having-a-0-handle.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: lib/stackdepot: fix first entry having a 0-handle
Date: Wed, 14 Feb 2024 18:01:51 +0100
Patch series "page_owner: print stacks and their outstanding allocations", v9.
page_owner is a great debug functionality tool that lets us know about all
pages that have been allocated/freed and their specific stacktrace. This
comes very handy when debugging memory leaks, since with some scripting we
can see the outstanding allocations, which might point to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to sift through all pages and try to reconstruct the
stack <-> allocated/freed relationship, which most of the time becomes a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding a new functionality into
page_owner. This functionality creates a new directory called
'page_owner_stacks' under '/sys/kernel/debug' with a read-only file called
'show_stacks', which prints out all the stacks followed by their
outstanding number of allocations (that is, the number of times each
stacktrace allocated pages that have not been freed yet). This gives us
a clear and quick overview of the stack <-> outstanding allocations
relationship.
We take advantage of the new refcount_t count field that the stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner (free
operation) call.
Unfortunately, we cannot use the new stackdepot api STACK_DEPOT_FLAG_GET
because it does not fulfill page_owner needs, meaning we would have to
special-case things, at which point it makes more sense for page_owner to do
its own {dec,inc}rementing of the stacks. E.g: Using
STACK_DEPOT_FLAG_PUT, once the refcount reaches 0, such stack gets
evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within the
'page_owner_stacks' directory; by writing a value to it, stacks whose
refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
This patch (of 7):
The very first entry of stack_record gets a handle of 0, but this is wrong
because stackdepot treats a 0-handle as an invalid one; see, e.g., the
check in stack_depot_fetch().
Fix this by adding an offset of 1.
This bug has been lurking since the very beginning of stackdepot, but it
seems no one ever noticed. Because of that, I am not adding a Fixes
tag.
Link: https://lkml.kernel.org/r/20240214170157.17530-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20240214170157.17530-2-osalvador@suse.de
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
lib/stackdepot.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
--- a/lib/stackdepot.c~lib-stackdepot-fix-first-entry-having-a-0-handle
+++ a/lib/stackdepot.c
@@ -45,15 +45,16 @@
#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
+/* The pool_index is offset by 1 so the first record does not have a 0 handle. */
#define DEPOT_MAX_POOLS \
- (((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
- (1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
+ (((1LL << (DEPOT_POOL_INDEX_BITS)) - 1 < DEPOT_POOLS_CAP) ? \
+ (1LL << (DEPOT_POOL_INDEX_BITS)) - 1 : DEPOT_POOLS_CAP)
/* Compact structure that stores a reference to a stack. */
union handle_parts {
depot_stack_handle_t handle;
struct {
- u32 pool_index : DEPOT_POOL_INDEX_BITS;
+ u32 pool_index : DEPOT_POOL_INDEX_BITS; /* pool_index is offset by 1 */
u32 offset : DEPOT_OFFSET_BITS;
u32 extra : STACK_DEPOT_EXTRA_BITS;
};
@@ -372,7 +373,7 @@ static struct stack_record *depot_pop_fr
stack = current_pool + pool_offset;
/* Pre-initialize handle once. */
- stack->handle.pool_index = pool_index;
+ stack->handle.pool_index = pool_index + 1;
stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
stack->handle.extra = 0;
INIT_LIST_HEAD(&stack->hash_list);
@@ -483,18 +484,19 @@ static struct stack_record *depot_fetch_
const int pools_num_cached = READ_ONCE(pools_num);
union handle_parts parts = { .handle = handle };
void *pool;
+ u32 pool_index = parts.pool_index - 1;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;
lockdep_assert_not_held(&pool_lock);
- if (parts.pool_index > pools_num_cached) {
+ if (pool_index > pools_num_cached) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pools_num_cached, handle);
+ pool_index, pools_num_cached, handle);
return NULL;
}
- pool = stack_pools[parts.pool_index];
+ pool = stack_pools[pool_index];
if (WARN_ON(!pool))
return NULL;
_
Patches currently in -mm which might be from osalvador@suse.de are
lib-stackdepot-fix-first-entry-having-a-0-handle.patch
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
mmpage_owner-maintain-own-list-of-stack_records-structs.patch
mmpage_owner-implement-the-tracking-of-the-stacks-count.patch
mmpage_owner-display-all-stacks-and-their-count.patch
mmpage_owner-filter-out-stacks-by-a-threshold.patch
mmpage_owner-update-documentation-regarding-page_owner_stacks.patch
^ permalink raw reply [relevance 6%]
* [PATCH v9 5/7] mm,page_owner: Display all stacks and their count
2024-02-14 17:01 6% [PATCH v9 0/7] page_owner: print stacks and their outstanding allocations Oscar Salvador
@ 2024-02-14 17:01 5% ` Oscar Salvador
2024-02-15 11:10 0% ` Vlastimil Babka
0 siblings, 1 reply; 200+ results
From: Oscar Salvador @ 2024-02-14 17:01 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it.
Reading from that file will show all stacks that were added by page_owner,
followed by their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
---
mm/page_owner.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 92 insertions(+), 1 deletion(-)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index df6a923af5de..5258a417f4d1 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -171,7 +171,13 @@ static void add_stack_record_to_list(struct stack_record *stack_record,
spin_lock_irqsave(&stack_list_lock, flags);
stack->next = stack_list;
- stack_list = stack;
+ /*
+ * This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -805,8 +811,90 @@ static const struct file_operations proc_page_owner_operations = {
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ int i;
+ struct stack *stack = v;
+ unsigned long *entries;
+ unsigned long nr_entries;
+ struct stack_record *stack_record = stack->stack_record;
+
+ nr_entries = stack_record->size;
+ entries = stack_record->entries;
+
+ if (!nr_entries || nr_entries < 0 ||
+ refcount_read(&stack_record->count) < 2)
+ return 0;
+
+ for (i = 0; i < nr_entries; i++)
+ seq_printf(m, " %pS\n", (void *)entries[i]);
+ seq_printf(m, "stack_count: %d\n\n", refcount_read(&stack_record->count));
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -814,6 +902,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
--
2.43.0
^ permalink raw reply related [relevance 5%]
* [PATCH v9 0/7] page_owner: print stacks and their outstanding allocations
@ 2024-02-14 17:01 6% Oscar Salvador
2024-02-14 17:01 5% ` [PATCH v9 5/7] mm,page_owner: Display all stacks and their count Oscar Salvador
0 siblings, 1 reply; 200+ results
From: Oscar Salvador @ 2024-02-14 17:01 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
Changes v8 -> v9
- Fix handle-0 for the very first stack_record entry
- Collect Acked-by and Reviewed-by from Marco and Vlastimil
- Addressed feedback from Marco and Vlastimil
- stack_print() no longer allocates a memory buffer, prints directly
using seq_printf: by Vlastimil
- Added two static struct stack for dummy_handle and failure_handle
- add_stack_record_to_list() now filters out the gfp_mask the same way
stackdepot does, for consistency
- Rename set_threshold to count_threshold
Changes v7 -> v8
- Rebased on top of -next
- page_owner maintains its own stack_records list now
- Kill auxiliary stackdepot function to traverse buckets
- page_owner_stacks is now a directory with 'show_stacks'
and 'set_threshold'
- Update Documentation/mm/page_owner.rst
- Addressed feedback from Marco
Changes v6 -> v7:
- Rebased on top of Andrey Konovalov's libstackdepot patchset
- Reformulated the changelogs
Changes v5 -> v6:
- Rebase on top of v6.7-rc1
- Move stack_record struct to the header
- Addressed feedback from Vlastimil
(some code tweaks and changelogs suggestions)
Changes v4 -> v5:
- Addressed feedback from Alexander Potapenko
Changes v3 -> v4:
- Rebase (long time has passed)
- Use boolean instead of enum for action by Alexander Potapenko
- (I left some feedback unaddressed because it has been a while; I
would rather discuss it here now instead of revamping
an old thread)
Changes v2 -> v3:
- Replace interface in favor of seq operations
(suggested by Vlastimil)
- Use debugfs interface to store/read valued (suggested by Ammar)
page_owner is a great debug functionality tool that lets us know
about all pages that have been allocated/freed and their specific
stacktrace.
This comes very handy when debugging memory leaks, since with
some scripting we can see the outstanding allocations, which might point
to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to sift through all pages and try to reconstruct the
stack <-> allocated/freed relationship, which most of the time becomes a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding a new functionality into
page_owner.
This functionality creates a new directory called 'page_owner_stacks'
under '/sys/kernel/debug' with a read-only file called 'show_stacks',
which prints out all the stacks followed by their outstanding number
of allocations (that is, the number of times each stacktrace allocated
pages that have not been freed yet).
This gives us a clear and quick overview of the stack <-> outstanding
allocations relationship.
We take advantage of the new refcount_t count field that the stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner (free operation)
call.
Unfortunately, we cannot use the new stackdepot api
STACK_DEPOT_FLAG_GET because it does not fulfill page_owner needs,
meaning we would have to special-case things, at which point it
makes more sense for page_owner to do its own {dec,inc}rementing
of the stacks.
E.g: Using STACK_DEPOT_FLAG_PUT, once the refcount reaches 0,
such stack gets evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within the
'page_owner_stacks' directory; by writing a value to it, stacks whose
refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
Oscar Salvador (7):
lib/stackdepot: Fix first entry having a 0-handle
lib/stackdepot: Move stack_record struct definition into the header
mm,page_owner: Maintain own list of stack_records structs
mm,page_owner: Implement the tracking of the stacks count
mm,page_owner: Display all stacks and their count
mm,page_owner: Filter out stacks by a threshold
mm,page_owner: Update Documentation regarding page_owner_stacks
Documentation/mm/page_owner.rst | 45 +++++++
include/linux/stackdepot.h | 58 +++++++++
lib/stackdepot.c | 65 +++--------
mm/page_owner.c | 200 +++++++++++++++++++++++++++++++-
4 files changed, 318 insertions(+), 50 deletions(-)
--
2.43.0
^ permalink raw reply [relevance 6%]
* Re: [RFC v2 03/14] filemap: use mapping_min_order while allocating folios
2024-02-13 22:05 0% ` Dave Chinner
@ 2024-02-14 10:13 0% ` Pankaj Raghav (Samsung)
0 siblings, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-14 10:13 UTC (permalink / raw)
To: Dave Chinner
Cc: linux-xfs, linux-fsdevel, mcgrof, gost.dev, akpm, kbusch, djwong,
chandan.babu, p.raghav, linux-kernel, hare, willy, linux-mm
> > +++ b/mm/filemap.c
> > @@ -127,6 +127,7 @@
> > static void page_cache_delete(struct address_space *mapping,
> > struct folio *folio, void *shadow)
> > {
> > + unsigned int min_order = mapping_min_folio_order(mapping);
> > XA_STATE(xas, &mapping->i_pages, folio->index);
> > long nr = 1;
> >
> > @@ -135,6 +136,7 @@ static void page_cache_delete(struct address_space *mapping,
> > xas_set_order(&xas, folio->index, folio_order(folio));
> > nr = folio_nr_pages(folio);
> >
> > + VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> > VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>
> If you are only using min_order in the VM_BUG_ON_FOLIO() macro, then
> please just do:
>
> VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
> folio);
>
> There is no need to clutter up the function with variables that are
> only used in one debug-only check.
>
Got it. I will fold it in.
> > @@ -1847,6 +1853,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> > fgf_t fgp_flags, gfp_t gfp)
> > {
> > struct folio *folio;
> > + unsigned int min_order = mapping_min_folio_order(mapping);
> > + unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
> > +
> > + index = round_down(index, min_nrpages);
>
> index = mapping_align_start_index(mapping, index);
I will add this helper. Makes the intent more clear. Thanks.
>
> The rest of the function only cares about min_order, not
> min_nrpages....
>
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com
^ permalink raw reply [relevance 0%]
* Re: [RFC v2 03/14] filemap: use mapping_min_order while allocating folios
2024-02-13 9:37 16% ` [RFC v2 03/14] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
2024-02-13 14:58 0% ` Hannes Reinecke
2024-02-13 16:38 0% ` Darrick J. Wong
@ 2024-02-13 22:05 0% ` Dave Chinner
2024-02-14 10:13 0% ` Pankaj Raghav (Samsung)
2 siblings, 1 reply; 200+ results
From: Dave Chinner @ 2024-02-13 22:05 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: linux-xfs, linux-fsdevel, mcgrof, gost.dev, akpm, kbusch, djwong,
chandan.babu, p.raghav, linux-kernel, hare, willy, linux-mm
On Tue, Feb 13, 2024 at 10:37:02AM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folio of order 0. __filemap_get_folio was trying to allocate higher
> order folios when fgp_flags had higher order hint set but it will default
> to order 0 folio if higher order memory allocation fails.
>
> As we bring the notion of mapping_min_order, make sure these functions
> allocate at least folio of mapping_min_order as we need to guarantee it
> in the page cache.
>
> Add some additional VM_BUG_ON() in page_cache_delete[batch] and
> __filemap_add_folio to catch errors where we delete or add folios that
> has order less than min_order.
>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> mm/filemap.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 323a8e169581..7a6e15c47150 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -127,6 +127,7 @@
> static void page_cache_delete(struct address_space *mapping,
> struct folio *folio, void *shadow)
> {
> + unsigned int min_order = mapping_min_folio_order(mapping);
> XA_STATE(xas, &mapping->i_pages, folio->index);
> long nr = 1;
>
> @@ -135,6 +136,7 @@ static void page_cache_delete(struct address_space *mapping,
> xas_set_order(&xas, folio->index, folio_order(folio));
> nr = folio_nr_pages(folio);
>
> + VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
If you are only using min_order in the VM_BUG_ON_FOLIO() macro, then
please just do:
VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
folio);
There is no need to clutter up the function with variables that are
only used in one debug-only check.
> @@ -1847,6 +1853,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> fgf_t fgp_flags, gfp_t gfp)
> {
> struct folio *folio;
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
> +
> + index = round_down(index, min_nrpages);
index = mapping_align_start_index(mapping, index);
The rest of the function only cares about min_order, not
min_nrpages....
-Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [RFC v2 03/14] filemap: use mapping_min_order while allocating folios
2024-02-13 9:37 16% ` [RFC v2 03/14] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
2024-02-13 14:58 0% ` Hannes Reinecke
@ 2024-02-13 16:38 0% ` Darrick J. Wong
2024-02-13 22:05 0% ` Dave Chinner
2 siblings, 0 replies; 200+ results
From: Darrick J. Wong @ 2024-02-13 16:38 UTC (permalink / raw)
To: Pankaj Raghav (Samsung)
Cc: linux-xfs, linux-fsdevel, mcgrof, gost.dev, akpm, kbusch,
chandan.babu, p.raghav, linux-kernel, hare, willy, linux-mm,
david
On Tue, Feb 13, 2024 at 10:37:02AM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folios of order 0. __filemap_get_folio() was trying to allocate higher
> order folios when fgp_flags had a higher order hint set, but it would
> fall back to an order-0 folio if the higher order memory allocation
> failed.
>
> As we introduce the notion of mapping_min_order, make sure these
> functions allocate folios of at least mapping_min_order, as we need to
> guarantee that in the page cache.
>
> Add some additional VM_BUG_ON() checks in page_cache_delete[batch] and
> __filemap_add_folio to catch errors where we delete or add folios that
> have an order less than min_order.
>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Looks good to me,
Acked-by: Darrick J. Wong <djwong@kernel.org>
--D
> ---
> mm/filemap.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 323a8e169581..7a6e15c47150 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -127,6 +127,7 @@
> static void page_cache_delete(struct address_space *mapping,
> struct folio *folio, void *shadow)
> {
> + unsigned int min_order = mapping_min_folio_order(mapping);
> XA_STATE(xas, &mapping->i_pages, folio->index);
> long nr = 1;
>
> @@ -135,6 +136,7 @@ static void page_cache_delete(struct address_space *mapping,
> xas_set_order(&xas, folio->index, folio_order(folio));
> nr = folio_nr_pages(folio);
>
> + VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>
> xas_store(&xas, shadow);
> @@ -277,6 +279,7 @@ void filemap_remove_folio(struct folio *folio)
> static void page_cache_delete_batch(struct address_space *mapping,
> struct folio_batch *fbatch)
> {
> + unsigned int min_order = mapping_min_folio_order(mapping);
> XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
> long total_pages = 0;
> int i = 0;
> @@ -305,6 +308,7 @@ static void page_cache_delete_batch(struct address_space *mapping,
>
> WARN_ON_ONCE(!folio_test_locked(folio));
>
> + VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> folio->mapping = NULL;
> /* Leave folio->index set: truncation lookup relies on it */
>
> @@ -846,6 +850,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
> int huge = folio_test_hugetlb(folio);
> bool charged = false;
> long nr = 1;
> + unsigned int min_order = mapping_min_folio_order(mapping);
>
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
> @@ -896,6 +901,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
> }
> }
>
> + VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> xas_store(&xas, folio);
> if (xas_error(&xas))
> goto unlock;
> @@ -1847,6 +1853,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> fgf_t fgp_flags, gfp_t gfp)
> {
> struct folio *folio;
> + unsigned int min_order = mapping_min_folio_order(mapping);
> + unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
> +
> + index = round_down(index, min_nrpages);
>
> repeat:
> folio = filemap_get_entry(mapping, index);
> @@ -1886,7 +1896,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> folio_wait_stable(folio);
> no_page:
> if (!folio && (fgp_flags & FGP_CREAT)) {
> - unsigned order = FGF_GET_ORDER(fgp_flags);
> + unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
> int err;
>
> if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
> @@ -1914,8 +1924,13 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> err = -ENOMEM;
> if (order == 1)
> order = 0;
> + if (order < min_order)
> + order = min_order;
> if (order > 0)
> alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
> +
> + VM_BUG_ON(index & ((1UL << order) - 1));
> +
> folio = filemap_alloc_folio(alloc_gfp, order);
> if (!folio)
> continue;
> @@ -1929,7 +1944,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> break;
> folio_put(folio);
> folio = NULL;
> - } while (order-- > 0);
> + } while (order-- > min_order);
>
> if (err == -EEXIST)
> goto repeat;
> @@ -2424,7 +2439,8 @@ static int filemap_create_folio(struct file *file,
> struct folio *folio;
> int error;
>
> - folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
> + folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
> + mapping_min_folio_order(mapping));
> if (!folio)
> return -ENOMEM;
>
> @@ -3682,7 +3698,8 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
> repeat:
> folio = filemap_get_folio(mapping, index);
> if (IS_ERR(folio)) {
> - folio = filemap_alloc_folio(gfp, 0);
> + folio = filemap_alloc_folio(gfp,
> + mapping_min_folio_order(mapping));
> if (!folio)
> return ERR_PTR(-ENOMEM);
> err = filemap_add_folio(mapping, folio, index, gfp);
> --
> 2.43.0
>
>
* Re: [RFC v2 03/14] filemap: use mapping_min_order while allocating folios
2024-02-13 9:37 16% ` [RFC v2 03/14] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
@ 2024-02-13 14:58 0% ` Hannes Reinecke
2024-02-13 16:38 0% ` Darrick J. Wong
2024-02-13 22:05 0% ` Dave Chinner
2 siblings, 0 replies; 200+ results
From: Hannes Reinecke @ 2024-02-13 14:58 UTC (permalink / raw)
To: Pankaj Raghav (Samsung), linux-xfs, linux-fsdevel
Cc: mcgrof, gost.dev, akpm, kbusch, djwong, chandan.babu, p.raghav,
linux-kernel, willy, linux-mm, david
On 2/13/24 10:37, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> filemap_create_folio() and do_read_cache_folio() were always allocating
> folios of order 0. __filemap_get_folio() was trying to allocate higher
> order folios when fgp_flags had a higher order hint set, but it would
> fall back to an order-0 folio if the higher order memory allocation
> failed.
>
> As we introduce the notion of mapping_min_order, make sure these
> functions allocate folios of at least mapping_min_order, as we need to
> guarantee that in the page cache.
>
> Add some additional VM_BUG_ON() checks in page_cache_delete[batch] and
> __filemap_add_folio to catch errors where we delete or add folios that
> have an order less than min_order.
>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
> mm/filemap.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
* Re: [PATCH v8 3/5] mm,page_owner: Display all stacks and their count
2024-02-12 22:30 5% ` [PATCH v8 3/5] mm,page_owner: Display all stacks and their count Oscar Salvador
2024-02-13 8:38 0% ` Marco Elver
@ 2024-02-13 14:25 0% ` Vlastimil Babka
1 sibling, 0 replies; 200+ results
From: Vlastimil Babka @ 2024-02-13 14:25 UTC (permalink / raw)
To: Oscar Salvador, Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Marco Elver,
Andrey Konovalov, Alexander Potapenko
On 2/12/24 23:30, Oscar Salvador wrote:
> This patch adds a new directory called 'page_owner_stacks' under
> /sys/kernel/debug/, with a file called 'show_stacks' in it.
> Reading from that file will show all stacks that were added by page_owner,
> followed by their counts, giving us a clear overview of the stack <-> count
> relationship.
>
> E.g:
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_write+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 4578
>
> The seq stack_{start,next} functions will iterate through the list
> stack_list in order to print all stacks.
>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
...
> +static int stack_print(struct seq_file *m, void *v)
> +{
> + char *buf;
> + int ret = 0;
> + struct stack *stack = v;
> + struct stack_record *stack_record = stack->stack_record;
> +
> + if (!stack_record->size || stack_record->size < 0 ||
> + refcount_read(&stack_record->count) < 2)
> + return 0;
> +
> + buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
> +
> + ret += stack_trace_snprint(buf, PAGE_SIZE, stack_record->entries,
> + stack_record->size, 0);
> + if (!ret)
> + goto out;
> +
> + scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
> + refcount_read(&stack_record->count));
> +
> + seq_printf(m, buf);
> + seq_puts(m, "\n\n");
> +out:
> + kfree(buf);
Seems rather wasteful to do a kzalloc/kfree just so you can print into that
buffer first and then print/copy it again using seq_printf(). If you give up
on using stack_trace_snprint(), it's not much harder to print the stack
directly with a loop of seq_printf() calls. See e.g. slab_debugfs_show().
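Untested, but roughly something like this, which avoids the allocation
entirely:
	int i;

	for (i = 0; i < stack_record->size; i++)
		seq_printf(m, " %pS\n",
			   (void *)stack_record->entries[i]);
	seq_printf(m, "stack_count: %d\n\n",
		   refcount_read(&stack_record->count));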
> +
> + return 0;
> +}
> +
* [RFC v2 09/14] mm: Support order-1 folios in the page cache
2024-02-13 9:37 16% ` [RFC v2 03/14] filemap: use mapping_min_order while allocating folios Pankaj Raghav (Samsung)
@ 2024-02-13 9:37 6% ` Pankaj Raghav (Samsung)
1 sibling, 0 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-13 9:37 UTC (permalink / raw)
To: linux-xfs, linux-fsdevel
Cc: mcgrof, gost.dev, akpm, kbusch, djwong, chandan.babu, p.raghav,
linux-kernel, hare, willy, linux-mm, david
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.
Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/huge_mm.h | 7 +++++--
mm/filemap.c | 2 --
mm/huge_memory.c | 23 ++++++++++++++++++-----
mm/internal.h | 4 +---
mm/readahead.c | 3 ---
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
bool can_split_folio(struct folio *folio, int *pextra_pins);
int split_huge_page_to_list(struct page *page, struct list_head *list);
static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
return 0;
}
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+ return folio;
+}
#define transparent_hugepage_flags 0UL
diff --git a/mm/filemap.c b/mm/filemap.c
index 7a6e15c47150..c8205a534532 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1922,8 +1922,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
gfp_t alloc_gfp = gfp;
err = -ENOMEM;
- if (order == 1)
- order = 0;
if (order < min_order)
order = min_order;
if (order > 0)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d897efc51025..6ec3417638a1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
}
#endif
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
{
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
- INIT_LIST_HEAD(&folio->_deferred_list);
+ if (!folio || !folio_test_large(folio))
+ return folio;
+ if (folio_order(folio) > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
+
+ return folio;
}
static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3095,7 +3099,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
- if (!list_empty(&folio->_deferred_list)) {
+ if (folio_order(folio) > 1 &&
+ !list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
list_del(&folio->_deferred_list);
}
@@ -3146,6 +3151,9 @@ void folio_undo_large_rmappable(struct folio *folio)
struct deferred_split *ds_queue;
unsigned long flags;
+ if (folio_order(folio) <= 1)
+ return;
+
/*
* At this point, there is no one trying to add the folio to
* deferred_list. If folio is not in deferred_list, it's safe
@@ -3171,7 +3179,12 @@ void deferred_split_folio(struct folio *folio)
#endif
unsigned long flags;
- VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+ /*
+ * Order 1 folios have no space for a deferred list, but we also
+ * won't waste much memory by not adding them to the deferred list.
+ */
+ if (folio_order(folio) <= 1)
+ return;
/*
* The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_order(folio) > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ return folio_prep_large_rmappable(folio);
}
static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index a361fba18674..7d5f6a8792a8 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -560,9 +560,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
/* Don't allocate pages past EOF */
while (order > min_order && index + (1UL << order) - 1 > limit)
order--;
- /* THP machinery does not support order-1 */
- if (order == 1)
- order = 0;
if (order < min_order)
order = min_order;
--
2.43.0
* [RFC v2 03/14] filemap: use mapping_min_order while allocating folios
@ 2024-02-13 9:37 16% ` Pankaj Raghav (Samsung)
2024-02-13 14:58 0% ` Hannes Reinecke
` (2 more replies)
2024-02-13 9:37 6% ` [RFC v2 09/14] mm: Support order-1 folios in the page cache Pankaj Raghav (Samsung)
1 sibling, 3 replies; 200+ results
From: Pankaj Raghav (Samsung) @ 2024-02-13 9:37 UTC (permalink / raw)
To: linux-xfs, linux-fsdevel
Cc: mcgrof, gost.dev, akpm, kbusch, djwong, chandan.babu, p.raghav,
linux-kernel, hare, willy, linux-mm, david
From: Pankaj Raghav <p.raghav@samsung.com>
filemap_create_folio() and do_read_cache_folio() were always allocating
folios of order 0. __filemap_get_folio() was trying to allocate higher
order folios when fgp_flags had a higher order hint set, but it would
fall back to an order-0 folio if the higher order memory allocation failed.
As we introduce the notion of mapping_min_order, make sure these functions
allocate folios of at least mapping_min_order, as we need to guarantee
that in the page cache.
Add some additional VM_BUG_ON() checks in page_cache_delete[batch] and
__filemap_add_folio to catch errors where we delete or add folios that
have an order less than min_order.
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/filemap.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 323a8e169581..7a6e15c47150 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -127,6 +127,7 @@
static void page_cache_delete(struct address_space *mapping,
struct folio *folio, void *shadow)
{
+ unsigned int min_order = mapping_min_folio_order(mapping);
XA_STATE(xas, &mapping->i_pages, folio->index);
long nr = 1;
@@ -135,6 +136,7 @@ static void page_cache_delete(struct address_space *mapping,
xas_set_order(&xas, folio->index, folio_order(folio));
nr = folio_nr_pages(folio);
+ VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
xas_store(&xas, shadow);
@@ -277,6 +279,7 @@ void filemap_remove_folio(struct folio *folio)
static void page_cache_delete_batch(struct address_space *mapping,
struct folio_batch *fbatch)
{
+ unsigned int min_order = mapping_min_folio_order(mapping);
XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
long total_pages = 0;
int i = 0;
@@ -305,6 +308,7 @@ static void page_cache_delete_batch(struct address_space *mapping,
WARN_ON_ONCE(!folio_test_locked(folio));
+ VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
folio->mapping = NULL;
/* Leave folio->index set: truncation lookup relies on it */
@@ -846,6 +850,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
int huge = folio_test_hugetlb(folio);
bool charged = false;
long nr = 1;
+ unsigned int min_order = mapping_min_folio_order(mapping);
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
@@ -896,6 +901,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
}
}
+ VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
xas_store(&xas, folio);
if (xas_error(&xas))
goto unlock;
@@ -1847,6 +1853,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
fgf_t fgp_flags, gfp_t gfp)
{
struct folio *folio;
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
+
+ index = round_down(index, min_nrpages);
repeat:
folio = filemap_get_entry(mapping, index);
@@ -1886,7 +1896,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_wait_stable(folio);
no_page:
if (!folio && (fgp_flags & FGP_CREAT)) {
- unsigned order = FGF_GET_ORDER(fgp_flags);
+ unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
int err;
if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
@@ -1914,8 +1924,13 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
err = -ENOMEM;
if (order == 1)
order = 0;
+ if (order < min_order)
+ order = min_order;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
+
+ VM_BUG_ON(index & ((1UL << order) - 1));
+
folio = filemap_alloc_folio(alloc_gfp, order);
if (!folio)
continue;
@@ -1929,7 +1944,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
break;
folio_put(folio);
folio = NULL;
- } while (order-- > 0);
+ } while (order-- > min_order);
if (err == -EEXIST)
goto repeat;
@@ -2424,7 +2439,8 @@ static int filemap_create_folio(struct file *file,
struct folio *folio;
int error;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping),
+ mapping_min_folio_order(mapping));
if (!folio)
return -ENOMEM;
@@ -3682,7 +3698,8 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp, 0);
+ folio = filemap_alloc_folio(gfp,
+ mapping_min_folio_order(mapping));
if (!folio)
return ERR_PTR(-ENOMEM);
err = filemap_add_folio(mapping, folio, index, gfp);
--
2.43.0
* Re: [PATCH v8 3/5] mm,page_owner: Display all stacks and their count
2024-02-12 22:30 5% ` [PATCH v8 3/5] mm,page_owner: Display all stacks and their count Oscar Salvador
@ 2024-02-13 8:38 0% ` Marco Elver
2024-02-13 14:25 0% ` Vlastimil Babka
1 sibling, 0 replies; 200+ results
From: Marco Elver @ 2024-02-13 8:38 UTC (permalink / raw)
To: Oscar Salvador
Cc: Andrew Morton, linux-kernel, linux-mm, Michal Hocko,
Vlastimil Babka, Andrey Konovalov, Alexander Potapenko
On Mon, 12 Feb 2024 at 23:29, Oscar Salvador <osalvador@suse.de> wrote:
>
> This patch adds a new directory called 'page_owner_stacks' under
> /sys/kernel/debug/, with a file called 'show_stacks' in it.
> Reading from that file will show all stacks that were added by page_owner,
> followed by their counts, giving us a clear overview of the stack <-> count
> relationship.
>
> E.g:
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_write+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 4578
>
> The seq stack_{start,next} functions will iterate through the list
> stack_list in order to print all stacks.
>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Marco Elver <elver@google.com>
Minor comments below.
> ---
> mm/page_owner.c | 99 ++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 98 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 7d1b3f75cef3..3e4b7cd7c8f8 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -84,7 +84,12 @@ static void add_stack_record_to_list(struct stack_record *stack_record)
> stack_list = stack;
> } else {
> stack->next = stack_list;
> - stack_list = stack;
> + /* This pairs with smp_load_acquire() from function
Comment should be
/*
*
...
*/
(Unless in networking or other special subsystems with their own comment style.)
> + * stack_start(). This guarantees that stack_start()
> + * will see an updated stack_list before starting to
> + * traverse the list.
> + */
> + smp_store_release(&stack_list, stack);
> }
> spin_unlock_irqrestore(&stack_list_lock, flags);
> }
> @@ -792,8 +797,97 @@ static const struct file_operations proc_page_owner_operations = {
> .llseek = lseek_page_owner,
> };
>
> +static void *stack_start(struct seq_file *m, loff_t *ppos)
> +{
> + struct stack *stack;
> +
> + if (*ppos == -1UL)
> + return NULL;
> +
> + if (!*ppos) {
> + /*
> + * This pairs with smp_store_release() from function
> + * add_stack_record_to_list(), so we get a consistent
> + * value of stack_list.
> + */
> + stack = smp_load_acquire(&stack_list);
I'm not sure if it'd make your code simpler or not: there is
<linux/llist.h> for singly-linked lists, although the code to manage
the list is simple enough that I'm indifferent here. Only consider it
if it helps you make the code simpler.
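Untested, and only if you did switch — assuming a struct llist_node
member (say 'llnode') in struct stack, it would look roughly like:
	LLIST_HEAD(stack_list);
	...
	llist_add(&stack->llnode, &stack_list);
	...
	llist_for_each_entry(stack, stack_list.first, llnode)
		...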
> + } else {
> + stack = m->private;
> + stack = stack->next;
> + }
> +
> + m->private = stack;
> +
> + return stack;
> +}
> +
> +static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
> +{
> + struct stack *stack = v;
> +
> + stack = stack->next;
> + *ppos = stack ? *ppos + 1 : -1UL;
> + m->private = stack;
> +
> + return stack;
> +}
> +
> +static int stack_print(struct seq_file *m, void *v)
> +{
> + char *buf;
> + int ret = 0;
> + struct stack *stack = v;
> + struct stack_record *stack_record = stack->stack_record;
> +
> + if (!stack_record->size || stack_record->size < 0 ||
> + refcount_read(&stack_record->count) < 2)
> + return 0;
> +
> + buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
> +
> + ret += stack_trace_snprint(buf, PAGE_SIZE, stack_record->entries,
> + stack_record->size, 0);
> + if (!ret)
> + goto out;
> +
> + scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
> + refcount_read(&stack_record->count));
> +
> + seq_printf(m, buf);
> + seq_puts(m, "\n\n");
> +out:
> + kfree(buf);
> +
> + return 0;
> +}
> +
> +static void stack_stop(struct seq_file *m, void *v)
> +{
> +}
Is this function even needed if it's empty? I recall there were some
boilerplate "nop" functions that could be used.
> +static const struct seq_operations page_owner_stack_op = {
> + .start = stack_start,
> + .next = stack_next,
> + .stop = stack_stop,
> + .show = stack_print
> +};
> +
> +static int page_owner_stack_open(struct inode *inode, struct file *file)
> +{
> + return seq_open_private(file, &page_owner_stack_op, 0);
> +}
> +
> +static const struct file_operations page_owner_stack_operations = {
> + .open = page_owner_stack_open,
> + .read = seq_read,
> + .llseek = seq_lseek,
> + .release = seq_release,
> +};
> +
> static int __init pageowner_init(void)
> {
> + struct dentry *dir;
> +
> if (!static_branch_unlikely(&page_owner_inited)) {
> pr_info("page_owner is disabled\n");
> return 0;
> @@ -801,6 +895,9 @@ static int __init pageowner_init(void)
>
> debugfs_create_file("page_owner", 0400, NULL, NULL,
> &proc_page_owner_operations);
> + dir = debugfs_create_dir("page_owner_stacks", NULL);
> + debugfs_create_file("show_stacks", 0400, dir, NULL,
> + &page_owner_stack_operations);
>
> return 0;
> }
> --
> 2.43.0
>
* + mmpage_owner-display-all-stacks-and-their-count.patch added to mm-unstable branch
@ 2024-02-12 23:28 4% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-12 23:28 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: mm,page_owner: display all stacks and their count
has been added to the -mm mm-unstable branch. Its filename is
mmpage_owner-display-all-stacks-and-their-count.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mmpage_owner-display-all-stacks-and-their-count.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: mm,page_owner: display all stacks and their count
Date: Mon, 12 Feb 2024 23:30:27 +0100
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it.  Reading from
that file will show all stacks that were added by page_owner, followed by
their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Link: https://lkml.kernel.org/r/20240212223029.30769-4-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Marco Elver <elver@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 99 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 98 insertions(+), 1 deletion(-)
--- a/mm/page_owner.c~mmpage_owner-display-all-stacks-and-their-count
+++ a/mm/page_owner.c
@@ -84,7 +84,12 @@ static void add_stack_record_to_list(str
stack_list = stack;
} else {
stack->next = stack_list;
- stack_list = stack;
+ /* This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
}
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -792,8 +797,97 @@ static const struct file_operations proc
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ char *buf;
+ int ret = 0;
+ struct stack *stack = v;
+ struct stack_record *stack_record = stack->stack_record;
+
+ if (!stack_record->size || stack_record->size < 0 ||
+ refcount_read(&stack_record->count) < 2)
+ return 0;
+
+ buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+
+ ret += stack_trace_snprint(buf, PAGE_SIZE, stack_record->entries,
+ stack_record->size, 0);
+ if (!ret)
+ goto out;
+
+ scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
+ refcount_read(&stack_record->count));
+
+ seq_printf(m, buf);
+ seq_puts(m, "\n\n");
+out:
+ kfree(buf);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -801,6 +895,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
_
Patches currently in -mm which might be from osalvador@suse.de are
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
mmpage_owner-implement-the-tracking-of-the-stacks-count.patch
mmpage_owner-display-all-stacks-and-their-count.patch
mmpage_owner-filter-out-stacks-by-a-threshold.patch
mmpage_owner-update-documentation-regarding-page_owner_stacks.patch
^ permalink raw reply [relevance 4%]
* + lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch added to mm-unstable branch
@ 2024-02-12 23:28 5% Andrew Morton
0 siblings, 0 replies; 200+ results
From: Andrew Morton @ 2024-02-12 23:28 UTC (permalink / raw)
To: mm-commits, vbabka, mhocko, glider, elver, andreyknvl, osalvador, akpm
The patch titled
Subject: lib/stackdepot: move stack_record struct definition into the header
has been added to the -mm mm-unstable branch. Its filename is
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: lib/stackdepot: move stack_record struct definition into the header
Date: Mon, 12 Feb 2024 23:30:25 +0100
Patch series "page_owner: print stacks and their outstanding allocations",
v8.
page_owner is a great debugging tool that lets us know about all
pages that have been allocated/freed along with their specific stacktrace. This
comes in very handy when debugging memory leaks, since with some scripting
we can see the outstanding allocations, which might point to a memory
leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship, which most of the time becomes a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding new functionality to
page_owner. This functionality creates a new directory called
'page_owner_stacks' under '/sys/kernel/debug' with a read-only file called
'show_stacks', which prints out all the stacks followed by their
outstanding number of allocations (that is, the number of times each
stacktrace has allocated pages that have not yet been freed). This gives
us a clear and quick overview of the stack <-> allocated/free relationship.
We take advantage of the new refcount_t field that the stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner() (free
operation) call.
Unfortunately, we cannot use the new stackdepot API
STACK_DEPOT_FLAG_{GET,PUT} because it does not fulfill page_owner's needs,
meaning we would have to special-case things, at which point it makes more
sense for page_owner to do its own {dec,inc}rementing of the stacks. E.g.:
using STACK_DEPOT_FLAG_PUT, once the refcount reaches 0, the stack gets
evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within the
'page_owner_stacks' directory; by writing a value to it, stacks
whose refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
This patch (of 5):
In order to move the heavy lifting into page_owner code, this one needs to
have access to the stack_record structure, which right now sits in
lib/stackdepot.c. Move it to the stackdepot.h header so page_owner can
access stack_record's struct fields.
Link: https://lkml.kernel.org/r/20240212223029.30769-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20240212223029.30769-2-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Marco Elver <elver@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/stackdepot.h | 44 +++++++++++++++++++++++++++++++++++
lib/stackdepot.c | 43 ----------------------------------
2 files changed, 44 insertions(+), 43 deletions(-)
--- a/include/linux/stackdepot.h~lib-stackdepot-move-stack_record-struct-definition-into-the-header
+++ a/include/linux/stackdepot.h
@@ -30,6 +30,50 @@ typedef u32 depot_stack_handle_t;
*/
#define STACK_DEPOT_EXTRA_BITS 5
+#define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8)
+
+#define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages */
+#define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT + DEPOT_POOL_ORDER))
+#define DEPOT_STACK_ALIGN 4
+#define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN)
+#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
+ STACK_DEPOT_EXTRA_BITS)
+
+/* Compact structure that stores a reference to a stack. */
+union handle_parts {
+ depot_stack_handle_t handle;
+ struct {
+ u32 pool_index : DEPOT_POOL_INDEX_BITS;
+ u32 offset : DEPOT_OFFSET_BITS;
+ u32 extra : STACK_DEPOT_EXTRA_BITS;
+ };
+};
+
+struct stack_record {
+ struct list_head hash_list; /* Links in the hash table */
+ u32 hash; /* Hash in hash table */
+ u32 size; /* Number of stored frames */
+ union handle_parts handle; /* Constant after initialization */
+ refcount_t count;
+ union {
+ unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */
+ struct {
+ /*
+ * An important invariant of the implementation is to
+ * only place a stack record onto the freelist iff its
+ * refcount is zero. Because stack records with a zero
+ * refcount are never considered as valid, it is safe to
+ * union @entries and freelist management state below.
+ * Conversely, as soon as an entry is off the freelist
+ * and its refcount becomes non-zero, the below must not
+ * be accessed until being placed back on the freelist.
+ */
+ struct list_head free_list; /* Links in the freelist */
+ unsigned long rcu_state; /* RCU cookie */
+ };
+ };
+};
+
typedef u32 depot_flags_t;
/*
--- a/lib/stackdepot.c~lib-stackdepot-move-stack_record-struct-definition-into-the-header
+++ a/lib/stackdepot.c
@@ -36,54 +36,11 @@
#include <linux/memblock.h>
#include <linux/kasan-enabled.h>
-#define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8)
-
-#define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages */
-#define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT + DEPOT_POOL_ORDER))
-#define DEPOT_STACK_ALIGN 4
-#define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN)
-#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
- STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
#define DEPOT_MAX_POOLS \
(((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
(1LL << (DEPOT_POOL_INDEX_BITS)) : DEPOT_POOLS_CAP)
-/* Compact structure that stores a reference to a stack. */
-union handle_parts {
- depot_stack_handle_t handle;
- struct {
- u32 pool_index : DEPOT_POOL_INDEX_BITS;
- u32 offset : DEPOT_OFFSET_BITS;
- u32 extra : STACK_DEPOT_EXTRA_BITS;
- };
-};
-
-struct stack_record {
- struct list_head hash_list; /* Links in the hash table */
- u32 hash; /* Hash in hash table */
- u32 size; /* Number of stored frames */
- union handle_parts handle; /* Constant after initialization */
- refcount_t count;
- union {
- unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */
- struct {
- /*
- * An important invariant of the implementation is to
- * only place a stack record onto the freelist iff its
- * refcount is zero. Because stack records with a zero
- * refcount are never considered as valid, it is safe to
- * union @entries and freelist management state below.
- * Conversely, as soon as an entry is off the freelist
- * and its refcount becomes non-zero, the below must not
- * be accessed until being placed back on the freelist.
- */
- struct list_head free_list; /* Links in the freelist */
- unsigned long rcu_state; /* RCU cookie */
- };
- };
-};
-
static bool stack_depot_disabled;
static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
static bool __stack_depot_early_init_passed __initdata;
_
Patches currently in -mm which might be from osalvador@suse.de are
lib-stackdepot-move-stack_record-struct-definition-into-the-header.patch
mmpage_owner-implement-the-tracking-of-the-stacks-count.patch
mmpage_owner-display-all-stacks-and-their-count.patch
mmpage_owner-filter-out-stacks-by-a-threshold.patch
mmpage_owner-update-documentation-regarding-page_owner_stacks.patch
^ permalink raw reply [relevance 5%]
* [PATCH v8 3/5] mm,page_owner: Display all stacks and their count
2024-02-12 22:30 6% [PATCH v8 0/5] page_owner: print stacks and their outstanding allocations Oscar Salvador
@ 2024-02-12 22:30 5% ` Oscar Salvador
2024-02-13 8:38 0% ` Marco Elver
2024-02-13 14:25 0% ` Vlastimil Babka
0 siblings, 2 replies; 200+ results
From: Oscar Salvador @ 2024-02-12 22:30 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
This patch adds a new directory called 'page_owner_stacks' under
/sys/kernel/debug/, with a file called 'show_stacks' in it.
Reading from that file will show all stacks that were added by page_owner
followed by their counts, giving us a clear overview of the stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
The seq stack_{start,next} functions will iterate through the list
stack_list in order to print all stacks.
Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
mm/page_owner.c | 99 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 98 insertions(+), 1 deletion(-)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 7d1b3f75cef3..3e4b7cd7c8f8 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -84,7 +84,12 @@ static void add_stack_record_to_list(struct stack_record *stack_record)
stack_list = stack;
} else {
stack->next = stack_list;
- stack_list = stack;
+ /* This pairs with smp_load_acquire() from function
+ * stack_start(). This guarantees that stack_start()
+ * will see an updated stack_list before starting to
+ * traverse the list.
+ */
+ smp_store_release(&stack_list, stack);
}
spin_unlock_irqrestore(&stack_list_lock, flags);
}
@@ -792,8 +797,97 @@ static const struct file_operations proc_page_owner_operations = {
.llseek = lseek_page_owner,
};
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack *stack;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ if (!*ppos) {
+ /*
+ * This pairs with smp_store_release() from function
+ * add_stack_record_to_list(), so we get a consistent
+ * value of stack_list.
+ */
+ stack = smp_load_acquire(&stack_list);
+ } else {
+ stack = m->private;
+ stack = stack->next;
+ }
+
+ m->private = stack;
+
+ return stack;
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack *stack = v;
+
+ stack = stack->next;
+ *ppos = stack ? *ppos + 1 : -1UL;
+ m->private = stack;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ char *buf;
+ int ret = 0;
+ struct stack *stack = v;
+ struct stack_record *stack_record = stack->stack_record;
+
+ if (!stack_record->size || stack_record->size < 0 ||
+ refcount_read(&stack_record->count) < 2)
+ return 0;
+
+ buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+
+ ret += stack_trace_snprint(buf, PAGE_SIZE, stack_record->entries,
+ stack_record->size, 0);
+ if (!ret)
+ goto out;
+
+ scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
+ refcount_read(&stack_record->count));
+
+ seq_printf(m, buf);
+ seq_puts(m, "\n\n");
+out:
+ kfree(buf);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op, 0);
+}
+
+static const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
+ struct dentry *dir;
+
if (!static_branch_unlikely(&page_owner_inited)) {
pr_info("page_owner is disabled\n");
return 0;
@@ -801,6 +895,9 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ dir = debugfs_create_dir("page_owner_stacks", NULL);
+ debugfs_create_file("show_stacks", 0400, dir, NULL,
+ &page_owner_stack_operations);
return 0;
}
--
2.43.0
^ permalink raw reply related [relevance 5%]
* [PATCH v8 0/5] page_owner: print stacks and their outstanding allocations
@ 2024-02-12 22:30 6% Oscar Salvador
2024-02-12 22:30 5% ` [PATCH v8 3/5] mm,page_owner: Display all stacks and their count Oscar Salvador
0 siblings, 1 reply; 200+ results
From: Oscar Salvador @ 2024-02-12 22:30 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
Changes v7 -> v8
- Rebased on top of -next
- page_owner maintains its own stack_records list now
- Kill auxiliary stackdepot function to traverse buckets
- page_owner_stacks is now a directory with 'show_stacks'
and 'set_threshold'
- Update Documentation/mm/page_owner.rst
- Addressed feedback from Marco
Changes v6 -> v7:
- Rebased on top of Andrey Konovalov's libstackdepot patchset
- Reformulated the changelogs
Changes v5 -> v6:
- Rebase on top of v6.7-rc1
- Move stack_record struct to the header
- Addressed feedback from Vlastimil
(some code tweaks and changelogs suggestions)
Changes v4 -> v5:
- Addressed feedback from Alexander Potapenko
Changes v3 -> v4:
- Rebase (long time has passed)
- Use boolean instead of enum for action by Alexander Potapenko
- (I left some feedback untouched because it's been a long time and
I would like to discuss it here now instead of revamping
an old thread)
Changes v2 -> v3:
- Replace interface in favor of seq operations
(suggested by Vlastimil)
- Use debugfs interface to store/read valued (suggested by Ammar)
page_owner is a great debugging tool that lets us know
about all pages that have been allocated/freed along with their specific
stacktrace.
This comes in very handy when debugging memory leaks, since with
some scripting we can see the outstanding allocations, which might point
to a memory leak.
In my experience, that is one of the most useful cases, but it can get
really tedious to screen through all pages and try to reconstruct the
stack <-> allocated/freed relationship, which most of the time becomes a
daunting and slow process when we have tons of allocation/free operations.
This patchset aims to ease that by adding new functionality to
page_owner.
This functionality creates a new directory called 'page_owner_stacks'
under '/sys/kernel/debug' with a read-only file called 'show_stacks',
which prints out all the stacks followed by their outstanding number
of allocations (that is, the number of times each stacktrace has
allocated pages that have not yet been freed).
This gives us a clear and quick overview of the stack <-> allocated/free relationship.
We take advantage of the new refcount_t field that the stack_record struct
gained, and increment/decrement the stack refcount on every
__set_page_owner() (alloc operation) and __reset_page_owner() (free operation)
call.
Unfortunately, we cannot use the new stackdepot API
STACK_DEPOT_FLAG_{GET,PUT} because it does not fulfill page_owner's needs,
meaning we would have to special-case things, at which point
it makes more sense for page_owner to do its own {dec,inc}rementing
of the stacks.
E.g.: using STACK_DEPOT_FLAG_PUT, once the refcount reaches 0,
the stack gets evicted, so page_owner would lose information.
This patchset also creates a new file called 'set_threshold' within the
'page_owner_stacks' directory; by writing a value to it, stacks
whose refcount is below that value will be filtered out.
A PoC can be found below:
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks.txt
# head -40 page_owner_full_stacks.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
page_cache_ra_unbounded+0x96/0x180
filemap_get_pages+0xfd/0x590
filemap_read+0xcc/0x330
blkdev_read_iter+0xb8/0x150
vfs_read+0x285/0x320
ksys_read+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 521
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4609
...
...
# echo 5000 > /sys/kernel/debug/page_owner_stacks/set_threshold
# cat /sys/kernel/debug/page_owner_stacks/show_stacks > page_owner_full_stacks_5000.txt
# head -40 page_owner_full_stacks_5000.txt
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_pwrite64+0x75/0x90
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 6781
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
pcpu_populate_chunk+0xec/0x350
pcpu_balance_workfn+0x2d1/0x4a0
process_scheduled_works+0x84/0x380
worker_thread+0x12a/0x2a0
kthread+0xe3/0x110
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1b/0x30
stack_count: 8641
Oscar Salvador (5):
lib/stackdepot: Move stack_record struct definition into the header
mm,page_owner: Implement the tracking of the stacks count
mm,page_owner: Display all stacks and their count
mm,page_owner: Filter out stacks by a threshold
mm,page_owner: Update Documentation regarding page_owner_stacks
Documentation/mm/page_owner.rst | 44 ++++++++
include/linux/stackdepot.h | 53 +++++++++
lib/stackdepot.c | 51 ++-------
mm/page_owner.c | 190 ++++++++++++++++++++++++++++++++
4 files changed, 295 insertions(+), 43 deletions(-)
--
2.43.0
^ permalink raw reply [relevance 6%]
* Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
@ 2024-02-11 11:59 7% ` Muchun Song
0 siblings, 0 replies; 200+ results
From: Muchun Song @ 2024-02-11 11:59 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Jane Chu, Will Deacon, Nanyong Sun, Catalin Marinas, akpm,
anshuman.khandual, wangkefeng.wang, linux-arm-kernel,
linux-kernel, linux-mm
> On Feb 8, 2024, at 23:49, Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Feb 07, 2024 at 06:24:52PM -0800, Jane Chu wrote:
>> On 2/7/2024 6:17 AM, Matthew Wilcox wrote:
>>> While this array of ~512 pages have been allocated to hugetlbfs, and one
>>> would think that there would be no way that there could still be
>>> references to them, another CPU can have a pointer to this struct page
>>> (eg attempting a speculative page cache reference or
>>> get_user_pages_fast()). That means it will try to call
>>> atomic_add_unless(&page->_refcount, 1, 0);
>>>
>>> Actually, I wonder if this isn't a problem on x86 too? Do we need to
>>> explicitly go through an RCU grace period before freeing the pages
>>> for use by somebody else?
>>>
>> Sorry, not sure what I'm missing, please help.
>
> Having written out the analysis, I now think it can't happen on x86,
> but let's walk through it because it's non-obvious (and I think it
> illustrates what people are afraid of on Arm).
>
> CPU A calls either get_user_pages_fast() or __filemap_get_folio().
> Let's do the latter this time.
>
> folio = filemap_get_entry(mapping, index);
> filemap_get_entry:
>     rcu_read_lock();
>     folio = xas_load(&xas);
>     if (!folio_try_get_rcu(folio))
>         goto repeat;
>     if (unlikely(folio != xas_reload(&xas))) {
>         folio_put(folio);
>         goto repeat;
>     }
> folio_try_get_rcu:
>     folio_ref_try_add_rcu(folio, 1);
> folio_ref_try_add_rcu:
>     if (unlikely(!folio_ref_add_unless(folio, count, 0))) {
>         /* Either the folio has been freed, or will be freed. */
>         return false;
> folio_ref_add_unless:
>     return page_ref_add_unless(&folio->page, nr, u);
> page_ref_add_unless:
>     atomic_add_unless(&page->_refcount, nr, u);
>
> A rather deep callchain there, but for our purposes the important part
> is: we take the RCU read lock, we look up a folio, we increment its
> refcount if it's not zero, then check that looking up this index gets
> the same folio; if it doesn't, we decrement the refcount again and retry
> the lookup.
>
> For this analysis, we can be preempted at any point after we've got the
> folio pointer from xa_load().
>
>> From hugetlb allocation perspective, one of the scenarios is run time
>> hugetlb page allocation (say 2M pages), starting from the buddy allocator
>> returns compound pages, then the head page is set to frozen, then the
>> folio(compound pages) is put thru the HVO process, one of which is
>> vmemmap_split_pmd() in case a vmemmap page is a PMD page.
>>
>> Until the HVO process completes, none of the vmemmap represented pages are
>> available to any threads, so what are the causes for IRQ threads to access
>> their vmemmap pages?
>
> Yup, this sounds like enough, but it's not. The problem is the person
> who's looking up the folio in the pagecache under RCU. They've got
> the folio pointer and have been preempted. So now what happens to our
> victim folio?
>
> Something happens to remove it from the page cache. Maybe the file is
> truncated, perhaps vmscan comes along and kicks it out. Either way, it's
> removed from the xarray and gets its refcount set to 0. If the lookup
> were to continue at this time, everything would be fine because it would
> see a refcount of 0 and not increment it (in page_ref_add_unless()).
> And this is where my analysis of RCU tends to go wrong, because I only
> think of interleaving event A and B. I don't think about B and then C
> happening before A resumes. But it can! Let's follow the journey of
> this struct page.
>
> Now that it's been removed from the page cache, it's allocated by hugetlb,
> as you describe. And it's one of the tail pages towards the end of
> the 512 contiguous struct pages. That means that we alter vmemmap so
> that the pointer to struct page now points to a different struct page
> (one of the earlier ones). Then the original page of vmemmap containing
> our lucky struct page is returned to the page allocator. At this point,
> it no longer contains struct pages; it can contain literally anything.
>
> Where my analysis went wrong was that CPU A _no longer has a pointer
> to it_. CPU A has a pointer into vmemmap. So it will access the
> replacement struct page (which definitely has a refcount 0) instead of
> the one which has been freed. I had thought that CPU A would access the
> original memory which has now been allocated to someone else. But no,
> it can't because its pointer is virtual, not physical.
>
>
> ---
>
> Now I'm thinking more about this and there's another scenario which I
> thought might go wrong, and doesn't. For 7 of the 512 pages which are
> freed, the struct page pointer gathered by CPU A will not point to a
> page with a refcount of 0. Instead it will point to an alias of the
> head page with a positive refcount. For those pages, CPU A will see
> folio_try_get_rcu() succeed. Then it will call xas_reload() and see
> the folio isn't there any more, so it will call folio_put() on something
> which used to be a folio, and isn't any more.
>
> But folio_put() calls folio_put_testzero() which calls put_page_testzero()
> without asserting that the pointer is actually to a folio.
> So everything's fine, but really only by coincidence; I don't think
> anybody's thought about this scenario before (maybe Muchun has, but I
> don't remember it being discussed).
I have to say it is a really great analysis; I hadn't thought about the
case of get_page_unless_zero() so deeply.
To avoid increasing the refcount of a tail page struct, I made
all the 7 tail pages read-only when I first wrote that code. But it
is a real problem, because it will panic (due to the RO permission)
when the above scenario tries to increase the refcount.
In order to fix the race with __filemap_get_folio(), my first
thought was to add a synchronize_rcu() after
the HVO optimization and before the page is allocated to
users. Note that HugeTLB pages are frozen before going through
HVO optimization, meaning the refcounts of all
the struct pages are 0. Therefore, folio_try_get_rcu() in
__filemap_get_folio() will fail unless the HugeTLB page has been
allocated to the user.
But I realized there are some users who may pass an arbitrary
page struct (which may be one of those 7 special tail page structs
of a HugeTLB page, which alias the head page struct) to the following
helpers, which could also take a refcount on a tail page struct.
Those helpers also need to be fixed.
1) get_page_unless_zero
2) folio_try_get
3) folio_try_get_rcu
I have checked all the users of 1); if I am not wrong, all of them
already handle HugeTLB pages before calling get_page_unless_zero().
Although there is no problem with 1) now, it would be fragile to rely on users
guaranteeing that they never pass a tail page of a HugeTLB page to
1). So I want to change 1) to the following to fix this.
static inline bool get_page_unless_zero(struct page *page)
{
	if (page_ref_add_unless(page, 1, 0)) {
		/* @page must be a genuine head or alias head page here. */
		struct page *head = page_fixed_fake_head(page);

		if (likely(head == page))
			return true;
		put_page(head);
	}
	return false;
}
2) and 3) should adopt a similar approach to make sure we cannot increase
tail pages' refcounts. They will look like the following (only the key logic
is shown):
static inline bool folio_try_get(struct folio *folio) /* likewise folio_ref_try_add_rcu */
{
	if (folio_ref_add_unless(folio, 1, 0)) {
		struct folio *genuine = page_folio(&folio->page);

		if (likely(genuine == folio))
			return true;
		folio_put(genuine);
	}
	return false;
}
Additionally, we should also change the permission of those 7 tail pages
from RO to RW to avoid the panic.
There is no problem with the following helpers, since all of them already
handle the HVO case through _compound_head(); they will get the __genuine__
head page struct and increase its refcount.
1) try_get_page
2) folio_get
3) get_page
Just some thoughts of mine; maybe you guys have simpler and more graceful
approaches. Comments are welcome.
Muchun,
Thanks.
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [relevance 7%]
> truncated, perhaps vmscan comes along and kicks it out. Either way, it's
> removed from the xarray and gets its refcount set to 0. If the lookup
> were to continue at this time, everything would be fine because it would
> see a refcount of 0 and not increment it (in page_ref_add_unless()).
> And this is where my analysis of RCU tends to go wrong, because I only
> think of interleaving event A and B. I don't think about B and then C
> happening before A resumes. But it can! Let's follow the journey of
> this struct page.
>
> Now that it's been removed from the page cache, it's allocated by hugetlb,
> as you describe. And it's one of the tail pages towards the end of
> the 512 contiguous struct pages. That means that we alter vmemmap so
> that the pointer to struct page now points to a different struct page
> (one of the earlier ones). Then the original page of vmemmap containing
> our lucky struct page is returned to the page allocator. At this point,
> it no longer contains struct pages; it can contain literally anything.
>
> Where my analysis went wrong was that CPU A _no longer has a pointer
> to it_. CPU A has a pointer into vmemmap. So it will access the
> replacement struct page (which definitely has a refcount 0) instead of
> the one which has been freed. I had thought that CPU A would access the
> original memory which has now been allocated to someone else. But no,
> it can't because its pointer is virtual, not physical.
>
>
> ---
>
> Now I'm thinking more about this and there's another scenario which I
> thought might go wrong, and doesn't. For 7 of the 512 pages which are
> freed, the struct page pointer gathered by CPU A will not point to a
> page with a refcount of 0. Instead it will point to an alias of the
> head page with a positive refcount. For those pages, CPU A will see
> folio_try_get_rcu() succeed. Then it will call xas_reload() and see
> the folio isn't there any more, so it will call folio_put() on something
> which used to be a folio, and isn't any more.
>
> But folio_put() calls folio_put_testzero() which calls put_page_testzero()
> without asserting that the pointer is actually to a folio.
> So everything's fine, but really only by coincidence; I don't think
> anybody's thought about this scenario before (maybe Muchun has, but I
> don't remember it being discussed).
I have to say it is a really great analysis; I hadn't thought about
the get_page_unless_zero() case so deeply.
To avoid raising the refcount of a tail page struct, I made all
7 tail pages read-only when I first wrote that code. But that is
a real problem, because the kernel will panic (due to the RO
permission) when the above scenario tries to increase such a refcount.
In order to fix the race with __filemap_get_folio(), my first
thought was to add a synchronize_rcu() after the HVO
processing and before the page is handed out to users. Note that
HugeTLB pages are frozen before going through HVO, meaning the
refcounts of all their struct pages are 0. Therefore,
folio_try_get_rcu() in __filemap_get_folio() will fail unless the
HugeTLB page has already been allocated to a user.
But I realized there are some users who may pass an arbitrary
page struct (which may be one of those 7 special tail page structs,
aliases of the head page struct, of a HugeTLB page) to the following
helpers, which could also take a refcount on a tail page struct.
Those helpers need to be fixed as well.
1) get_page_unless_zero
2) folio_try_get
3) folio_try_get_rcu
I have checked all the users of 1); if I am not wrong, they all
already handle HugeTLB pages before calling get_page_unless_zero().
Although there is no problem with 1) today, it would be fragile to rely
on every caller guaranteeing that it never passes a tail page of a
HugeTLB page to 1). So I want to change 1) to the following to fix this:
static inline bool get_page_unless_zero(struct page *page)
{
	if (page_ref_add_unless(page, 1, 0)) {
		/* @page must be a genuine head or alias head page here. */
		struct page *head = page_fixed_fake_head(page);

		if (likely(head == page))
			return true;
		put_page(head);
	}
	return false;
}
2) and 3) should adopt a similar approach to make sure we cannot increase
a tail page's refcount. They would look like the following (only the
key logic is shown):
static inline bool folio_try_get(struct folio *folio)	/* likewise folio_ref_try_add_rcu() */
{
	if (folio_ref_add_unless(folio, 1, 0)) {
		struct folio *genuine = page_folio(&folio->page);

		if (likely(genuine == folio))
			return true;
		folio_put(genuine);
	}
	return false;
}
Additionally, we should also change the permission of those 7 tail pages
from RO back to RW to avoid the panic().
There is no problem with the following helpers, since all of them already
handle the HVO case through _compound_head(); they will get the __genuine__
head page struct and increase its refcount.
1) try_get_page
2) folio_get
3) get_page
Just some thoughts of mine; maybe you have simpler and more graceful
approaches. Comments are welcome.
Muchun,
Thanks.
^ permalink raw reply [relevance 7%]
* [syzbot] [fs?] KASAN: use-after-free Read in sysv_new_inode
@ 2024-02-09 10:12 4% syzbot
0 siblings, 0 replies; 200+ results
From: syzbot @ 2024-02-09 10:12 UTC (permalink / raw)
To: linux-fsdevel, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 23e11d031852 Add linux-next specific files for 20240205
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=11933ca8180000
kernel config: https://syzkaller.appspot.com/x/.config?x=6f1d38572a4a0540
dashboard link: https://syzkaller.appspot.com/bug?extid=2e64084fa0c65e8706c9
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b4e82c0f5cca/disk-23e11d03.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/018dac30c4d4/vmlinux-23e11d03.xz
kernel image: https://storage.googleapis.com/syzbot-assets/ee21a2f37a73/bzImage-23e11d03.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2e64084fa0c65e8706c9@syzkaller.appspotmail.com
==================================================================
BUG: KASAN: use-after-free in sysv_new_inode+0xfdd/0x1170 fs/sysv/ialloc.c:153
Read of size 2 at addr ffff88803b1f61ce by task syz-executor.4/7277
CPU: 1 PID: 7277 Comm: syz-executor.4 Not tainted 6.8.0-rc3-next-20240205-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:377 [inline]
print_report+0x169/0x550 mm/kasan/report.c:488
kasan_report+0x143/0x180 mm/kasan/report.c:601
sysv_new_inode+0xfdd/0x1170 fs/sysv/ialloc.c:153
sysv_mknod+0x4e/0xe0 fs/sysv/namei.c:53
lookup_open fs/namei.c:3494 [inline]
open_last_lookups fs/namei.c:3563 [inline]
path_openat+0x1425/0x3240 fs/namei.c:3793
do_filp_open+0x235/0x490 fs/namei.c:3823
do_sys_openat2+0x13e/0x1d0 fs/open.c:1404
do_sys_open fs/open.c:1419 [inline]
__do_sys_openat fs/open.c:1435 [inline]
__se_sys_openat fs/open.c:1430 [inline]
__x64_sys_openat+0x247/0x2a0 fs/open.c:1430
do_syscall_64+0xfb/0x240
entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7f86a7c7dda9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f86a8a900c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f86a7dabf80 RCX: 00007f86a7c7dda9
RDX: 000000000000275a RSI: 0000000020000040 RDI: ffffffffffffff9c
RBP: 00007f86a7cca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f86a7dabf80 R15: 00007fff47892ea8
</TASK>
The buggy address belongs to the physical page:
page:ffffea0000ec7d80 refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x3b1f6
flags: 0xfff80000000000(node=0|zone=1|lastcpupid=0xfff)
page_type: 0xffffffff()
raw: 00fff80000000000 dead000000000100 dead000000000122 0000000000000000
raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 0, migratetype Movable, gfp_mask 0x141cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP|__GFP_WRITE), pid 16881, tgid 16881 (syz-executor.3), ts 1415422682739, free_ts 1421164748730
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1ea/0x210 mm/page_alloc.c:1539
prep_new_page mm/page_alloc.c:1546 [inline]
get_page_from_freelist+0x34eb/0x3680 mm/page_alloc.c:3353
__alloc_pages+0x256/0x680 mm/page_alloc.c:4609
alloc_pages_mpol+0x3e8/0x680 mm/mempolicy.c:2263
alloc_pages mm/mempolicy.c:2334 [inline]
folio_alloc+0x12b/0x330 mm/mempolicy.c:2341
filemap_alloc_folio+0xdf/0x500 mm/filemap.c:975
__filemap_get_folio+0x431/0xbc0 mm/filemap.c:1919
ext4_da_write_begin+0x5b9/0xa50 fs/ext4/inode.c:2885
generic_perform_write+0x322/0x640 mm/filemap.c:3921
ext4_buffered_write_iter+0xc6/0x350 fs/ext4/file.c:299
ext4_file_write_iter+0x1de/0x1a10
__kernel_write_iter+0x435/0x8c0 fs/read_write.c:523
dump_emit_page fs/coredump.c:888 [inline]
dump_user_range+0x46d/0x910 fs/coredump.c:915
elf_core_dump+0x3d5e/0x4630 fs/binfmt_elf.c:2077
do_coredump+0x1bab/0x2b50 fs/coredump.c:764
get_signal+0x146b/0x1850 kernel/signal.c:2882
page last free pid 16881 tgid 16881 stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1140 [inline]
free_unref_page_prepare+0x968/0xa90 mm/page_alloc.c:2388
free_unref_page_list+0x5a3/0x850 mm/page_alloc.c:2574
release_pages+0x2744/0x2a80 mm/swap.c:1042
__folio_batch_release+0x84/0x100 mm/swap.c:1062
folio_batch_release include/linux/pagevec.h:83 [inline]
truncate_inode_pages_range+0x457/0xf70 mm/truncate.c:362
ext4_evict_inode+0x21c/0xf30 fs/ext4/inode.c:193
evict+0x2a8/0x630 fs/inode.c:666
__dentry_kill+0x20d/0x630 fs/dcache.c:603
dput+0x19f/0x2b0 fs/dcache.c:845
__fput+0x678/0x8a0 fs/file_table.c:384
task_work_run+0x24f/0x310 kernel/task_work.c:180
exit_task_work include/linux/task_work.h:38 [inline]
do_exit+0xa1b/0x27e0 kernel/exit.c:878
do_group_exit+0x207/0x2c0 kernel/exit.c:1027
get_signal+0x176e/0x1850 kernel/signal.c:2896
arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop kernel/entry/common.c:105 [inline]
exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
irqentry_exit_to_user_mode+0x79/0x280 kernel/entry/common.c:225
Memory state around the buggy address:
ffff88803b1f6080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff88803b1f6100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff88803b1f6180: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
^
ffff88803b1f6200: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff88803b1f6280: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply [relevance 4%]
* Re: [PATCH v7 3/4] mm,page_owner: Display all stacks and their count
2024-02-08 23:45 4% ` [PATCH v7 3/4] mm,page_owner: Display all stacks and their count Oscar Salvador
@ 2024-02-09 8:00 0% ` Marco Elver
0 siblings, 0 replies; 200+ results
From: Marco Elver @ 2024-02-09 8:00 UTC (permalink / raw)
To: Oscar Salvador
Cc: Andrew Morton, linux-kernel, linux-mm, Michal Hocko,
Vlastimil Babka, Andrey Konovalov, Alexander Potapenko
On Fri, 9 Feb 2024 at 00:45, Oscar Salvador <osalvador@suse.de> wrote:
>
> This patch adds a new file called 'page_owner_stacks', which
> will show all stacks that were added by page_owner followed by
> their counting, giving us a clear overview of stack <-> count
> relationship.
>
> E.g:
>
> prep_new_page+0xa9/0x120
> get_page_from_freelist+0x801/0x2210
> __alloc_pages+0x18b/0x350
> alloc_pages_mpol+0x91/0x1f0
> folio_alloc+0x14/0x50
> filemap_alloc_folio+0xb2/0x100
> __filemap_get_folio+0x14a/0x490
> ext4_write_begin+0xbd/0x4b0 [ext4]
> generic_perform_write+0xc1/0x1e0
> ext4_buffered_write_iter+0x68/0xe0 [ext4]
> ext4_file_write_iter+0x70/0x740 [ext4]
> vfs_write+0x33d/0x420
> ksys_write+0xa5/0xe0
> do_syscall_64+0x80/0x160
> entry_SYSCALL_64_after_hwframe+0x6e/0x76
> stack_count: 4578
>
> In order to show all the stacks, we implement stack_depot_get_next_stack(),
> which walks all buckets while retrieving the stacks stored in them.
> stack_depot_get_next_stack() will return all stacks, one at a time,
> by first finding a non-empty bucket, and then retrieving all the stacks
> stored in that bucket.
> Once we have completely gone through it, we get the next non-empty bucket
> and repeat the same steps, and so on until we have completely checked all
> buckets.
>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> ---
> include/linux/stackdepot.h | 20 +++++++++
> lib/stackdepot.c | 46 +++++++++++++++++++++
> mm/page_owner.c | 85 ++++++++++++++++++++++++++++++++++++++
> 3 files changed, 151 insertions(+)
>
> diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
> index ac62de4d4999..d851ec821e6f 100644
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -183,6 +183,26 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
> */
> struct stack_record *stack_depot_get_stack(depot_stack_handle_t handle);
>
> +/**
> + * stack_depot_get_next_stack - Returns all stacks, one at a time
"Returns all stack_records" to be clear that this is returning the struct.
> + *
> + * @table: Current table we are checking
> + * @bucket: Current bucket we are checking
> + * @last_found: Last stack that was found
> + *
> + * This function finds first a non-empty bucket and returns the first stack
> + * stored in it. On consequent calls, it walks the bucket to see whether
> + * it contains more stacks.
> + * Once we have walked all the stacks in a bucket, we check
> + * the next one, and we repeat the same steps until we have checked all of them
I think for this function it's important to say that no entry returned
from this function can be evicted.
I.e. the easiest way to ensure this is that the caller makes sure the
entries returned are never passed to stack_depot_put() - which is
certainly the case for your usecase because you do not use
stack_depot_put().
> + * Return: A pointer to a stack_record struct, or NULL when we have walked all
> + * buckets.
> + */
> +struct stack_record *stack_depot_get_next_stack(unsigned long *table,
To keep consistent, I'd also call this
__stack_depot_get_next_stack_record(), so that we're clear this is
more of an internal function not for general usage.
> + struct list_head **bucket,
> + struct stack_record **last_found);
> +
> /**
> * stack_depot_fetch - Fetch a stack trace from stack depot
> *
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 197c355601f9..107bd0174cd6 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -782,6 +782,52 @@ unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
> }
> EXPORT_SYMBOL(stack_depot_get_extra_bits);
>
> +struct stack_record *stack_depot_get_next_stack(unsigned long *table,
> + struct list_head **curr_bucket,
> + struct stack_record **last_found)
> +{
> + struct list_head *bucket = *curr_bucket;
> + unsigned long nr_table = *table;
> + struct stack_record *found = NULL;
> + unsigned long stack_table_entries = stack_hash_mask + 1;
> +
> + rcu_read_lock_sched_notrace();
We are returning pointers to stack_records out of the RCU-read
critical section, which are then later used to continue the iteration.
list_for_each_entry_continue_rcu() says this is fine if "... you held
some sort of non-RCU reference (such as a reference count) ...".
Updating the function's documentation to say none of these entries can
be evicted via a stack_depot_put() is required.
> + if (!bucket) {
> + /*
> + * Find a non-empty bucket. Once we have found it,
> + * we will use list_for_each_entry_continue_rcu() on the next
> + * call to keep walking the bucket.
> + */
> +new_table:
> + bucket = &stack_table[nr_table];
> + list_for_each_entry_rcu(found, bucket, hash_list) {
> + goto out;
> + }
> + } else {
> + /* Check whether we have more stacks in this bucket */
> + found = *last_found;
> + list_for_each_entry_continue_rcu(found, bucket, hash_list) {
> + goto out;
> + }
> + }
> +
> + /* No more stacks in this bucket, check the next one */
> + nr_table++;
> + if (nr_table < stack_table_entries)
> + goto new_table;
> +
> + /* We are done walking all buckets */
> + found = NULL;
> +
> +out:
> + *table = nr_table;
> + *curr_bucket = bucket;
> + *last_found = found;
> + rcu_read_unlock_sched_notrace();
> +
> + return found;
> +}
> +
> static int stats_show(struct seq_file *seq, void *v)
> {
> /*
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 0adf41702b9d..aea212734557 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -749,6 +749,89 @@ static const struct file_operations proc_page_owner_operations = {
> .llseek = lseek_page_owner,
> };
>
> +struct stack_iterator {
> + unsigned long nr_table;
> + struct list_head *bucket;
> + struct stack_record *last_stack;
> +};
> +
> +static void *stack_start(struct seq_file *m, loff_t *ppos)
> +{
> + struct stack_iterator *iter = m->private;
> +
> + if (*ppos == -1UL)
> + return NULL;
> +
> + return stack_depot_get_next_stack(&iter->nr_table,
> + &iter->bucket,
> + &iter->last_stack);
> +}
> +
> +static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
> +{
> + struct stack_iterator *iter = m->private;
> + struct stack_record *stack;
> +
> + stack = stack_depot_get_next_stack(&iter->nr_table,
> + &iter->bucket,
> + &iter->last_stack);
> + *ppos = stack ? *ppos + 1 : -1UL;
> +
> + return stack;
> +}
> +
> +static int stack_print(struct seq_file *m, void *v)
> +{
> + char *buf;
> + int ret = 0;
> + struct stack_iterator *iter = m->private;
> + struct stack_record *stack = iter->last_stack;
> +
> + if (!stack->size || stack->size < 0 || refcount_read(&stack->count) < 2)
> + return 0;
> +
> + buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
> +
> + ret += stack_trace_snprint(buf, PAGE_SIZE, stack->entries, stack->size,
> + 0);
> + if (!ret)
> + goto out;
> +
> + scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
> + refcount_read(&stack->count));
> +
> + seq_printf(m, buf);
> + seq_puts(m, "\n\n");
> +out:
> + kfree(buf);
> +
> + return 0;
> +}
> +
> +static void stack_stop(struct seq_file *m, void *v)
> +{
> +}
> +
> +static const struct seq_operations page_owner_stack_op = {
> + .start = stack_start,
> + .next = stack_next,
> + .stop = stack_stop,
> + .show = stack_print
> +};
> +
> +static int page_owner_stack_open(struct inode *inode, struct file *file)
> +{
> + return seq_open_private(file, &page_owner_stack_op,
> + sizeof(struct stack_iterator));
> +}
> +
> +const struct file_operations page_owner_stack_operations = {
> + .open = page_owner_stack_open,
> + .read = seq_read,
> + .llseek = seq_lseek,
> + .release = seq_release,
> +};
> +
> static int __init pageowner_init(void)
> {
> if (!static_branch_unlikely(&page_owner_inited)) {
> @@ -758,6 +841,8 @@ static int __init pageowner_init(void)
>
> debugfs_create_file("page_owner", 0400, NULL, NULL,
> &proc_page_owner_operations);
> + debugfs_create_file("page_owner_stacks", 0400, NULL, NULL,
> + &page_owner_stack_operations);
>
> return 0;
> }
> --
> 2.43.0
>
^ permalink raw reply [relevance 0%]
* MGLRU premature memcg OOM on slow writes
@ 2024-02-09 2:31 5% Chris Down
0 siblings, 1 reply; 200+ results
From: Chris Down @ 2024-02-09 2:31 UTC (permalink / raw)
To: Yu Zhao; +Cc: linux-kernel, linux-mm, cgroups, kernel-team, Johannes Weiner
Hi Yu,
When running with MGLRU I'm encountering premature OOMs when transferring files
to a slow disk.
On non-MGLRU setups, writeback flushers are awakened and get to work. But on
MGLRU, one can see OOM killer outputs like the following when doing an rsync
with a memory.max of 32M:
---
% systemd-run --user -t -p MemoryMax=32M -- rsync -rv ... /mnt/usb
Running as unit: run-u640.service
Press ^] three times within 1s to disconnect TTY.
sending incremental file list
...
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(713) [generator=3.2.7]
---
[41368.535735] Memory cgroup out of memory: Killed process 128824 (rsync) total-vm:14008kB, anon-rss:256kB, file-rss:5504kB, shmem-rss:0kB, UID:1000 pgtables:64kB oom_score_adj:200
[41369.847965] rsync invoked oom-killer: gfp_mask=0x408d40(GFP_NOFS|__GFP_NOFAIL|__GFP_ZERO|__GFP_ACCOUNT), order=0, oom_score_adj=200
[41369.847972] CPU: 1 PID: 128826 Comm: rsync Tainted: G S OE 6.7.4-arch1-1 #1 20d30c48b78a04be2046f4b305b40455f0b5b38b
[41369.847975] Hardware name: LENOVO 20WNS23A0G/20WNS23A0G, BIOS N35ET53W (1.53 ) 03/22/2023
[41369.847977] Call Trace:
[41369.847978] <TASK>
[41369.847980] dump_stack_lvl+0x47/0x60
[41369.847985] dump_header+0x45/0x1b0
[41369.847988] oom_kill_process+0xfa/0x200
[41369.847990] out_of_memory+0x244/0x590
[41369.847992] mem_cgroup_out_of_memory+0x134/0x150
[41369.847995] try_charge_memcg+0x76d/0x870
[41369.847998] ? try_charge_memcg+0xcd/0x870
[41369.848000] obj_cgroup_charge+0xb8/0x1b0
[41369.848002] kmem_cache_alloc+0xaa/0x310
[41369.848005] ? alloc_buffer_head+0x1e/0x80
[41369.848007] alloc_buffer_head+0x1e/0x80
[41369.848009] folio_alloc_buffers+0xab/0x180
[41369.848012] ? __pfx_fat_get_block+0x10/0x10 [fat 0a109de409393851f8a884f020fb5682aab8dcd1]
[41369.848021] create_empty_buffers+0x1d/0xb0
[41369.848023] __block_write_begin_int+0x524/0x600
[41369.848026] ? __pfx_fat_get_block+0x10/0x10 [fat 0a109de409393851f8a884f020fb5682aab8dcd1]
[41369.848031] ? __filemap_get_folio+0x168/0x2e0
[41369.848033] ? __pfx_fat_get_block+0x10/0x10 [fat 0a109de409393851f8a884f020fb5682aab8dcd1]
[41369.848038] block_write_begin+0x52/0x120
[41369.848040] fat_write_begin+0x34/0x80 [fat 0a109de409393851f8a884f020fb5682aab8dcd1]
[41369.848046] ? __pfx_fat_get_block+0x10/0x10 [fat 0a109de409393851f8a884f020fb5682aab8dcd1]
[41369.848051] generic_perform_write+0xd6/0x240
[41369.848054] generic_file_write_iter+0x65/0xd0
[41369.848056] vfs_write+0x23a/0x400
[41369.848060] ksys_write+0x6f/0xf0
[41369.848063] do_syscall_64+0x61/0xe0
[41369.848065] ? do_user_addr_fault+0x304/0x670
[41369.848069] ? exc_page_fault+0x7f/0x180
[41369.848071] entry_SYSCALL_64_after_hwframe+0x6e/0x76
[41369.848074] RIP: 0033:0x7965df71a184
[41369.848116] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d c5 3e 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 48 83 ec 28 48 89 54 24 18 48
[41369.848117] RSP: 002b:00007fffee661738 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
[41369.848119] RAX: ffffffffffffffda RBX: 0000570f66343bb0 RCX: 00007965df71a184
[41369.848121] RDX: 0000000000040000 RSI: 0000570f66343bb0 RDI: 0000000000000003
[41369.848122] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000570f66343b20
[41369.848122] R10: 0000000000000008 R11: 0000000000000202 R12: 0000000000000649
[41369.848123] R13: 0000570f651f8b40 R14: 0000000000008000 R15: 0000570f6633bba0
[41369.848125] </TASK>
[41369.848126] memory: usage 32768kB, limit 32768kB, failcnt 21239
[41369.848126] swap: usage 2112kB, limit 9007199254740988kB, failcnt 0
[41369.848127] Memory cgroup stats for /user.slice/user-1000.slice/user@1000.service/app.slice/run-u640.service:
[41369.848174] anon 0
[41369.848175] file 26927104
[41369.848176] kernel 6615040
[41369.848176] kernel_stack 32768
[41369.848177] pagetables 122880
[41369.848177] sec_pagetables 0
[41369.848177] percpu 480
[41369.848178] sock 0
[41369.848178] vmalloc 0
[41369.848178] shmem 0
[41369.848179] zswap 312451
[41369.848179] zswapped 1458176
[41369.848179] file_mapped 0
[41369.848180] file_dirty 26923008
[41369.848180] file_writeback 0
[41369.848180] swapcached 12288
[41369.848181] anon_thp 0
[41369.848181] file_thp 0
[41369.848181] shmem_thp 0
[41369.848182] inactive_anon 0
[41369.848182] active_anon 12288
[41369.848182] inactive_file 15908864
[41369.848183] active_file 11014144
[41369.848183] unevictable 0
[41369.848183] slab_reclaimable 5963640
[41369.848184] slab_unreclaimable 89048
[41369.848184] slab 6052688
[41369.848185] workingset_refault_anon 4031
[41369.848185] workingset_refault_file 9236
[41369.848185] workingset_activate_anon 691
[41369.848186] workingset_activate_file 2553
[41369.848186] workingset_restore_anon 691
[41369.848186] workingset_restore_file 0
[41369.848187] workingset_nodereclaim 0
[41369.848187] pgscan 40473
[41369.848187] pgsteal 20881
[41369.848188] pgscan_kswapd 0
[41369.848188] pgscan_direct 40473
[41369.848188] pgscan_khugepaged 0
[41369.848189] pgsteal_kswapd 0
[41369.848189] pgsteal_direct 20881
[41369.848190] pgsteal_khugepaged 0
[41369.848190] pgfault 6019
[41369.848190] pgmajfault 4033
[41369.848191] pgrefill 30578988
[41369.848191] pgactivate 2925
[41369.848191] pgdeactivate 0
[41369.848192] pglazyfree 0
[41369.848192] pglazyfreed 0
[41369.848192] zswpin 1520
[41369.848193] zswpout 1141
[41369.848193] thp_fault_alloc 0
[41369.848193] thp_collapse_alloc 0
[41369.848194] thp_swpout 0
[41369.848194] thp_swpout_fallback 0
[41369.848194] Tasks state (memory values in pages):
[41369.848195] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[41369.848195] [ 128825] 1000 128825 3449 864 65536 192 200 rsync
[41369.848198] [ 128826] 1000 128826 3523 288 57344 288 200 rsync
[41369.848199] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/run-u640.service,task_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/run-u640.service,task=rsync,pid=128825,uid=1000
[41369.848207] Memory cgroup out of memory: Killed process 128825 (rsync) total-vm:13796kB, anon-rss:0kB, file-rss:3456kB, shmem-rss:0kB, UID:1000 pgtables:64kB oom_score_adj:200
---
Importantly, note that there appears to be no attempt to write back before
declaring OOM -- file_writeback is 0 when file_dirty is 26923008. The issue is
consistently reproducible (and thanks Johannes for looking at this with me).
On non-MGLRU, flushers are active and are making forward progress in preventing
OOM.
This is writing to a slow disk with only about 10MiB/s of available write
bandwidth, so the CPU and the read side are far faster than what the disk
can absorb.
Is this a known problem in MGLRU? If not, could you point me to where MGLRU
tries to handle flusher wakeup on slow I/O? I didn't immediately find it.
Thanks,
Chris
^ permalink raw reply [relevance 5%]
* [PATCH v7 3/4] mm,page_owner: Display all stacks and their count
@ 2024-02-08 23:45 4% ` Oscar Salvador
2024-02-09 8:00 0% ` Marco Elver
0 siblings, 1 reply; 200+ results
From: Oscar Salvador @ 2024-02-08 23:45 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Michal Hocko, Vlastimil Babka,
Marco Elver, Andrey Konovalov, Alexander Potapenko,
Oscar Salvador
This patch adds a new file called 'page_owner_stacks', which
will show all stacks that were added by page_owner followed by
their counting, giving us a clear overview of stack <-> count
relationship.
E.g:
prep_new_page+0xa9/0x120
get_page_from_freelist+0x801/0x2210
__alloc_pages+0x18b/0x350
alloc_pages_mpol+0x91/0x1f0
folio_alloc+0x14/0x50
filemap_alloc_folio+0xb2/0x100
__filemap_get_folio+0x14a/0x490
ext4_write_begin+0xbd/0x4b0 [ext4]
generic_perform_write+0xc1/0x1e0
ext4_buffered_write_iter+0x68/0xe0 [ext4]
ext4_file_write_iter+0x70/0x740 [ext4]
vfs_write+0x33d/0x420
ksys_write+0xa5/0xe0
do_syscall_64+0x80/0x160
entry_SYSCALL_64_after_hwframe+0x6e/0x76
stack_count: 4578
In order to show all the stacks, we implement stack_depot_get_next_stack(),
which walks all buckets while retrieving the stacks stored in them.
stack_depot_get_next_stack() will return all stacks, one at a time,
by first finding a non-empty bucket, and then retrieving all the stacks
stored in that bucket.
Once we have completely gone through it, we get the next non-empty bucket
and repeat the same steps, and so on until we have completely checked all
buckets.
Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
include/linux/stackdepot.h | 20 +++++++++
lib/stackdepot.c | 46 +++++++++++++++++++++
mm/page_owner.c | 85 ++++++++++++++++++++++++++++++++++++++
3 files changed, 151 insertions(+)
diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index ac62de4d4999..d851ec821e6f 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -183,6 +183,26 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
*/
struct stack_record *stack_depot_get_stack(depot_stack_handle_t handle);
+/**
+ * stack_depot_get_next_stack - Returns all stacks, one at a time
+ *
+ * @table: Current table we are checking
+ * @bucket: Current bucket we are checking
+ * @last_found: Last stack that was found
+ *
+ * This function first finds a non-empty bucket and returns the first stack
+ * stored in it. On subsequent calls, it walks the bucket to see whether
+ * it contains more stacks.
+ * Once we have walked all the stacks in a bucket, we check the next one,
+ * and repeat the same steps until we have checked all of them.
+ *
+ * Return: A pointer to a stack_record struct, or NULL when we have walked
+ * all buckets.
+ */
+struct stack_record *stack_depot_get_next_stack(unsigned long *table,
+ struct list_head **bucket,
+ struct stack_record **last_found);
+
/**
* stack_depot_fetch - Fetch a stack trace from stack depot
*
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 197c355601f9..107bd0174cd6 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -782,6 +782,52 @@ unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
}
EXPORT_SYMBOL(stack_depot_get_extra_bits);
+struct stack_record *stack_depot_get_next_stack(unsigned long *table,
+ struct list_head **curr_bucket,
+ struct stack_record **last_found)
+{
+ struct list_head *bucket = *curr_bucket;
+ unsigned long nr_table = *table;
+ struct stack_record *found = NULL;
+ unsigned long stack_table_entries = stack_hash_mask + 1;
+
+ rcu_read_lock_sched_notrace();
+ if (!bucket) {
+ /*
+ * Find a non-empty bucket. Once we have found it,
+ * we will use list_for_each_entry_continue_rcu() on the next
+ * call to keep walking the bucket.
+ */
+new_table:
+ bucket = &stack_table[nr_table];
+ list_for_each_entry_rcu(found, bucket, hash_list) {
+ goto out;
+ }
+ } else {
+ /* Check whether we have more stacks in this bucket */
+ found = *last_found;
+ list_for_each_entry_continue_rcu(found, bucket, hash_list) {
+ goto out;
+ }
+ }
+
+ /* No more stacks in this bucket, check the next one */
+ nr_table++;
+ if (nr_table < stack_table_entries)
+ goto new_table;
+
+ /* We are done walking all buckets */
+ found = NULL;
+
+out:
+ *table = nr_table;
+ *curr_bucket = bucket;
+ *last_found = found;
+ rcu_read_unlock_sched_notrace();
+
+ return found;
+}
+
static int stats_show(struct seq_file *seq, void *v)
{
/*
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 0adf41702b9d..aea212734557 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -749,6 +749,89 @@ static const struct file_operations proc_page_owner_operations = {
.llseek = lseek_page_owner,
};
+struct stack_iterator {
+ unsigned long nr_table;
+ struct list_head *bucket;
+ struct stack_record *last_stack;
+};
+
+static void *stack_start(struct seq_file *m, loff_t *ppos)
+{
+ struct stack_iterator *iter = m->private;
+
+ if (*ppos == -1UL)
+ return NULL;
+
+ return stack_depot_get_next_stack(&iter->nr_table,
+ &iter->bucket,
+ &iter->last_stack);
+}
+
+static void *stack_next(struct seq_file *m, void *v, loff_t *ppos)
+{
+ struct stack_iterator *iter = m->private;
+ struct stack_record *stack;
+
+ stack = stack_depot_get_next_stack(&iter->nr_table,
+ &iter->bucket,
+ &iter->last_stack);
+ *ppos = stack ? *ppos + 1 : -1UL;
+
+ return stack;
+}
+
+static int stack_print(struct seq_file *m, void *v)
+{
+ char *buf;
+ int ret = 0;
+ struct stack_iterator *iter = m->private;
+ struct stack_record *stack = iter->last_stack;
+
+	if (!stack->size || refcount_read(&stack->count) < 2)
+ return 0;
+
+	buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!buf)
+		return 0;
+	ret = stack_trace_snprint(buf, PAGE_SIZE, stack->entries, stack->size, 0);
+	if (!ret)
+ goto out;
+
+ scnprintf(buf + ret, PAGE_SIZE - ret, "stack_count: %d\n\n",
+ refcount_read(&stack->count));
+
+	seq_puts(m, buf);
+ seq_puts(m, "\n\n");
+out:
+ kfree(buf);
+
+ return 0;
+}
+
+static void stack_stop(struct seq_file *m, void *v)
+{
+}
+
+static const struct seq_operations page_owner_stack_op = {
+ .start = stack_start,
+ .next = stack_next,
+ .stop = stack_stop,
+ .show = stack_print
+};
+
+static int page_owner_stack_open(struct inode *inode, struct file *file)
+{
+ return seq_open_private(file, &page_owner_stack_op,
+ sizeof(struct stack_iterator));
+}
+
+const struct file_operations page_owner_stack_operations = {
+ .open = page_owner_stack_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int __init pageowner_init(void)
{
if (!static_branch_unlikely(&page_owner_inited)) {
@@ -758,6 +841,8 @@ static int __init pageowner_init(void)
debugfs_create_file("page_owner", 0400, NULL, NULL,
&proc_page_owner_operations);
+ debugfs_create_file("page_owner_stacks", 0400, NULL, NULL,
+ &page_owner_stack_operations);
return 0;
}
--
2.43.0