Linux-Raid Archives on lore.kernel.org
[PATCH mdadm] Grow: be careful of corrupt dev_roles list
 2021-02-26  1:02 UTC  - mbox.gz / Atom

[song-md:md-next] BUILD SUCCESS WITH WARNING ec8263472f36ff06a9b5988675109cb0123e366b
 2021-02-24 21:07 UTC  - mbox.gz / Atom

[PATCH V2] md: don't unregister sync_thread with reconfig_mutex held
 2021-02-24  9:25 UTC  (4+ messages) - mbox.gz / Atom

md read-only fixes
 2021-02-24  8:45 UTC  (6+ messages) - mbox.gz / Atom
` [PATCH 1/2] md: check for NULL ->meta_bdev before calling bdev_read_only
` [PATCH 2/2] md: use rdev_read_only in restart_array

[PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
 2021-02-24  8:41 UTC  (9+ messages) - mbox.gz / Atom
` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio
` [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks
` [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function
` [PATCH V2 4/5] md/raid10: improve raid10 discard request
` [PATCH V2 5/5] md/raid10: improve discard request for far layout

Raid10 reshape bug
 2021-02-19 20:13 UTC  - mbox.gz / Atom

use ssd as write-journal or lvm-cache?
 2021-02-17 20:50 UTC  (9+ messages) - mbox.gz / Atom

mdxxx_raid6 kernel thread frozen
 2021-02-16 17:20 UTC  (3+ messages) - mbox.gz / Atom

[PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file
 2021-02-16 14:28 UTC  (2+ messages) - mbox.gz / Atom

cleanup updating the size of block devices v3
 2021-02-16 11:46 UTC  (4+ messages) - mbox.gz / Atom
` [PATCH 12/78] dm: use set_capacity_and_notify

[PATCH] md: don't unregister sync_thread with reconfig_mutex held
 2021-02-11  9:11 UTC  (4+ messages) - mbox.gz / Atom

Linux RAID with btrfs stuck and consume 100 % CPU
 2021-02-11  3:14 UTC  (11+ messages) - mbox.gz / Atom

I trashed my superblocks after reshape from raid5 to raid6 stalled - need help recovering
 2021-02-10 22:10 UTC  (2+ messages) - mbox.gz / Atom

[song-md:md-next] BUILD SUCCESS c5eec74f252dfba25269cd68f9a3407aedefd330
 2021-02-10  4:04 UTC  - mbox.gz / Atom

[PATCH 1/1] It should be FAILED when raid has not enough active disks
 2021-02-09  9:39 UTC  - mbox.gz / Atom

md_raid: mdX_raid6 looping after sync_action "check" to "idle" transition
 2021-02-09  9:24 UTC  (24+ messages) - mbox.gz / Atom

cleanup bvec allocation
 2021-02-08 15:33 UTC  (15+ messages) - mbox.gz / Atom
` [PATCH 01/11] block: reuse BIO_INLINE_VECS for integrity bvecs
` [PATCH 02/11] block: move struct biovec_slab to bio.c
` [PATCH 03/11] block: factor out a bvec_alloc_gfp helper
` [PATCH 04/11] block: streamline bvec_alloc
` [PATCH 05/11] block: remove the 1 and 4 vec bvec_slabs entries
` [PATCH 06/11] block: turn the nr_iovecs argument to bio_alloc* into an unsigned short
` [PATCH 07/11] block: remove a layer of indentation in bio_iov_iter_get_pages
` [PATCH 08/11] block: set BIO_NO_PAGE_REF in bio_iov_bvec_set
` [PATCH 09/11] block: mark the bio as cloned "
` [PATCH 10/11] md/raid10: remove dead code in reshape_request
` [PATCH 11/11] block: use bi_max_vecs to find the bvec pool

Repairing IMSM RAID array "active, FAILED, not started"
 2021-02-08 13:28 UTC  (2+ messages) - mbox.gz / Atom

[RFC PATCH] super-intel: correctly recognize NVMe device during assemble
 2021-02-07  3:35 UTC  (3+ messages) - mbox.gz / Atom

[PATCH] imsm: add verbose flag to compare_super
 2021-02-05 13:29 UTC  - mbox.gz / Atom

[PATCH AUTOSEL 5.4 03/26] dm integrity: select CRYPTO_SKCIPHER
 2021-02-05  0:28 UTC  (3+ messages) - mbox.gz / Atom
  ` [dm-devel] "

put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
 2021-02-04 19:58 UTC  (4+ messages) - mbox.gz / Atom
    `  "

[PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
 2021-02-04 17:29 UTC  (10+ messages) - mbox.gz / Atom
` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio
` [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks
` [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function
` [PATCH V2 4/5] md/raid10: improve raid10 discard request
` [PATCH V2 5/5] md/raid10: improve discard request for far layout

[GIT PULL] md-next 20210203
 2021-02-04 14:37 UTC  (2+ messages) - mbox.gz / Atom

One failed raid device can't umount automatically
 2021-02-04  8:25 UTC  (3+ messages) - mbox.gz / Atom

[PATCH 0/5] md/raid10: Improve handling raid10 discard request
 2021-02-04  7:37 UTC  (16+ messages) - mbox.gz / Atom
` [PATCH 1/5] md: add md_submit_discard_bio() for submitting discard bio
` [PATCH 2/5] md/raid10: extend r10bio devs to raid disks
` [PATCH 3/5] md/raid10: pull codes that wait for blocked dev into one function
` [PATCH 4/5] md/raid10: improve raid10 discard request
` [PATCH 5/5] md/raid10: improve discard request for far layout

Kenya Cutomized Business Plans for only Kes 499/=
 2021-02-04  5:26 UTC  - mbox.gz / Atom

[solved] 3 drive RAID5 with 1 bad drive, 1 drive active but not clean and a single clean drive
 2021-02-03  6:07 UTC  - mbox.gz / Atom

3 drive RAID5 with 1 bad drive, 1 drive active but not clean and a single clean drive
 2021-02-03  4:04 UTC  - mbox.gz / Atom

PROBLEM: Recent raid10 block discard patchset causes filesystem corruption on fstrim
 2021-02-03  1:43 UTC  (5+ messages) - mbox.gz / Atom

[PATCH v2] super1.c: avoid useless sync when bitmap switches from clustered to none
 2021-02-03  0:22 UTC  - mbox.gz / Atom

[PATCH 2/2] dm crypt: support using trusted keys
 2021-02-02 15:12 UTC  (5+ messages) - mbox.gz / Atom

[PATCH] super1.c: avoid useless sync when bitmap switches from clustered to none
 2021-02-02 13:53 UTC  - mbox.gz / Atom

Regarding Raid-6 array
 2021-02-01 22:00 UTC  - mbox.gz / Atom

misc bio allocation cleanups
 2021-02-01 12:22 UTC  (37+ messages) - mbox.gz / Atom
` [PATCH 01/17] zonefs: use bio_alloc in zonefs_file_dio_append
  ` [dm-devel] "
` [PATCH 02/17] btrfs: use bio_kmalloc in __alloc_device
` [PATCH 03/17] blk-crypto: use bio_kmalloc in blk_crypto_clone_bio
` [PATCH 04/17] block: split bio_kmalloc from bio_alloc_bioset
` [PATCH 05/17] block: use an on-stack bio in blkdev_issue_flush
` [PATCH 06/17] dm-clone: use blkdev_issue_flush in commit_metadata
` [PATCH 07/17] f2fs: use blkdev_issue_flush in __submit_flush_wait
  ` [f2fs-dev] "
` [PATCH 08/17] f2fs: remove FAULT_ALLOC_BIO
  ` [f2fs-dev] "
` [PATCH 09/17] drbd: remove bio_alloc_drbd
` [PATCH 10/17] drbd: remove drbd_req_make_private_bio
` [PATCH 11/17] md: remove bio_alloc_mddev
` [PATCH 12/17] md: simplify sync_page_io
` [PATCH 13/17] md: remove md_bio_alloc_sync
` [PATCH 14/17] md/raid6: refactor raid5_read_one_chunk
` [PATCH 15/17] nfs/blocklayout: remove cruft in bl_alloc_init_bio
` [PATCH 16/17] nilfs2: remove cruft in nilfs_alloc_seg_bio
` [PATCH 17/17] mm: remove get_swap_bio

[PATCH] super1: fix Floating point exception
 2021-01-30  9:49 UTC  - mbox.gz / Atom

Problem with initial sync of a RAID1 with 4Kn drives
 2021-01-29 20:31 UTC  - mbox.gz / Atom

"attempt to access beyond end of device" when reshaping raid10 from near=2 to offset=2
 2021-01-28 21:50 UTC  (3+ messages) - mbox.gz / Atom

[PATCH] md: change bitmap offset verification in write_sb_page
 2021-01-28 13:11 UTC  (3+ messages) - mbox.gz / Atom

md: Speed shrinks with drives number
 2021-01-28 11:20 UTC  - mbox.gz / Atom

[song-md:md-next] BUILD SUCCESS ae5fc93485e1e3fb961345359facfab24685410d
 2021-01-27 17:26 UTC  - mbox.gz / Atom

release plan for mdadm
 2021-01-27 11:39 UTC  - mbox.gz / Atom

[PATCH] md/raid5: cast chunk_sectors to sector_t value
 2021-01-26 18:50 UTC  (4+ messages) - mbox.gz / Atom

Kernel bug during chunk size migration
 2021-01-26 14:59 UTC  - mbox.gz / Atom

[PATCH v2 00/10] fsdax: introduce fs query to support reflink
 2021-01-26  0:50 UTC  (12+ messages) - mbox.gz / Atom
` [PATCH v2 01/10] pagemap: Introduce ->memory_failure()
` [PATCH v2 02/10] blk: Introduce ->corrupted_range() for block device
` [PATCH v2 03/10] fs: Introduce ->corrupted_range() for superblock
` [PATCH v2 04/10] mm, fsdax: Refactor memory-failure handler for dax mapping
` [PATCH v2 05/10] mm, pmem: Implement ->memory_failure() in pmem driver
` [PATCH v2 06/10] pmem: Implement ->corrupted_range() for "
` [PATCH v2 07/10] dm: Introduce ->rmap() to find bdev offset
` [PATCH v2 08/10] md: Implement ->corrupted_range()
` [PATCH v2 09/10] xfs: Implement ->corrupted_range() for XFS
` [PATCH v2 10/10] fs/dax: Remove useless functions

store a pointer to the block_device in struct bio (again) v2
 2021-01-25 18:31 UTC  (19+ messages) - mbox.gz / Atom
` [PATCH 01/10] brd: remove the end of device check in brd_do_bvec
` [PATCH 02/10] dcssblk: remove the end of device check in dcssblk_submit_bio
` [PATCH 04/10] block: simplify submit_bio_checks a bit
` [PATCH 05/10] block: do not reassig ->bi_bdev when partition remapping
` [PATCH 08/10] block: add a disk_uevent helper


Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-raid/0 linux-raid/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-raid linux-raid/ https://lore.kernel.org/linux-raid \
		linux-raid@vger.kernel.org
	public-inbox-index linux-raid
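
	# To keep the mirror current after the initial clone, the same
	# tools can be re-run; a minimal sketch, assuming the linux-raid/
	# paths created by the commands above:

```shell
# Fetch any new messages into the bare mirror cloned above,
# then re-index so they become searchable locally.
# Paths assume the clone and init commands shown above.
git --git-dir=linux-raid/git/0.git fetch origin
public-inbox-index linux-raid
```

	# Running this periodically (e.g. from cron) keeps the local
	# archive in sync with lore.kernel.org.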

Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.kernel.vger.linux-raid


AGPL code for this site: git clone https://public-inbox.org/public-inbox.git