I created a test with two disks, booted once without disk2 and once without disk1 (each time adding a file), then booted with both disks present. In the kernel logs I found messages about 'mirroring' or similar, and I only got the contents of disk1.

Is there any option to say: if both mirrors have diverged, don't mount but raise an error instead? That might be better than having newly written data silently discarded.

Did I miss an option? Would mdadm be better?

I also tried searching the archives, where a CPU bottleneck for RAID was mentioned often, along with 'work being done' on it. What is the status of that, or is the advice given there still valid?

Marc Weber
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Marc Weber <marco-oweber@gmx.de> wrote (on Tue, 13 Apr 2021, 17:58):
> Is there any option to say if both mirrors get changed don't mount but
> cause an error because that might be better than getting newly written
> data reset randomly ?
>
> Did I miss an option? Would mdadm be better ?

As far as I know mdadm works the same way. (At least it worked like this when this thread happened: https://www.spinics.net/lists/raid/msg36962.html)

You can use a tactic similar to the one recommended there: don't assemble a degraded RAID1 array without user interaction. With LVM you can do this by changing the activation mode to "complete" in lvm.conf.
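For reference, the setting lives in the activation section of /etc/lvm/lvm.conf. The default is "degraded", which allows RAID LVs to activate with missing mirror legs; "complete" refuses activation when any device is missing, so a degraded boot stops and waits for the admin instead of proceeding:

```
# /etc/lvm/lvm.conf
activation {
    # "complete" = only activate LVs with all devices present
    # "degraded" = also activate RAID LVs with missing legs (default)
    # "partial"  = activate anything, substituting error targets
    activation_mode = "complete"
}
```

The same choice can be made per-command with `vgchange --activationmode complete`.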
> As far as I know mdadm works the same way. (At least it was working
> like this when this thread happened:
> https://www.spinics.net/lists/raid/msg36962.html)
>
> You can use a similar tactic that was recommended there: Don't
> assemble a degraded RAID1 array without user interaction. You can do
> this by changing the activation mode to complete in lvm.conf.

But then the server doesn't start at all, which might also be costly.

The thread says that the clock isn't reliable, eventually, due to faulty hardware etc. *BUT* why isn't there a way to 'stamp' the disks once the clock has been synced with the internet?

Let's compare two cases and two solutions.

Solution 1: write a stamp [date, known-devices] to the active disks.

Case: disk1 only appears sometimes. disk2 would then have 'known-devices = disk2 only', so disk1 could be identified as the troublesome one. Action: mirror disk2 to disk1.

Case: split brain: disk1 appears, reboot; disk2 appears, reboot; both appear. The dates would be different -> don't mount.

Solution 2: don't use the date info at all; random numbers would serve the same purpose.

I don't have experience with how often disks appear only intermittently. To me it looks like the strategy above, which could also be implemented in user space (but why?), would work very well in more cases, because the server would boot more often than not.

Marc Weber
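The stamping scheme above can be sketched in a few lines of userspace logic. This is purely illustrative: it models the proposal from this mail, not any real LVM or mdadm on-disk format, and all names (write_stamp, check_assembly, the dict layout) are made up for the sketch. (mdadm's superblocks do carry a per-device event counter that serves a related purpose.)

```python
# Illustrative sketch of "solution 1" from the mail: on every clean
# activation, stamp each active disk with the (synced) date and the set
# of devices that were present. At boot, compare the stamps.
# "Solution 2" would simply use a random token in place of the date.

def write_stamp(disks, active, date):
    """Write the same stamp to every currently active disk."""
    for d in active:
        disks[d] = {"date": date, "known_devices": sorted(active)}

def check_assembly(disks, present):
    """Decide at boot what to do with the set of disks that showed up."""
    stamps = [disks[d] for d in present]
    if len({s["date"] for s in stamps}) > 1:
        return "refuse"      # split brain: the mirrors diverged independently
    if set(stamps[0]["known_devices"]) - set(present):
        return "degraded"    # a known member is absent; resync it when it returns
    return "ok"

# Replaying the experiment from the first mail:
disks = {}
write_stamp(disks, ["disk1", "disk2"], date=1)  # clean start, both present
write_stamp(disks, ["disk1"], date=2)           # booted without disk2
write_stamp(disks, ["disk2"], date=3)           # booted without disk1
print(check_assembly(disks, ["disk1", "disk2"]))  # -> refuse, not a silent resync
```

With this logic the server still boots in the common degraded case (one known member simply absent) and only refuses in the genuine split-brain case, where the stamps prove both halves were written independently.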