* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? [not found] <20000607100145.A6300@gondor.com> @ 2000-06-07 23:18 ` Paul Jakma 2020-11-27 16:17 ` Michael Marxmeier 0 siblings, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-07 23:18 UTC (permalink / raw) To: Jan Niehusmann; +Cc: Andi Kleen, Linux LVM list On Wed, 7 Jun 2000, Jan Niehusmann wrote: > The snapshot logic simply doesn't know what a filesystem is. It just > clones the block device. this i knew.. :) > So the fs may be in an inconsistent state, if you do the snapshot > while some write operation is in progress on the cloned > filesystem. > ok. so the solution is what? a call in the vfs to complete pending writes, and hold off on modifying any blocks? (i take it the vfs caches writes somehow before informing LVM of the dirty blocks, or not?) > Note that cloning a mounted fs usually works, as well as mounting an > unclean filesystem without fsck usually works - but it's not reliable. > how unreliable? how busy does the fs have to be? anyway, for my purposes snapshot does the job. the choice is: a) backup of a snapshot - inconsistencies hopefully limited to 10**1 order of blocks. b) backup of a live fs, which is so slow that data inconsistency might be in the order of 10**3 blocks or more. also, the fs's that are at risk - heavily written to - are usually things like mail and news queues, ie scratch space. These i can afford not to back up. :) > > Jan > > regards, -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: An engineer is someone who does list processing in FORTRAN. ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-07 23:18 ` [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? Paul Jakma @ 2020-11-27 16:17 ` Michael Marxmeier 2000-06-08 0:47 ` Paul Jakma 2000-06-08 7:52 ` Heinz J. Mauelshagen 0 siblings, 2 replies; 17+ messages in thread From: Michael Marxmeier @ 2020-11-27 16:17 UTC (permalink / raw) To: paul; +Cc: jan, ak, linux-lvm > > On Wed, 7 Jun 2000, Jan Niehusmann wrote: > > > The snapshot logic simply doesn't know what a filesystem is. It just > > clones the block device. > > this i knew.. :) > > > So the fs may be in an inconsistent state, if you do the snapshot > > while some write operation is in progress on the cloned > > filesystem. > > ok. so the solution is what? a call in the vfs to complete pending > writes, and hold off on modifying any blocks? IMHO when creating a snapshot LVM could simply sync all outstanding buffers for the block device via block_fsync() (not sure if this is already done -- Heinz care to comment?). This should also take care of pending fs meta information, thus we should get a consistent state. The question is how atomic this is? Since fsync_dev() currently does a lock_kernel() it might even be sufficient. Any reason why this is not sufficient? Michael -- Michael Marxmeier Marxmeier Software AG E-Mail: mike@msede.com Besenbruchstrasse 9 Phone : +49 202 2431440 42285 Wuppertal, Germany Fax : +49 202 2431420 http://www.msede.com/ ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2020-11-27 16:17 ` Michael Marxmeier @ 2000-06-08 0:47 ` Paul Jakma 2000-06-08 8:23 ` Jos Visser 2000-06-08 7:52 ` Heinz J. Mauelshagen 1 sibling, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-08 0:47 UTC (permalink / raw) To: Michael Marxmeier; +Cc: jan, ak, linux-lvm On Thu, 8 Jun 2000, Michael Marxmeier wrote: > IMHO when creating a snapshot LVM could simply sync all outstanding > buffers for the block device via block_fsync() (not sure if this to be 100% safe there must be no possibility that some fs code could run between block_fsync() and the actual point of snapshot creation i think. (right?) > does a lock_kernel() it might even be sufficient. > > Any reason why this is not sufficient? > if you can be sure that lvm-snapshot won't be interrupted between the sync and the actual snapshot, then it should be ok, shouldn't it? > Michael > regards, -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: The unfacts, did we have them, are too imprecisely few to warrant our certitude. ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-08 0:47 ` Paul Jakma @ 2000-06-08 8:23 ` Jos Visser 2000-06-08 8:34 ` Andi Kleen 0 siblings, 1 reply; 17+ messages in thread From: Jos Visser @ 2000-06-08 8:23 UTC (permalink / raw) To: Paul Jakma; +Cc: Michael Marxmeier, jan, ak, linux-lvm I have followed only part of this thread, but the gist I get is that people want to take an LVM snapshot of a file system, and the issue at hand is the status of the file system after the sync. I would like to make some remarks based on my experience with other volume managers and file systems. If all or most of this is already a piece of cake for you, please ignore it, but I reckon that there will be people on the list (or reading the archives) that will find this useful. 1) To be useful the snapshot must be "atomic", which means that the snapshotted LV contains an image which conforms to the original at a certain time. Since creating the snapshot usually involves some copying of data blocks (to put it mildly) during which you do not pause the entire system, a smart mechanism must be created to maintain this "illusion" of atomicity. In HP's LVM a snapshot can only be created by splitting off a mirror copy from a mirrored LV (thus decreasing the number of mirror copies of the volume. It is for this reason that 3-way mirroring is supported by HP LVM). To create a snapshot one usually first extends the number of mirror copies and then splits off the freshly created mirror. The Veritas eXtended File System (vxfs) has a built-in snapshot system which works in a rather interesting way. Instead of doing a full block device copy of the file system, it uses an "overflow" block device where it saves the original copy of a changed block in the original block device. When looking at the snapshot, the vxfs first checks the overflow area to see if a copy of the requested block is available there. 
If it is, that block is returned; if it isn't, the block is read from the underlying original since it obviously hasn't been changed since the creation of the snapshot (otherwise the original would have been present in the overflow area). In the worst case the overflow area must be as big as the original, but in typical cases it needs only be 10% of the size of the original. After system reboot, the snapshot copy is gone. I would guess that such a volatile snapshot facility could be made into a generic feature available for every block device! 2) If you have a snapshot of a logical volume, the file system in there is always corrupt and needs to be fsck'ed. The point-in-time (atomic) creation of the snapshot resembles a system crash as far as the content of the snapshot is concerned. An fsck is therefore necessary. (A nice feature of the vxfs snapshot is that this fsck is not necessary, because the feature is implemented at the *file system* level). 3) People have been searching for a long time for a method to prevent this fsck. You would need to have full cooperation with the file system code for this. The fs should support a "quiesce" function (through the vfs layer) which would result in a complete update of all on-disk data of the fs. A complete block sync is not enough because an fs might have in-core data that should be flushed but which is not in the block buffer cache (think: inode cache, log, B-tree info). Doing a full sync just before the atomic snapshot is a good idea however because it limits the damage fsck must repair. And, but: READ ON: 4) Even if we could quiesce the fs, the resulting snapshot would still be partially corrupt because of the fact that we have (could have) open files in the file system. If an application updates its data with more than one write() system call, and the snapshot creation happens between two consecutive write()'s, the application's on-disk data is corrupt (from an application point of view). 
What we normally do in complex backup situations is stop the application, sync the fs, create the snapshot, start the application, backup the snapshot. In that scenario we have a stable copy of the application's data with only a minimal application downtime. This scenario also applies if you use hardware RAID snapshot features such as the Business Continuity Volumes of EMC's Symmetrix, or the Business Copy feature of HP's XP256. 5) So, ideally we would need an "application quiesce" in which we can instruct the application to update its on-disk image by making all necessary changes to its disk data (flush()ing) and informing the operating system of its quiesced state, upon which the OS could make the snapshot and free the application to make changes again. Unix just does not support this particular model of application/OS interaction. And, most applications are internally not architected to easily support a quiesce. And the ones that are, are usually database management systems (such as Oracle) for which you can buy online backup features (such as Oracle Enterprise Backup Utility) with which you can create a stable copy of the database without snapshots or other features. ++Jos And thus it came to pass that Paul Jakma wrote: (on Thu, Jun 08, 2000 at 01:47:35AM +0100 to be exact) > On Thu, 8 Jun 2000, Michael Marxmeier wrote: > > > IMHO when creating a snapshot LVM could simply sync all outstanding > > buffers for the block device via block_fsync() (not sure if this > > to be 100% safe there must be no possibility that some fs code could > run between block_fsync() and the actual point of snapshot creation i > think. (right?) > > > does a lock_kernel() it might even be sufficient. > > > > Any reason why this is not sufficient? > > > > if you can be sure that lvm-snapshot won't be interrupted between > the sync and the actual snapshot, then it should be ok, shouldn't it? 
> > > Michael > > > > regards, > -- > Paul Jakma paul@clubi.ie > PGP5 key: http://www.clubi.ie/jakma/publickey.txt > ------------------------------------------- > Fortune: > The unfacts, did we have them, are too imprecisely few to warrant our certitude. -- The InSANE quiz master is always right! (or was it the other way round? :-) ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-08 8:23 ` Jos Visser @ 2000-06-08 8:34 ` Andi Kleen 2000-06-08 11:49 ` Paul Jakma 0 siblings, 1 reply; 17+ messages in thread From: Andi Kleen @ 2000-06-08 8:34 UTC (permalink / raw) To: Jos Visser; +Cc: Paul Jakma, Michael Marxmeier, jan, ak, linux-lvm On Thu, Jun 08, 2000 at 10:23:21AM +0200, Jos Visser wrote: > I have followed only part of this thread, but the gist I get is that > people want to take an LVM snapshot of a file system, and the issue > at hand is the status of the file system after the sync. I would like > to make some remarks based on my experience with other volume managers > and file systems. If all or most of this is already a piece of cake > for you, please ignore it, but I reckon that there will be people > on the list (or reading the archives) that will find this useful. > > 1) To be useful the snapshot must be "atomic", which means that the > snapshotted LV contains an image which conforms to the original at > a certain time. Since creating the snapshot usually involves some > copying of data blocks (to put it mildly) during which you do not > pause the entire system, a smart mechanism must be created to > maintain this "illusion" of atomicity. The problem is that it is not enough to sync the file systems to get an atomic copy; you must sync all applications too. For example, consider a database that uses user space journaling using fsync: to make its disk files consistent after the snapshot it requires both the log write and the data write. When one is missing, the log needs to be replayed, which requires writes. > The Veritas eXtended File System (vxfs) has a built-in snapshot > system which works in a rather interesting way. Instead of doing a full > block device copy of the file system, it uses an "overflow" > block device where it saves the original copy of a changed block > in the original block device. 
When looking at the snapshot, the > vxfs first checks the overflow area to see if a copy of the requested > block is available there. If it is, that block is returned; if > it isn't, the block is read from the underlying original since > it obviously hasn't been changed since the creation of the > snapshot (otherwise the original would have been present in the > overflow area). In the worst case the overflow area must be as > big as the original, but in typical cases it needs only be > 10% of the size of the original. After system reboot, the > snapshot copy is gone. This is basically how the current LVM snapshot mechanism works. The overflow blocks are configurable. The snapshots are gone after a reboot. > > I would guess that such a volatile snapshot facility could be > made into a generic feature available for every block device! Yes, it's called LVM @) > 5) So, ideally we would need an "application quiesce" in which > we can instruct the application to update its on-disk image > by making all necessary changes to its disk data (flush()ing) > and informing the operating system of its quiesced state, > upon which the OS could make the snapshot and free the > application to make changes again. Unix just does not support > this particular model of application/OS interaction. And, There is an easy workaround: make the snapshots writeable and allow an application ``fsck'' or log replay in the snapshot (if it can't handle that it is not crash safe anyways) -Andi ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-08 8:34 ` Andi Kleen @ 2000-06-08 11:49 ` Paul Jakma 2000-06-08 12:34 ` Jos Visser 0 siblings, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-08 11:49 UTC (permalink / raw) To: Andi Kleen; +Cc: Jos Visser, Paul Jakma, Michael Marxmeier, jan, linux-lvm On Thu, 8 Jun 2000, Andi Kleen wrote: > consider a database that uses user space journaling using fsync: > to make its disk files consistent after the snapshot it requires > both the log write and the data write. When one is missing the > log needs to be replayed, which requires writes. but doesn't the call to block_fsync that Heinz confirmed exists cover this? nothing can cover the case where app data consistency depends on a future write(). But that's the app's problem, and anyway a good database should be consistent/recover itself if it's died between write(?)'s. right? (in which case lvm snapshot is perfectly suitable for backing up databases..) --paulj ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-08 11:49 ` Paul Jakma @ 2000-06-08 12:34 ` Jos Visser 2000-06-09 6:59 ` Heinz J. Mauelshagen 0 siblings, 1 reply; 17+ messages in thread From: Jos Visser @ 2000-06-08 12:34 UTC (permalink / raw) To: Paul Jakma; +Cc: Andi Kleen, Paul Jakma, Michael Marxmeier, jan, linux-lvm And thus it came to pass that Paul Jakma wrote: (on Thu, Jun 08, 2000 at 12:49:53PM +0100 to be exact) > On Thu, 8 Jun 2000, Andi Kleen wrote: > > > consider a database that uses user space journaling using fsync: > > to make its disk files consistent after the snapshot it requires > > both the log write and the data write. When one is missing the > > log needs to be replayed, which requires writes. > > but doesn't the call to block_fsync that Heinz confirmed exists cover > this? > > nothing can cover the case where app data consistency depends on a future > write(). But that's the app's problem, and anyway a good database should > be consistent/recover itself if it's died between write(?)'s. right? > > (in which case lvm snapshot is perfectly suitable for backing up > databases..) Most databases want you to store their log and data spaces in different (logical) volumes. To create a consistent image you would want to be able to snapshot multiple logical volumes atomically in one operation. HP's LVM supports this through the "multiple atomic lvsplit" feature (as of HP-UX 10 if memory serves me right). ++Jos -- The InSANE quiz master is always right! (or was it the other way round? :-) ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-08 12:34 ` Jos Visser @ 2000-06-09 6:59 ` Heinz J. Mauelshagen 0 siblings, 0 replies; 17+ messages in thread From: Heinz J. Mauelshagen @ 2000-06-09 6:59 UTC (permalink / raw) To: Jos Visser; +Cc: linux-lvm On Thu, Jun 08, 2000 at 02:34:07PM +0200, Jos Visser wrote: > And thus it came to pass that Paul Jakma wrote: > (on Thu, Jun 08, 2000 at 12:49:53PM +0100 to be exact) > > > On Thu, 8 Jun 2000, Andi Kleen wrote: > > > > > consider a database that uses user space journaling using fsync: > > > to make its disk files consistent after the snapshot it requires > > > both the log write and the data write. When one is missing the > > > log needs to be replayed, which requires writes. > > > > but doesn't the call to block_fsync that Heinz confirmed exists cover > > this? > > > > nothing can cover the case where app data consistency depends on a future > > write(). But that's the app's problem, and anyway a good database should > > be consistent/recover itself if it's died between write(?)'s. right? > > > > (in which case lvm snapshot is perfectly suitable for backing up > > databases..) > > Most databases want you to store their log and data spaces in different > (logical) volumes. Exactly. That's why i'll implement atomic activation of multiple snapshots. The idea is: - prepare multiple _inactive_ snapshots - activate them all at once afterwards Heinz > To create a consistent image you would want to be > able to snapshot multiple logical volumes atomically in one operation. > HP's LVM supports this through the "multiple atomic lvsplit" > feature (as of HP-UX 10 if memory serves me right). > > ++Jos > > -- > The InSANE quiz master is always right! > (or was it the other way round? :-) -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Heinz Mauelshagen Sistina Software Inc. Senior Consultant/Developer Bartningstr. 
12 64289 Darmstadt Germany Mauelshgen@Sistina.com +49 6151 710386 FAX 710396 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2020-11-27 16:17 ` Michael Marxmeier 2000-06-08 0:47 ` Paul Jakma @ 2000-06-08 7:52 ` Heinz J. Mauelshagen 1 sibling, 0 replies; 17+ messages in thread From: Heinz J. Mauelshagen @ 2000-06-08 7:52 UTC (permalink / raw) To: Michael Marxmeier; +Cc: linux-lvm On Thu, Jun 08, 2000 at 02:22:14AM +0000, Michael Marxmeier wrote: > > > > On Wed, 7 Jun 2000, Jan Niehusmann wrote: > > > > > The snapshot logic simply doesn't know what a filesystem is. It just > > > clones the block device. > > > > this i knew.. :) > > > > > So the fs may be in an inconsistent state, if you do the snapshot > > > while some write operation is in progress on the cloned > > > filesystem. > > > > ok. so the solution is what? a call in the vfs to complete pending > > writes, and hold off on modifying any blocks? > > IMHO when creating a snapshot LVM could simply sync all outstanding > buffers for the block device via block_fsync() (not sure if this > is already done -- Heinz care to comment?). Yes, in case a new snapshot is created the buffers of the original LV are flushed using fsync_dev(). Please have a look at the end of lvm_do_lv_create() in lvm.c. > This should also take care of pending fs meta information thus > we should get a consistent state. > > The question is how atomic this is? Since fsync_dev() currently > does a lock_kernel() it might even be sufficient. It should do the job (see comment below). > > > Any reason why this is not sufficient? The only case could be any kind of "private" metadata a filesystem keeps outside the cache. Any comments on that? Heinz -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Heinz Mauelshagen Sistina Software Inc. Senior Consultant/Developer Bartningstr. 12 64289 Darmstadt Germany Mauelshgen@Sistina.com +49 6151 710386 FAX 710396 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- ^ permalink raw reply [flat|nested] 17+ messages in thread
* [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? @ 2000-06-06 9:51 Paul Jakma 2000-06-06 14:42 ` Andi Kleen 0 siblings, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-06 9:51 UTC (permalink / raw) To: Linux LVM list is 0.8final going to be backported to 2.2? I'd love to have the snapshot capabilities of 0.8final, but there's no way i'm going to run 2.3/2.4test on a production machine. regards, -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: Where it is a duty to worship the sun it is pretty sure to be a crime to examine the laws of heat. -- Christopher Morley ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 9:51 Paul Jakma @ 2000-06-06 14:42 ` Andi Kleen 2000-06-06 22:30 ` Paul Jakma 2000-06-12 13:20 ` Paul Jakma 0 siblings, 2 replies; 17+ messages in thread From: Andi Kleen @ 2000-06-06 14:42 UTC (permalink / raw) To: Paul Jakma; +Cc: Linux LVM list On Tue, Jun 06, 2000 at 10:51:54AM +0100, Paul Jakma wrote: > is 0.8final going to be backported to 2.2? I'd love to have the > snapshot capabilities of 0.8final, but there's no way i'm going to > run 2.3/2.4test on a production machine. The SuSE 6.4 kernel contains a back port. You can download a source rpm from ftp.suse.com Note that the snapshots may not be as useful as you may think though - they are readonly and it is therefore impossible to replay a journal or run a fsck on them. This means you effectively can only take snapshots from unmounted file systems. -Andi ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 14:42 ` Andi Kleen @ 2000-06-06 22:30 ` Paul Jakma 2000-06-06 22:46 ` Andi Kleen 2000-06-12 13:20 ` Paul Jakma 1 sibling, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-06 22:30 UTC (permalink / raw) To: Andi Kleen; +Cc: Linux LVM list On Tue, 6 Jun 2000, Andi Kleen wrote: > The SuSE 6.4 kernel contains a back port. You can download a source > rpm from ftp.suse.com > uhmm... but an officially sanctioned patch blessed by Heinz would be nicer.. :) > Note that the snapshots may not be as useful as you may think > though - they are readonly they have to be by their very nature... :) > and it is therefore impossible to > replay a journal or run a fsck on them. why would you want to? snapshots main application is for providing a 'snapshot' of a point in time so you can make a consistent backup of the data, esp for large/frequently modified databases. which is precisely what i want snapshots for. > This means you effectively can only take > snapshots from unmounted file systems. > eh? but a snapshot is not ever meant to be used directly as an alternate filesystem, so there's no point running fsck on it.. ?? > -Andi > > -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: The Wright Bothers weren't the first to fly. They were just the first not to crash. ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 22:30 ` Paul Jakma @ 2000-06-06 22:46 ` Andi Kleen 2000-06-07 0:03 ` Paul Jakma 2000-06-07 0:09 ` Paul Jakma 0 siblings, 2 replies; 17+ messages in thread From: Andi Kleen @ 2000-06-06 22:46 UTC (permalink / raw) To: Paul Jakma; +Cc: Andi Kleen, Linux LVM list On Tue, Jun 06, 2000 at 11:30:48PM +0100, Paul Jakma wrote: > > This means you effectively can only take > > snapshots from unmounted file systems. > > > > eh? but a snapshot is not ever meant to be used directly as an > alternate filesystem, so there's no point running fsck on it.. ?? You cannot safely access an unclean file system without replaying the log (when it is journaled) or running fsck (when it is not journaled). The file system meta data may not be consistent: files may contain bogus blocks, directories may point to nowhere, inodes may be outdated etc. In short, without write access you cannot safely read it, except when it was cleanly unmounted. -Andi ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 22:46 ` Andi Kleen @ 2000-06-07 0:03 ` Paul Jakma 2000-06-07 0:09 ` Paul Jakma 1 sibling, 0 replies; 17+ messages in thread From: Paul Jakma @ 2000-06-07 0:03 UTC (permalink / raw) To: Andi Kleen; +Cc: Linux LVM list On Wed, 7 Jun 2000, Andi Kleen wrote: > You cannot safely access an unclean file system without replaying the log > (when it is journaled) or running fsck (when it is not journaled). The > file system meta data may not be consistent: but why would the snapshot capture the fs in an inconsistent state? > files may contain bogus > blocks, directories may point to nowhere, inodes may be outdated etc. only if the fs is inconsistent.. but why should the fs be inconsistent when making the snapshot? aiui the snapshot works within the lvm block layer. the fs works on the block layer. for the fs to be inconsistent there must be a disparity between the fs and the block layer at snapshot time, right? but surely the snapshot would do some kind of atomic sync at the point of snapshot creation? (what am I missing?) > In short, without write access you cannot safely read it, except > when it was cleanly unmounted. > would mount -o remount,sync ; lvcreate -s ... ; mount -o remount,async work? > > -Andi > > -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: Democracy is the recurrent suspicion that more than half of the people are right more than half of the time. -- E. B. White ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 22:46 ` Andi Kleen 2000-06-07 0:03 ` Paul Jakma @ 2000-06-07 0:09 ` Paul Jakma 1 sibling, 0 replies; 17+ messages in thread From: Paul Jakma @ 2000-06-07 0:09 UTC (permalink / raw) To: Andi Kleen; +Cc: Linux LVM list incidentally, snapshots /seem/ to work fine here on live fs's with 2.4.0-test1-ac7. On Wed, 7 Jun 2000, Andi Kleen wrote: > On Tue, Jun 06, 2000 at 11:30:48PM +0100, Paul Jakma wrote: > > > This means you effectively can only take > > > snapshots from unmounted file systems. > > > > > > > eh? but a snapshot is not ever meant to be used directly as an > > alternate filesystem, so there's no point running fsck on it.. ?? > > You cannot safely access an unclean file system without replaying the log > (when it is journaled) or running fsck (when it is not journaled). The > file system meta data may not be consistent: files may contain bogus > blocks, directories may point to nowhere, inodes may be outdated etc. > In short, without write access you cannot safely read it, except > when it was cleanly unmounted. > > > -Andi > > -- Paul Jakma paul@clubi.ie PGP5 key: http://www.clubi.ie/jakma/publickey.txt ------------------------------------------- Fortune: The trouble with the rat-race is that even if you win, you're still a rat. -- Lily Tomlin ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-06 14:42 ` Andi Kleen 2000-06-06 22:30 ` Paul Jakma @ 2000-06-12 13:20 ` Paul Jakma 2000-06-12 19:53 ` Jay Weber 1 sibling, 1 reply; 17+ messages in thread From: Paul Jakma @ 2000-06-12 13:20 UTC (permalink / raw) To: Andi Kleen; +Cc: Paul Jakma, Linux LVM list On Tue, 6 Jun 2000, Andi Kleen wrote: > The SuSE 6.4 kernel contains a back port. You can download a source > rpm from ftp.suse.com you wouldn't have a specific URL would you? --paulj ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? 2000-06-12 13:20 ` Paul Jakma @ 2000-06-12 19:53 ` Jay Weber 0 siblings, 0 replies; 17+ messages in thread From: Jay Weber @ 2000-06-12 19:53 UTC (permalink / raw) To: linux-lvm You can get the patch from kernel.org /pub/linux/kernel/people/andrea/lvm as well, that's the patch I was told SuSE is using. Also, the kernel SRPM isn't available on the SuSE ftp site last I checked, but I found a kernel-patches.tar.gz in the 6.4 updates directory, which includes the patches above. :) I've noted that the loopback device doesn't seem to work with the pvcreate tool in the lvm-utils in andrea's backport or from the actual tarball from linux.msede.com. It appears they changed the disk scanning procedure and somewhere in that change the loop device doesn't show up in /proc/partitions, hence never gets seen. I believe I tried loop under 2.4-test1 as well (which has lvm included in the kernel) and it doesn't work there either? Any thoughts? On Mon, 12 Jun 2000, Paul Jakma wrote: > On Tue, 6 Jun 2000, Andi Kleen wrote: > > > > The SuSE 6.4 kernel contains a back port. You can download a source > > rpm from ftp.suse.com > > you wouldn't have a specific URL would you? > > --paulj > ^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2020-11-27 16:17 UTC | newest] Thread overview: 17+ messages -- [not found] <20000607100145.A6300@gondor.com> 2000-06-07 23:18 ` [linux-lvm] LVM 0.8final for 2.2.15/2.2.16? Paul Jakma 2020-11-27 16:17 ` Michael Marxmeier 2000-06-08 0:47 ` Paul Jakma 2000-06-08 8:23 ` Jos Visser 2000-06-08 8:34 ` Andi Kleen 2000-06-08 11:49 ` Paul Jakma 2000-06-08 12:34 ` Jos Visser 2000-06-09 6:59 ` Heinz J. Mauelshagen 2000-06-08 7:52 ` Heinz J. Mauelshagen 2000-06-06 9:51 Paul Jakma 2000-06-06 14:42 ` Andi Kleen 2000-06-06 22:30 ` Paul Jakma 2000-06-06 22:46 ` Andi Kleen 2000-06-07 0:03 ` Paul Jakma 2000-06-07 0:09 ` Paul Jakma 2000-06-12 13:20 ` Paul Jakma 2000-06-12 19:53 ` Jay Weber