* xfsrestore'ing from file backups don't restore...why not?
From: L A Walsh @ 2021-10-25 21:30 UTC (permalink / raw)
  To: linux-xfs

I'm trying to do a cumulative restore of a directory
from a multi-file backup with these names:
-rw-rw-r-- 1 1578485336160 Oct  1 06:51 home-211001-0-0437.dump
-rw-rw-r-- 1  262411348256 Oct 23 04:53 home-211023-1-0431.dump
-rw-rw-r-- 1    1881207032 Oct 25 04:31 home-211025-2-0430.dump
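
For reference, a cumulative restore of a multi-level series like this is
normally applied one dump at a time with "xfsrestore -r" into the same
target directory; a minimal sketch, assuming the three files above and a
hypothetical target /mnt/restore (not my exact invocation):

    xfsrestore -r -f home-211001-0-0437.dump /mnt/restore   # level 0 first
    xfsrestore -r -f home-211023-1-0431.dump /mnt/restore   # then level 1
    xfsrestore -r -f home-211025-2-0430.dump /mnt/restore   # then level 2
    # only after the final pass: remove the housekeeping directory that
    # xfsrestore -r leaves in the target (xfsrestorehousekeepingdir)
    rm -rf /mnt/restore/xfsrestorehousekeepingdir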



At first I tried "-i" but once I got a prompt, no files or directories
were visible.

So I tried restoring the full thing:

I'm getting thousands of messages like the following, where it doesn't
seem to be able to restore the file and instead places it in the orphanage:

xfsrestore: NOTE: ino 1879669758 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/coherent_tail_mean.html
xfsrestore: NOTE: ino 1879669759 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/count.html
xfsrestore: NOTE: ino 1879669760 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/covariance.html
xfsrestore: NOTE: ino 1879669761 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/density.html
xfsrestore: NOTE: ino 1879669762 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/extended_p_square.html


-----

I've seen this before and was able to restore the file I wanted from
the "orphanage" -- instead of installing files into the directory
I gave, it prints the above messages.

xfsdump and xfsrestore claim to be from xfsdump-3.1.8-1.3.x86_64
from openSUSE, with build date Tue Jan 29 16:46:13 2019.

I have also tried this on an empty partition -- same thing.
Why am I not able to restore my backups?








* Re: xfsrestore'ing from file backups don't restore...why not?
From: Dave Chinner @ 2021-10-26  0:48 UTC (permalink / raw)
  To: L A Walsh; +Cc: linux-xfs

On Mon, Oct 25, 2021 at 02:30:08PM -0700, L A Walsh wrote:
> I'm trying to do a cumulative restore a directory from a multi-file backup
> w/names:
> -rw-rw-r-- 1 1578485336160 Oct  1 06:51 home-211001-0-0437.dump
> -rw-rw-r-- 1  262411348256 Oct 23 04:53 home-211023-1-0431.dump
> -rw-rw-r-- 1    1881207032 Oct 25 04:31 home-211025-2-0430.dump
> 

Have you ever successfully restored a directory from a multi-file
backup?

Note that restore errors are often caused by something going wrong
during the dump and it not being noticed until restore is run and
the error found. And at that point, there's nothing that can be done
to "fix" the dump image so it can be restored.

What were the xfsdump commands that created these dump files?

Did you take the dumps from a frozen filesystem or a read-only
snapshot of the filesystem, or just take it straight from a running
system?

What happens if you try to restore one dump at a time? i.e. is the
problem in the level 0 dump, or in one of the incrementals that are
based on the level 0 dump?
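
As a sanity check, each image can also be examined on its own with
xfsrestore's contents-only mode before attempting a real restore; a
sketch, using the filenames from the original mail:

    # -t lists what a dump image contains without restoring anything
    xfsrestore -t -f home-211001-0-0437.dump | head
    xfsrestore -t -f home-211023-1-0431.dump | head
    xfsrestore -t -f home-211025-2-0430.dump | head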

> So I tried restoring the full thing:
> 
> I'm getting 1000's of messages like where it doesn't seem to be able
> to restore the file and instead places it in the orphanage:
> 
> xfsrestore: NOTE: ino 1879669758 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/coherent_tail_mean.html
> xfsrestore: NOTE: ino 1879669759 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/count.html
> xfsrestore: NOTE: ino 1879669760 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/covariance.html
> xfsrestore: NOTE: ino 1879669761 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/density.html
> xfsrestore: NOTE: ino 1879669762 salvaging file, placing in orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/extended_p_square.html

IIUC, this means an ancestor directory in the path doesn't exist in
the inventory and so the path for restore cannot be resolved
correctly.  Hence the inode gets placed in the orphanage under the
path name that is stored with the inode.

I /think/ this error implies that the backups (dumps) were taken from
an active filesystem, i.e. between the time the dump was started
and when the inventory was finally updated, the directory structure
had changed, so the dump is internally inconsistent. Hence some of
the files that were recorded in the dump image were removed before
the inventory was updated, so there are no directories present in
the inventory that reference those files, and they get salvaged
into the orphanage for admin cleanup.

It would be interesting to know what part of the above path is
actually missing from the dump inventory, because that might help
explain what went/is going wrong...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfsrestore'ing from file backups don't restore...why not?
From: L A Walsh @ 2021-10-29 19:24 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

On 2021/10/28 19:34, L A Walsh wrote:
>
>
> On 2021/10/25 17:48, Dave Chinner wrote:
>> On Mon, Oct 25, 2021 at 02:30:08PM -0700, L A Walsh wrote:
>> > I'm trying to do a cumulative restore a directory from a multi-file backup
>> > w/names:
>> > -rw-rw-r-- 1 1578485336160 Oct  1 06:51 home-211001-0-0437.dump
>> > -rw-rw-r-- 1  262411348256 Oct 23 04:53 home-211023-1-0431.dump
>> > -rw-rw-r-- 1    1881207032 Oct 25 04:31 home-211025-2-0430.dump
>> >
>>
>> Have you ever successfully restored a directory from a multi-file
>> backup?
> ---
> many times.  I thought back to when I first noticed this problem: when
> I replaced my disks and had to set up new containers.
> All of the backed-up "devices" (meta LVM partitions) needed
> a new level-0 dump then.
>
> Before that, never a problem; after that, I've only attempted a restore
> about two times -- both times, I got the message about an ino
> being placed in the orphanage.
>
> The first time this happened, my /home was restored under
> orphanage/<256.0>, i.e. the complete /home tree started at:
> /home/orphanage/<256.0>/home
>
> This time, nothing at all appears under /home/orphanage/<256.0>,
> and in the level-0 dump of the home backup nothing appears
> when I try 'ls' in xfsrestore (interactively) at
> the root of the backup.
>>
>> Note that restore errors are often caused by something going wrong
>> during the dump and it not being noticed until restore is run and
>> the error found. And at that point, there's nothing that can be done
>> to "fix" the dump image so it can be restored.
>
>>
>> What was the xfs_dump commands that created these dump files?
The scripts that create the dumps date back to 2008, with the last
revisions in 2013, so their style makes it hard to conveniently
provide a listing of the parameters.

I decided the scripts need a bit of modernizing and refactoring
to allow easier additions (like echoing the command being
executed...etc.).
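
Something along these lines is what I have in mind -- an illustrative
bash fragment, not the actual script:

    #!/bin/bash
    # minimal sketch: log the exact command line to stderr before running it
    run() { echo "+ $*" >&2; "$@"; }

    # naming scheme guessed from the dump filenames earlier in the thread
    outfile="/backups/ishtar/home/home-$(date +%y%m%d)-0-$(date +%H%M).dump"
    run xfsdump -b 268435456 -l 0 -L home -e - /home > "$outfile"
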
>>
>>
>> Did you take the dumps from a frozen filesystem or a read-only
>> snapshot of the filesystem, or just take it straight from a running 
>> system?
----
    Does XFS support creating arbitrary read-only snapshots?
In the past 20+ years of running these backups I haven't ever used
a frozen filesystem or snapshot -- it's never been that important.
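
For what it's worth, since /home is an LVM volume, a read-only LVM
snapshot could be dumped instead of the live filesystem (XFS doesn't
create snapshots itself).  A sketch, untested, with guessed snapshot
name, size and mount point:

    lvcreate -s -n home_snap -L 16G /dev/Space/Home2
    mkdir -p /mnt/home_snap
    # nouuid: the snapshot carries the same filesystem UUID as /home
    mount -o ro,nouuid /dev/Space/home_snap /mnt/home_snap
    xfsdump -l 0 -L home - /mnt/home_snap > /backups/home-snap-0.dump
    umount /mnt/home_snap
    lvremove -f /dev/Space/home_snap

(One caveat: xfsdump would then record the snapshot's mount point rather
than /home.)
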
>>
>> What happens if you try to restore one dump at a time? i.e. is the
>> problem in the level 0 dump, or in one of the incrementals that are
>> based on the level 0 dump?
----
    Both, with the most problems in the level-0 dump.
>>
>> > xfsrestore: NOTE: ino 1879669762 salvaging file, placing in 
>> orphanage/256.0/tools/libboost/boost_1_64_0/doc/html/boost/accumulators/extract/extended_p_square.html 
>>
>>
>> IIUC, this means an ancestor directory in the path doesn't exist in 
>> the inventory and so the path for restore cannot be resolved
>> correctly.  Hence the inode gets placed in the orphanage under the
>> path name that is stored with the inode.
>>
"/home" has an ancestor dir of "/" and "home".  When trying to
restore /home interactively, it showed no files in the root
directory.
>>
>> I /think/ this error implies that the backups (dumps) were taken from 
>> an active filesystem.
This part is most definitely true, with the default backups being run
at 4:30am, when the system wasn't likely to be in active use.
>> i.e between the time the dump was started
>> and when the inventory was finally updated, the directory structure 
>> had changed and so the dump is internally inconsistent.
----
    I don't think this is possible.  The backup is of the contents of
/home, i.e. only '/' and '/home' could be deleted/missing,
neither of which is likely.


>>
>> It would be interesting to know what part of the above path is
>> actually missing from the dump inventory, because that might help
>> explain what went/is going wrong...
---
    Well, at the very least I am going to rewrite/refactor these
scripts to get some more answers.



* Re: xfsrestore'ing from file backups don't restore...why not?
From: L A Walsh @ 2021-10-31 21:28 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

When looking at a dump of /home (but not, as far as I've noticed, of
other partitions), I see the following (stopping the listing at the
problem line):

XFSDUMP_OPTIONS=-J  # (set externally, not usually)

>./dump1fs#160(Xfsdump)> xfsdump -b 268435456 -l 8 -L home -J - /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.8 (dump format 3.0)
xfsdump: level 8 incremental dump of Ishtar:/home based on level 6 dump begun Fri Oct 29 04:30:13 2021
xfsdump: dump date: Sun Oct 31 14:20:37 2021
xfsdump: session id: 249233a0-a642-42a0-ae02-ed53012f3fa4
xfsdump: session label: "home"
xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?

Of note, most things were placed in the orphanage under
256.0.

df shows:
df /home
Filesystem        Size  Used Avail Use% Mounted on
/dev/Space/Home2  2.0T  1.5T  570G  73% /home

(This became /home months ago, when I made a new partition of 2T to
replace the old partition of 1.5T, after which I did another
level-0 backup.)





* cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: L A Walsh @ 2021-11-01 19:39 UTC (permalink / raw)
  To: linux-xfs; +Cc: Dave Chinner


When I xfsdump my /home partition, I see the diagnostic in the subject
line, where it suggests a "bind mount?" might be involved, but as far as
I can see, that's not the case.

grepping for '/home\s' on output of mount:

/bin/mount|grep -P '/home\s'

shows only 1 entry -- nothing mounted on top of it:

/dev/mapper/Space-Home2 on /home type xfs (...)

I have bind-mounts of things like
/home/opt  on /opt, but that shouldn't affect the root node,
as far as I know.

So what would cause the root inode to differ from the mount dir
ino?

I try mounting the same filesystem someplace new:

# df .
Filesystem        Size  Used Avail Use% Mounted on
/dev/Space/Home2  2.0T  1.5T  569G  73% /home

mkdir /home2
Ishtar:home# mount /dev/Space/Home2 /home2

Ishtar:home# ll -di /home /home2
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/

This shows 256 as the root inode.  So why is xfsdump claiming
192 is the root inode?

I used xfs_db and 192 is allocated to a normal file, while
256 displays nothing for the filename.

How should I further debug this?
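
For reference, the sort of read-only xfs_db queries involved (exact
commands are illustrative), run against the device shown above:

    # what the superblock records as the root inode
    xfs_db -r -c 'sb 0' -c 'print rootino' /dev/mapper/Space-Home2

    # what inode 192 actually is (its core mode field)
    xfs_db -r -c 'inode 192' -c 'print core.mode' /dev/mapper/Space-Home2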






* cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: L A Walsh @ 2021-11-01 20:23 UTC (permalink / raw)
  To: linux-xfs; +Cc: Dave Chinner


Addendum to the below: get_blocks showed no error messages.


When I xfsdump my /home partition, I see the diagnostic in the subject
line, where it suggests a "bind mount?" might be involved, but as far as
I can see, that's not the case.

grepping for '/home\s' on output of mount:

/bin/mount|grep -P '/home\s'

shows only 1 entry -- nothing mounted on top of it:

/dev/mapper/Space-Home2 on /home type xfs (...)

I have bind-mounts of things like
/home/opt  on /opt, but that shouldn't affect the root node,
as far as I know.

So what would cause the root inode to differ from the mount dir
ino?

I try mounting the same filesystem someplace new:

# df .
Filesystem        Size  Used Avail Use% Mounted on
/dev/Space/Home2  2.0T  1.5T  569G  73% /home
mkdir /home2
Ishtar:home# mount /dev/Space/Home2 /home2
Ishtar:home# ll -di /home /home2
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/

This shows 256 as the root inode.  So why is xfsdump claiming
192 is the root inode?

I used xfs_db and 192 is allocated to a normal file, while
256 displays nothing for the filename.

How should I further debug this?






* Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
  2021-11-01 20:23     ` cause of xfsdump msg: root ino 192 differs from mount dir ino 256 L A Walsh
@ 2021-11-01 21:12       ` Dave Chinner
  2021-11-02  1:35         ` L A Walsh
                           ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Dave Chinner @ 2021-11-01 21:12 UTC (permalink / raw)
  To: L A Walsh; +Cc: linux-xfs

On Mon, Nov 01, 2021 at 01:23:48PM -0700, L A Walsh wrote:
> 
> Addendum to the below: get_blocks showed no error messages.
> 
> 
> When I xfsdump my /home partition, I see the above diagnostic
> where it  lists "bind mount?" might be involved, but as far as
> I can see, that's not the case.

Can you attach the full output of the xfsdump and xfsrestore
commands?
> 
> grepping for '/home\s' on output of mount:
> 
> /bin/mount|grep -P '/home\s'
> 
> shows only 1 entry -- nothing mounted on top of it:
> 
> /dev/mapper/Space-Home2 on /home type xfs (...)
> 
> I have bind-mounts of things like
> /home/opt  on /opt, but that shouldn't affect the root node,
> as far as I know.
> 
> So what would cause the root node to differ from the mountdir
> ino?
> 
> I try mounting the same filesystem someplace new:
> 
> # df .
> Filesystem        Size  Used Avail Use% Mounted on
> /dev/Space/Home2  2.0T  1.5T  569G  73% /home
> mkdir /home2
> Ishtar:home# mount /dev/Space/Home2 /home2
> Ishtar:home# ll -di /home /home2
> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/
> 
> Shows 256 as the root inode.  So why is xfsdump claiming
> 192 is root inode?

IIRC, it's because xfsdump thinks that the first inode in the
filesystem is the root inode. Which is not always true - there are
corner cases to do with stripe alignment, btree roots relocating and
now sparse inodes that can result in new inodes being allocated at a
lower number than the root inode.

Indeed, the "bind mount?" message is an indication that xfsdump
found that the first inode was not the same as the root inode, and
so that's likely what has happened here.

Now that I think about this, ISTR the above "inodes before root
inode" situation being reported at some point in the past. Yeah:

https://lore.kernel.org/linux-xfs/f66f26f7-5e29-80fc-206c-9a53cf4640fa@redhat.com/

Eric, can you remember what came of those patches?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: L A Walsh @ 2021-11-02  1:35 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs



On 2021/11/01 14:12, Dave Chinner wrote: 
> Can you attach the full output for the xfs_dump and xfsrestore
> commands 
---
I can, as soon as I run ones whose output I can capture.

I can restore the backup taken this morning (a level 0) to
an alternate partition -- it is doing that now and generating
the same messages about files being stored in the orphanage
as we "speak"; it will take a while to xfsrestore 1.4T.

At the same time, I'm generating a new level-0 backup (as was
also done this morning), resulting in a 1574649321568-byte
(~1.4T) output file.

So far, the process doing the xfsdump shows:
 xfsdump -b 268435456 -l 0 -L home -e - /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.8 (dump format 3.0)
xfsdump: level 0 dump of Ishtar:/home
xfsdump: dump date: Mon Nov  1 18:15:07 2021
xfsdump: session id: 8f996280-21df-42c5-b0a0-3f1584ae1f54
xfsdump: session label: "home"
xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1587242183552 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories

I'm using a 256M block size, buffered via mbuffer
using 5 buffers of the same size (256M), to the output file.

xfsrestore uses a normal file read...hmm...I wonder
if a direct read might be faster, like using 'dd' to perform
an unbuffered read and pipe the writes to xfsrestore...  maybe something
for future exploration...
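
Concretely, the two pipelines would look something like this (paths
taken from elsewhere in this thread; the dd-fed restore is untested):

    # dump side: xfsdump writes to stdout ('-'), mbuffer buffers the file write
    xfsdump -b 268435456 -l 0 -L home -e - /home |
        mbuffer -s 256M -b 5 -o /backups/ishtar/home/home-211101-0-0737.dump

    # restore side: O_DIRECT reads via dd, xfsrestore reading from stdin
    dd if=/backups/ishtar/home/home-211101-0-0737.dump bs=256M iflag=direct |
        xfsrestore -f - /nhome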



>> grepping for '/home\s' on output of mount:
>>
>> /bin/mount|grep -P '/home\s'
>>
>> shows only 1 entry -- nothing mounted on top of it:
>>
>> /dev/mapper/Space-Home2 on /home type xfs (...)
>>
>> I have bind-mounts of things like
>> /home/opt  on /opt, but that shouldn't affect the root node,
>> as far as I know.
>>
>> So what would cause the root node to differ from the mountdir
>> ino?
>>
>> I try mounting the same filesystem someplace new:
>>
>> # df .
>> Filesystem        Size  Used Avail Use% Mounted on
>> /dev/Space/Home2  2.0T  1.5T  569G  73% /home
>> mkdir /home2
>> Ishtar:home# mount /dev/Space/Home2 /home2
>> Ishtar:home# ll -di /home /home2
>> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
>> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/
>>
>> Shows 256 as the root inode.  So why is xfsdump claiming
>> 192 is root inode?
> 
> IIRC, it's because xfsdump thinks that the first inode in the
> filesystem is the root inode. Which is not always true - there are
> corner cases to do with stripe alignment, btree roots relocating and
> now sparse inodes that can result in new inodes being allocated at a
> lower number than the root inode.
> 
> Indeed, the "bind mount?" message is an indication that xfsdump
> found that the first inode was not the same as the root inode, and
> so that's likely what has happened here.
> 
> Now that I think about this, ISTR the above "inodes before root
> inode" situation being reported at some point in the past. Yeah:
> 
> https://lore.kernel.org/linux-xfs/f66f26f7-5e29-80fc-206c-9a53cf4640fa@redhat.com/
> 
> Eric, can you remember what came of those patches?
> 
> Cheers,
> 
> Dave.


* Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: L A Walsh @ 2021-11-02  1:47 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs



On 2021/11/01 14:12, Dave Chinner wrote:

> Can you attach the full output for the xfs_dump and xfsrestore
> commands 
---
The full output for the dump shouldn't be long, but the xfsrestore is
about 20% done and its log is already a 240MB file because,
essentially, every file on the filesystem is listed
in the output in the form:
xfsrestore: NOTE: ino 2268735942 salvaging file, placing in orphanage/256.0/root+usr+var_copies/20190301/root/usr/lib/perl5/vendor_perl/5.18/WWW/Mechanize/Examples.pod

If the ratios hold, that's looking to be over 1G of output
(uncompressed, but still...).

Um, are you sure about wanting that?  If so, where?



* Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: L A Walsh @ 2021-11-02  4:45 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

The restore finished; the beginning of its output is:
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.8 (dump format 3.0)
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: Ishtar
xfsrestore: mount point: /home
xfsrestore: volume: /dev/Space/Home2
xfsrestore: session time: Mon Nov  1 07:37:47 2021
xfsrestore: level: 0
xfsrestore: session label: "home"
xfsrestore: media label: ""
xfsrestore: file system id: 5f41265a-3114-fb3c-2020-082214061852
xfsrestore: session id: 586026b8-5947-4b95-a213-1532ba25f503
xfsrestore: media id: 5fb4cd58-5cc9-4678-9829-a6539588a170
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: status at 18:21:14: 1289405/1338497 directories reconstructed, 96.3% complete, 13840475 directory entries processed, 60 seconds elapsed
xfsrestore: 1338497 directories and 14357961 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: NOTE: ino 259 salvaging file, placing in orphanage/256.0/root+usr+var_copies/20210316/usr/lib/mono/gac/System.Reactive.Runtime.Remoting/2.2.0.0__31bf3856ad364e35/System.Reactive.Runtime.Remoting.dll
...
There are a bunch of lines like that; 'wc' on the log file shows:

> wc /tmp/xfsrestore.log 
  5320822  50100130 821050625 /tmp/xfsrestore.log

Then the end of the file looks like:

xfsrestore: NOTE: ino 8485912415 salvaging file, placing in orphanage/256.0/tools/samba/samba-4.14.2/third_party/resolv_wrapper/wscript
xfsrestore: WARNING: unable to rmdir /nhome/./orphanage: Directory not empty
xfsrestore: restore complete: 7643 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /backups/ishtar/home/home-211101-0-0737.dump OK (success)
xfsrestore: Restore Status: SUCCESS

The lines in between the beginning and end appear to be
an incrementing inode & file list of the disk as it was
put into the orphanage.

The restored file system appears to be slightly larger, but
that's likely because I cleared off some garbage from the
current home.

Ah, the xfsdump just finished:

>/root/bin/dump1fs#160(Xfsdump)> xfsdump -b 268435456 -l 0 -L home -e - /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.8 (dump format 3.0)
xfsdump: level 0 dump of Ishtar:/home
xfsdump: dump date: Mon Nov  1 18:15:07 2021
xfsdump: session id: 8f996280-21df-42c5-b0a0-3f1584ae1f54
xfsdump: session label: "home"
xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1587242183552 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 1577602668640 bytes
xfsdump: dump size (non-dir files) : 1574177966864 bytes
xfsdump: dump complete: 12536 seconds elapsed
xfsdump: Dump Status: SUCCESS


Except for the 5.3 million lines between the start and end, the xfsrestore output is above.

I can't imagine why you'd want the 5.3 million lines of
file listings, but if you do, I'll need to upload it somewhere.






* Re: xfsrestore'ing from file backups don't restore...why not?
From: Eric Sandeen @ 2021-11-02 14:24 UTC (permalink / raw)
  To: L A Walsh, Dave Chinner; +Cc: linux-xfs


On 10/31/21 4:28 PM, L A Walsh wrote:
> When looking at a dump of /home (but not on other partitions that I've noticed, I see (stopping listing on problem line)
> 
> XFSDUMP_OPTIONS=-J #(set externally , not usually)
> 
>> ./dump1fs#160(Xfsdump)> xfsdump -b 268435456 -l 8 -L home -J - /home
> xfsdump: using file dump (drive_simple) strategy
> xfsdump: version 3.1.8 (dump format 3.0)
> xfsdump: level 8 incremental dump of Ishtar:/home based on level 6 dump begun Fri Oct 29 04:30:13 2021
> xfsdump: dump date: Sun Oct 31 14:20:37 2021
> xfsdump: session id: 249233a0-a642-42a0-ae02-ed53012f3fa4
> xfsdump: session label: "home"
> xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?


Linda, if I gave you a build of xfsdump with a workaround for this problem,
would you be willing to test it?

Thanks,
-Eric

> Of note, most things were placed in orphanage under
> 256.0
> 
> df shows:
> df /home
> Filesystem        Size  Used Avail Use% Mounted on
> /dev/Space/Home2  2.0T  1.5T  570G  73% /home
> 
> (Became months ago as I made new partition of 2T to replace
> old partition of 1.5T, after which I did another level-0 backup.
> 
> 
> 


* Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
From: Eric Sandeen @ 2021-11-02 14:29 UTC (permalink / raw)
  To: L A Walsh, linux-xfs; +Cc: Dave Chinner

On 11/1/21 2:39 PM, L A Walsh wrote:
> 
> When I xfsdump my /home partition, I see the above diagnostic
> where it  lists "bind mount?" might be involved, but as far as
> I can see, that's not the case.
> 
> grepping for '/home\s' on output of mount:
> 
> /bin/mount|grep -P '/home\s'
> 
> shows only 1 entry -- nothing mounted on top of it:
> 
> /dev/mapper/Space-Home2 on /home type xfs (...)
> 
> I have bind-mounts of things like
> /home/opt  on /opt, but that shouldn't affect the root node,
> as far as I know.
> 
> So what would cause the root node to differ from the mountdir
> ino?
> 
> I try mounting the same filesystem someplace new:
> 
> # df .
> Filesystem        Size  Used Avail Use% Mounted on
> /dev/Space/Home2  2.0T  1.5T  569G  73% /home
> 
> mkdir /home2
> Ishtar:home# mount /dev/Space/Home2 /home2
> 
> Ishtar:home# ll -di /home /home2
> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
> 256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/
> 
> Shows 256 as the root inode.  So why is xfsdump claiming
> 192 is root inode?

Because of an error I made some time ago; we have the fix for it,
and possibly a workaround to recover dumps created with this bug in
place.  Ping me off-list and I can give you an xfsdump package to test
with (hopefully) a workaround for restore, ok?

Thanks,
-Eric

> I used xfs_db and 192 is allocated to a normal file, while
> 256 displays nothing for the filename.
> 
> How should I further debug this?
> 
> 
> 
> 

