* [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
       [not found] <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcas5p3.samsung.com>
@ 2020-06-29 11:02 ` Anupam Aggarwal
  2020-06-30 11:08   ` OGAWA Hirofumi
       [not found]   ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p6>
  0 siblings, 2 replies; 9+ messages in thread
From: Anupam Aggarwal @ 2020-06-29 11:02 UTC (permalink / raw)
  To: hirofumi; +Cc: linux-kernel, a.sahrawat, Anupam Aggarwal

The maximum directory size of a FAT filesystem is FAT_MAX_DIR_SIZE
(2097152 bytes). Due to corruption, the directory size calculated in
fat_calc_dir_size() can be greater than FAT_MAX_DIR_SIZE, i.e. it can be
in the GB range, so directory traversal can take a long time. For
example, when "ls -lR" is executed on a corrupted FAT-formatted USB
drive, fat_search_long() looks up a filename from position 0 to the end
of the corrupted directory size; multiple such lookups lead to a very
long directory traversal.

Add a sanity check for the directory size in fat_calc_dir_size() and
return -EIO, which prevents lookups in the corrupted directory.

Signed-off-by: Anupam Aggarwal <anupam.al@samsung.com>
Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com>
---
 fs/fat/inode.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/fat/inode.c b/fs/fat/inode.c
index a0cf99d..9b2e81e 100644
--- a/fs/fat/inode.c
+++ b/fs/fat/inode.c
@@ -490,6 +490,13 @@ static int fat_calc_dir_size(struct inode *inode)
 		return ret;
 	inode->i_size = (fclus + 1) << sbi->cluster_bits;
 
+	if (i_size_read(inode) > FAT_MAX_DIR_SIZE) {
+		fat_fs_error(inode->i_sb,
+			     "%s corrupted directory (invalid size %lld)\n",
+			     __func__, i_size_read(inode));
+		return -EIO;
+	}
+
 	return 0;
 }
 
-- 
1.9.1



* Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
  2020-06-29 11:02 ` [PATCH] fs: fat: add check for dir size in fat_calc_dir_size Anupam Aggarwal
@ 2020-06-30 11:08   ` OGAWA Hirofumi
       [not found]   ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p6>
  1 sibling, 0 replies; 9+ messages in thread
From: OGAWA Hirofumi @ 2020-06-30 11:08 UTC (permalink / raw)
  To: Anupam Aggarwal; +Cc: linux-kernel, a.sahrawat

Anupam Aggarwal <anupam.al@samsung.com> writes:

> The maximum directory size of a FAT filesystem is FAT_MAX_DIR_SIZE
> (2097152 bytes). Due to corruption, the directory size calculated in
> fat_calc_dir_size() can be greater than FAT_MAX_DIR_SIZE, i.e. it can be
> in the GB range, so directory traversal can take a long time. For
> example, when "ls -lR" is executed on a corrupted FAT-formatted USB
> drive, fat_search_long() looks up a filename from position 0 to the end
> of the corrupted directory size; multiple such lookups lead to a very
> long directory traversal.
>
> Add a sanity check for the directory size in fat_calc_dir_size() and
> return -EIO, which prevents lookups in the corrupted directory.
>
> Signed-off-by: Anupam Aggarwal <anupam.al@samsung.com>
> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com>

There are many implementations that don't follow the spec strictly. And
when I tested in the past, Windows also allowed reading a directory
beyond that limit. I can't recall, though, whether that was a real-world
case or just a test case.

So if there is no strong reason to apply the limit, I don't think it is
good to limit it. (BTW, the current code should already detect the
corruption of an infinite loop.)

Thanks.

> ---
>  fs/fat/inode.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/fat/inode.c b/fs/fat/inode.c
> index a0cf99d..9b2e81e 100644
> --- a/fs/fat/inode.c
> +++ b/fs/fat/inode.c
> @@ -490,6 +490,13 @@ static int fat_calc_dir_size(struct inode *inode)
>  		return ret;
>  	inode->i_size = (fclus + 1) << sbi->cluster_bits;
>  
> +	if (i_size_read(inode) > FAT_MAX_DIR_SIZE) {
> +		fat_fs_error(inode->i_sb,
> +			     "%s corrupted directory (invalid size %lld)\n",
> +			     __func__, i_size_read(inode));
> +		return -EIO;
> +	}
> +
>  	return 0;
>  }

-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>


* RE:(2) [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
       [not found]   ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p6>
@ 2020-06-30 12:33     ` AMIT SAHRAWAT
  2020-06-30 16:26       ` (2) " OGAWA Hirofumi
                         ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: AMIT SAHRAWAT @ 2020-06-30 12:33 UTC (permalink / raw)
  To: OGAWA Hirofumi, Anupam Aggarwal; +Cc: linux-kernel

There are many implementations that don't follow the spec strictly. And
when I tested in the past, Windows also allowed reading a directory
beyond that limit. I can't recall, though, whether that was a real-world
case or just a test case.
>> Thanks Ogawa, yes, there are many implementations going around with different variants.
But using a standard Linux kernel on our systems, with such a USB drive connected, is introducing issues (importantly because these drives are also used on Windows by users).
I am not sure if this is something new on the Windows side.
But extending the directory beyond the limit is certainly causing a regression in FAT usage on Linux.
It makes FAT-backed storage virtually unresponsive for minutes in these cases,
and importantly it keeps putting pressure on memory due to the growing number of buffer heads (already a known issue with the FAT fs).
 
So if there is no strong reason to apply the limit, I don't think it is
good to limit it.
>> The reason we are sharing this is the unresponsive behaviour observed with the FAT fs on our systems.
This is not a new issue; we have been observing it for quite some time (maybe around a year or more).
Finally, we got hold of a disk that makes it 100% reproducible.
We thought of applying this to mainline, as our FAT code is aligned with the mainline kernel.

(BTW, the current code should already detect the
corruption of an infinite loop.)
>>
No, no such error is reported on our side.
We had to trace to find the point where a simple 'ls -lR' gets stuck.

Thanks & Regards,
Amit Sahrawat
 
 


* Re: (2) [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
  2020-06-30 12:33     ` AMIT SAHRAWAT
@ 2020-06-30 16:26       ` OGAWA Hirofumi
       [not found]       ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p8>
       [not found]       ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p1>
  2 siblings, 0 replies; 9+ messages in thread
From: OGAWA Hirofumi @ 2020-06-30 16:26 UTC (permalink / raw)
  To: AMIT SAHRAWAT; +Cc: Anupam Aggarwal, linux-kernel

AMIT SAHRAWAT <a.sahrawat@samsung.com> writes:

> There are many implementations that don't follow the spec strictly. And
> when I tested in the past, Windows also allowed reading a directory
> beyond that limit. I can't recall, though, whether that was a real-world
> case or just a test case.
>>> Thanks Ogawa, yes, there are many implementations going around with different variants.
> But using a standard Linux kernel on our systems, with such a USB drive connected, is introducing issues (importantly because these drives are also used on Windows by users).
> I am not sure if this is something new on the Windows side.
> But extending the directory beyond the limit is certainly causing a regression in FAT usage on Linux.

Regression from what?

> It makes FAT-backed storage virtually unresponsive for minutes in these cases,
> and importantly it keeps putting pressure on memory due to the growing number of buffer heads (already a known issue with the FAT fs).

I'm confused. What actually happens? Now it looks like you are saying the
issue is a size extending beyond the limit, but previously you said corruption.

Are you saying "beyond that limit" is the fs corruption?

I.e. did you encounter real directory corruption, or are you trying to add
the limit because of slowness on a big directory?

> So if there is no strong reason to apply the limit, I don't think it is
> good to limit it.
>>> The reason we are sharing this is the unresponsive behaviour observed with the FAT fs on our systems.
> This is not a new issue; we have been observing it for quite some time (maybe around a year or more).
> Finally, we got hold of a disk that makes it 100% reproducible.
> We thought of applying this to mainline, as our FAT code is aligned with the mainline kernel.

So what was the root cause of the slowness on the big directory?

Thanks.
-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>


* RE: (2) [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
       [not found]       ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p8>
@ 2020-06-30 17:07         ` AMIT SAHRAWAT
  0 siblings, 0 replies; 9+ messages in thread
From: AMIT SAHRAWAT @ 2020-06-30 17:07 UTC (permalink / raw)
  To: OGAWA Hirofumi; +Cc: Anupam Aggarwal, linux-kernel

 
> There are many implementations that don't follow the spec strictly. And
> when I tested in the past, Windows also allowed reading a directory
> beyond that limit. I can't recall, though, whether that was a real-world
> case or just a test case.
>>> Thanks Ogawa, yes, there are many implementations going around with different variants.
> But using a standard Linux kernel on our systems, with such a USB drive connected, is introducing issues (importantly because these drives are also used on Windows by users).
> I am not sure if this is something new on the Windows side.
> But extending the directory beyond the limit is certainly causing a regression in FAT usage on Linux.

Regression from what?

> It makes FAT-backed storage virtually unresponsive for minutes in these cases,
> and importantly it keeps putting pressure on memory due to the growing number of buffer heads (already a known issue with the FAT fs).

I'm confused. What actually happens? Now it looks like you are saying the
issue is a size extending beyond the limit, but previously you said corruption.

Are you saying "beyond that limit" is the fs corruption?

I.e. did you encounter real directory corruption, or are you trying to add
the limit because of slowness on a big directory?
>>> We will try to arrange the fsck/chkdsk output for the disk in question, to highlight the concerns.

> So if there is no strong reason to apply the limit, I don't think it is
> good to limit it.
>>> The reason we are sharing this is the unresponsive behaviour observed with the FAT fs on our systems.
> This is not a new issue; we have been observing it for quite some time (maybe around a year or more).
> Finally, we got hold of a disk that makes it 100% reproducible.
> We thought of applying this to mainline, as our FAT code is aligned with the mainline kernel.

So what was the root cause of the slowness on the big directory?
>>> The root cause was the continuous FAT chain walk for that directory, which makes the corresponding applications get stuck.
It keeps going on, so eventually the application had to be terminated.
Maybe arranging the corresponding metadata dump for this would help clear up the doubts.
I hope to arrange them soon.
 
Thanks.
-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
 


* RE: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
       [not found]       ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p1>
@ 2020-07-03 14:29         ` Anupam Aggarwal
  2020-07-03 19:11           ` OGAWA Hirofumi
       [not found]           ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p2>
  0 siblings, 2 replies; 9+ messages in thread
From: Anupam Aggarwal @ 2020-07-03 14:29 UTC (permalink / raw)
  To: OGAWA Hirofumi, AMIT SAHRAWAT; +Cc: linux-kernel

Hi Ogawa,

>So what was the root cause of the slowness on the big directory?

The problem happened on a FAT32-formatted 32GB USB 3.0 pen drive, which has 20GB of data; the cluster size is 16KB.
It has one corrupted directory whose size, as calculated by fat_calc_dir_size(), is 1146896384 bytes, i.e. 1.06 GB.

When traversal of the corrupted directory starts, the directory entries look corrupted
and lookups for these directory entries fail.
Some directory entry names have the format abc/xyz;
the following are a few of the observed directory entry names:

eqk/hb*
*ùï/ò¢7ô.úBæ
ty7@o/<`
-ò%/ç3{.9q
'ûu/öy<ö.^mö
Ph╤Cf┌6g.ß/k

Now when a path lookup happens for the above directory entries, it searches for the name before '/' in the corrupted directory, e.g.

eqk
*ùï
ty7@o
-ò%
'ûu
Ph╤Cf┌6g.ß

There are also directory entries with garbage names for which the lookup fails, e.g.
á)Yº&q¼(.î».
Æ∞┴Ç▀╜r╟.╣g½
4▒h1▓x0┤.p3╣

During the search for a single name in fat_search_long(), the whole corrupted directory of size 1.06GB is traversed,
which takes around 230 to 240 seconds and finally ends up returning ENOENT.

Multiple lookups in the corrupted directory make "ls -lR" effectively never-ending; e.g. in an overnight test of running "ls -lR"
on the USB drive with the corrupted directory, around 200 such lookups took 14 hours and "ls -lR" was still running.

The total number of directory entries in the corrupted directory of size 1146896384 bytes is 1146896384/32 = 35840512,
so a lookup over 35840512 entries is very exhaustive; therefore we put the directory size check in fat_calc_dir_size()
and prevented the directory traversal by returning -EIO.
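
(For reference, the same arithmetic as a small standalone userspace C sketch, not kernel code; it only assumes the standard 32-byte FAT directory entry and the FAT_MAX_DIR_SIZE value quoted above:)

#include <stdio.h>

#define FAT_MAX_DIR_SIZE	(2048 * 1024)	/* 2097152 bytes, from the patch context */
#define FAT_DIR_ENTRY_SIZE	32		/* a FAT directory entry is always 32 bytes */

int main(void)
{
	long long dir_size = 1146896384LL;	/* size reported by fat_calc_dir_size() */

	printf("entries in corrupted dir: %lld\n", dir_size / FAT_DIR_ENTRY_SIZE);
	printf("entries at the 2MB limit: %d\n", FAT_MAX_DIR_SIZE / FAT_DIR_ENTRY_SIZE);
	return 0;
}

It prints 35840512 against 65536, which is the scale difference behind the long traversal.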

While browsing the corrupted directory (\CorruptedDIR) on a Windows 10 PC,
2623 directory entries were listed, and the timestamps were wrong.

Following is the read-only chkdsk output for the USB drive.

--------------------------------------------------------------------------------------
chkdsk I:
The type of the file system is FAT32.
Volume AAA created 12/28/2018 3:15 PM
Volume Serial Number is 1606-72DC
Windows is verifying files and folders...
Windows found errors on the disk, but will not fix them
because disk checking was run without the /F (fix) parameter.
The \$TXRAJNL.DAT entry contains a nonvalid link.
The size of the \$TXRAJNL.DAT entry is not valid.
Unrecoverable error in folder \CorruptedDIR.
Convert folder to file (Y/N)? n
The \BBB\file1.txt entry contains a nonvalid link.
The size of the \BBB\file1.txt entry is not valid.
The \CCC\file1.txt entry contains a nonvalid link.
The size of the \CCC\file1.txt entry is not valid.
File and folder verification is complete.
Convert lost chains to files (Y/N)? n
3531520 KB of free disk space would be added.

Windows has checked the file system and found problems.
Run CHKDSK with the /F (fix) option to correct these.
   30,015,472 KB total disk space.
          400 KB in 2 hidden files.
        2,800 KB in 48 folders.
   16,479,312 KB in 7,583 files.
    9,999,392 KB are available.

       16,384 bytes in each allocation unit.
    1,875,967 total allocation units on disk.
      624,962 allocation units available on disk.
--------------------------------------------------------------------------------------

Please let us know if you have any queries,
and please suggest if something better can be done.

Regards,
Anupam



* Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
  2020-07-03 14:29         ` Anupam Aggarwal
@ 2020-07-03 19:11           ` OGAWA Hirofumi
       [not found]           ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p2>
  1 sibling, 0 replies; 9+ messages in thread
From: OGAWA Hirofumi @ 2020-07-03 19:11 UTC (permalink / raw)
  To: Anupam Aggarwal; +Cc: AMIT SAHRAWAT, linux-kernel

Anupam Aggarwal <anupam.al@samsung.com> writes:

>>So what was the root cause of the slowness on the big directory?
>
> The problem happened on a FAT32-formatted 32GB USB 3.0 pen drive, which
> has 20GB of data; the cluster size is 16KB. It has one corrupted
> directory whose size, as calculated by fat_calc_dir_size(), is
> 1146896384 bytes, i.e. 1.06 GB.
>
> When traversal of the corrupted directory starts, the directory entries
> look corrupted and lookups for these directory entries fail. Some
> directory entry names have the format abc/xyz; the following are a few
> of the observed directory entry names:

[...]

> During the search for a single name in fat_search_long(), the whole
> corrupted directory of size 1.06GB is traversed, which takes around
> 230 to 240 seconds and finally ends up returning ENOENT.
> 
> Multiple lookups in the corrupted directory make "ls -lR" effectively
> never-ending; e.g. in an overnight test of running "ls -lR" on the USB
> drive with the corrupted directory, around 200 such lookups took 14
> hours and "ls -lR" was still running.

Sounds like a totally corrupted FAT image, and the directory may have a
non-simple loop (e.g. there is a hardlink of a directory).

If so, I'm not sure if we can detect it without a heavyweight check. Well,
the user should run fsck before mount anyway. However, if the fs can detect
it and stop early, that would be better.
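
(Purely to illustrate what such an early check could look like, not the existing fs/fat code: a bounded cluster-chain walk combined with the classic two-pointer cycle check, where the toy fat[] array and next_cluster() stand in for the real FAT access helpers:)

#include <errno.h>
#include <stdio.h>

#define END_OF_CHAIN	(-1)

/* Toy FAT: cluster 2 -> 3 -> 4 -> 2 is a deliberate loop for the demo. */
static int fat[16] = { [2] = 3, [3] = 4, [4] = 2 };

static int next_cluster(int cluster)
{
	return fat[cluster] ? fat[cluster] : END_OF_CHAIN;
}

/* Returns 0 if the chain terminates, -EIO if it loops or is impossibly long. */
static int check_chain(int start, int max_clusters)
{
	int slow = start, fast = start, steps = 0;

	while (fast != END_OF_CHAIN) {
		if (++steps > max_clusters)
			return -EIO;	/* longer than the whole FAT: corrupt */
		slow = next_cluster(slow);
		fast = next_cluster(fast);
		if (fast != END_OF_CHAIN)
			fast = next_cluster(fast);
		if (fast != END_OF_CHAIN && slow == fast)
			return -EIO;	/* slow and fast met: cycle in the chain */
	}
	return 0;
}

int main(void)
{
	printf("check_chain(2, 16) = %d\n", check_chain(2, 16));	/* reports the loop */
	return 0;
}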

BTW, if you run fsck, are the corrupted directories and the issue at least
gone?

Anyway, fsck would be the main way. On the other hand, if we want to add
a mitigation for the corruption, we would have to see many more details of
this corruption. Can you put the corrupted image somewhere we can access
(only the metadata is needed) to reproduce it?

> The total number of directory entries in the corrupted directory of
> size 1146896384 bytes is 1146896384/32 = 35840512, so a lookup over
> 35840512 entries is very exhaustive; therefore we put the directory
> size check in fat_calc_dir_size() and prevented the directory
> traversal by returning -EIO.
> 
> While browsing the corrupted directory (\CorruptedDIR) on a Windows 10
> PC, 2623 directory entries were listed, and the timestamps were wrong.

What happens if you recursively traverse the directories on Windows? Does this
issue happen on Windows too?

Thanks.
-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>


* RE: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
       [not found]           ` <CGME20200629110320epcas5p34ccccc7c293f077b34b350935c328215@epcms5p2>
@ 2020-07-06 11:53             ` Anupam Aggarwal
  2020-07-06 14:22               ` OGAWA Hirofumi
  0 siblings, 1 reply; 9+ messages in thread
From: Anupam Aggarwal @ 2020-07-06 11:53 UTC (permalink / raw)
  To: OGAWA Hirofumi; +Cc: AMIT SAHRAWAT, VIVEK TRIVEDI, linux-kernel

Hi Ogawa,

>Anyway, fsck would be the main way. On the other hand, if we want to add
>a mitigation for the corruption, we would have to see many more details of
>this corruption. Can you put the corrupted image somewhere we can access
>(only the metadata is needed) to reproduce it?

Sorry, uploading any file is not allowed from our side.
So the metadata image cannot be shared via upload.
We can try to arrange a few more logs via fsck.

>What happens if you recursively traverse the directories on Windows? Does this
>issue happen on Windows too?

After connecting the USB drive to Windows 10, when the corrupted dir (\CorruptedDIR) is browsed,
it shows 2623 files and directories, without delay.
The names and timestamps of those files/directories are garbage values.

Further, if we browse these sub-directories and open files of the corrupted dir (\CorruptedDIR),
the following popups appear on Windows 10:
1. The filename, directory name, or volume label syntax is incorrect
2. Specified path does not exist. Check the path and try again

So the issue of never-ending browsing (ls -lR) of the corrupted USB drive does not occur on Windows 10;
it lists a limited number of files/directories of the corrupted dir (\CorruptedDIR) without delay.

>BTW, if you run fsck, are the corrupted directories and the issue at least
>gone?

Yes, the issues are gone. After running "chkdsk /f" on the USB drive on a Windows 10 PC,
the corrupted directory (\CorruptedDIR) is converted to a file of 1.06 GB,
so the issues do not occur any more.
Following is the output of the chkdsk run with the /f (fix) option.

--------------------------------------------------------------------------------------

chkdsk /f e:
The type of the file system is FAT32.
Volume AAA created 12/28/2018 3:15 PM
Volume Serial Number is 1606-72DC
Windows is verifying files and folders...
The \$TXRAJNL.DAT entry contains a nonvalid link.
The size of the \$TXRAJNL.DAT entry is not valid.
Unrecoverable error in folder \CorruptedDIR.
Convert folder to file (Y/N)? Y
\DDD\file.txt is cross-linked on allocation unit 736512.
Cross link resolved by copying.
\BBB\file1.txt is cross-linked on allocation unit 433153.
Cross link resolved by copying.
\System Volume Information\LightningSand.CFD is cross-linked on allocation unit 1114114.
Cross link resolved by copying.
\CCC\file1.txt is cross-linked on allocation unit 179989.
Cross link resolved by copying.
File and folder verification is complete.
Convert lost chains to files (Y/N)? Y
3531520 KB in 31 recovered files.

Windows has made corrections to the file system.
No further action is required.
   30,015,472 KB total disk space.
          400 KB in 2 hidden files.
        2,816 KB in 49 folders.
   23,470,800 KB in 7,616 files.
    6,539,408 KB are available.

       16,384 bytes in each allocation unit.
    1,875,967 total allocation units on disk.
      408,713 allocation units available on disk.

--------------------------------------------------------------------------------------

Thanks,
Anupam


* Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
  2020-07-06 11:53             ` Anupam Aggarwal
@ 2020-07-06 14:22               ` OGAWA Hirofumi
  0 siblings, 0 replies; 9+ messages in thread
From: OGAWA Hirofumi @ 2020-07-06 14:22 UTC (permalink / raw)
  To: Anupam Aggarwal; +Cc: AMIT SAHRAWAT, VIVEK TRIVEDI, linux-kernel

Anupam Aggarwal <anupam.al@samsung.com> writes:

>>Anyway, fsck would be the main way. On the other hand, if we want to add
>>a mitigation for the corruption, we would have to see many more details of
>>this corruption. Can you put the corrupted image somewhere we can access
>>(only the metadata is needed) to reproduce it?
>
> Sorry, uploading any file is not allowed from our side.
> So the metadata image cannot be shared via upload.
> We can try to arrange a few more logs via fsck.

Then, can you dump the invalid directory entries in the corrupted image and
check exactly why the recursive traversal (ls -lR) never ends?

We need to know the root cause in order to fix it, e.g. whether this
directory entry has a loop, etc.
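
(If it helps, here is a rough userspace sketch for decoding such a dump; it assumes the directory clusters have already been copied out of the device, e.g. with dd, and the field offsets are simply the standard 32-byte FAT directory entry layout; this is not existing kernel or fatprogs code:)

#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	unsigned char e[32];
	long idx = 0;
	FILE *f;

	if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
		fprintf(stderr, "usage: %s <dumped-dir-clusters>\n", argv[0]);
		return 1;
	}

	/* Each directory entry is 32 bytes: name[11], attr at 0x0b,
	 * start cluster high/low words at 0x14/0x1a (FAT32), size at 0x1c. */
	while (fread(e, sizeof(e), 1, f) == 1) {
		uint32_t clus = (uint32_t)e[26] | (uint32_t)e[27] << 8 |
				(uint32_t)e[20] << 16 | (uint32_t)e[21] << 24;
		uint32_t size = (uint32_t)e[28] | (uint32_t)e[29] << 8 |
				(uint32_t)e[30] << 16 | (uint32_t)e[31] << 24;

		if (e[0] == 0x00)	/* 0x00 in the first byte: end of directory */
			break;
		printf("#%-8ld name=\"%.11s\" attr=0x%02x clus=%u size=%u%s\n",
		       idx++, (const char *)e, e[11], clus, size,
		       e[0] == 0xe5 ? " (deleted)" : "");
	}
	fclose(f);
	return 0;
}

With that kind of listing it should be easier to see whether some entry's start cluster points back into its parent chain.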

>>What happens if you recursively traverse the directories on Windows? Does this
>>issue happen on Windows too?
>
> After connecting the USB drive to Windows 10, when the corrupted dir
> (\CorruptedDIR) is browsed, it shows 2623 files and directories,
> without delay. The names and timestamps of those files/directories are
> garbage values.

Sounds like it filtered out the invalid names.

> Further, if we browse these sub-directories and open files of the
> corrupted dir (\CorruptedDIR), the following popups appear on Windows 10:
> 1. The filename, directory name, or volume label syntax is incorrect
> 2. Specified path does not exist. Check the path and try again
>
> So the issue of never-ending browsing (ls -lR) of the corrupted USB
> drive does not occur on Windows 10; it lists a limited number of
> files/directories of the corrupted dir (\CorruptedDIR) without delay.

It may have been luck that the loop was filtered out by the invalid names. Well, I'm not sure.
-- 
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>

