* quotacheck speed
@ 2012-02-12 21:01 Arkadiusz Miśkiewicz
  2012-02-12 22:21 ` Dave Chinner
  2012-02-12 23:44 ` Christoph Hellwig
  0 siblings, 2 replies; 11+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-02-12 21:01 UTC (permalink / raw)
  To: xfs


Hi,

When mounting 800GB filesystem (after repair for example) here quotacheck 
takes 10 minutes. Quite long time that adds to whole time of filesystem 
downtime (repair + quotacheck).

I wonder if quotacheck can be somehow improved or done differently like doing 
it in parallel with normal fs usage (so there will be no downtime) ?
-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: quotacheck speed
  2012-02-12 21:01 quotacheck speed Arkadiusz Miśkiewicz
@ 2012-02-12 22:21 ` Dave Chinner
  2012-02-13 18:16   ` Arkadiusz Miśkiewicz
  2012-02-12 23:44 ` Christoph Hellwig
  1 sibling, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2012-02-12 22:21 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> 
> Hi,
> 
> When mounting 800GB filesystem (after repair for example) here quotacheck 
> takes 10 minutes. Quite long time that adds to whole time of filesystem 
> downtime (repair + quotacheck).

How long does a repair vs quotacheck of that same filesystem take?
repair has to iterate the inodes 2-3 times, so if that is faster
than quotacheck, then that is really important to know....

> I wonder if quotacheck can be somehow improved or done differently like doing 
> it in parallel with normal fs usage (so there will be no downtime) ?

quotacheck makes the assumption that it is run on an otherwise idle
filesystem that nobody is accessing. Well, what it requires is that
nobody is modifying it. What we could do is bring the filesystem up
in a frozen state so that read-only access could be made but
modifications are blocked until the quotacheck is completed.
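
Just to illustrate those semantics from userspace (not the mount-time
mechanism suggested above, which would need kernel changes): the
generic VFS freeze ioctls already give a state where reads proceed
and modifications block. A minimal sketch, assuming FIFREEZE/FITHAW
from <linux/fs.h>:

/*
 * Sketch only: FIFREEZE blocks new modifications while reads keep
 * working; FITHAW releases them.  Point it at a mount point.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

int main(int argc, char **argv)
{
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <mount point>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || ioctl(fd, FIFREEZE, 0) < 0) {
        perror("freeze");
        return 1;
    }
    /* reads succeed here; writers block until the thaw below */
    if (ioctl(fd, FITHAW, 0) < 0) {
        perror("thaw");
        return 1;
    }
    close(fd);
    return 0;
}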

Also, quotacheck uses the bulkstat code to iterate all the inodes
quickly. Improvements in bulkstat speed will translate directly
into faster quotachecks. quotacheck could probably drive bulkstat in
a parallel manner to do the quotacheck faster, but that assumes that
the underlying storage is not already seek bound. What is the
utilisation of the underlying storage and CPU while quotacheck is
running?
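
For illustration, a rough userspace sketch of the same kind of
bulkstat-driven walk (the real quotacheck runs in the kernel; this
just uses the XFS_IOC_FSBULKSTAT ioctl, assuming the xfsprogs
headers are installed, and the per-uid table and add_usage() helper
are invented for the example):

/*
 * Sketch only, not the in-kernel quotacheck: walk every inode via
 * bulkstat and total allocated blocks per uid, which is essentially
 * the accounting quotacheck does per dquot.  Minimal error handling.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>    /* XFS_IOC_FSBULKSTAT, xfs_bstat, xfs_fsop_bulkreq */

#define BATCH    1024
#define MAXUSERS 4096

static struct { __u32 uid; __u64 blocks; } tab[MAXUSERS];
static int nusers;

static void add_usage(__u32 uid, __u64 blocks)
{
    int i;

    for (i = 0; i < nusers; i++) {
        if (tab[i].uid == uid) {
            tab[i].blocks += blocks;
            return;
        }
    }
    if (nusers < MAXUSERS) {
        tab[nusers].uid = uid;
        tab[nusers].blocks = blocks;
        nusers++;
    }
}

int main(int argc, char **argv)
{
    struct xfs_bstat buf[BATCH];
    __u64 last = 0;
    __s32 count = 0;
    struct xfs_fsop_bulkreq req = {
        .lastip  = &last,
        .icount  = BATCH,
        .ubuffer = buf,
        .ocount  = &count,
    };
    int fd, i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <path on XFS filesystem>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* each call returns the next batch of inodes; 'last' is the cursor */
    while (ioctl(fd, XFS_IOC_FSBULKSTAT, &req) == 0 && count > 0) {
        for (i = 0; i < count; i++)
            add_usage(buf[i].bs_uid, buf[i].bs_blocks);
    }
    for (i = 0; i < nusers; i++)
        printf("uid %u: %llu blocks\n", tab[i].uid,
               (unsigned long long)tab[i].blocks);
    close(fd);
    return 0;
}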

Otherwise, bulkstat inode prefetching could be improved like
xfs_repair was to look at inode chunk density and change IO patterns
and to slice and dice large IO buffers into smaller inode buffers.
We can actually do that efficiently now that we don't use the page
cache for metadata caching. If repair is iterating inodes faster
than bulkstat, then this optimisation will be the reason and having
that data point is very important....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: quotacheck speed
  2012-02-12 21:01 quotacheck speed Arkadiusz Miśkiewicz
  2012-02-12 22:21 ` Dave Chinner
@ 2012-02-12 23:44 ` Christoph Hellwig
  2012-02-13  0:17   ` Peter Grandi
  2012-02-13 18:09   ` Arkadiusz Miśkiewicz
  1 sibling, 2 replies; 11+ messages in thread
From: Christoph Hellwig @ 2012-02-12 23:44 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> 
> Hi,
> 
> When mounting 800GB filesystem (after repair for example) here quotacheck 
> takes 10 minutes. Quite long time that adds to whole time of filesystem 
> downtime (repair + quotacheck).
> 
> I wonder if quotacheck can be somehow improved or done differently like doing 
> it in parallel with normal fs usage (so there will be no downtime) ?

I think the best idea to improve the performance in case you did a
repair is to integrate the quotacheck code into repair.  It's fairly
simple given that quotacheck simply walks all inodes and adds their
space usage to the correct user/group/project, and given that repair
already walks all inodes, and checks their block maps it does most of
that work already.  The only downside would be that the memory usage
of repair increases a bit by keeping the dquots in memory, but even
for your 130000 dquot setup that would add about 100 bytes * 130000
plus a bit of in-memory metadata (less than 20MB total) of memory
usage, so it probably is a good tradeoff.
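
(Just to put a number on that estimate: 130000 dquots * ~100 bytes is
roughly 13MB, so with a bit of in-memory metadata on top it should
indeed stay under the 20MB figure.)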

In what cases do you regularly run quotacheck when you did not do
a repair first?


* Re: quotacheck speed
  2012-02-12 23:44 ` Christoph Hellwig
@ 2012-02-13  0:17   ` Peter Grandi
  2012-02-13 18:09   ` Arkadiusz Miśkiewicz
  1 sibling, 0 replies; 11+ messages in thread
From: Peter Grandi @ 2012-02-13  0:17 UTC (permalink / raw)
  To: Linux fs XFS

>> When mounting 800GB filesystem (after repair for example)
>> here quotacheck takes 10 minutes. Quite long time that adds
>> to whole time of filesystem downtime (repair + quotacheck).

For tight downtime-minimization requirements, wishful thinking is
not a strategy: whole-filetree metadata scans are not cheap. If
you require a fast scan of the metadata of the whole filetree,
ensure the filetree doesn't have a lot of metadata (or fund the
development of a parallel whole-tree metadata scanner).

Also 10 minutes is not that long; file system checks/repairs can
take days or weeks.

>> I wonder if quotacheck can be somehow improved or done
>> differently like doing it in parallel with normal fs usage
>> (so there will be no downtime) ?

From 'man 8 quotacheck':

 "It is strongly recommended to run quotacheck with quotas turned off for  the  filesys-
  tem.  Otherwise, possible damage or loss to data in the quota files can result.  It is
  also unwise to run quotacheck on a live filesystem as actual usage may  change  during
  the  scan.   To  prevent  this,  quotacheck  tries to remount the filesystem read-only
  before starting the scan.  After the scan is done it  remounts  the  filesystem  read-
  write.  You  can disable this with option -m.  You can also make quotacheck ignore the
  failure to remount the filesystem read-only with option -M."

According to this, the only consequence of running 'quotacheck' in
parallel is somewhat inaccurate quota accounting.

> I think the best idea to improve the performance in case you
> did a repair is to integrate the quotacheck code into repair.

Probably this should be 'xfs_check' rather than 'xfs_repair'...

[ ... ]


* Re: quotacheck speed
  2012-02-12 23:44 ` Christoph Hellwig
  2012-02-13  0:17   ` Peter Grandi
@ 2012-02-13 18:09   ` Arkadiusz Miśkiewicz
  2012-02-13 23:42     ` Dave Chinner
  1 sibling, 1 reply; 11+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-02-13 18:09 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Monday 13 of February 2012, Christoph Hellwig wrote:
> On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > Hi,
> > 
> > When mounting 800GB filesystem (after repair for example) here quotacheck
> > takes 10 minutes. Quite long time that adds to whole time of filesystem
> > downtime (repair + quotacheck).
> > 
> > I wonder if quotacheck can be somehow improved or done differently like
> > doing it in parallel with normal fs usage (so there will be no downtime)
> > ?
> 
> I think the best idea to improve the performance in case you did a
> repair is to integrate the quotacheck code into repair.  It's fairly
> simple given that quotacheck simply walks all inodes and adds their
> space usage to the correct user/group/project, and given that repair
> already walks all inodes, and checks their block maps it does most of
> that work already.

That would be interesting and probably make 

> The only downside would be that the memory usage
> of repair increases a bit by keeping the dquots in memory, but even
> for your 130000 dquot setup that would add about 100 bytes * 130000
> plus a bit of in-memory metadata (less than 20MB total) of memory
> usage, so it probably is a good tradeoff.

> 
> In what cases do you regularly run quotacheck when you did not do
> a repair first?

I don't initiate quotacheck manually. AFAIK internal xfs quotacheck happens in 
two cases here:
1) repair->mount
2) filesystem has quotacheck done properly some time ago -> umount -> mount ->
   oops/reset/something like that happens while mounting -> new mount

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: quotacheck speed
  2012-02-12 22:21 ` Dave Chinner
@ 2012-02-13 18:16   ` Arkadiusz Miśkiewicz
  2012-02-13 23:13     ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-02-13 18:16 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Sunday 12 of February 2012, Dave Chinner wrote:
> On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > Hi,
> > 
> > When mounting 800GB filesystem (after repair for example) here quotacheck
> > takes 10 minutes. Quite long time that adds to whole time of filesystem
> > downtime (repair + quotacheck).
> 
> How long does a repair vs quotacheck of that same filesystem take?
> repair has to iterate the inodes 2-3 times, so if that is faster
> than quotacheck, then that is really important to know....

Don't have exact times but looking at nagios and dmesg it took about:
repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).

> 
> > I wonder if quotacheck can be somehow improved or done differently like
> > doing it in parallel with normal fs usage (so there will be no downtime)
> > ?
> 
> quotacheck makes the assumption that it is run on an otherwise idle
> filesystem that nobody is accessing. Well, what it requires is that
> nobody is modifying it. What we could do is bring the filesystem up
> in a frozen state so that read-only access could be made but
> modifications are blocked until the quotacheck is completed.

Read-only is better than no access at all. I was hoping that there is a way to
have quotacheck recalculated on the fly, taking into account all write accesses
that happen in the meantime.

> Also, quotacheck uses the bulkstat code to iterate all the inodes
> quickly. Improvements in bulkstat speed will translate directly
> into faster quotachecks. quotacheck could probably drive bulkstat in
> a parallel manner to do the quotacheck faster, but that assumes that
> the underlying storage is not already seek bound. What is the
> utilisation of the underlying storage and CPU while quotacheck is
> running?

Will try to gather more information then.

> 
> Otherwise, bulkstat inode prefetching could be improved like
> xfs_repair was to look at inode chunk density and change IO patterns
> and to slice and dice large IO buffers into smaller inode buffers.
> We can actually do that efficiently now that we don't use the page
> cache for metadata caching. If repair is iterating inodes faster
> than bulkstat, then this optimisation will be the reason and having
> that data point is very important....
> 
> Cheers,
> 
> Dave.


-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: quotacheck speed
  2012-02-13 18:16   ` Arkadiusz Miśkiewicz
@ 2012-02-13 23:13     ` Dave Chinner
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Chinner @ 2012-02-13 23:13 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

On Mon, Feb 13, 2012 at 07:16:51PM +0100, Arkadiusz Miśkiewicz wrote:
> On Sunday 12 of February 2012, Dave Chinner wrote:
> > On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > > Hi,
> > > 
> > > When mounting 800GB filesystem (after repair for example) here quotacheck
> > > takes 10 minutes. Quite long time that adds to whole time of filesystem
> > > downtime (repair + quotacheck).
> > 
> > How long does a repair vs quotacheck of that same filesystem take?
> > repair has to iterate the inodes 2-3 times, so if that is faster
> > than quotacheck, then that is really important to know....
> 
> Don't have exact times but looking at nagios and dmesg it took about:
> repair ~20 minutes, quotacheck ~10 minutes (it's 800GB of maildirs).

Ok. Seems like repair is a little faster than quotacheck per inode
pass, then - 2-3 passes in ~20 minutes works out to roughly 7-10
minutes each, against quotacheck's single ~10 minute pass.

> > > I wonder if quotacheck can be somehow improved or done differently like
> > > doing it in parallel with normal fs usage (so there will be no downtime)
> > > ?
> > 
> > quotacheck makes the assumption that it is run on an otherwise idle
> > filesystem that nobody is accessing. Well, what it requires is that
> > nobody is modifying it. What we could do is bring the filesystem up
> > in a frozen state so that read-only access could be made but
> > modifications are blocked until the quotacheck is completed.
> 
> Read-only is better than no access at all. I was hoping that there is a way to
> have quotacheck recalculated on the fly, taking into account all write accesses
> that happen in the meantime.

The problem is that we'd need to keep two sets of dquots in memory
for each quota user while the quota check is being done - one to
track modifications being made, and the other to track quotacheck
progress. It gets complex quite rapidly then - where do we account
changes to an inode that hasn't been quota-checked yet? Or vice
versa? How do we even know if an inode has been quota checked?

These are probably all things that can be solved, but I get lost in
the complexity when just thinking about it....
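
A hypothetical sketch of the shape of that problem (the structure and
field names below are invented purely for illustration; nothing like
this exists in XFS):

/*
 * Each quota id would need two counters, plus every inode a
 * "scanned yet?" marker:
 *  - inode already scanned: a concurrent change must be applied to
 *    both counters, or scan_blocks goes stale;
 *  - inode not yet scanned: apply it to live_blocks only, or the
 *    scan counts the same blocks again when it gets there.
 * Deciding which case applies needs a reliable per-inode
 * "has been quota-checked" flag, which is where it gets messy.
 */
struct dquot_shadow {
    unsigned long long live_blocks;  /* usage tracked for the running fs    */
    unsigned long long scan_blocks;  /* usage accumulated by the quotacheck */
};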

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: quotacheck speed
  2012-02-13 18:09   ` Arkadiusz Miśkiewicz
@ 2012-02-13 23:42     ` Dave Chinner
  2012-02-14  5:35       ` Arkadiusz Miśkiewicz
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2012-02-13 23:42 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: Christoph Hellwig, xfs

On Mon, Feb 13, 2012 at 07:09:50PM +0100, Arkadiusz Miśkiewicz wrote:
> On Monday 13 of February 2012, Christoph Hellwig wrote:
> > On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > > Hi,
> > > 
> > > When mounting 800GB filesystem (after repair for example) here quotacheck
> > > takes 10 minutes. Quite long time that adds to whole time of filesystem
> > > downtime (repair + quotacheck).
> > > 
> > > I wonder if quotacheck can be somehow improved or done differently like
> > > doing it in parallel with normal fs usage (so there will be no downtime)
> > > ?
> > 
> > I think the best idea to improve the performance in case you did a
> > repair is to integrate the quotacheck code into repair.  It's fairly
> > simple given that quotacheck simply walks all inodes and adds their
> > space usage to the correct user/group/project, and given that repair
> > already walks all inodes, and checks their block maps it does most of
> > that work already.
> 
> That would be interesting and probably make 
> 
> > The only downside would be that the memory usage
> > of repair increases a bit by keeping the dquots in memory, but even
> > for your 130000 dquot setup that would add about 100 bytes * 130000
> > plus a bit of in-memory metadata (less than 20MB total) of memory
> > usage, so it probably is a good tradeoff.
> 
> > 
> > In what cases do you regularly run quotacheck when you did not do
> > a repair first?
> 
> I don't initiate quotacheck manually. AFAIK internal xfs quotacheck happens in 
> two cases here:
> 1) repair->mount
> 2) filesystem has quotacheck done properly some time ago -> umount -> mount-
> >oops/reset/something like that happens while mounting -> new mount

So you'd like both quotacheck to be sped up and repair
to do it as well? ;)

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: quotacheck speed
  2012-02-13 23:42     ` Dave Chinner
@ 2012-02-14  5:35       ` Arkadiusz Miśkiewicz
  2012-02-15 10:39         ` Arkadiusz Miśkiewicz
  0 siblings, 1 reply; 11+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-02-14  5:35 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Christoph Hellwig, xfs

On Tuesday 14 of February 2012, Dave Chinner wrote:
> On Mon, Feb 13, 2012 at 07:09:50PM +0100, Arkadiusz Miśkiewicz wrote:
> > On Monday 13 of February 2012, Christoph Hellwig wrote:
> > > On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > > > Hi,
> > > > 
> > > > When mounting 800GB filesystem (after repair for example) here
> > > > quotacheck takes 10 minutes. Quite long time that adds to whole time
> > > > of filesystem downtime (repair + quotacheck).
> > > > 
> > > > I wonder if quotacheck can be somehow improved or done differently
> > > > like doing it in parallel with normal fs usage (so there will be no
> > > > downtime) ?
> > > 
> > > I think the best idea to improve the performance in case you did a
> > > repair is to integrate the quotacheck code into repair.  It's fairly
> > > simple given that quotacheck simply walks all inodes and adds their
> > > space usage to the correct user/group/project, and given that repair
> > > already walks all inodes, and checks their block maps it does most of
> > > that work already.
> > 
> > That would be interesting and probably make
> > 
> > > The only downside would be that the memory usage
> > > of repair increases a bit by keeping the dquots in memory, but even
> > > for your 130000 dquot setup that would add about 100 bytes * 130000
> > > plus a bit of in-memory metadata (less than 20MB total) of memory
> > > usage, so it probably is a good tradeoff.
> > > 
> > > 
> > > In what cases do you regularly run quotacheck when you did not do
> > > a repair first?
> > 
> > I don't initiate quotacheck manually. AFAIK internal xfs quotacheck
> > happens in two cases here:
> > 1) repair->mount
> > 2) filesystem has quotacheck done properly some time ago -> umount ->
> > mount-
> > 
> > >oops/reset/something like that happens while mounting -> new mount
> 
> So you'd like both quotacheck to be sped up and repair
> to do it as well? ;)

Well, 1) is happening much more often than 2) :-)

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: quotacheck speed
  2012-02-14  5:35       ` Arkadiusz Miśkiewicz
@ 2012-02-15 10:39         ` Arkadiusz Miśkiewicz
  2012-02-15 21:45           ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-02-15 10:39 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Christoph Hellwig, xfs

On Tuesday 14 of February 2012, Arkadiusz Miśkiewicz wrote:
> On Tuesday 14 of February 2012, Dave Chinner wrote:
> > On Mon, Feb 13, 2012 at 07:09:50PM +0100, Arkadiusz Miśkiewicz wrote:
> > > On Monday 13 of February 2012, Christoph Hellwig wrote:
> > > > On Sun, Feb 12, 2012 at 10:01:07PM +0100, Arkadiusz Miśkiewicz wrote:
> > > > > Hi,
> > > > > 
> > > > > When mounting 800GB filesystem (after repair for example) here
> > > > > quotacheck takes 10 minutes. Quite long time that adds to whole
> > > > > time of filesystem downtime (repair + quotacheck).
> > > > > 
> > > > > I wonder if quotacheck can be somehow improved or done differently
> > > > > like doing it in parallel with normal fs usage (so there will be no
> > > > > downtime) ?
> > > > 
> > > > I think the best idea to improve the performance in case you did a
> > > > repair is to integrate the quotacheck code into repair.  It's fairly
> > > > simple given that quotacheck simply walks all inodes and adds their
> > > > space usage to the correct user/group/project, and given that repair
> > > > already walks all inodes, and checks their block maps it does most of
> > > > that work already.
> > > 
> > > That would be interesting and probably make
> > > 
> > > > The only downside would be that the memory usage
> > > > of repair increases a bit by keeping the dquots in memory, but even
> > > > for your 130000 dquot setup that would add about 100 bytes * 130000
> > > > plus a bit of in-memory metadata (less than 20MB total) of memory
> > > > usage, so it probably is a good tradeoff.
> > > > 
> > > > 
> > > > In what cases do you regularly run quotacheck when you did not do
> > > > a repair first?
> > > 
> > > I don't initiate quotacheck manually. AFAIK internal xfs quotacheck
> > > happens in two cases here:
> > > 1) repair->mount
> > > 2) filesystem has quotacheck done properly some time ago -> umount ->
> > > mount-
> > > 
> > > >oops/reset/something like that happens while mounting -> new mount
> > 
> > So you'd like both quotacheck to be sped up and repair
> > to do it as well? ;)
> 
> Well, 1) is happening much more often than 2) :-)

Oh, and one more scenario. Running system, sysrq u, s, b -> new boot, mount -> 
quotacheck runs.

Does it need to run in such a case?

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: quotacheck speed
  2012-02-15 10:39         ` Arkadiusz Miśkiewicz
@ 2012-02-15 21:45           ` Dave Chinner
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Chinner @ 2012-02-15 21:45 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: Christoph Hellwig, xfs

On Wed, Feb 15, 2012 at 11:39:10AM +0100, Arkadiusz Miśkiewicz wrote:
> On Tuesday 14 of February 2012, Arkadiusz Miśkiewicz wrote:
> > On Tuesday 14 of February 2012, Dave Chinner wrote:
> > > On Mon, Feb 13, 2012 at 07:09:50PM +0100, Arkadiusz Miśkiewicz wrote:
> > > > On Monday 13 of February 2012, Christoph Hellwig wrote:
> > > > > In what cases do you regularly run quotacheck when you did not do
> > > > > a repair first?
> > > > 
> > > > I don't initiate quotacheck manually. AFAIK internal xfs quotacheck
> > > > happens in two cases here:
> > > > 1) repair->mount
> > > > 2) filesystem has quotacheck done properly some time ago -> umount ->
> > > > mount-
> > > > 
> > > > >oops/reset/something like that happens while mounting -> new mount
> > > 
> > > So you'd like both quotacheck to be sped up and repair
> > > to do it as well? ;)
> > 
> > Well, 1) is happening much more often than 2) :-)
> 
> Oh, and one more scenario. Running system, sysrq u, s, b -> new boot, mount -> 
> quotacheck runs.
> 
> Does it need to run in such a case?

That's no different to an unclean shutdown. I'm not sure why that
would trigger a quotacheck. Anything in the log that might indicate
why it started a quotacheck?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


end of thread

Thread overview: 11+ messages
2012-02-12 21:01 quotacheck speed Arkadiusz Miśkiewicz
2012-02-12 22:21 ` Dave Chinner
2012-02-13 18:16   ` Arkadiusz Miśkiewicz
2012-02-13 23:13     ` Dave Chinner
2012-02-12 23:44 ` Christoph Hellwig
2012-02-13  0:17   ` Peter Grandi
2012-02-13 18:09   ` Arkadiusz Miśkiewicz
2012-02-13 23:42     ` Dave Chinner
2012-02-14  5:35       ` Arkadiusz Miśkiewicz
2012-02-15 10:39         ` Arkadiusz Miśkiewicz
2012-02-15 21:45           ` Dave Chinner
