* Translating raw capacity to real data capacity in statfs
@ 2018-02-08  2:47 Chengguang Xu
  2018-02-08  5:20 ` Yan, Zheng
  0 siblings, 1 reply; 5+ messages in thread
From: Chengguang Xu @ 2018-02-08  2:47 UTC (permalink / raw)
  To: Ceph Development; +Cc: Yan, Zheng, Ilya Dryomov

Hi Cepher,

The current statfs(2) in the ceph kernel client reports raw capacity (total/used/avail),
but from a filesystem point of view I think we would rather see the real data capacity
instead of the raw one.

So I suggest translating raw capacity to real data capacity. This can be implemented
simply by taking advantage of num.object_copies; the result may not be very accurate,
but it would still be a useful reference.

What do you think? Any suggestions?

Thanks,
Chengguang.




* Re: Translating raw capacity to real data capacity in statfs
  2018-02-08  2:47 Translating raw capacity to real data capacity in statfs Chengguang Xu
@ 2018-02-08  5:20 ` Yan, Zheng
  2018-02-08 13:11   ` Sage Weil
  0 siblings, 1 reply; 5+ messages in thread
From: Yan, Zheng @ 2018-02-08  5:20 UTC (permalink / raw)
  To: Chengguang Xu; +Cc: Ceph Development, Yan, Zheng, Ilya Dryomov

On Thu, Feb 8, 2018 at 10:47 AM, Chengguang Xu <cgxu519@icloud.com> wrote:
> Hi Cepher,
>
> The current statfs(2) in the ceph kernel client reports raw capacity (total/used/avail),
> but from a filesystem point of view I think we would rather see the real data capacity
> instead of the raw one.
>
> So I suggest translating raw capacity to real data capacity. This can be implemented
> simply by taking advantage of num.object_copies; the result may not be very accurate,
> but it would still be a useful reference.
>

It's not that simple when there are multiple data pools.

> What do you think? Any suggestions?
>
> Thanks,
> Chengguang.
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Translating raw capacity to real data capacity in statfs
  2018-02-08  5:20 ` Yan, Zheng
@ 2018-02-08 13:11   ` Sage Weil
  2018-02-08 19:03     ` Douglas Fuller
       [not found]     ` <CE91B764-C317-4585-AFB3-F10EF7979FF1@redhat.com>
  0 siblings, 2 replies; 5+ messages in thread
From: Sage Weil @ 2018-02-08 13:11 UTC (permalink / raw)
  To: Yan, Zheng; +Cc: Chengguang Xu, Ceph Development, Yan, Zheng, Ilya Dryomov

On Thu, 8 Feb 2018, Yan, Zheng wrote:
> On Thu, Feb 8, 2018 at 10:47 AM, Chengguang Xu <cgxu519@icloud.com> wrote:
> > Hi Cepher,
> >
> > The current statfs(2) in the ceph kernel client reports raw capacity (total/used/avail),
> > but from a filesystem point of view I think we would rather see the real data capacity
> > instead of the raw one.
> >
> > So I suggest translating raw capacity to real data capacity. This can be implemented
> > simply by taking advantage of num.object_copies; the result may not be very accurate,
> > but it would still be a useful reference.
> >
> 
> It's not that simple when there are multiple data pools.

I seem to remember someone (Doug or Jeff?) working on a patch that would 
adjust the statfs result for a given directory based on the data pool and 
the usual "max avail" style calculation that 'ceph df' uses per pool 
(which is based on replication/ec multiplier and most-full osd)...?

sage


* Re: Translating raw capacity to real data capacity in statfs
  2018-02-08 13:11   ` Sage Weil
@ 2018-02-08 19:03     ` Douglas Fuller
       [not found]     ` <CE91B764-C317-4585-AFB3-F10EF7979FF1@redhat.com>
  1 sibling, 0 replies; 5+ messages in thread
From: Douglas Fuller @ 2018-02-08 19:03 UTC (permalink / raw)
  To: Ceph Development



> On Feb 8, 2018, at 8:11 AM, Sage Weil <sage@newdream.net> wrote:
> 
> On Thu, 8 Feb 2018, Yan, Zheng wrote:
>> On Thu, Feb 8, 2018 at 10:47 AM, Chengguang Xu <cgxu519@icloud.com> wrote:
>>> Hi Cepher,
>>> 
>>> The current statfs(2) in the ceph kernel client reports raw capacity (total/used/avail),
>>> but from a filesystem point of view I think we would rather see the real data capacity
>>> instead of the raw one.
>>> 
>>> So I suggest translating raw capacity to real data capacity. This can be implemented
>>> simply by taking advantage of num.object_copies; the result may not be very accurate,
>>> but it would still be a useful reference.
>>> 
>> 
>> It's not that simple when there are multiple data pools.
> 
> I seem to remember someone (Doug or Jeff?) working on a patch that would 
> adjust the statfs result for a given directory based on the data pool and 
> the usual "max avail" style calculation that 'ceph df' uses per pool 
> (which is based on replication/ec multiplier and most-full osd)…?

I did that in https://github.com/ceph/ceph/pull/16378. This only works when the filesystem uses a single data pool; in other cases it falls back to the old behavior.

Cheers,
—Doug

> sage



* Re: Translating raw capacity to real data capacity in statfs
       [not found]     ` <CE91B764-C317-4585-AFB3-F10EF7979FF1@redhat.com>
@ 2018-02-08 19:06       ` Sage Weil
  0 siblings, 0 replies; 5+ messages in thread
From: Sage Weil @ 2018-02-08 19:06 UTC (permalink / raw)
  To: Douglas Fuller
  Cc: Yan, Zheng, Chengguang Xu, Ceph Development, Yan, Zheng, Ilya Dryomov


On Thu, 8 Feb 2018, Douglas Fuller wrote:
> > On Feb 8, 2018, at 8:11 AM, Sage Weil <sage@newdream.net> wrote:
> > 
> > On Thu, 8 Feb 2018, Yan, Zheng wrote:
> >> On Thu, Feb 8, 2018 at 10:47 AM, Chengguang Xu <cgxu519@icloud.com> wrote:
> >>> Hi Cepher,
> >>> 
> >>> The current statfs(2) in the ceph kernel client reports raw capacity (total/used/avail),
> >>> but from a filesystem point of view I think we would rather see the real data capacity
> >>> instead of the raw one.
> >>> 
> >>> So I suggest translating raw capacity to real data capacity. This can be implemented
> >>> simply by taking advantage of num.object_copies; the result may not be very accurate,
> >>> but it would still be a useful reference.
> >>> 
> >> 
> >> It's not that simple when there are multiple data pools.
> > 
> > I seem to remember someone (Doug or Jeff?) working on a patch that would 
> > adjust the statfs result for a given directory based on the data pool and 
> > the usual "max avail" style calculation that 'ceph df' uses per pool 
> > (which is based on replication/ec multiplier and most-full osd)…?
> 
> I did that in https://github.com/ceph/ceph/pull/16378. This only works
> when the filesystem uses a single data pool; in other cases it falls
> back to the old behavior.

FWIW, statfs(2) takes a path and fstatfs(2) takes an fd, so at least the 
user-facing API should be able to handle it.  And in the kernel, the statfs 
call in super_operations takes a dentry.  So it should be possible to do 
this?

sage

