* Cache tier unable to auto flush data to storage tier
From: Karan Singh @ 2014-09-13 22:23 UTC
  To: Ceph Community, ceph-users, ceph-devel



Hello Cephers

I have created a cache pool, and it looks like the cache tiering agent is not able to flush/evict data according to the defined policy. However, when I manually flush/evict, the data does migrate from the cache tier to the storage tier.

Kindly advise whether there is something wrong with the policy, or anything else I am missing.

Ceph Version: 0.80.5
OS: CentOS 6.4

The cache pool was created using the following commands:

ceph osd tier add data cache-pool 
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay data cache-pool
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool hit_set_count 1
ceph osd pool set cache-pool hit_set_period 300
ceph osd pool set cache-pool target_max_bytes 10000
ceph osd pool set cache-pool target_max_objects 100
ceph osd pool set cache-pool cache_min_flush_age 60
ceph osd pool set cache-pool cache_min_evict_age 60
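
To double-check that the policy actually took effect, the values can be read back with ceph osd pool get (just a sanity check using the same keys set above; I am not certain every key is readable on every release):

ceph osd pool get cache-pool hit_set_period
ceph osd pool get cache-pool target_max_bytes
ceph osd pool get cache-pool cache_min_flush_age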


[root@ceph-node1 ~]# date
Sun Sep 14 00:49:59 EEST 2014
[root@ceph-node1 ~]# rados -p data  put file1 /etc/hosts
[root@ceph-node1 ~]# rados -p data ls
[root@ceph-node1 ~]# rados -p cache-pool ls
file1
[root@ceph-node1 ~]#


[root@ceph-node1 ~]# date
Sun Sep 14 00:59:33 EEST 2014
[root@ceph-node1 ~]# rados -p data ls
[root@ceph-node1 ~]# 
[root@ceph-node1 ~]# rados -p cache-pool ls
file1
[root@ceph-node1 ~]#


[root@ceph-node1 ~]# date
Sun Sep 14 01:08:02 EEST 2014
[root@ceph-node1 ~]# rados -p data ls
[root@ceph-node1 ~]# rados -p cache-pool ls
file1
[root@ceph-node1 ~]#



[root@ceph-node1 ~]# rados -p cache-pool  cache-flush-evict-all
file1
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# rados -p data ls
file1
[root@ceph-node1 ~]# rados -p cache-pool ls
[root@ceph-node1 ~]#


Regards
Karan Singh


* Re: Cache tier unable to auto flush data to storage tier
From: Jean-Charles LOPEZ @ 2014-09-14  0:42 UTC
  To: Karan Singh; +Cc: ceph-users, ceph-devel, Ceph Community

Hi Karan,

Maybe try setting the dirty byte ratio (flush trigger) and the full ratio (eviction trigger), just to see if it makes any difference:
- cache_target_dirty_ratio .1
- cache_target_full_ratio .2

Tune the percentages as desired relative to target_max_bytes and target_max_objects. Whichever threshold is reached first (object count or byte count) will trigger flushing or eviction.
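
For example, using the same pool-set syntax as in the original message (the .1 and .2 values here are just starting points to experiment with, not recommendations):

ceph osd pool set cache-pool cache_target_dirty_ratio .1
ceph osd pool set cache-pool cache_target_full_ratio .2

With target_max_bytes at 10000, a dirty ratio of .1 would start flushing at roughly 1000 dirty bytes, and a full ratio of .2 would start evicting at roughly 2000 bytes.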

JC



