From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gui Jianfeng
Subject: Re: [PATCH 07/18] io-controller: Export disk time used and nr sectors dispatched through cgroups
Date: Thu, 14 May 2009 15:53:18 +0800
Message-ID: <4A0BCDEE.5020708__3354.01791749081$1242287790$gmane$org@cn.fujitsu.com>
References: <1241553525-28095-1-git-send-email-vgoyal@redhat.com>
 <1241553525-28095-8-git-send-email-vgoyal@redhat.com>
 <4A0A32CB.4020609@cn.fujitsu.com>
 <20090513145127.GB7696@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <20090513145127.GB7696-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sender: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Errors-To: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
To: Vivek Goyal
Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
 snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
 dm-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
 jens.axboe-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org,
 agk-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
 balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
 paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org,
 fernando-gVGce1chcLdL9jVzuh4AOg@public.gmane.org,
 jmoyer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
 fchecconi-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org,
 containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
 righi.andrea-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
List-Id: containers.vger.kernel.org

Vivek Goyal wrote:
> On Wed, May 13, 2009 at 10:39:07AM +0800, Gui Jianfeng wrote:
>> Vivek Goyal wrote:
>> ...
>>>
>>> +/*
>>> + * traverse through all the io_groups associated with this cgroup and calculate
>>> + * the aggr disk time received by all the groups on respective disks.
>>> + */
>>> +static u64 calculate_aggr_disk_time(struct io_cgroup *iocg)
>>> +{
>>> +	struct io_group *iog;
>>> +	struct hlist_node *n;
>>> +	u64 disk_time = 0;
>>> +
>>> +	rcu_read_lock();
>>
>> This function is in the slow path, so there is no need to call rcu_read_lock(); we just
>> need to ensure that the caller already holds iocg->lock.
>>
>
> Or can we get rid of the requirement for iocg_lock here and just read the io
> group data under the rcu read lock? Actually I am wondering why we require
> an iocg_lock here. We are not modifying the RCU-protected list. We are
> just traversing through it and reading the data.

Yes, I think removing the iocg->lock from the caller (io_cgroup_disk_time_read()) is the better choice.

>
> Thanks
> Vivek
>
>>> +	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
>>> +		/*
>>> +		 * There might be groups which are not functional and
>>> +		 * waiting to be reclaimed upon cgroup deletion.
>>> +		 */
>>> +		if (rcu_dereference(iog->key))
>>> +			disk_time += iog->entity.total_service;
>>> +	}
>>> +	rcu_read_unlock();
>>> +
>>> +	return disk_time;
>>> +}
>>> +
>>
>> --
>> Regards
>> Gui Jianfeng
>

--
Regards
Gui Jianfeng
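For reference, the change being agreed on above would leave rcu_read_lock() as the only protection on the read side, since the traversal merely reads the RCU-protected group_data list; iocg->lock would then be needed only by writers that add or remove io_groups. A minimal sketch of the resulting caller (kernel-style C, not a standalone compilable unit; the cftype read-callback signature and a cgroup_to_io_cgroup() helper are assumed from this patch series, not confirmed here):

```c
/*
 * Sketch only: depends on struct io_cgroup / struct io_group and the
 * cgroup_to_io_cgroup() helper from this patch series (assumed names).
 */
static u64 io_cgroup_disk_time_read(struct cgroup *cgroup,
					struct cftype *cftype)
{
	struct io_cgroup *iocg = cgroup_to_io_cgroup(cgroup);

	/*
	 * No spin_lock_irq(&iocg->lock) around the read: the hlist
	 * traversal in calculate_aggr_disk_time() is protected by the
	 * rcu_read_lock() it already takes internally, which is
	 * sufficient for a reader that never modifies the list.
	 */
	return calculate_aggr_disk_time(iocg);
}
```

Writers adding to or removing from iocg->group_data would still take iocg->lock (plus the RCU list primitives) so concurrent readers always see a consistent list.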