Subject: Re: [PATCH v6 2/6] mm/cma: introduce new zone, ZONE_CMA
To: Joonsoo Kim
References: <1476414196-3514-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1476414196-3514-3-git-send-email-iamjoonsoo.kim@lge.com>
 <58184B28.8090405@hisilicon.com>
 <20161107061500.GA21159@js1304-P5Q-DELUXE>
 <58202881.5030004@hisilicon.com>
 <20161107072702.GC21159@js1304-P5Q-DELUXE>
 <582030CB.80905@hisilicon.com>
 <5820313A.80207@hisilicon.com>
 <20161108035942.GA31767@js1304-P5Q-DELUXE>
CC: Andrew Morton, Rik van Riel, Johannes Weiner, Laura Abbott,
 Minchan Kim, Marek Szyprowski, Michal Nazarewicz, "Aneesh Kumar K.V",
 Vlastimil Babka, Zhuangluan Su, Dan Zhao
From: Chen Feng
Message-ID: <582177C7.7010706@hisilicon.com>
Date: Tue, 8 Nov 2016 14:59:19 +0800
In-Reply-To: <20161108035942.GA31767@js1304-P5Q-DELUXE>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2016/11/8 11:59, Joonsoo Kim wrote:
> On Mon, Nov 07, 2016 at 03:46:02PM +0800, Chen Feng wrote:
>> On 2016/11/7 15:44, Chen Feng wrote:
>>> On 2016/11/7 15:27, Joonsoo Kim wrote:
>>>> On Mon, Nov 07, 2016 at 03:08:49PM +0800, Chen Feng wrote:
>>>>> On 2016/11/7 14:15, Joonsoo Kim wrote:
>>>>>> On Tue, Nov 01, 2016 at 03:58:32PM +0800, Chen Feng wrote:
>>>>>>> Hello, I have a question about the CMA zone.
>>>>>>>
>>>>>>> When we have a CMA zone, it will be the highest zone in the system.
>>>>>>>
>>>>>>> On an Android system, the biggest memory allocator is ION; the media
>>>>>>> subsystem allocates unmovable memory from it.
>>>>>>>
>>>>>>> Under memory pressure, will the CMA zone always get balanced?
>>>>>>
>>>>>> An allocation request for a low zone (the normal zone) would not cause
>>>>>> the CMA zone to be balanced, since that isn't helpful.
>>>>>>
>>>>> Yes, but the CMA zone will run out soon, and then it always needs to be
>>>>> balanced.
>>>>>
>>>>> How about using migrate-CMA before movable and letting the CMA type fall
>>>>> back to movable?
>>>>>
>>>>> https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1263745.html
>>>>
>>>> The ZONE_CMA approach will act like your solution. Could you elaborate
>>>> more on the problem with the zone approach?
>>>>
>>> The ZONE approach puts the CMA pages into a zone of their own. It can
>>> cause higher swap-in/out than using migrate-CMA first.
>
> Interesting result. I should look at it more deeply. Could you explain
> to me why the ZONE approach causes higher swap-in/out?
>

That is just what the result shows; I don't have an obvious explanation.
My guess is that adding a zone means more balancing is needed to keep the
watermark of the CMA zone, and the CMA zone is always used first, while
the test case allocates the same amount of memory in total. (See the
sketches appended at the end of this mail.)

>>> The higher swap-in/out may have a performance impact on applications:
>>> they may spend too much time swapping memory back in.
>>>
>>> You can see my test results attached for the details. The baseline is
>>> the result of [1].
>>>
>> My test case runs 60 applications and allocates 512MB of ION memory.
>>
>> This action is repeated 50 times.
>
> Could you tell me more detail about your test?
> Kernel version? Total memory? Total CMA memory? Android system? What
> type of memory does ION use? Other statistics? Etc...

Tested on kernel 4.1, Android 7, with 512MB of CMA in 4GB of memory. ION
uses normal unmovable memory here; I use it to simulate a camera-open
operation.

>
> If it was tested on Android, I'm not sure that we need to consider
> its result. Android has a low-memory killer, which is quite different
> from the normal reclaim behaviour.

Why?

> Thanks.
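
To make the "use migrate-CMA before movable" idea above concrete, here is a
tiny stand-alone C model of the two fallback orders. It is only a sketch:
the pools structure, the numbers, and the function names
(alloc_movable_first, alloc_cma_first) are invented for illustration and
are not the kernel's allocator interfaces.

/*
 * Toy user-space model (not kernel code) of two policies for satisfying
 * movable allocations: take ordinary movable pages first and fall back
 * to CMA pageblocks, or drain the CMA pageblocks first.
 */
#include <stdio.h>

struct pools {
	long cma_free;      /* free pages sitting in CMA pageblocks      */
	long movable_free;  /* free pages in ordinary movable pageblocks */
};

/* Baseline: ordinary movable pages first, CMA only as a fallback. */
static int alloc_movable_first(struct pools *p)
{
	if (p->movable_free > 0) { p->movable_free--; return 0; }
	if (p->cma_free > 0)     { p->cma_free--;     return 0; }
	return -1;               /* empty: reclaim/balancing would kick in */
}

/* Proposed order: use CMA pageblocks first, keep movable pages in reserve. */
static int alloc_cma_first(struct pools *p)
{
	if (p->cma_free > 0)     { p->cma_free--;     return 0; }
	if (p->movable_free > 0) { p->movable_free--; return 0; }
	return -1;
}

int main(void)
{
	struct pools a = { .cma_free = 4, .movable_free = 4 };
	struct pools b = a;

	for (int i = 0; i < 6; i++) {
		alloc_movable_first(&a);
		alloc_cma_first(&b);
	}
	printf("movable-first: cma=%ld movable=%ld\n", a.cma_free, a.movable_free);
	printf("cma-first:     cma=%ld movable=%ld\n", b.cma_free, b.movable_free);
	return 0;
}

With the same six allocations, the CMA-first policy leaves more ordinary
movable pages free, which is the intent described above.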
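
And for the watermark point, a minimal sketch of a zone-watermark style
check, loosely in the spirit of the kernel's zone_watermark_ok(); the
zone_model struct, the zone_needs_balance() helper, and the numbers are
simplified stand-ins, not the real mm/ interfaces.

/*
 * Minimal sketch, not kernel code: kswapd-style "does this zone need
 * balancing?" check.  With an extra ZONE_CMA there is one more zone
 * whose low watermark has to be kept.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	long free_pages;     /* pages currently free in this zone          */
	long low_wmark;      /* below this, kswapd starts reclaiming       */
	long lowmem_reserve; /* pages held back for lower-zone allocations */
};

static bool zone_needs_balance(const struct zone_model *z)
{
	return z->free_pages < z->low_wmark + z->lowmem_reserve;
}

int main(void)
{
	/* Made-up numbers: a nearly drained CMA zone vs. a healthy movable zone. */
	struct zone_model cma     = { .free_pages =  100, .low_wmark = 512, .lowmem_reserve = 0 };
	struct zone_model movable = { .free_pages = 8192, .low_wmark = 512, .lowmem_reserve = 0 };

	printf("CMA zone needs balance:     %d\n", zone_needs_balance(&cma));
	printf("movable zone needs balance: %d\n", zone_needs_balance(&movable));
	return 0;
}

Because the CMA zone is used first and is comparatively small, it hits its
low watermark quickly, so reclaim (and hence swap-in/out) is triggered more
often even though the test allocates the same amount of memory in total.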