From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 May 2016 17:17:21 +0900
From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky
Cc: Andrew Morton, Sergey Senozhatsky
Subject: Re: [PATCH v6 11/12] zsmalloc: page migration support
Message-ID: <20160524081721.GC29094@bbox>
References: <1463754225-31311-1-git-send-email-minchan@kernel.org>
 <1463754225-31311-12-git-send-email-minchan@kernel.org>
 <20160524052824.GA496@swordfish>
 <20160524062801.GB29094@bbox>
 <20160524080511.GB496@swordfish>
In-Reply-To: <20160524080511.GB496@swordfish>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 24, 2016 at 05:05:11PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (05/24/16 15:28), Minchan Kim wrote:
> [..]
> > The most important point to me is that it makes the code *simple* at
> > the cost of some additional wasted memory. Now every zspage lives in
> > *a* list, so we don't need to check a zspage's fullness group before
> > using list_empty() on it.
> > I'm not sure how much simpler you feel it makes the code. However,
> > while implementing the page migration logic, the check on whether a
> > zspage's group is almost_empty or almost_full was really bogus and
> > tricky to me, so I had to debug several times to find what was wrong.
> >
> > Compared to the old code, zsmalloc is getting more complicated day by
> > day, so I want to weight toward *simple* for easy maintenance.
> >
> > One more note: ZS_EMPTY is now used as a pool; look at
> > find_get_zspage(). So adding an "empty" column to ZSMALLOC_STAT might
> > be worthwhile, but I wanted to handle that as a separate topic.
> >
> > So unless you feel strongly that the saving is really huge, I want to
> > go with this. If we end up adding more wasted memory in the future,
> > let's handle it then.
>
> oh, sure, all those micro-optimizations can be done later,
> off the series.
>
> > About CONFIG_ZSMALLOC_STAT, it might be off-topic, but frankly
> > speaking I have guided our production team to enable it: when I
> > profiled the overhead caused by ZSMALLOC_STAT, there was no
> > performance loss in a real workload, and the stats give more
> > detailed, useful information.
>
> ok, agree.
> good to know that you use stats in production, by the way.
>
> [..]
> > > > +		pos = (((class->objs_per_zspage * class->size) *
> > > > +			page_idx / class->pages_per_zspage) / class->size
> > > > +			) * class->size;
> > >
> > > something went wrong with the indentation here :)
> > >
> > > so... it's
> > >
> > > (((class->objs_per_zspage * class->size) * page_idx / class->pages_per_zspage) / class->size) * class->size;
> > >
> > > the last '/ class->size) * class->size' can be dropped, I think.
> >
> > You've proved I didn't learn math. Will drop it.
>
> haha, no, that wasn't the point :) great job with the series!
>
> [..]
> > > hm... zsmalloc is getting sooo complex now.
> > >
> > > `system_wq' -- can we have problems here when the system is getting
> > > low on memory and workers are getting increasingly busy trying to
> > > allocate memory for other purposes?
> > >
> > > _theoretically_ zsmalloc can stack up a number of ready-to-release
> > > zspages, which won't be accessible to zsmalloc, nor will they be
> > > released. how likely is this? hm, can zsmalloc take zspages from
> > > that deferred-release list when it wants to allocate a new zspage?
> >
> > Done.
>
> oh, good. that was a purely theoretical thing, and to continue with the
> theories, I assume that zs_malloc() will improve with this change. the
> sort of problem with zs_malloc(), *I think*, is that we release the
> class ->lock after a failed find_get_zspage():
>
> 	handle = cache_alloc_handle(pool, gfp);
> 	if (!handle)
> 		return 0;
>
> 	zspage = find_get_zspage(class);
> 	if (likely(zspage)) {
> 		obj = obj_malloc(class, zspage, handle);
> 		[..]
> 		spin_unlock(&class->lock);
>
> 		return handle;
> 	}
>
> 	spin_unlock(&class->lock);
>
> 	zspage = alloc_zspage(pool, class, gfp);
> 	if (!zspage) {
> 		cache_free_handle(pool, handle);
> 		return 0;
> 	}
>
> 	spin_lock(&class->lock);
> 	obj = obj_malloc(class, zspage, handle);
> 	[..]
> 	spin_unlock(&class->lock);
>
> _theoretically_, on a not-really-huge system, let's say 64 CPUs for
> example, we can have 64 write paths trying to store objects of size
> OBJ_SZ to a ZS_FULL class-OBJSZ. each of those write paths will fail
> on find_get_zspage(), unlock the class ->lock (so another write path
> gets its chance to fail on find_get_zspage()), alloc_zspage(), create
> a page chain, then spin on the class ->lock to add the new zspage to
> the class. so we can end up allocating up to 64 zspages, each carrying
> N PAGE_SIZE pages. those zspages, at least at the beginning, will
> store only one object each, which will blow up the internal
> fragmentation and can cause more compaction/migration/etc later on.
> well, it's a bit pessimistic, but I think this scenario is quite
> possible to _some extent_.
>
> I assume that this "pick an already marked-for-release zspage" thing
> happens as a fast path within the first class ->lock section, so the
> rest of the concurrent write requests spinning on the class ->lock at
> that moment will see a zspage instead of a failed find_get_zspage().

As well, we would reduce the page alloc/free cost, although it's not
expensive compared to the compression overhead. :)

Thanks for giving it thought!