From: Michal Hocko
To:
Cc: Naoya Horiguchi, Mike Kravetz, Mel Gorman, Vlastimil Babka, Andrew Morton, LKML
Subject: [RFC PATCH 0/4] mm, hugetlb: allow proper node fallback dequeue
Date: Tue, 13 Jun 2017 11:00:35 +0200
Message-Id: <20170613090039.14393-1-mhocko@kernel.org>
X-Mailer: git-send-email 2.11.0

Hi,
while working on a hugetlb migration issue addressed in a separate
patchset [1] I have noticed that hugetlb allocations from the
preallocated pool are quite suboptimal. There is no fallback mechanism
implemented and no notion of a preferred node. I have tried to work
around this in [2], but Vlastimil was right to push back and ask for a
more robust solution. It seems that such a solution is to reuse the
zonelist approach we use for the page allocator.

This series has 4 patches. The first one tries to make the hugetlb
allocation layers clearer. The second one implements the zonelist-based
hugetlb pool allocation and introduces a preferred node semantic, which
is used by the migration callbacks. The third and the last patches are
pure cleanups. Note that this series depends on [1] (without its last
patch, which is replaced by this work). A rough sketch of the intended
zonelist-based dequeue is appended after the references below.

You can find the whole series in
git://git.kernel.org/pub/scm/linux/kernel/git/mhocko/mm.git
branch attempts/hugetlb-zonelists

I am sending this as an RFC because I might be missing some subtle
dependencies which led to the original design.

Shortlog

Michal Hocko (4):
      mm, hugetlb: unclutter hugetlb allocation layers
      hugetlb: add support for preferred node to alloc_huge_page_nodemask
      mm, hugetlb: get rid of dequeue_huge_page_node
      mm, hugetlb, soft_offline: use new_page_nodemask for soft offline migration

And the diffstat looks promising as well

 include/linux/hugetlb.h |   3 +-
 include/linux/migrate.h |   2 +-
 mm/hugetlb.c            | 233 ++++++++++++++++--------------------------------
 mm/memory-failure.c     |  10 +--
 4 files changed, 82 insertions(+), 166 deletions(-)

[1] http://lkml.kernel.org/r/20170608074553.22152-1-mhocko@kernel.org
[2] http://lkml.kernel.org/r/20170608074553.22152-5-mhocko@kernel.org
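
A minimal sketch of the zonelist-based pool dequeue described above, purely
to make the intended fallback behaviour concrete. The function and helper
names used here (dequeue_huge_page_nodemask_sketch,
dequeue_huge_page_node_exact) are assumptions for illustration and are not
taken from the series itself; the sketch also assumes the caller already
holds hugetlb_lock and omits the cpuset and reservation checks the real
dequeue path has to perform.

static struct page *dequeue_huge_page_nodemask_sketch(struct hstate *h,
		gfp_t gfp_mask, int preferred_nid, nodemask_t *nmask)
{
	struct zonelist *zonelist = node_zonelist(preferred_nid, gfp_mask);
	struct zoneref *z;
	struct zone *zone;

	/*
	 * Walk the zones in the preferred node's zonelist, restricted to
	 * nmask, so fallback nodes are visited in increasing distance from
	 * preferred_nid - the same order the page allocator would use.
	 */
	for_each_zone_zonelist_nodemask(zone, z, zonelist,
					gfp_zone(gfp_mask), nmask) {
		int nid = zone_to_nid(zone);
		struct page *page;

		/* Nothing preallocated on this node, try the next one. */
		if (!h->free_huge_pages_node[nid])
			continue;

		/* Hypothetical per-node helper popping hugepage_freelists[nid]. */
		page = dequeue_huge_page_node_exact(h, nid);
		if (page)
			return page;
	}

	return NULL;
}

The attraction of going through node_zonelist(preferred_nid, gfp_mask)
rather than a hand-rolled node loop is that the fallback order and the
nodemask/zone filtering come for free from the page allocator's existing
iterators, which is presumably what allows patch 3 to get rid of
dequeue_huge_page_node altogether.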