linux-mm.kvack.org archive mirror
* [RFC PATCH 0/2] Add predictive memory reclamation and compaction
@ 2019-08-13  1:40 Khalid Aziz
  2019-08-13  1:40 ` [RFC PATCH 1/2] mm: Add trend based prediction algorithm for memory usage Khalid Aziz
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Khalid Aziz @ 2019-08-13  1:40 UTC (permalink / raw)
  To: akpm, vbabka, mgorman, mhocko, dan.j.williams
  Cc: Khalid Aziz, osalvador, richard.weiyang, hannes, arunks, rppt,
	jgg, amir73il, alexander.h.duyck, linux-mm, linux-kernel-mentees,
	linux-kernel


Page reclamation and compaction are triggered in response to reaching the
low watermark. This makes reclamation/compaction reactive, based upon a
snapshot of the system at a point in time. When that point is reached, the
system is already suffering from a free memory shortage and must now try
to recover. Recovery can often land the system in the direct
reclamation/compaction path, and while recovery happens, workloads start
to experience unpredictable memory allocation latencies. In real life,
forced direct reclamation has been seen to cause a sudden spike in the time
it takes to populate a new database, or extraordinary, unpredictable
latency in launching a new server on a cloud platform. These events create
SLA violations which are expensive for businesses.

If the kernel could foresee a potential free page exhaustion or
fragmentation event well before it happens, it could start reclamation
proactively and avoid allocation stalls. A time-based trend line for
available free pages can reveal such potential future events by charting
the current memory consumption trend on the system.

These patches propose a way to capture enough memory usage information
to compute a trend line based upon the most recent data. The trend line is
graphed with the x-axis showing time and the y-axis showing the number of
free pages. The proposal is to capture the number of free pages at
opportune moments along with the current timestamp. Once the system has
enough data points (the lookback window for trend analysis), fit a line of
the form y=mx+c to these points using the least squares regression method.
As time advances, these points can be updated with new data points and a
new best fit line can be computed. Capturing these data points and
computing trend lines for pages of order 0-MAX_ORDER allows us to foresee
not only the free page exhaustion point but also severe fragmentation
points in the future.
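
For reference, below is a minimal userspace sketch of the least squares
fit described above. It uses floating point and made-up sample data for
brevity; the in-kernel implementation in patch 1 (mm/lsq.c) uses integer
math with the slope scaled by 100, and all names in the sketch are purely
illustrative.

#include <stdio.h>

#define LOOKBACK 8	/* number of (time, free pages) samples kept */

/*
 * Fit y = m*x + c to n samples using the standard least squares
 * formulas. Returns -1 if the slope is undefined (all x identical).
 */
static int fit_line(const double *x, const double *y, int n,
		    double *m, double *c)
{
	double sx = 0, sy = 0, sxy = 0, sxx = 0, d;
	int i;

	for (i = 0; i < n; i++) {
		sx += x[i];
		sy += y[i];
		sxy += x[i] * y[i];
		sxx += x[i] * x[i];
	}
	d = n * sxx - sx * sx;
	if (d == 0)
		return -1;
	*m = (n * sxy - sx * sy) / d;
	*c = (sy - *m * sx) / n;
	return 0;
}

int main(void)
{
	/* time in ms and free pages, draining at roughly 50 pages/ms */
	double t[LOOKBACK] = { 0, 100, 200, 300, 400, 500, 600, 700 };
	double free_pages[LOOKBACK] = { 100000, 95100, 89900, 85000,
					80100, 74900, 70000, 65100 };
	double m, c;

	if (fit_line(t, free_pages, LOOKBACK, &m, &c) == 0)
		printf("slope %.1f pages/ms, intercept %.0f, y=0 at ~%.0f ms\n",
		       m, c, -c / m);
	return 0;
}

On the sample data above this prints a slope of roughly -50 pages/ms and
projects y=0 at about 2000 ms, which is the kind of estimate the
kernel-side code acts upon.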

If the line representing the trend for total free pages has a negative
slope (hence trending downward), solving y=mx+c for x with y=0 tells us
at what point the system would run out of free pages if the current trend
continues. If the average rate of page reclamation is computed by observing
page reclamation behavior, that information can be used to compute when to
start reclamation so that the number of free pages does not fall to 0 or
below the low watermark if the current memory consumption trend were to
continue.
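
As a worked example with made-up numbers: if the fitted line for total
free pages is y = -50x + 100000 (x in milliseconds, y in pages), setting
y = 0 gives x = -c/m = 2000, i.e. on the current trend free pages run out
roughly 2 seconds from now. If past observations show reclamation freeing
pages at some average rate R, then rebuilding a reserve of P pages takes
roughly P/R time units, so reclamation should be started at least that far
ahead of the projected exhaustion point.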

Similarly, if the kernel tracks the level of fragmentation for each page
order (which can be done by computing the number of free pages below that
order), a trend line for each order can be used to compute the point in
time when no more pages of that order will be available for allocation.
If the trend line represents the number of unusable pages for that order,
the intersection of this line with the line representing the total number
of free pages is the point of 100% fragmentation. This holds true because
at this intersection point all free pages are of lower order. The
intersection point of two lines y = m0*x + c0 and y = m1*x + c1 can be
computed mathematically as x = (c1 - c0)/(m0 - m1), which yields the x and
y coordinates on the time and free pages graph. If the average rate of
compaction is computed by timing previous compaction runs, the kernel can
compute how soon it needs to start compaction to avoid this 100%
fragmentation point.
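
As a sketch with illustrative numbers: if the total free pages line is
y = -10x + 80000 and the unusable-pages line for order 4 is
y = 5x + 50000, setting the two equal gives
x = (80000 - 50000) / (5 - (-10)) = 2000, so order 4 pages would be 100%
fragmented about 2000 time units from now. The average duration of past
compaction runs then tells the kernel how much earlier than that point
kcompactd has to be woken.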

Patch 1 adds code to maintain a sliding lookback window of (time, number
of free pages) points which can be updated continuously, and adds code to
compute a best fit line across these points. It also adds code to use the
best fit lines to determine if the kernel must start reclamation or
compaction.

Patch 2 adds code to collect data points on free pages of various orders
at different points in time, uses code in patch 1 to update the sliding
lookback window with these points, and kicks off reclamation or
compaction based upon the results it gets.

Patch 1 maintains a fixed size lookback window. A fixed size lookback
window limits the amount of data that has to be maintained to compute a
best fit line. Routine mem_predict() in patch 1 uses the best fit line to
determine the immediate need for reclamation or compaction. To simplify
the initial concept implementation, it uses a fixed time threshold for
when compaction should start in anticipation of impending fragmentation.
Similarly, it uses a fixed minimum percentage of free pages as the
criterion to determine if it is time to start reclamation when the current
trend line shows a continued drop in the number of free pages. Both of
these criteria can be improved upon in the final implementation by taking
the rate of compaction and the rate of reclamation into account.

Patch 2 collects data points for the best fit line in kswapd before we
decide if kswapd should go to sleep or continue reclamation. It then
uses that data to delay kswapd from sleeping and continue reclamation.
Potential fragmentation information obtained from the best fit line is
used to decide if the zone watermark should be boosted to avert impending
fragmentation. This data is also used in balance_pgdat() to determine if
kcompactd should be woken up to start compaction.
get_page_from_freelist() might be a better place to gather data points
and make the decision on starting reclamation or compaction, but it can
also impact page allocation latency. Another possibility is to create a
separate kernel thread that gathers page usage data periodically and
wakes up kswapd or kcompactd as needed based upon trend analysis. This
is something that can be finalized before the final implementation of
this proposal.

The impact of this implementation was measured using two sets of tests.
The first test consists of three concurrent dd processes writing large
amounts of data (66 GB, 131 GB and 262 GB) to three different SSDs,
causing a large number of free pages to be used up for buffer/page cache.
The number of cumulative allocation stalls as reported by /proc/vmstat was
recorded for 5 runs of this test.

5.3-rc2
-------

allocstall_dma 0
allocstall_dma32 0
allocstall_normal 15
allocstall_movable 1629
compact_stall 0

Total = 1644


5.3-rc2 + this patch series
---------------------------

allocstall_dma 0
allocstall_dma32 0
allocstall_normal 182
allocstall_movable 1266
compact_stall 0

Total = 1544

There was no significant change in system time between these runs. This
is a ~6% improvement in the number of allocation stalls.

A second test used was the parallel dd test from mmtests. The average
number of stalls over 4 runs with the unpatched 5.3-rc2 kernel was 6057.
The average number of stalls over 4 runs after applying these patches was
5584. This is an ~8% improvement in the number of allocation stalls.

This work is complementary to other allocation/compaction stall
improvements. It attempts to address potential stalls proactively before
they happen and will make use of any improvements made to the
reclamation/compaction code.

Any feedback on this proposal and associated implementation will be
greatly appreciated. This is work in progress.

Khalid Aziz (2):
  mm: Add trend based prediction algorithm for memory usage
  mm/vmscan: Add fragmentation prediction to kswapd

 include/linux/mmzone.h |  72 +++++++++++
 mm/Makefile            |   2 +-
 mm/lsq.c               | 273 +++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c        |  27 ----
 mm/vmscan.c            | 116 ++++++++++++++++-
 5 files changed, 456 insertions(+), 34 deletions(-)
 create mode 100644 mm/lsq.c

-- 
2.20.1




* [RFC PATCH 1/2] mm: Add trend based prediction algorithm for memory usage
  2019-08-13  1:40 [RFC PATCH 0/2] Add predictive memory reclamation and compaction Khalid Aziz
@ 2019-08-13  1:40 ` Khalid Aziz
  2019-08-13  1:40 ` [RFC PATCH 2/2] mm/vmscan: Add fragmentation and page starvation prediction to kswapd Khalid Aziz
  2019-08-13 14:05 ` [RFC PATCH 0/2] Add predictive memory reclamation and compaction Michal Hocko
  2 siblings, 0 replies; 17+ messages in thread
From: Khalid Aziz @ 2019-08-13  1:40 UTC (permalink / raw)
  To: akpm, vbabka, mgorman, mhocko, dan.j.williams
  Cc: Khalid Aziz, osalvador, richard.weiyang, hannes, arunks, rppt,
	jgg, amir73il, alexander.h.duyck, linux-mm, linux-kernel-mentees,
	linux-kernel, Bharath Vedartham, Vandana BN

Direct page reclamation and compaction have high and unpredictable
latency costs for applications. This patch adds code to predict if the
system is about to run out of free memory by watching historical
memory consumption trends. It computes a best fit line to this
historical data using the method of least squares. It can then compute
whether the system will run out of memory if the current trend continues.
Historical data is held in a new data structure, lsq_struct, for each
zone and each order within the zone. The size of the window for
historical data is given by LSQ_LOOKBACK.

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Bharath Vedartham <linux.bhar@gmail.com>
Reviewed-by: Vandana BN <bnvandana@gmail.com>
---
 include/linux/mmzone.h |  34 +++++
 mm/Makefile            |   2 +-
 mm/lsq.c               | 273 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 mm/lsq.c

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d77d717c620c..9a0e5cab7171 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -355,6 +355,38 @@ struct per_cpu_nodestat {
 
 #endif /* !__GENERATING_BOUNDS.H */
 
+/*
+ * Size of lookback window for the free memory exhaustion prediction
+ * algorithm. Keep it to less than 16 to keep data manageable
+ */
+#define LSQ_LOOKBACK 8
+
+/*
+ * How far forward to look when determining if memory exhaustion would
+ * become an issue.
+ */
+extern unsigned long mempredict_threshold;
+
+/*
+ * Structure to keep track of current values required to compute the best
+ * fit line using method of least squares
+ */
+struct lsq_struct {
+	bool ready;
+	int next;
+	u64 x[LSQ_LOOKBACK];
+	unsigned long y[LSQ_LOOKBACK];
+};
+
+struct frag_info {
+	unsigned long free_pages;
+	unsigned long time;
+};
+
+/* Possible bits to be set by mem_predict in its return value */
+#define MEMPREDICT_RECLAIM	0x01
+#define MEMPREDICT_COMPACT	0x02
+
 enum zone_type {
 #ifdef CONFIG_ZONE_DMA
 	/*
@@ -581,6 +613,8 @@ enum zone_flags {
 					 */
 };
 
+extern int mem_predict(struct frag_info *frag_vec, struct zone *zone);
+
 static inline unsigned long zone_managed_pages(struct zone *zone)
 {
 	return (unsigned long)atomic_long_read(&zone->managed_pages);
diff --git a/mm/Makefile b/mm/Makefile
index 338e528ad436..fb7b3c19dd13 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -39,7 +39,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o mmu_context.o percpu.o slab_common.o \
 			   compaction.o vmacache.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o gup.o $(mmu-y)
+			   debug.o gup.o lsq.o $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
diff --git a/mm/lsq.c b/mm/lsq.c
new file mode 100644
index 000000000000..6005a2b2f44d
--- /dev/null
+++ b/mm/lsq.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * lsq.c: Provide a prediction on whether free memory exhaustion is
+ *	imminent or not by using a best fit line based upon method of
+ *	least squares. Best fit line is based upon recent historical
+ *	data. This historical data forms the lookback window for the
+ *	algorithm.
+ *
+ *
+ * Author: Robert Harris
+ * Author: Khalid Aziz <khalid.aziz@oracle.com>
+ *
+ * Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <linux/math64.h>
+
+/*
+ * How far forward to look when determining if fragmentation would
+ * become an issue. The unit for this is the same as the unit for the
+ * x-axis of the graph where sample points for memory utilization are
+ * being plotted. We start with a default value of 1000 units but can
+ * tweak it dynamically to get better prediction results. With data points
+ * for memory being gathered with a granularity of milliseconds, this
+ * translates to a look ahead of 1 second. If the system is 1 second away
+ * from severe fragmentation, start compaction now to avoid direct compaction.
+ */
+unsigned long mempredict_threshold = 1000;
+
+/*
+ * Threshold for number of free pages that should trigger reclamation,
+ * expressed as percentage of total number of pages
+ */
+#define MEMRECLAMATION_THRESHOLD	20
+
+/*
+ * This function inserts the given value into the list of most recently seen
+ * data and returns the parameters, m and c, of a straight line of the form
+ * y = (mx/100) + c that, according to the method of least squares,
+ * fits them best. This implementation looks at just the last few data points
+ * (lookback window) which allows for a fixed amount of storage required for
+ * data points and a nearly fixed time to calculate the best fit line. Using
+ * a line equation of the form y=(mx/100)+c instead of y=mx+c allows us to
+ * avoid floating point operations since m can often be fractional.
+ */
+static int
+lsq_fit(struct lsq_struct *lsq, unsigned long new_y, u64 new_x,
+	long long *m, long long *c)
+{
+	u64 sigma_x, sigma_y;
+	u64 sigma_xy, sigma_xx;
+	long long slope_divisor;
+	int i, next;
+	u64 x_offset;
+
+	next = lsq->next++;
+	lsq->x[next] = new_x;
+	lsq->y[next] = new_y;
+
+	if (lsq->next == LSQ_LOOKBACK) {
+		lsq->next = 0;
+		/*
+		 * Lookback window is full, which means a reasonable
+		 * best fit line can be computed. Flag enough data
+		 * is available in lookback window now.
+		 */
+		lsq->ready = true;
+	}
+
+	/*
+	 * If lookback window is not full, do not continue with
+	 * computing slope and intercept of best fit line.
+	 */
+	if (!lsq->ready)
+		return -1;
+
+	/*
+	 * If lookback window is full, compute slope and intercept
+	 * for the best fit line. In the process of computing those, we need
+	 * to compute squares of values along the x-axis. Square values can be
+	 * large enough to overflow 64-bits if they are large enough to
+	 * begin with. To solve this problem, transform the line on
+	 * x-axis so the first point falls at x=0. Since lsq->x is a
+	 * circular buffer, lsq->next points to the oldest entry in this
+	 * buffer.
+	 */
+	x_offset = lsq->x[lsq->next];
+	for (i = 0; i < LSQ_LOOKBACK; i++)
+		lsq->x[i] -= x_offset;
+
+	/*
+	 * Lookback window is full. Compute slope and intercept
+	 * for the best fit line
+	 */
+	sigma_x = sigma_y = sigma_xy = sigma_xx = 0;
+	for (i = 0; i < LSQ_LOOKBACK; i++) {
+		sigma_x += lsq->x[i];
+		sigma_y += lsq->y[i];
+		sigma_xy += (lsq->x[i] * lsq->y[i]);
+		sigma_xx += (lsq->x[i] * lsq->x[i]);
+	}
+
+	/*
+	 * guard against divide-by-zero
+	 */
+	slope_divisor = LSQ_LOOKBACK * sigma_xx - sigma_x * sigma_x;
+	if (slope_divisor == 0)
+		return -1;
+	*m = div64_s64(((LSQ_LOOKBACK * sigma_xy - sigma_x * sigma_y) * 100),
+			slope_divisor);
+
+	*c = div64_long((sigma_y - *m * sigma_x), LSQ_LOOKBACK);
+
+	/*
+	 * Restore original values for x-axis
+	 */
+	for (i = 0; i < LSQ_LOOKBACK; ++i)
+		lsq->x[i] += x_offset;
+
+	return 0;
+}
+
+/*
+ * This function determines whether it is necessary to begin
+ * reclamation/compaction now in order to avert exhaustion of any of the
+ * free lists.
+ *
+ * Its basis is a simple model in which the total free memory, f_T, is
+ * consumed at a constant rate, R_T, i.e.
+ *
+ *	f_T(t) = R_T * t + f_T(0)
+ *
+ * For any given order, o > 0, members of subordinate lists constitute
+ * fragmented free memory, f_f(o): the blocks are notionally free but
+ * they are unavailable for allocation. The fragmented free memory is
+ * also assumed to behave linearly and in the absence of compaction is
+ * given by
+ *
+ *	f_f(o, t) = R_f(o) t + f_f(o, 0)
+ *
+ * Order 0 function represents current trend line for total free pages
+ * instead.
+ *
+ * It is assumed that all allocations will be made from contiguous
+ * memory meaning that, under net memory pressure and with no change in
+ * fragmentation, f_T will become equal to f_f and subsequent allocations
+ * will stall in either direct compaction or reclaim. Preemptive compaction
+ * will delay the onset of exhaustion but, to be useful, must begin early
+ * enough and must proceed at a sufficient rate.
+ *
+ * On each invocation, this function obtains estimates for the
+ * parameters f_T(0), R_T, f_f(o, 0) and R_f(o). Using the best fit
+ * line, it then determines if reclamation or compaction should be started
+ * now to avert free pages exhaustion or severe fragmentation. Return value
+ * is a set of bits which represent which condition has been observed -
+ * potential free memory exhaustion, and potential severe fragmentation.
+ */
+int mem_predict(struct frag_info *frag_vec, struct zone *zone)
+{
+	int order, retval = 0;
+	long long m[MAX_ORDER];
+	long long c[MAX_ORDER];
+	bool is_ready = true;
+	long long x_cross;
+	struct lsq_struct *lsq = zone->mem_prediction;
+
+	/*
+	 * Compute the trend line for fragmentation on each order page.
+	 * For order 0 pages, it will be a trend line showing rate
+	 * of consumption of pages. For higher order pages, trend line
+	 * shows loss/gain of pages of that order. When the trend line
+	 * for example for order n pages intersects with trend line for
+	 * for, say, order n pages intersects with the trend line for
+	 * (n-1) or lower and there is 100% fragmentation of order n
+	 * pages. Kernel must compact pages at this point to gain
+	 * new order n pages.
+	 */
+	for (order = 0; order < MAX_ORDER; order++) {
+		if (lsq_fit(&lsq[order], frag_vec[order].free_pages,
+				frag_vec[order].time, &m[order],
+				&c[order]) == -1)
+			is_ready = false;
+	}
+
+	if (!is_ready)
+		return 0;
+
+	/*
+	 * Trend line for each order page is available now. If the trend
+	 * line for overall free pages is trending upwards (positive
+	 * slope), there is no need to reclaim pages but there may be
+	 * need to compact pages if system is running out of contiguous pages
+	 * for higher orders.
+	 */
+	if (m[0] >= 0) {
+		for (order = 1; order < MAX_ORDER; order++) {
+			/*
+			 * If lines are parallel, then they never intersect.
+			 */
+			if (m[0] == m[order])
+				continue;
+			/*
+			 * Find the point of intersection of the two lines.
+			 * The point of intersection represents 100%
+			 * fragmentation for this order.
+			 */
+			x_cross = div64_s64(((c[0] - c[order]) * 100),
+					(m[order] - m[0]));
+
+			/*
+			 * If they intersect anytime soon in the future
+			 * or intersected recently in the past, then it
+			 * is time for compaction and there is no need
+			 * to continue evaluating remaining order pages
+			 *
+			 * TODO: Instead of a fixed time threshold,
+			 * track compaction rate on the system and compute
+			 * how soon should compaction be started with the
+			 * current compaction rate to avoid direct
+			 * compaction
+			 */
+			if ((x_cross < mempredict_threshold) &&
+				(x_cross > -mempredict_threshold)) {
+				retval |= MEMPREDICT_COMPACT;
+				return retval;
+			}
+		}
+	} else {
+		unsigned long threshold;
+
+		/*
+		 * Trend line for overall free pages is showing a
+		 * negative trend. Check if less than threshold
+		 * pages are free. If so, start reclamation now to stave
+		 * off memory exhaustion
+		 *
+		 * TODO: This is not the best way to use trend analysis.
+		 * The right way to determine if it is time to start
+		 * reclamation to avoid memory exhaustion is to compute
+		 * how far away is exhaustion (least square fit
+		 * line can provide that) and what is the average rate of
+		 * memory reclamation. Using those two rates, compute how
+		 * far in advance of exhaustion should reclamation be
+		 * started to avoid exhaustion. This can be done after
+		 * additional code has been added to keep track of current
+		 * rate of reclamation.
+		 */
+		threshold = (zone_managed_pages(zone)*MEMRECLAMATION_THRESHOLD)
+				/100;
+		if (frag_vec[0].free_pages < threshold)
+			retval |= MEMPREDICT_RECLAIM;
+	}
+
+	return retval;
+}
-- 
2.20.1




* [RFC PATCH 2/2] mm/vmscan: Add fragmentation and page starvation prediction to kswapd
  2019-08-13  1:40 [RFC PATCH 0/2] Add predictive memory reclamation and compaction Khalid Aziz
  2019-08-13  1:40 ` [RFC PATCH 1/2] mm: Add trend based prediction algorithm for memory usage Khalid Aziz
@ 2019-08-13  1:40 ` Khalid Aziz
  2019-08-13 14:05 ` [RFC PATCH 0/2] Add predictive memory reclamation and compaction Michal Hocko
  2 siblings, 0 replies; 17+ messages in thread
From: Khalid Aziz @ 2019-08-13  1:40 UTC (permalink / raw)
  To: akpm, vbabka, mgorman, mhocko, dan.j.williams
  Cc: Khalid Aziz, osalvador, richard.weiyang, hannes, arunks, rppt,
	jgg, amir73il, alexander.h.duyck, linux-mm, linux-kernel-mentees,
	linux-kernel, Bharath Vedartham, Vandana BN

This patch adds proactive memory reclamation to kswapd using the
free page exhaustion/fragmentation prediction based upon memory
consumption trend. It uses the least squares fit algorithm introduced
earlier for this prediction. A new function node_trend_analysis()
iterates through all zones and updates trend data in the lookback
window for least square fit algorithm. At the same time it flags any
zones that have potential for exhaustion/fragmentation by setting
ZONE_POTENTIAL_FRAG flag.

prepare_kswapd_sleep() calls node_trend_analysis() to check if the
node has potential exhaustion/fragmentation. If so, kswapd will
continue reclamation. balance_pgdat() has been modified to take
potential fragmentation into account when deciding when to wake
kcompactd up. Any zones that have potential severe fragmentation get
their watermarks boosted to reclaim and compact free pages proactively.

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: Bharath Vedartham <linux.bhar@gmail.com>
Tested-by: Vandana BN <bnvandana@gmail.com>
---
 include/linux/mmzone.h |  38 ++++++++++++++
 mm/page_alloc.c        |  27 ----------
 mm/vmscan.c            | 116 ++++++++++++++++++++++++++++++++++++++---
 3 files changed, 148 insertions(+), 33 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9a0e5cab7171..a523476b5ce1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -587,6 +587,12 @@ struct zone {
 
 	bool			contiguous;
 
+	/*
+	 * Structures to use for memory consumption prediction for
+	 * each order
+	 */
+	struct lsq_struct	mem_prediction[MAX_ORDER];
+
 	ZONE_PADDING(_pad3_)
 	/* Zone statistics */
 	atomic_long_t		vm_stat[NR_VM_ZONE_STAT_ITEMS];
@@ -611,6 +617,9 @@ enum zone_flags {
 	ZONE_BOOSTED_WATERMARK,		/* zone recently boosted watermarks.
 					 * Cleared when kswapd is woken.
 					 */
+	ZONE_POTENTIAL_FRAG,		/* zone detected with a potential
+					 * external fragmentation event.
+					 */
 };
 
 extern int mem_predict(struct frag_info *frag_vec, struct zone *zone);
@@ -1130,6 +1139,35 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
 #define for_each_zone_zonelist(zone, z, zlist, highidx) \
 	for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, NULL)
 
+extern int watermark_boost_factor;
+
+static inline void boost_watermark(struct zone *zone)
+{
+	unsigned long max_boost;
+
+	if (!watermark_boost_factor)
+		return;
+
+	max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
+			watermark_boost_factor, 10000);
+
+	/*
+	 * high watermark may be uninitialised if fragmentation occurs
+	 * very early in boot so do not boost. We do not fall
+	 * through and boost by pageblock_nr_pages as failing
+	 * allocations that early means that reclaim is not going
+	 * to help and it may even be impossible to reclaim the
+	 * boosted watermark resulting in a hang.
+	 */
+	if (!max_boost)
+		return;
+
+	max_boost = max(pageblock_nr_pages, max_boost);
+
+	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
+		max_boost);
+}
+
 #ifdef CONFIG_SPARSEMEM
 #include <asm/sparsemem.h>
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 272c6de1bf4e..1b4e6ba16f1c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2351,33 +2351,6 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 	return false;
 }
 
-static inline void boost_watermark(struct zone *zone)
-{
-	unsigned long max_boost;
-
-	if (!watermark_boost_factor)
-		return;
-
-	max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
-			watermark_boost_factor, 10000);
-
-	/*
-	 * high watermark may be uninitialised if fragmentation occurs
-	 * very early in boot so do not boost. We do not fall
-	 * through and boost by pageblock_nr_pages as failing
-	 * allocations that early means that reclaim is not going
-	 * to help and it may even be impossible to reclaim the
-	 * boosted watermark resulting in a hang.
-	 */
-	if (!max_boost)
-		return;
-
-	max_boost = max(pageblock_nr_pages, max_boost);
-
-	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
-		max_boost);
-}
-
 /*
  * This function implements actual steal behaviour. If order is large enough,
  * we can steal whole pageblock. If not, we first move freepages in this
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 44df66a98f2a..b9cf6658c83d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -51,6 +51,7 @@
 #include <linux/printk.h>
 #include <linux/dax.h>
 #include <linux/psi.h>
+#include <linux/jiffies.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -3397,14 +3398,82 @@ static void clear_pgdat_congested(pg_data_t *pgdat)
 	clear_bit(PGDAT_WRITEBACK, &pgdat->flags);
 }
 
+/*
+ * Update trend data and perform trend analysis for a zone to foresee
+ * a low memory or severe fragmentation event
+ */
+static int zone_trend_analysis(struct zone *zone)
+{
+	struct frag_info frag_vec[MAX_ORDER];
+	int order, result;
+	unsigned long total_free_pages;
+	unsigned long curr_free_pages;
+
+	total_free_pages = frag_vec[0].free_pages = 0;
+	for (order = 0; order < MAX_ORDER; order++) {
+		curr_free_pages = zone->free_area[order].nr_free << order;
+		total_free_pages += curr_free_pages;
+
+		if (order < MAX_ORDER - 1) {
+			frag_vec[order + 1].free_pages =
+				frag_vec[order].free_pages + curr_free_pages;
+			frag_vec[order + 1].time =
+				jiffies64_to_msecs(get_jiffies_64()
+				- INITIAL_JIFFIES);
+		}
+	}
+	frag_vec[0].free_pages = total_free_pages;
+	frag_vec[0].time = frag_vec[MAX_ORDER - 1].time;
+
+	result = mem_predict(frag_vec, zone);
+
+	return result;
+}
+
+/*
+ * Perform trend analysis for memory usage for each zone in the node to
+ * detect potential upcoming low memory or fragmented memory conditions
+ */
+static int node_trend_analysis(pg_data_t *pgdat, int classzone_idx)
+{
+	struct zone *zone = NULL;
+	int i, retval = 0;
+
+	for (i = 0; i <= classzone_idx; i++) {
+		int zoneval;
+
+		zone = pgdat->node_zones + i;
+
+		if (!managed_zone(zone))
+			continue;
+
+		/*
+		 * Check if trend analysis shows potential fragmentation
+		 * in the near future
+		 */
+		zoneval = zone_trend_analysis(zone);
+		if (zoneval & MEMPREDICT_COMPACT)
+			set_bit(ZONE_POTENTIAL_FRAG, &zone->flags);
+		if (zoneval & MEMPREDICT_RECLAIM)
+			boost_watermark(zone);
+		retval |= zoneval;
+	}
+
+	return retval;
+}
+
 /*
  * Prepare kswapd for sleeping. This verifies that there are no processes
  * waiting in throttle_direct_reclaim() and that watermarks have been met.
+ * It also checks if this node could have a potential external fragmentation
+ * event which could lead to direct reclaim/compaction stalls.
  *
  * Returns true if kswapd is ready to sleep
  */
 static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, int classzone_idx)
 {
+	int retval;
+
 	/*
 	 * The throttled processes are normally woken up in balance_pgdat() as
 	 * soon as allow_direct_reclaim() is true. But there is a potential
@@ -3425,6 +3494,21 @@ static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, int classzone_idx)
 	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
 		return true;
 
+	/*
+	 * exhaustion in the near future. If trend analysis shows such
+	 * exhaustion in near future. If trend analysis shows such
+	 * an event occurring, don't allow kswapd to sleep so
+	 * reclamation starts now to prevent memory exhaustion. If
+	 * trend analysis shows no impending memory exhaustion but
+	 * shows impending severe fragmentation, return true to
+	 * wake up kcompactd.
+	 */
+	retval = node_trend_analysis(pgdat, classzone_idx);
+	if (retval & MEMPREDICT_RECLAIM)
+		return false;
+	if (retval & MEMPREDICT_COMPACT)
+		return true;
+
 	if (pgdat_balanced(pgdat, order, classzone_idx)) {
 		clear_pgdat_congested(pgdat);
 		return true;
@@ -3498,6 +3582,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 	unsigned long nr_boost_reclaim;
 	unsigned long zone_boosts[MAX_NR_ZONES] = { 0, };
 	bool boosted;
+	bool potential_frag = 0;
+	bool need_compact;
 	struct zone *zone;
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
@@ -3524,9 +3610,27 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 
 		nr_boost_reclaim += zone->watermark_boost;
 		zone_boosts[i] = zone->watermark_boost;
+
+		/*
+		 * Check if any of the zones could have a potential
+		 * fragmentation event.
+		 */
+		if (test_bit(ZONE_POTENTIAL_FRAG, &zone->flags)) {
+			potential_frag = 1;
+			clear_bit(ZONE_POTENTIAL_FRAG, &zone->flags);
+		}
 	}
 	boosted = nr_boost_reclaim;
 
+	/*
+	 * If kswapd is woken up because of watermark boosting or forced
+	 * to run another balance_pgdat run because it detected an
+	 * external fragmentation event, run compaction after
+	 * reclaiming some pages. need_compact is true if such compaction
+	 * is required.
+	 */
+	need_compact = boosted || potential_frag;
+
 restart:
 	sc.priority = DEF_PRIORITY;
 	do {
@@ -3645,7 +3749,6 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		 */
 		nr_reclaimed = sc.nr_reclaimed - nr_reclaimed;
 		nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed);
-
 		/*
 		 * If reclaim made no progress for a boost, stop reclaim as
 		 * IO cannot be queued and it could be an infinite loop in
@@ -3676,13 +3779,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 			zone->watermark_boost -= min(zone->watermark_boost, zone_boosts[i]);
 			spin_unlock_irqrestore(&zone->lock, flags);
 		}
+	}
 
-		/*
-		 * As there is now likely space, wakeup kcompact to defragment
-		 * pageblocks.
-		 */
+	/*
+	 * As there is now likely space, wakeup kcompactd to defragment
+	 * pageblocks.
+	 */
+	if (need_compact)
 		wakeup_kcompactd(pgdat, pageblock_order, classzone_idx);
-	}
 
 	snapshot_refaults(NULL, pgdat);
 	__fs_reclaim_release();
-- 
2.20.1




* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-13  1:40 [RFC PATCH 0/2] Add predictive memory reclamation and compaction Khalid Aziz
  2019-08-13  1:40 ` [RFC PATCH 1/2] mm: Add trend based prediction algorithm for memory usage Khalid Aziz
  2019-08-13  1:40 ` [RFC PATCH 2/2] mm/vmscan: Add fragmentation and page starvation prediction to kswapd Khalid Aziz
@ 2019-08-13 14:05 ` Michal Hocko
  2019-08-13 15:20   ` Khalid Aziz
  2 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2019-08-13 14:05 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
[...]
> Patch 1 adds code to maintain a sliding lookback window of (time, number
> of free pages) points which can be updated continuously and adds code to
> compute best fit line across these points. It also adds code to use the
> best fit lines to determine if kernel must start reclamation or
> compaction.
> 
> Patch 2 adds code to collect data points on free pages of various orders
> at different points in time, uses code in patch 1 to update sliding
> lookback window with these points and kicks off reclamation or
> compaction based upon the results it gets.

An important piece of information missing in your description is why
do we need to keep that logic in the kernel. In other words, we have
the background reclaim that acts on a wmark range and those are tunable
from the userspace. The primary point of this background reclaim is to
keep balance and prevent from direct reclaim. Why cannot you implement
this or any other dynamic trend watching watchdog and tune watermarks
accordingly? Something similar applies to kcompactd although we might be
lacking a good interface.
-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-13 14:05 ` [RFC PATCH 0/2] Add predictive memory reclamation and compaction Michal Hocko
@ 2019-08-13 15:20   ` Khalid Aziz
  2019-08-14  8:58     ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Khalid Aziz @ 2019-08-13 15:20 UTC (permalink / raw)
  To: Michal Hocko
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On 8/13/19 8:05 AM, Michal Hocko wrote:
> On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
> [...]
>> Patch 1 adds code to maintain a sliding lookback window of (time, number
>> of free pages) points which can be updated continuously and adds code to
>> compute best fit line across these points. It also adds code to use the
>> best fit lines to determine if kernel must start reclamation or
>> compaction.
>>
>> Patch 2 adds code to collect data points on free pages of various orders
>> at different points in time, uses code in patch 1 to update sliding
>> lookback window with these points and kicks off reclamation or
>> compaction based upon the results it gets.
> 
> An important piece of information missing in your description is why
> do we need to keep that logic in the kernel. In other words, we have
> the background reclaim that acts on a wmark range and those are tunable
> from the userspace. The primary point of this background reclaim is to
> keep balance and prevent from direct reclaim. Why cannot you implement
> this or any other dynamic trend watching watchdog and tune watermarks
> accordingly? Something similar applies to kcompactd although we might be
> lacking a good interface.
> 

Hi Michal,

That is a very good question. As a matter of fact the initial prototype
to assess the feasibility of this approach was written in userspace for
a very limited application. We wrote the initial prototype to monitor
fragmentation and used /sys/devices/system/node/node*/compact to trigger
compaction. The prototype demonstrated this approach has merits.

The primary reason to implement this logic in the kernel is to make the
kernel self-tuning. The more knobs we have externally, the more complex
it becomes to tune the kernel externally. If we can make the kernel
self-tuning, we can actually eliminate external knobs and simplify
kernel admin. In spite of the availability of tuning knobs and a large
number of tuning guides for databases and cloud platforms, allocation
stalls are a routinely occurring problem on customer deployments. A best
fit line algorithm has negligible impact on system performance yet provides
measurable improvement and room for further refinement. Makes sense?

Thanks,
Khalid




* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-13 15:20   ` Khalid Aziz
@ 2019-08-14  8:58     ` Michal Hocko
  2019-08-15 16:27       ` Khalid Aziz
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2019-08-14  8:58 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Tue 13-08-19 09:20:51, Khalid Aziz wrote:
> On 8/13/19 8:05 AM, Michal Hocko wrote:
> > On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
> > [...]
> >> Patch 1 adds code to maintain a sliding lookback window of (time, number
> >> of free pages) points which can be updated continuously and adds code to
> >> compute best fit line across these points. It also adds code to use the
> >> best fit lines to determine if kernel must start reclamation or
> >> compaction.
> >>
> >> Patch 2 adds code to collect data points on free pages of various orders
> >> at different points in time, uses code in patch 1 to update sliding
> >> lookback window with these points and kicks off reclamation or
> >> compaction based upon the results it gets.
> > 
> > An important piece of information missing in your description is why
> > do we need to keep that logic in the kernel. In other words, we have
> > the background reclaim that acts on a wmark range and those are tunable
> > from the userspace. The primary point of this background reclaim is to
> > keep balance and prevent from direct reclaim. Why cannot you implement
> > this or any other dynamic trend watching watchdog and tune watermarks
> > accordingly? Something similar applies to kcompactd although we might be
> > lacking a good interface.
> > 
> 
> Hi Michal,
> 
> That is a very good question. As a matter of fact the initial prototype
> to assess the feasibility of this approach was written in userspace for
> a very limited application. We wrote the initial prototype to monitor
> fragmentation and used /sys/devices/system/node/node*/compact to trigger
> compaction. The prototype demonstrated this approach has merits.
> 
> The primary reason to implement this logic in the kernel is to make the
> kernel self-tuning.

What makes this particular self-tuning a universal win? In other words,
there are many ways to analyze the memory pressure and feed it back
that I can think of. It is quite likely that very specific workloads
would have very specific demands there. I have seen cases where a
trivial increase of min_free_kbytes to a normally insane value worked
really great for a DB workload because the wasted memory didn't matter,
for example.

> The more knobs we have externally, the more complex
> it becomes to tune the kernel externally.

I agree on this point. Is the current set of tuning sufficient? What
would be missing if not?
-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-14  8:58     ` Michal Hocko
@ 2019-08-15 16:27       ` Khalid Aziz
  2019-08-15 17:02         ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Khalid Aziz @ 2019-08-15 16:27 UTC (permalink / raw)
  To: Michal Hocko
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On 8/14/19 2:58 AM, Michal Hocko wrote:
> On Tue 13-08-19 09:20:51, Khalid Aziz wrote:
>> On 8/13/19 8:05 AM, Michal Hocko wrote:
>>> On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
>>> [...]
>>>> Patch 1 adds code to maintain a sliding lookback window of (time, number
>>>> of free pages) points which can be updated continuously and adds code to
>>>> compute best fit line across these points. It also adds code to use the
>>>> best fit lines to determine if kernel must start reclamation or
>>>> compaction.
>>>>
>>>> Patch 2 adds code to collect data points on free pages of various orders
>>>> at different points in time, uses code in patch 1 to update sliding
>>>> lookback window with these points and kicks off reclamation or
>>>> compaction based upon the results it gets.
>>>
>>> An important piece of information missing in your description is why
>>> do we need to keep that logic in the kernel. In other words, we have
>>> the background reclaim that acts on a wmark range and those are tunable
>>> from the userspace. The primary point of this background reclaim is to
>>> keep balance and prevent from direct reclaim. Why cannot you implement
>>> this or any other dynamic trend watching watchdog and tune watermarks
>>> accordingly? Something similar applies to kcompactd although we might be
>>> lacking a good interface.
>>>
>>
>> Hi Michal,
>>
>> That is a very good question. As a matter of fact the initial prototype
>> to assess the feasibility of this approach was written in userspace for
>> a very limited application. We wrote the initial prototype to monitor
>> fragmentation and used /sys/devices/system/node/node*/compact to trigger
>> compaction. The prototype demonstrated this approach has merits.
>>
>> The primary reason to implement this logic in the kernel is to make the
>> kernel self-tuning.
> 
> What makes this particular self-tuning an universal win? In other words
> there are many ways to analyze the memory pressure and feedback it back
> that I can think of. It is quite likely that very specific workloads
> would have very specific demands there. I have seen cases where are
> trivial increase of min_free_kbytes to normally insane value worked
> really great for a DB workload because the wasted memory didn't matter
> for example.

Hi Michal,

The problem is not so much whether we have enough knobs available, but
rather how we tweak them dynamically to avoid allocation stalls. Knobs like
watermarks and min_free_kbytes are typically set once and left alone.
Allocation stalls show up even on a much smaller scale than large DB or
cloud platforms. I have seen it on a desktop class machine running a few
services in the background. The desktop is running gnome3; I would lock the
screen and come back to unlock it a day or two later. In that time most
of memory has been consumed by buffer/page cache. Just unlocking the
screen can take 30+ seconds while the system reclaims pages to be able to
swap back in all the processes that were inactive so far.

It is true different workloads will have different requirements and that
is what I am attempting to address here. Instead of tweaking the knobs
statically based upon one workload's requirements, I am looking at the
trend of memory consumption instead. A best fit line showing the recent
trend can be quite indicative of what the workload is doing in terms of
memory. For instance, a cloud server might be running a certain number
of instances for a few days and it can end up using any memory not used
up by tasks for buffer/page cache. Now the sys admin gets a request to
launch another instance and when they try to do that, the system starts
to allocate pages and soon runs out of free pages. We are now in the direct
reclaim path and it can take a significant amount of time to find all the
free pages the new task needs. If the kernel were watching the memory
consumption trend instead, it could see that the trend line shows a
complete exhaustion of free pages or 100% fragmentation in the near future,
irrespective of what the workload is. This allows the kernel to start
reclamation/compaction before we actually hit the point of complete free
page exhaustion or fragmentation. This could avoid direct
reclamation/compaction or at least cut down its severity enough. That is
what makes it a win in a large number of cases. The least squares algorithm
is lightweight enough to not add to system load or complexity. If you have
come across a better algorithm, I certainly would look into using that.

> 
>> The more knobs we have externally, the more complex
>> it becomes to tune the kernel externally.
> 
> I agree on this point. Is the current set of tunning sufficient? What
> would be missing if not?
> 

We have a knob available to force compaction immediately. That is helpful
and in some cases, sys admins have resorted to forcing compaction on all
zones before launching a new cloud instance or loading a new database.
Some admins have resorted to using /proc/sys/vm/drop_caches to force
buffer/page cache pages to be freed up. Either of these solutions causes
system load to go up immediately while kswapd/kcompactd run to free up
and compact pages. This is far from ideal. Other available knobs seem to
be hard to set correctly, especially on servers that run mixed workloads,
which results in a regular stream of customer complaints coming in about
the system stalling at the most inopportune times.

I appreciate this discussion. This is how we can get to a solution that
actually works.

Thanks,
Khalid





* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-15 16:27       ` Khalid Aziz
@ 2019-08-15 17:02         ` Michal Hocko
  2019-08-15 20:51           ` Khalid Aziz
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2019-08-15 17:02 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Thu 15-08-19 10:27:26, Khalid Aziz wrote:
> On 8/14/19 2:58 AM, Michal Hocko wrote:
> > On Tue 13-08-19 09:20:51, Khalid Aziz wrote:
> >> On 8/13/19 8:05 AM, Michal Hocko wrote:
> >>> On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
> >>> [...]
> >>>> Patch 1 adds code to maintain a sliding lookback window of (time, number
> >>>> of free pages) points which can be updated continuously and adds code to
> >>>> compute best fit line across these points. It also adds code to use the
> >>>> best fit lines to determine if kernel must start reclamation or
> >>>> compaction.
> >>>>
> >>>> Patch 2 adds code to collect data points on free pages of various orders
> >>>> at different points in time, uses code in patch 1 to update sliding
> >>>> lookback window with these points and kicks off reclamation or
> >>>> compaction based upon the results it gets.
> >>>
> >>> An important piece of information missing in your description is why
> >>> do we need to keep that logic in the kernel. In other words, we have
> >>> the background reclaim that acts on a wmark range and those are tunable
> >>> from the userspace. The primary point of this background reclaim is to
> >>> keep balance and prevent from direct reclaim. Why cannot you implement
> >>> this or any other dynamic trend watching watchdog and tune watermarks
> >>> accordingly? Something similar applies to kcompactd although we might be
> >>> lacking a good interface.
> >>>
> >>
> >> Hi Michal,
> >>
> >> That is a very good question. As a matter of fact the initial prototype
> >> to assess the feasibility of this approach was written in userspace for
> >> a very limited application. We wrote the initial prototype to monitor
> >> fragmentation and used /sys/devices/system/node/node*/compact to trigger
> >> compaction. The prototype demonstrated this approach has merits.
> >>
> >> The primary reason to implement this logic in the kernel is to make the
> >> kernel self-tuning.
> > 
> > What makes this particular self-tuning an universal win? In other words
> > there are many ways to analyze the memory pressure and feedback it back
> > that I can think of. It is quite likely that very specific workloads
> > would have very specific demands there. I have seen cases where are
> > trivial increase of min_free_kbytes to normally insane value worked
> > really great for a DB workload because the wasted memory didn't matter
> > for example.
> 
> Hi Michal,
> 
> The problem is not so much as do we have enough knobs available, rather
> how do we tweak them dynamically to avoid allocation stalls. Knobs like
> watermarks and min_free_kbytes are set once typically and left alone.

Does anything prevent you from tuning these knobs more dynamically based
on the already exported metrics?

> Allocation stalls show up even on much smaller scale than large DB or
> cloud platforms. I have seen it on a desktop class machine running a few
> services in the background. Desktop is running gnome3, I would lock the
> screen and come back to unlock it a day or two later. In that time most
> of memory has been consumed by buffer/page cache. Just unlocking the
> screen can take 30+ seconds while system reclaims pages to be able swap
> back in all the processes that were inactive so far.

This sounds like a bug to me.

> It is true different workloads will have different requirements and that
> is what I am attempting to address here. Instead of tweaking the knobs
> statically based upon one workload requirements, I am looking at the
> trend of memory consumption instead. A best fit line showing recent
> trend can be quite indicative of what the workload is doing in terms of
> memory.

Is there anything preventing you from following that trend from
userspace and triggering background reclaim earlier so you never even get
to direct reclaim, though?

> For instance, a cloud server might be running a certain number
> of instances for a few days and it can end up using any memory not used
> up by tasks, for buffer/page cache. Now the sys admin gets a request to
> launch another instance and when they try to to do that, system starts
> to allocate pages and soon runs out of free pages. We are now in direct
> reclaim path and it can take significant amount of time to find all free
> pages the new task needs. If the kernel were watching the memory
> consumption trend instead, it could see that the trend line shows a
> complete exhaustion of free pages or 100% fragmentation in near future,
> irrespective of what the workload is.

I am confused now. How can an unpredictable action (like a sys admin
starting a new workload) be handled by watching a memory consumption
history trend? From the above description I would expect that the system
would be in a balanced state for a few days when a new instance is
launched. The only reasonable thing to do then is to trigger the reclaim
before the workload is spawned, but then what is the actual difference
between direct reclaim and an early reclaim?

[...]
> > I agree on this point. Is the current set of tunning sufficient? What
> > would be missing if not?
> > 
> 
> We have knob available to force compaction immediately. That is helpful
> and in some case, sys admins have resorted to forcing compaction on all
> zones before launching a new cloud instance or loading a new database.
> Some admins have resorted to using /proc/sys/vm/drop_caches to force
> buffer/page cache pages to be freed up. Either of these solutions causes
> system load to go up immediately while kswapd/kcompactd run to free up
> and compact pages. This is far from ideal. Other knobs available seem to
> be hard to set correctly especially on servers that run mixed workloads
> which results in a regular stream of customer complaints coming in about
> system stalling at most inopportune times.

Then let's talk about what is missing in the existing tuning we already
provide. I do agree that compaction needs some love, but I am under the
impression that min_free_kbytes and watermark_*_factor should give a
decent abstraction to control the background reclaim. If that is not the
case then I am really interested in examples, because I might easily be
missing something there.

Thanks!
-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-15 17:02         ` Michal Hocko
@ 2019-08-15 20:51           ` Khalid Aziz
  2019-08-21 14:06             ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Khalid Aziz @ 2019-08-15 20:51 UTC (permalink / raw)
  To: Michal Hocko
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On 8/15/19 11:02 AM, Michal Hocko wrote:
> On Thu 15-08-19 10:27:26, Khalid Aziz wrote:
>> On 8/14/19 2:58 AM, Michal Hocko wrote:
>>> On Tue 13-08-19 09:20:51, Khalid Aziz wrote:
>>>> On 8/13/19 8:05 AM, Michal Hocko wrote:
>>>>> On Mon 12-08-19 19:40:10, Khalid Aziz wrote:
>>>>> [...]
>>>>>> Patch 1 adds code to maintain a sliding lookback window of (time, number
>>>>>> of free pages) points which can be updated continuously and adds code to
>>>>>> compute best fit line across these points. It also adds code to use the
>>>>>> best fit lines to determine if kernel must start reclamation or
>>>>>> compaction.
>>>>>>
>>>>>> Patch 2 adds code to collect data points on free pages of various orders
>>>>>> at different points in time, uses code in patch 1 to update sliding
>>>>>> lookback window with these points and kicks off reclamation or
>>>>>> compaction based upon the results it gets.
>>>>>
>>>>> An important piece of information missing in your description is why
>>>>> do we need to keep that logic in the kernel. In other words, we have
>>>>> the background reclaim that acts on a wmark range and those are tunable
>>>>> from the userspace. The primary point of this background reclaim is to
>>>>> keep balance and prevent from direct reclaim. Why cannot you implement
>>>>> this or any other dynamic trend watching watchdog and tune watermarks
>>>>> accordingly? Something similar applies to kcompactd although we might be
>>>>> lacking a good interface.
>>>>>
>>>>
>>>> Hi Michal,
>>>>
>>>> That is a very good question. As a matter of fact the initial prototype
>>>> to assess the feasibility of this approach was written in userspace for
>>>> a very limited application. We wrote the initial prototype to monitor
>>>> fragmentation and used /sys/devices/system/node/node*/compact to trigger
>>>> compaction. The prototype demonstrated this approach has merits.
>>>>
>>>> The primary reason to implement this logic in the kernel is to make the
>>>> kernel self-tuning.
>>>
>>> What makes this particular self-tuning an universal win? In other words
>>> there are many ways to analyze the memory pressure and feedback it back
>>> that I can think of. It is quite likely that very specific workloads
>>> would have very specific demands there. I have seen cases where are
>>> trivial increase of min_free_kbytes to normally insane value worked
>>> really great for a DB workload because the wasted memory didn't matter
>>> for example.
>>
>> Hi Michal,
>>
>> The problem is not so much as do we have enough knobs available, rather
>> how do we tweak them dynamically to avoid allocation stalls. Knobs like
>> watermarks and min_free_kbytes are set once typically and left alone.
> 
> Does anything prevent from tuning these knobs more dynamically based on
> already exported metrics?

Hi Michal,

The smarts for tuning these knobs can be implemented in userspace and
more knobs added to allow for what is missing today, but we get back to
the same issue as before. That does nothing to make the kernel self-tuning
and adds possibly even more knobs to userspace. Something so fundamental
to kernel memory management as making free pages available when they are
needed really should be taken care of in the kernel itself. Moving it to
userspace just means the kernel is hobbled unless one installs and tunes
a userspace package correctly.

> 
>> Allocation stalls show up even on much smaller scale than large DB or
>> cloud platforms. I have seen it on a desktop class machine running a few
>> services in the background. Desktop is running gnome3, I would lock the
>> screen and come back to unlock it a day or two later. In that time most
>> of memory has been consumed by buffer/page cache. Just unlocking the
>> screen can take 30+ seconds while system reclaims pages to be able swap
>> back in all the processes that were inactive so far.
> 
> This sounds like a bug to me.

Quite possibly. I had seen that behavior with 4.17, 4.18 and 4.19
kernels. I then just moved enough tasks off of my machine to other
machines to make the problem go away. So I can't say if the problem has
persisted past 4.19.

> 
>> It is true different workloads will have different requirements and that
>> is what I am attempting to address here. Instead of tweaking the knobs
>> statically based upon one workload requirements, I am looking at the
>> trend of memory consumption instead. A best fit line showing recent
>> trend can be quite indicative of what the workload is doing in terms of
>> memory.
> 
> Is there anything preventing from following that trend from the
> userspace and trigger background reclaim earlier to not even get to the
> direct reclaim though?

It is possible to do that in userspace for compaction. We will need a
smaller hammer than drop_caches to do the same for reclamation. This
still makes the kernel dependent upon a properly configured userspace
program to do something as fundamental as free page management.
That does not sound like a good situation. Allocation stalls have been a
problem for many years (I could find a patch from as far back as 2002
attempting to address allocation stalls). More tuning knobs have been a
temporary solution at best since workloads and storage technology keep
changing and processors keep getting faster overall.
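
For reference, the prototype I mentioned earlier triggered compaction by
writing to the per-node sysfs file. A minimal sketch of that kind of
trigger (not the prototype's actual code; compact_node() is just an
illustrative name, it assumes root privileges and CONFIG_COMPACTION, and
error handling is kept to a minimum):

#include <stdio.h>

static int compact_node(int node)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/compact", node);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fputs("1\n", f);        /* any write starts compaction on this node */
        fclose(f);
        return 0;
}

(/proc/sys/vm/compact_memory does the same for all nodes at once.)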

> 
>> For instance, a cloud server might be running a certain number
>> of instances for a few days and it can end up using any memory not used
>> up by tasks, for buffer/page cache. Now the sys admin gets a request to
>> launch another instance and when they try to do that, the system starts
>> to allocate pages and soon runs out of free pages. We are now in direct
>> reclaim path and it can take significant amount of time to find all free
>> pages the new task needs. If the kernel were watching the memory
>> consumption trend instead, it could see that the trend line shows a
>> complete exhaustion of free pages or 100% fragmentation in near future,
>> irrespective of what the workload is.
> 
> I am confused now. How can an unpredictable action (like sys admin
> starting a new workload) be handled by watching a memory consumption
> history trend? From the above description I would expect that the system
> would be in a balanced state for few days when a new instance is
> launched. The only reasonable thing to do then is to trigger the reclaim
> before the workload is spawned but then what is the actual difference
> between direct reclaim and an early reclaim?

If the kernel watches the trend far enough ahead, it can start
reclaiming/compacting well in advance and keep direct reclamation at bay
even if there is a sudden surge in memory demand. A pathological case of
userspace suddenly demanding hundreds of GB of memory in one request is
always difficult to tackle. For such cases, triggering
reclamation/compaction and waiting to launch the new process until enough
free pages are available might be the only solution. A more normal case
will be a continuous stream of page allocations until a database is
fully populated or a new server instance is launched. It is like a
bucket with a hole. We can either wait to start filling it until the
water gets very low, or notice that the hole at the bottom has been
unplugged and water is draining fast, and start filling it before the
water gets too low. If we have been observing how fast the bucket fills
up with no leak and how fast the current drain is, we can start filling
far enough in advance that the water never gets too low. That is what I
referred to as improvements to the current patch, i.e. track the current
reclamation/compaction rates in kswapd and kcompactd and use those rates
to determine how far in advance we start reclaiming/compacting.
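
To make that concrete, here is a rough sketch of the two pieces
(illustrative only, not the patch code; fit_line() is the usual least
square fit over the lookback window, and the cushion-based start
condition in should_start_reclaim() is an assumption on my part, not a
tested policy):

struct sample {
        double t;               /* timestamp in seconds */
        double free_pages;      /* free pages observed at time t */
};

/* Least square fit of y = m*x + c; needs at least two distinct timestamps. */
static void fit_line(const struct sample *s, int n, double *m, double *c)
{
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        int i;

        for (i = 0; i < n; i++) {
                sx += s[i].t;
                sy += s[i].free_pages;
                sxx += s[i].t * s[i].t;
                sxy += s[i].t * s[i].free_pages;
        }
        *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        *c = (sy - *m * sx) / n;
}

/*
 * Start background reclaim once the forecast time to exhaustion is no
 * longer than the time needed to rebuild a cushion of free pages at the
 * observed reclaim rate (pages/sec, must be > 0).
 */
static int should_start_reclaim(double m, double c, double now,
                                double reclaim_rate, double cushion_pages)
{
        double time_to_empty, time_to_refill;

        if (m >= 0)
                return 0;                       /* not trending down */
        time_to_empty = -c / m - now;           /* where m*t + c hits zero */
        time_to_refill = cushion_pages / reclaim_rate;
        return time_to_empty <= time_to_refill;
}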

> 
> [...]
>>> I agree on this point. Is the current set of tuning sufficient? What
>>> would be missing if not?
>>>
>>
>> We have knob available to force compaction immediately. That is helpful
>> and in some case, sys admins have resorted to forcing compaction on all
>> zones before launching a new cloud instance or loading a new database.
>> Some admins have resorted to using /proc/sys/vm/drop_caches to force
>> buffer/page cache pages to be freed up. Either of these solutions causes
>> system load to go up immediately while kswapd/kcompactd run to free up
>> and compact pages. This is far from ideal. Other knobs available seem to
>> be hard to set correctly especially on servers that run mixed workloads
>> which results in a regular stream of customer complaints coming in about
>> system stalling at most inopportune times.
> 
> Then let's talk about what is missing in the existing tuning we already
> provide. I do agree that compaction needs some love but I am under
> impression that min_free_kbytes and watermark_*_factor should give a
> decent abstraction to control the background reclaim. If that is not the
> case then I am really interested on examples because I might be easily
> missing something there.

Just last week an email crossed my mailbox where an order 4 allocation
failed on a server that has 768 GB of memory and had 355,000 free pages
of order 2 and lower available at the time. That allocation failure
brought down an important service and was a significant disruption.

These knobs do give some control to userspace but their values depend
upon the workload and it is easy enough to set them wrong. Finding the
right values is not easy for servers that run mixed workloads. So it is
not that there are not enough knobs or that we cannot add more knobs. The
question is whether that is the right direction to go, or whether we make
the kernel self-tuning and give it the capability to deal with these
issues without requiring sys admins to determine correct values for these
knobs for every new workload.

Thanks,
Khalid



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-15 20:51           ` Khalid Aziz
@ 2019-08-21 14:06             ` Michal Hocko
  2019-08-26 20:44               ` Bharath Vedartham
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2019-08-21 14:06 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Thu 15-08-19 14:51:04, Khalid Aziz wrote:
> Hi Michal,
> 
> The smarts for tuning these knobs can be implemented in userspace and
> more knobs added to allow for what is missing today, but we get back to
> the same issue as before. That does nothing to make kernel self-tuning
> and adds possibly even more knobs to userspace. Something so fundamental
> to kernel memory management as making free pages available when they are
> needed really should be taken care of in the kernel itself. Moving it to
> userspace just means the kernel is hobbled unless one installs and tunes
> a userspace package correctly.

From my past experience the existing autotuning works mostly ok for a
vast variety of workloads. A more clever tuning is possible and people
are doing that already. Especially for cases when the machine is heavily
overcommitted. There are different ways to achieve that. Your new
in-kernel auto tuning would have to be tested on a large variety of
workloads to be proven and riskless. So I am quite skeptical to be
honest.

Therefore I would really focus on discussing whether we have sufficient
APIs to tune the kernel to do the right thing when needed. That requires
identifying gaps in that area.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-21 14:06             ` Michal Hocko
@ 2019-08-26 20:44               ` Bharath Vedartham
  2019-08-27  6:16                 ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Bharath Vedartham @ 2019-08-26 20:44 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Khalid Aziz, akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

Hi Michal,

Here are some of my thoughts,
On Wed, Aug 21, 2019 at 04:06:32PM +0200, Michal Hocko wrote:
> On Thu 15-08-19 14:51:04, Khalid Aziz wrote:
> > Hi Michal,
> > 
> > The smarts for tuning these knobs can be implemented in userspace and
> > more knobs added to allow for what is missing today, but we get back to
> > the same issue as before. That does nothing to make kernel self-tuning
> > and adds possibly even more knobs to userspace. Something so fundamental
> > to kernel memory management as making free pages available when they are
> > needed really should be taken care of in the kernel itself. Moving it to
> > userspace just means the kernel is hobbled unless one installs and tunes
> > a userspace package correctly.
> 
> From my past experience the existing autotuning works mostly ok for a
> vast variety of workloads. A more clever tuning is possible and people
> are doing that already. Especially for cases when the machine is heavily
> overcommitted. There are different ways to achieve that. Your new
> in-kernel auto tuning would have to be tested on a large variety of
> workloads to be proven and riskless. So I am quite skeptical to be
> honest.
Could you give some references to such works regarding tuning the kernel? 

Essentially, our idea here is to foresee potential memory exhaustion.
This foreseeing is done by observing the workload and its memory usage.
Based on these observations, we make a prediction about whether or not
memory exhaustion could occur. If memory exhaustion is foreseen, we
reclaim some more memory ahead of time. kswapd stops reclaim when the
high watermark (hwmark) is reached. hwmark is usually set to a fairly low
percentage of total memory; on my system, for zone Normal, hwmark is 13%
of total pages. So there is scope for reclaiming more pages to make sure
the system does not suffer from a lack of pages.
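
(For what it's worth, the watermarks are easy to compare against zone
size by parsing /proc/zoneinfo; a rough sketch, with the field names
taken from the current layout and error handling omitted:)

#include <stdio.h>

int main(void)
{
        char line[256], zone[32] = "?";
        unsigned long high = 0, managed = 0;
        FILE *f = fopen("/proc/zoneinfo", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "Node %*d, zone %31s", zone) == 1)
                        continue;
                if (sscanf(line, " high %lu", &high) == 1)
                        continue;
                if (sscanf(line, " managed %lu", &managed) == 1 && managed)
                        printf("zone %-8s high watermark %lu pages (%.2f%% of managed)\n",
                               zone, high, 100.0 * high / managed);
        }
        fclose(f);
        return 0;
}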

Since we are "predicting", there could be mistakes in our prediction.
The question is how bad are the mistakes? How much does a wrong
prediction cost?

A right prediction would be a win. We correctly predict that there could
be exhaustion, which leads to us reclaiming more memory (beyond hwmark)
and compacting memory beforehand (unlike kcompactd, which does it on
demand).

A wrong prediction, on the other hand, can fall into one of two
situations:
(i) We foresee memory exhaustion but there is no memory exhaustion in
the future. In this case, we would be reclaiming more memory for not a lot
of use. This situation is not entirely bad but we definitely waste a few
clock cycles.
(ii) We don't foresee memory exhaustion but there is memory exhaustion
in the future. This is a bad case where we may end up going into direct
compaction/reclaim. But it could be the case that the memory exhaustion
is far in the future and, even though we didn't see it, kswapd could have
reclaimed that memory or a drop_caches could have occurred.

How often we hit wrong predictions of type (ii) would really determine
our efficiency.

Coming to your situation of provisioning VMs: a situation where our work
would really help is a cloud burst. When the demand for VMs is very high,
our algorithm could adapt to the increase in demand for these VMs and
reclaim/compact more memory to reduce allocation stalls and improve
performance.
> Therefore I would really focus on discussing whether we have sufficient
> APIs to tune the kernel to do the right thing when needed. That requires
> to identify gaps in that area. 
One thing that comes to my mind is based on the issue Khalid mentioned
earlier, where his desktop took more than 30 seconds to unlock because
the caches had used up a lot of memory.
Rather than allowing any unused memory to be used as page cache, would it
be a good idea to fix a size for the caches and elastically change that
size based on the workload?

Thank you
Bharath

> -- 
> Michal Hocko
> SUSE Labs
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-26 20:44               ` Bharath Vedartham
@ 2019-08-27  6:16                 ` Michal Hocko
  2019-08-28 13:09                   ` Bharath Vedartham
  2019-08-30 21:35                   ` Khalid Aziz
  0 siblings, 2 replies; 17+ messages in thread
From: Michal Hocko @ 2019-08-27  6:16 UTC (permalink / raw)
  To: Bharath Vedartham
  Cc: Khalid Aziz, akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Tue 27-08-19 02:14:20, Bharath Vedartham wrote:
> Hi Michal,
> 
> Here are some of my thoughts,
> On Wed, Aug 21, 2019 at 04:06:32PM +0200, Michal Hocko wrote:
> > On Thu 15-08-19 14:51:04, Khalid Aziz wrote:
> > > Hi Michal,
> > > 
> > > The smarts for tuning these knobs can be implemented in userspace and
> > > more knobs added to allow for what is missing today, but we get back to
> > > the same issue as before. That does nothing to make kernel self-tuning
> > > and adds possibly even more knobs to userspace. Something so fundamental
> > > to kernel memory management as making free pages available when they are
> > > needed really should be taken care of in the kernel itself. Moving it to
> > > userspace just means the kernel is hobbled unless one installs and tunes
> > > a userspace package correctly.
> > 
> > From my past experience the existing autotuning works mostly ok for a
> > vast variety of workloads. A more clever tuning is possible and people
> > are doing that already. Especially for cases when the machine is heavily
> > overcommitted. There are different ways to achieve that. Your new
> > in-kernel auto tuning would have to be tested on a large variety of
> > workloads to be proven and riskless. So I am quite skeptical to be
> > honest.
> Could you give some references to such works regarding tuning the kernel? 

Talk to Facebook guys and their usage of PSI to control the memory
distribution and OOM situations.

> Essentially, Our idea here is to foresee potential memory exhaustion.
> This foreseeing is done by observing the workload, observing the memory
> usage of the workload. Based on this observations, we make a prediction
> whether or not memory exhaustion could occur.

I understand that and I am not disputing this can be useful. All I do
argue here is that there is unlikely to be a good "crystal ball" for
most/all workloads that would justify its inclusion into the kernel and
that this is something better done in userspace where you can experiment
and tune the behavior for the particular workload of interest.

Therefore I would like to shift the discussion towards existing APIs and
whether they are suitable for such advanced auto-tuning. I haven't
heard any arguments about missing pieces.

> If memory exhaustion
> occurs, we reclaim some more memory. kswapd stops reclaim when
> hwmark is reached. hwmark is usually set to a fairly low percentage of
> total memory, in my system for zone Normal hwmark is 13% of total pages.
> So there is scope for reclaiming more pages to make sure system does not
> suffer from a lack of pages. 

Yes and we have ways to control those watermarks that your monitoring
tool can use to alter the reclaim behavior.
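
For example, something as simple as the sketch below lets a monitoring
tool widen the gap kswapd maintains between the min and high watermarks,
so it wakes up earlier and reclaims more before going back to sleep
(set_watermark_scale_factor() is just an illustrative name and the value
to write is workload specific, 10 being the default):

#include <stdio.h>

static int set_watermark_scale_factor(unsigned int factor)
{
        FILE *f = fopen("/proc/sys/vm/watermark_scale_factor", "w");

        if (!f)
                return -1;
        fprintf(f, "%u\n", factor);     /* units of 0.01% of each zone */
        fclose(f);
        return 0;
}

E.g. set_watermark_scale_factor(200) makes the watermark gaps 2% of each
zone's memory instead of the default 0.1%.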
 
[...]
> > Therefore I would really focus on discussing whether we have sufficient
> > APIs to tune the kernel to do the right thing when needed. That requires
> > to identify gaps in that area. 
> One thing that comes to my mind is based on the issue Khalid mentioned
> earlier on how his desktop took more than 30secs to boot up because of
> the caches using up a lot of memory.
> Rather than allowing any unused memory to be the page cache, would it be
> a good idea to fix a size for the caches and elastically change the size
> based on the workload?

I do not think so. Limiting the pagecache is unlikely to help as it is
really cheap to reclaim most of the time. In those cases when this is
not the case (e.g. the underlying FS needs to flush data and/or metadata)
the same would happen in a restricted page cache situation
and you could easily end up stalled waiting for pagecache (e.g. any
executable/library) while there is a lot of free memory.

I cannot comment on Khalid's example because there were no details
there, but I would be really surprised if the primary source of the stall
was the pagecache.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-27  6:16                 ` Michal Hocko
@ 2019-08-28 13:09                   ` Bharath Vedartham
  2019-08-28 13:15                     ` Michal Hocko
  2019-08-30 21:35                   ` Khalid Aziz
  1 sibling, 1 reply; 17+ messages in thread
From: Bharath Vedartham @ 2019-08-28 13:09 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Khalid Aziz, akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

Hi Michal, Thank you for spending your time on this.
On Tue, Aug 27, 2019 at 08:16:06AM +0200, Michal Hocko wrote:
> On Tue 27-08-19 02:14:20, Bharath Vedartham wrote:
> > Hi Michal,
> > 
> > Here are some of my thoughts,
> > On Wed, Aug 21, 2019 at 04:06:32PM +0200, Michal Hocko wrote:
> > > On Thu 15-08-19 14:51:04, Khalid Aziz wrote:
> > > > Hi Michal,
> > > > 
> > > > The smarts for tuning these knobs can be implemented in userspace and
> > > > more knobs added to allow for what is missing today, but we get back to
> > > > the same issue as before. That does nothing to make kernel self-tuning
> > > > and adds possibly even more knobs to userspace. Something so fundamental
> > > > to kernel memory management as making free pages available when they are
> > > > needed really should be taken care of in the kernel itself. Moving it to
> > > > userspace just means the kernel is hobbled unless one installs and tunes
> > > > a userspace package correctly.
> > > 
> > > From my past experience the existing autotuning works mostly ok for a
> > > vast variety of workloads. A more clever tuning is possible and people
> > > are doing that already. Especially for cases when the machine is heavily
> > > overcommitted. There are different ways to achieve that. Your new
> > > in-kernel auto tuning would have to be tested on a large variety of
> > > workloads to be proven and riskless. So I am quite skeptical to be
> > > honest.
> > Could you give some references to such works regarding tuning the kernel? 
> 
> Talk to Facebook guys and their usage of PSI to control the memory
> distribution and OOM situations.
Yup. Thanks for the pointer.
> > Essentially, Our idea here is to foresee potential memory exhaustion.
> > This foreseeing is done by observing the workload, observing the memory
> > usage of the workload. Based on this observations, we make a prediction
> > whether or not memory exhaustion could occur.
> 
> I understand that and I am not disputing this can be useful. All I do
> argue here is that there is unlikely a good "crystall ball" for most/all
> workloads that would justify its inclusion into the kernel and that this
> is something better done in the userspace where you can experiment and
> tune the behavior for a particular workload of your interest.
> 
> Therefore I would like to shift the discussion towards existing APIs and
> whether they are suitable for such an advance auto-tuning. I haven't
> heard any arguments about missing pieces.
I understand your concern here. Just confirming, by APIs you are
referring to sysctls, sysfs files and stuff like that right?
> > If memory exhaustion
> > occurs, we reclaim some more memory. kswapd stops reclaim when
> > hwmark is reached. hwmark is usually set to a fairly low percentage of
> > total memory, in my system for zone Normal hwmark is 13% of total pages.
> > So there is scope for reclaiming more pages to make sure system does not
> > suffer from a lack of pages. 
> 
> Yes and we have ways to control those watermarks that your monitoring
> tool can use to alter the reclaim behavior.
Just to confirm here, I am aware of one way which is to alter
min_free_kbytes values. What other ways are there to alter watermarks
from user space? 
> [...]
> > > Therefore I would really focus on discussing whether we have sufficient
> > > APIs to tune the kernel to do the right thing when needed. That requires
> > > to identify gaps in that area. 
> > One thing that comes to my mind is based on the issue Khalid mentioned
> > earlier on how his desktop took more than 30secs to boot up because of
> > the caches using up a lot of memory.
> > Rather than allowing any unused memory to be the page cache, would it be
> > a good idea to fix a size for the caches and elastically change the size
> > based on the workload?
> 
> I do not think so. Limiting the pagecache is unlikely to help as it is
> really cheap to reclaim most of the time. In those cases when this is
> not the case (e.g. the underlying FS needs to flush and/or metadata)
> then the same would be possible in a restricted page cache situation
> and you could easily end up stalled waiting for pagecache (e.g. any
> executable/library) while there is a lot of memory.
That makes sense to me.
> I cannot comment on the Khalid's example because there were no details
> there but I would be really surprised if the primary source of stall was
> the pagecache.
Should have done more research before talking :) Sorry about that.
> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-28 13:09                   ` Bharath Vedartham
@ 2019-08-28 13:15                     ` Michal Hocko
  0 siblings, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2019-08-28 13:15 UTC (permalink / raw)
  To: Bharath Vedartham
  Cc: Khalid Aziz, akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Wed 28-08-19 18:39:22, Bharath Vedartham wrote:
[...]
> > Therefore I would like to shift the discussion towards existing APIs and
> > whether they are suitable for such an advance auto-tuning. I haven't
> > heard any arguments about missing pieces.
> I understand your concern here. Just confirming, by APIs you are
> referring to sysctls, sysfs files and stuff like that right?

Yup

> > > If memory exhaustion
> > > occurs, we reclaim some more memory. kswapd stops reclaim when
> > > hwmark is reached. hwmark is usually set to a fairly low percentage of
> > > total memory, in my system for zone Normal hwmark is 13% of total pages.
> > > So there is scope for reclaiming more pages to make sure system does not
> > > suffer from a lack of pages. 
> > 
> > Yes and we have ways to control those watermarks that your monitoring
> > tool can use to alter the reclaim behavior.
> Just to confirm here, I am aware of one way which is to alter
> min_free_kbytes values. What other ways are there to alter watermarks
> from user space? 

/proc/sys/vm/watermark_*factor
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-27  6:16                 ` Michal Hocko
  2019-08-28 13:09                   ` Bharath Vedartham
@ 2019-08-30 21:35                   ` Khalid Aziz
  2019-09-02  8:02                     ` Michal Hocko
  1 sibling, 1 reply; 17+ messages in thread
From: Khalid Aziz @ 2019-08-30 21:35 UTC (permalink / raw)
  To: Michal Hocko, Bharath Vedartham
  Cc: akpm, vbabka, mgorman, dan.j.williams, osalvador,
	richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On 8/27/19 12:16 AM, Michal Hocko wrote:
> On Tue 27-08-19 02:14:20, Bharath Vedartham wrote:
>> Hi Michal,
>>
>> Here are some of my thoughts,
>> On Wed, Aug 21, 2019 at 04:06:32PM +0200, Michal Hocko wrote:
>>> On Thu 15-08-19 14:51:04, Khalid Aziz wrote:
>>>> Hi Michal,
>>>>
>>>> The smarts for tuning these knobs can be implemented in userspace and
>>>> more knobs added to allow for what is missing today, but we get back to
>>>> the same issue as before. That does nothing to make kernel self-tuning
>>>> and adds possibly even more knobs to userspace. Something so fundamental
>>>> to kernel memory management as making free pages available when they are
>>>> needed really should be taken care of in the kernel itself. Moving it to
>>>> userspace just means the kernel is hobbled unless one installs and tunes
>>>> a userspace package correctly.
>>>
>>> From my past experience the existing autotuning works mostly ok for a
>>> vast variety of workloads. A more clever tuning is possible and people
>>> are doing that already. Especially for cases when the machine is heavily
>>> overcommitted. There are different ways to achieve that. Your new
>>> in-kernel auto tuning would have to be tested on a large variety of
>>> workloads to be proven and riskless. So I am quite skeptical to be
>>> honest.
>> Could you give some references to such works regarding tuning the kernel? 
> 
> Talk to Facebook guys and their usage of PSI to control the memory
> distribution and OOM situations.
> 
>> Essentially, Our idea here is to foresee potential memory exhaustion.
>> This foreseeing is done by observing the workload, observing the memory
>> usage of the workload. Based on this observations, we make a prediction
>> whether or not memory exhaustion could occur.
> 
> I understand that and I am not disputing this can be useful. All I do
> argue here is that there is unlikely a good "crystall ball" for most/all
> workloads that would justify its inclusion into the kernel and that this
> is something better done in the userspace where you can experiment and
> tune the behavior for a particular workload of your interest.
> 
> Therefore I would like to shift the discussion towards existing APIs and
> whether they are suitable for such an advance auto-tuning. I haven't
> heard any arguments about missing pieces.
> 

We seem to be in agreement that dynamic tuning is a useful tool. The
question is whether that tuning belongs in the kernel or in userspace. I
see your point that putting it in userspace allows for faster evolution
of such a predictive algorithm than would be possible for an in-kernel
algorithm. I see the following pros and cons with that approach:

+ Keeps complexity of predictive algorithms out of kernel and allows for
faster evolution of these algorithms in userspace.

+ Tuning algorithm can be fine-tuned to specific workloads as appropriate

- Kernel is not self-tuning and is dependent upon a userspace tool to
perform well in a fundamental area of memory management.

- More knobs get added to already crowded field of knobs to allow for
userspace to tweak mm subsystem for better performance.

As for adding the predictive algorithm to the kernel, I see the following pros and cons:

+ Kernel becomes self-tuning and can respond to varying workloads better.

+ Allows for number of user visible tuning knobs to be reduced.

- Getting predictive algorithm right is important to ensure none of the
users see worse performance than today.

- Adds a certain level of complexity to mm subsystem

Pushing the burden of tuning the kernel to userspace is no different from
where we are today, and we still have allocation stall issues after years
of tuning from userspace. Adding more knobs to aid tuning from userspace
just makes the kernel look even more complex to users. In my opinion, a
self-tuning kernel should be the base for a long term solution. We can
still export knobs to userspace to allow users with specific needs to
fine-tune further, but the base kernel should work well enough for the
majority of users. We are not there at this point. We can discuss what
the missing pieces are to support further tuning from userspace, but is
continuing to tweak from userspace the right long term strategy?

Assuming we want to continue to support tuning from userspace instead, I
can't say more knobs are needed right now. We may have enough knobs and
monitors available between /proc/buddyinfo, /sys/devices/system/node and
/proc/sys/vm. The right values for these knobs and their interactions are
not always clear. Maybe we need to simplify these knobs into something
more understandable for the average user as opposed to adding more knobs.
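
As an example of what those monitors already allow, here is a rough
sketch that reads /proc/buddyinfo and reports how much of each zone's
free memory sits below order 4, the order that failed in the case above
(it assumes MAX_ORDER of 11 and the current buddyinfo layout; counts are
in base-page units):

#include <stdio.h>

#define ORDERS  11      /* assumes MAX_ORDER == 11 */

int main(void)
{
        char node[16], zone[16];
        FILE *f = fopen("/proc/buddyinfo", "r");

        if (!f)
                return 1;
        while (fscanf(f, " Node %15[^,], zone %15s", node, zone) == 2) {
                unsigned long count, below = 0, total = 0;
                int order;

                for (order = 0; order < ORDERS; order++) {
                        if (fscanf(f, "%lu", &count) != 1)
                                break;
                        total += count << order;
                        if (order < 4)
                                below += count << order;
                }
                printf("Node %s zone %-8s %lu of %lu free pages below order 4\n",
                       node, zone, below, total);
        }
        fclose(f);
        return 0;
}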

--
Khalid






^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-08-30 21:35                   ` Khalid Aziz
@ 2019-09-02  8:02                     ` Michal Hocko
  2019-09-03 19:45                       ` Khalid Aziz
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2019-09-02  8:02 UTC (permalink / raw)
  To: Khalid Aziz
  Cc: Bharath Vedartham, akpm, vbabka, mgorman, dan.j.williams,
	osalvador, richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On Fri 30-08-19 15:35:06, Khalid Aziz wrote:
[...]
> - Kernel is not self-tuning and is dependent upon a userspace tool to
> perform well in a fundamental area of memory management.

You keep bringing this up without an actual analysis of a wider range of
workloads that would prove that the default behavior is really
suboptimal. You are making some assumptions based on a very specific DB
workload which might benefit from more aggressive background reclaim.
If you really want to sell any changes to auto tuning then you really
need to come up with more workloads and an actual theory why an early
and more aggressive reclaim pays off.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH 0/2] Add predictive memory reclamation and compaction
  2019-09-02  8:02                     ` Michal Hocko
@ 2019-09-03 19:45                       ` Khalid Aziz
  0 siblings, 0 replies; 17+ messages in thread
From: Khalid Aziz @ 2019-09-03 19:45 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Bharath Vedartham, akpm, vbabka, mgorman, dan.j.williams,
	osalvador, richard.weiyang, hannes, arunks, rppt, jgg, amir73il,
	alexander.h.duyck, linux-mm, linux-kernel-mentees, linux-kernel

On 9/2/19 2:02 AM, Michal Hocko wrote:
> On Fri 30-08-19 15:35:06, Khalid Aziz wrote:
> [...]
>> - Kernel is not self-tuning and is dependent upon a userspace tool to
>> perform well in a fundamental area of memory management.
> 
> You keep bringing this up without an actual analysis of a wider range of
> workloads that would prove that the default behavior is really
> suboptimal. You are making some assumptions based on a very specific DB
> workload which might benefit from more aggressive background reclaim.
> If you really want to sell any changes to auto tuning then you really
> need to come up with more workloads and an actual theory why an early
> and more aggressive reclaim pays off.
> 

Hi Michal,

Fair enough. I have seen DB and cloud server workloads suffer under the
default behavior of reclaim/compaction. It manifests itself as prolonged
delays in populating a new database and in launching new cloud
applications. It is fair to ask for the predictive algorithm to be
proven before pulling something like this into the kernel. I will
implement this same algorithm in userspace and use existing knobs to tune
the kernel dynamically. Running that with a large number of workloads
will provide data on how often this helps. If I find any useful tunables
missing, I will be sure to bring it up.

Thanks,
Khalid



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2019-09-03 19:45 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-13  1:40 [RFC PATCH 0/2] Add predictive memory reclamation and compaction Khalid Aziz
2019-08-13  1:40 ` [RFC PATCH 1/2] mm: Add trend based prediction algorithm for memory usage Khalid Aziz
2019-08-13  1:40 ` [RFC PATCH 2/2] mm/vmscan: Add fragmentation and page starvation prediction to kswapd Khalid Aziz
2019-08-13 14:05 ` [RFC PATCH 0/2] Add predictive memory reclamation and compaction Michal Hocko
2019-08-13 15:20   ` Khalid Aziz
2019-08-14  8:58     ` Michal Hocko
2019-08-15 16:27       ` Khalid Aziz
2019-08-15 17:02         ` Michal Hocko
2019-08-15 20:51           ` Khalid Aziz
2019-08-21 14:06             ` Michal Hocko
2019-08-26 20:44               ` Bharath Vedartham
2019-08-27  6:16                 ` Michal Hocko
2019-08-28 13:09                   ` Bharath Vedartham
2019-08-28 13:15                     ` Michal Hocko
2019-08-30 21:35                   ` Khalid Aziz
2019-09-02  8:02                     ` Michal Hocko
2019-09-03 19:45                       ` Khalid Aziz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).