* [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
@ 2020-02-24 12:30 SeongJae Park
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

Introduction
============

Memory management decisions can be improved if finer data access information is
available.  However, because such finer information usually comes with higher
overhead, most systems including Linux forgo the potential improvement and rely
only on coarse information or some light-weight heuristics.  The pseudo-LRU and
the aggressive THP promotions are such examples.

A number of experimental data access pattern aware memory management
optimizations (refer to 'Appendix A' for more details) show that the sacrificed
improvements are huge.  However, none of those has been successfully merged to
the mainline Linux kernel, mainly due to the absence of a scalable and
efficient data access monitoring mechanism.  Refer to 'Appendix B' for the
limitations of existing memory monitoring mechanisms.

DAMON is a data access monitoring subsystem for the problem.  It is 1) accurate
enough to be used for DRAM level memory management (a straightforward
DAMON-based optimization achieved up to 2.55x speedup), 2) light-weight enough
to be applied online (compared to a straightforward access monitoring scheme,
DAMON is up to 94,242.42x lighter), and 3) keeps a predefined upper-bound
overhead regardless of the size of target workloads (thus scalable).  Refer to
'Appendix C' if you are interested in how that is possible.

DAMON is mainly designed for the kernel's memory management mechanisms.
However, because it is implemented as a standalone kernel module and provides
several interfaces, it can be used by a wide range of users including kernel
space programs, user space programs, programmers, and administrators.  For now,
DAMON supports only the monitoring, but it will also provide simple and
convenient data access pattern aware memory management by itself.  Refer to
'Appendix D' for more detailed expected usages of DAMON.


Visualized Outputs of DAMON
===========================

For an intuitive understanding of DAMON, I made web pages[1-7] showing the
visualized dynamic data access patterns of various realistic workloads, which I
picked up from the PARSEC3 and SPLASH-2X benchmark suites.  The figures are
generated using the user space tool in the 10th patch of this patchset.

There are pages showing the heatmap format dynamic access pattern of each
workload for the heap area[1], mmap()-ed area[2], and stack area[3].  I split
the entire address space into the three areas because there are huge unmapped
regions between them.

You can also see how the dynamic working set size of each workload is
distributed[4], and how it changes chronologically[5].

The most important characteristic of DAMON is its promise of an upper-bound on
the monitoring overhead.  To show whether DAMON keeps the promise well, I
visualized the number of monitoring operations required for each 5
milliseconds, which is configured to not exceed 1,000.  You can see the
distribution of the numbers[6] and how it changes chronologically[7].

[1] https://damonitor.github.io/reports/latest/by_image/heatmap.0.png.html
[2] https://damonitor.github.io/reports/latest/by_image/heatmap.1.png.html
[3] https://damonitor.github.io/reports/latest/by_image/heatmap.2.png.html
[4] https://damonitor.github.io/reports/latest/by_image/wss_sz.png.html
[5] https://damonitor.github.io/reports/latest/by_image/wss_time.png.html
[6] https://damonitor.github.io/reports/latest/by_image/nr_regions_sz.png.html
[7] https://damonitor.github.io/reports/latest/by_image/nr_regions_time.png.html


Data Access Monitoring-based Operation Schemes
==============================================

As 'Appendix D' describes, DAMON can be used for data access monitoring-based
operation schemes (DAMOS).  RFC patchsets for DAMOS are already available
(https://lore.kernel.org/linux-mm/20200218085309.18346-1-sjpark@amazon.com/).

By applying a very simple scheme for THP promotion/demotion with the latest
version of the patchset (not posted yet), DAMON achieved 18x lower memory space
overhead compared to THP while preserving about 50% of the THP performance
benefit with the SPLASH-2X benchmark suite.

The detailed setup and numbers will be posted soon with the next RFC patchset
for DAMOS.  The posting is currently scheduled for tomorrow.


Frequently Asked Questions
==========================

Q: Why is DAMON not integrated with perf?
A: From the perspective of perf-like profilers, DAMON can be thought of as a
data source in the kernel, like the tracepoints, the pressure stall information
(psi), or the idle page tracking.  Thus, it is easy to integrate DAMON with
such profilers.  However, this patchset doesn't provide a fancy perf
integration because the current step of DAMON development is focused on its
core logic only.  That said, DAMON already provides two interfaces for user
space programs, which are based on debugfs and a tracepoint, respectively.
Using the tracepoint interface, you can use DAMON with perf.  This patchset
also provides a debugfs interface based user space tool for DAMON.  It can be
used to record, visualize, and analyze data access patterns of target
processes in a convenient way.

Q: Why a new module, instead of extending perf or other tools?
A: First, DAMON aims to be used by other programs including the kernel.
Therefore, having a dependency on specific tools like perf is not desirable.
Second, because it needs to be as lightweight as possible so that it can be
used online, any unnecessary overhead such as the kernel - user space context
switching cost should be avoided.  These are the two biggest reasons why DAMON
is implemented in the kernel space.  The idle page tracking subsystem would be
the kernel feature that seems most similar to DAMON.  However, its interface
is not compatible with that of DAMON.  Also, its internal implementation has
no common part that DAMON could reuse.

Q: Can 'perf mem' provide the data required for DAMON?
A: On the systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
bits at a low level.  Other H/W or S/W features that can be used for the
purpose could also be used.  However, as explained in the answer to the
question above, DAMON needs to be implemented in the kernel space.


Evaluations
===========

A prototype of DAMON has been evaluated on an Intel Xeon E7-8837 machine using
20 benchmarks picked from the SPEC CPU 2006, NAS, Tensorflow Benchmark,
SPLASH-2X, and PARSEC 3 benchmark suites.  Nonetheless, this section provides
only a summary of the results.  For more detail, please refer to the slides
used for the introduction of DAMON at the Linux Plumbers Conference 2019[1] or
the MIDDLEWARE'19 industrial track paper[2].


Quality
-------

We first traced and visualized the data access pattern of each workload.  We
were able to confirm that the visualized results are reasonably accurate by
manually comparing those with the source code of the workloads.

To see the usefulness of the monitoring, we optimized 9 memory intensive
workloads among them for memory pressure situations using the DAMON outputs.
In detail, we identified frequently accessed memory regions in each workload
based on the DAMON results and protected them with ``mlock()`` system calls.
The optimized versions consistently show speedup (2.55x in the best case,
1.65x on average) under memory pressure.
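
The selection of regions to protect can be sketched as below.  This is a
hypothetical Python sketch of the idea only; the snapshot format and the
function name are made up for illustration and are not those of the actual
DAMON tool:

```python
def hot_regions(snapshots, min_freq):
    """Pick regions that are frequently accessed across monitoring snapshots.

    `snapshots` is a list of {(start, end): nr_accesses} dicts, as one could
    aggregate from DAMON's recorded output.  Regions whose average access
    frequency reaches `min_freq` are candidates for mlock() protection.
    """
    totals = {}
    for snap in snapshots:
        for region, freq in snap.items():
            totals[region] = totals.get(region, 0) + freq
    return sorted(r for r, t in totals.items()
                  if t / len(snapshots) >= min_freq)
```

The returned address ranges could then be passed to ``mlock()`` by the
workload itself.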


Overhead
--------

We also measured the overhead of DAMON.  It was not only under the upper-bound
we set, but was much lower (0.6 percent of the bound in the best case, 13.288
percent of the bound on average).  This reduction of the overhead mainly
results from its core mechanism, called adaptive regions adjustment.  Refer to
'Appendix C' for more detail about the mechanism.  We also compared the
overhead of DAMON with that of a straightforward periodic access check based
monitoring.  DAMON's overhead was smaller by 94,242.42x in the best case and
3,159.61x on average.

The latest version of DAMON running with its default configuration consumes
only up to 1% of CPU time when applied to realistic workloads in PARSEC3 and
SPLASH-2X and makes no visible slowdown to the target processes.


References
==========

Prototypes of DAMON have been introduced by an LPC kernel summit track talk[1]
and two academic papers[2,3].  Please refer to those for more detailed
information, especially the evaluations.  The latest version of the patchset
has also been introduced by an LWN article[4].

[1] SeongJae Park, Tracing Data Access Pattern with Bounded Overhead and
    Best-effort Accuracy. In The Linux Kernel Summit, September 2019.
    https://linuxplumbersconf.org/event/4/contributions/548/
[2] SeongJae Park, Yunjae Lee, Heon Y. Yeom, Profiling Dynamic Data Access
    Patterns with Controlled Overhead and Quality. In 20th ACM/IFIP
    International Middleware Conference Industry, December 2019.
    https://dl.acm.org/doi/10.1145/3366626.3368125
[3] SeongJae Park, Yunjae Lee, Yunhee Kim, Heon Y. Yeom, Profiling Dynamic Data
    Access Patterns with Bounded Overhead and Accuracy. In IEEE International
    Workshop on Foundations and Applications of Self-* Systems (FAS* 2019),
    June 2019.
[4] Jonathan Corbet, Memory-management optimization with DAMON. In Linux Weekly
    News (LWN), Feb 2020. https://lwn.net/Articles/812707/


Sequence Of Patches
===================

The patches are organized in the following sequence.  The first patch
introduces the DAMON module, its data structures, and data structure related
common functions.  The following three patches (2nd to 4th) implement the core
logic of DAMON, namely region based sampling, adaptive regions adjustment,
and dynamic memory mapping change adoption, one by one.

The following five patches are for low level users of DAMON.  The 5th patch
implements callbacks for each of the monitoring steps so that users can do
whatever they want with the access patterns.  The 6th one implements recording
of access patterns in DAMON for better convenience and efficiency.  Each of the
next three patches (7th to 9th) respectively adds a programmable interface for
other kernel code, a debugfs interface for privileged people and/or programs
in user space, and a tracepoint for tracers such as perf.

Two patches for high level users of DAMON follow.  To provide a minimal
reference to the debugfs interface and for high level uses/tests of DAMON,
the next patch (10th) implements a user space tool.  The 11th patch adds a
document for administrators of DAMON.

The next two patches are for tests.  The 12th and 13th patches provide unit
tests (based on kunit) and user space tests (based on kselftest), respectively.

Finally, the last patch (14th) updates the MAINTAINERS file.

The patches are based on v5.5.  You can also clone the complete git
tree:

    $ git clone git://github.com/sjp38/linux -b damon/patches/v6

The web is also available:
https://github.com/sjp38/linux/releases/tag/damon/patches/v6


Patch History
=============

Changes from v5
(https://lore.kernel.org/linux-mm/20200217103110.30817-1-sjpark@amazon.com/)
 - Fix minor bugs (sampling, record attributes, debugfs and user space tool)
 - selftests: Add debugfs interface tests for the bugs
 - Modify the user space tool to use its own default values for parameters
 - Fix pmd huge page access check

Changes from v4
(https://lore.kernel.org/linux-mm/20200210144812.26845-1-sjpark@amazon.com/)
 - Add 'Reviewed-by' for the kunit tests patch (Brendan Higgins)
 - Make the unit test depend on 'DAMON=y' (Randy Dunlap and kbuild bot)
   Reported-by: kbuild test robot <lkp@intel.com>
 - Fix m68k module build issue
   Reported-by: kbuild test robot <lkp@intel.com>
 - Add selftests
 - Separate patches for low level users from core logic for better reading
 - Clean up debugfs interface
 - Trivial nitpicks

Changes from v3
(https://lore.kernel.org/linux-mm/20200204062312.19913-1-sj38.park@gmail.com/)
 - Fix i386 build issue
   Reported-by: kbuild test robot <lkp@intel.com>
 - Increase the default size of the monitoring result buffer to 1 MiB
 - Fix misc bugs in debugfs interface

Changes from v2
(https://lore.kernel.org/linux-mm/20200128085742.14566-1-sjpark@amazon.com/)
 - Move MAINTAINERS changes to last commit (Brendan Higgins)
 - Add descriptions for kunittest: why not only entire mappings and what the 4
   input sets are trying to test (Brendan Higgins)
 - Remove 'kdamond_need_stop()' test (Brendan Higgins)
 - Discuss about the 'perf mem' and DAMON (Peter Zijlstra)
 - Make CV clearly say what it actually does (Peter Zijlstra)
 - Answer why new module (Qian Cai)
 - Disable DAMON by default (Randy Dunlap)
 - Change the interface: Separate recording attributes
   (attrs, record, rules) and allow multiple kdamond instances
 - Implement kernel API interface

Changes from v1
(https://lore.kernel.org/linux-mm/20200120162757.32375-1-sjpark@amazon.com/)
 - Rebase on v5.5
 - Add a tracepoint for integration with other tracers (Kirill A. Shutemov)
 - document: Add more description for the user space tool (Brendan Higgins)
 - unittest: Improve readability (Brendan Higgins)
 - unittest: Use consistent name and helpers function (Brendan Higgins)
 - Update PG_Young to avoid reclaim logic interference (Yunjae Lee)

Changes from RFC
(https://lore.kernel.org/linux-mm/20200110131522.29964-1-sjpark@amazon.com/)
 - Specify an ambiguous plan of access pattern based mm optimizations
 - Support loadable module build
 - Cleanup code

SeongJae Park (14):
  mm: Introduce Data Access MONitor (DAMON)
  mm/damon: Implement region based sampling
  mm/damon: Adaptively adjust regions
  mm/damon: Apply dynamic memory mapping changes
  mm/damon: Implement callbacks
  mm/damon: Implement access pattern recording
  mm/damon: Implement kernel space API
  mm/damon: Add debugfs interface
  mm/damon: Add a tracepoint for result writing
  tools: Add a minimal user-space tool for DAMON
  Documentation/admin-guide/mm: Add a document for DAMON
  mm/damon: Add kunit tests
  mm/damon: Add user selftests
  MAINTAINERS: Update for DAMON

 .../admin-guide/mm/data_access_monitor.rst    |  414 +++++
 Documentation/admin-guide/mm/index.rst        |    1 +
 MAINTAINERS                                   |   12 +
 include/linux/damon.h                         |   71 +
 include/trace/events/damon.h                  |   32 +
 mm/Kconfig                                    |   23 +
 mm/Makefile                                   |    1 +
 mm/damon-test.h                               |  604 +++++++
 mm/damon.c                                    | 1427 +++++++++++++++++
 mm/page_ext.c                                 |    1 +
 tools/damon/.gitignore                        |    1 +
 tools/damon/_dist.py                          |   36 +
 tools/damon/bin2txt.py                        |   64 +
 tools/damon/damo                              |   37 +
 tools/damon/heats.py                          |  358 +++++
 tools/damon/nr_regions.py                     |   89 +
 tools/damon/record.py                         |  212 +++
 tools/damon/report.py                         |   45 +
 tools/damon/wss.py                            |   95 ++
 tools/testing/selftests/damon/Makefile        |    7 +
 .../selftests/damon/_chk_dependency.sh        |   28 +
 tools/testing/selftests/damon/_chk_record.py  |   89 +
 .../testing/selftests/damon/debugfs_attrs.sh  |  139 ++
 .../testing/selftests/damon/debugfs_record.sh |   50 +
 24 files changed, 3836 insertions(+)
 create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst
 create mode 100644 include/linux/damon.h
 create mode 100644 include/trace/events/damon.h
 create mode 100644 mm/damon-test.h
 create mode 100644 mm/damon.c
 create mode 100644 tools/damon/.gitignore
 create mode 100644 tools/damon/_dist.py
 create mode 100644 tools/damon/bin2txt.py
 create mode 100755 tools/damon/damo
 create mode 100644 tools/damon/heats.py
 create mode 100644 tools/damon/nr_regions.py
 create mode 100644 tools/damon/record.py
 create mode 100644 tools/damon/report.py
 create mode 100644 tools/damon/wss.py
 create mode 100644 tools/testing/selftests/damon/Makefile
 create mode 100644 tools/testing/selftests/damon/_chk_dependency.sh
 create mode 100644 tools/testing/selftests/damon/_chk_record.py
 create mode 100755 tools/testing/selftests/damon/debugfs_attrs.sh
 create mode 100755 tools/testing/selftests/damon/debugfs_record.sh

-- 
2.17.1

============================= 8< ======================================

Appendix A: Related Works
=========================

There are a number of studies[1,2,3,4,5,6] optimizing memory management
mechanisms based on the actual memory access patterns, which show impressive
results.  However, most of those have no deep consideration of the monitoring
of the accesses itself.  Some of those focused on the overhead of the
monitoring, but do not consider the accuracy scalability[6] or have additional
dependencies[7].  Indeed, one recent work[5] on proactive reclamation was also
proposed[8] to the kernel community, but the monitoring overhead was considered
a main problem.

[1] Subramanya R Dulloor, Amitabha Roy, Zheguang Zhao, Narayanan Sundaram,
    Nadathur Satish, Rajesh Sankaran, Jeff Jackson, and Karsten Schwan. 2016.
    Data tiering in heterogeneous memory systems. In Proceedings of the 11th
    European Conference on Computer Systems (EuroSys). ACM, 15.
[2] Youngjin Kwon, Hangchen Yu, Simon Peter, Christopher J Rossbach, and Emmett
    Witchel. 2016. Coordinated and efficient huge page management with ingens.
    In 12th USENIX Symposium on Operating Systems Design and Implementation
    (OSDI).  705–721.
[3] Harald Servat, Antonio J Peña, Germán Llort, Estanislao Mercadal,
    HansChristian Hoppe, and Jesús Labarta. 2017. Automating the application
    data placement in hybrid memory systems. In 2017 IEEE International
    Conference on Cluster Computing (CLUSTER). IEEE, 126–136.
[4] Vlad Nitu, Boris Teabe, Alain Tchana, Canturk Isci, and Daniel Hagimont.
    2018. Welcome to zombieland: practical and energy-efficient memory
    disaggregation in a datacenter. In Proceedings of the 13th European
    Conference on Computer Systems (EuroSys). ACM, 16.
[5] Andres Lagar-Cavilla, Junwhan Ahn, Suleiman Souhlal, Neha Agarwal, Radoslaw
    Burny, Shakeel Butt, Jichuan Chang, Ashwin Chaugule, Nan Deng, Junaid
    Shahid, Greg Thelen, Kamil Adam Yurtsever, Yu Zhao, and Parthasarathy
    Ranganathan.  2019. Software-Defined Far Memory in Warehouse-Scale
    Computers.  In Proceedings of the 24th International Conference on
    Architectural Support for Programming Languages and Operating Systems
    (ASPLOS).  ACM, New York, NY, USA, 317–330.
    DOI:https://doi.org/10.1145/3297858.3304053
[6] Carl Waldspurger, Trausti Saemundsson, Irfan Ahmad, and Nohhyun Park.
    2017. Cache Modeling and Optimization using Miniature Simulations. In 2017
    USENIX Annual Technical Conference (ATC). USENIX Association, Santa
    Clara, CA, 487–498.
    https://www.usenix.org/conference/atc17/technical-sessions/
[7] Haojie Wang, Jidong Zhai, Xiongchao Tang, Bowen Yu, Xiaosong Ma, and
    Wenguang Chen. 2018. Spindle: Informed Memory Access Monitoring. In 2018
    USENIX Annual Technical Conference (ATC). USENIX Association, Boston, MA,
    561–574.  https://www.usenix.org/conference/atc18/presentation/wang-haojie
[8] Jonathan Corbet. 2019. Proactively reclaiming idle memory. (2019).
    https://lwn.net/Articles/787611/.


Appendix B: Limitations of Other Access Monitoring Techniques
=============================================================

The memory access instrumentation techniques which are applied to many tools
such as Intel PIN are essential for correctness required cases such as memory
access bug detection or cache level optimization.  However, those usually incur
exceptionally high overhead, which is unacceptable for this domain.

Periodic access checks based on access counting features (e.g., PTE Accessed
bits or PG_Idle flags) can reduce the overhead.  They sacrifice some of the
quality, but that is still acceptable for many in this domain.  However, the
overhead arbitrarily increases as the size of the target workload grows.
Miniature-like static region based sampling can set the upper-bound of the
overhead, but it will decrease the quality of the output as the size of the
workload grows.

DAMON is another solution that overcomes the limitations.  It is 1) accurate
enough for this domain, 2) light-weight so that it can be applied online, and
3) allows users to set the upper-bound of the overhead, regardless of the size
of target workloads.  It is implemented as a simple and small kernel module to
support various users in both the user space and the kernel space.  Refer to
the 'Evaluations' section above for detailed performance of DAMON.

For the goals, DAMON utilizes its two core mechanisms, which allow lightweight
overhead and high quality of output, respectively.  To see how DAMON achieves
those, refer to the 'Appendix C: Mechanisms of DAMON' section below.


Appendix C: Mechanisms of DAMON
===============================


Basic Access Check
------------------

DAMON basically reports which pages are how frequently accessed.  The report
is passed to users in binary format via a ``result file``, whose path users
can set.  Note that the frequency is not an absolute number of accesses, but a
relative frequency among the pages of the target workloads.

Users can also control the resolution of the reports by setting two time
intervals, the ``sampling interval`` and the ``aggregation interval``.  In
detail, DAMON checks access to each page per ``sampling interval``, aggregates
the results (counts the number of the accesses to each page), and reports the
aggregated results per ``aggregation interval``.  For the access check of each
page, DAMON uses the Accessed bits of PTEs.

This is thus similar to the previously mentioned periodic access check based
mechanisms, whose overhead increases as the size of the target process grows.
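
The check-and-aggregate cycle can be sketched as below.  This is a minimal
Python simulation of the idea, not the kernel implementation; the
``is_accessed`` callback stands in for reading (and clearing) the PTE Accessed
bit, and the toy workload is made up for illustration:

```python
def monitor(pages, sampling_interval_ms, aggregation_interval_ms, is_accessed):
    """Count per-page accesses over one aggregation interval.

    Each sampling interval, every page is checked once, so the cost of
    this basic scheme grows linearly with the number of pages.
    """
    checks_per_aggregation = aggregation_interval_ms // sampling_interval_ms
    counts = {page: 0 for page in pages}
    for _ in range(checks_per_aggregation):
        for page in pages:  # one check per page per sampling interval
            if is_accessed(page):
                counts[page] += 1
    return counts

# Toy workload: even-numbered pages are always accessed, odd ones never.
counts = monitor(range(8), 5, 100, lambda page: page % 2 == 0)
```

With a 5ms sampling interval and a 100ms aggregation interval, each page is
checked 20 times per report, so a hot page's count saturates at 20.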


Region Based Sampling
---------------------

To avoid the unbounded increase of the overhead, DAMON groups a number of
adjacent pages that are assumed to have the same access frequencies into a
region.  As long as the assumption (pages in a region have the same access
frequencies) is kept, only one page in the region needs to be checked.  Thus,
for each ``sampling interval``, DAMON randomly picks one page in each region
and clears its Accessed bit.  After one more ``sampling interval``, DAMON
reads the Accessed bit of the page and increases the access frequency of the
region if the bit has been set meanwhile.  Therefore, the monitoring overhead
is controllable by setting the number of regions.  DAMON allows users to set
the minimum and maximum number of regions for the trade-off.

Except for the assumption, this is almost the same as the above-mentioned
miniature-like static region based sampling.  In other words, this scheme
cannot preserve the quality of the output if the assumption is not kept.
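
One sampling step of this scheme can be sketched as below.  This is a
hypothetical Python sketch of the idea only; the callbacks model clearing and
reading a page's PTE Accessed bit, and the data layout is made up for
illustration:

```python
import random

def sample_regions(regions, clear_accessed, read_accessed):
    """One sampling step: check a single random page per region.

    `regions` maps a region id to (page_list, access_count).  The cost is
    one check per region, regardless of how many pages each region holds.
    """
    # Pick one page per region and clear its Accessed bit.
    picked = {rid: random.choice(pages) for rid, (pages, _) in regions.items()}
    for page in picked.values():
        clear_accessed(page)
    # ... one sampling interval passes while the workload runs ...
    for rid, page in picked.items():
        pages, count = regions[rid]
        if read_accessed(page):  # bit set again means the region was accessed
            regions[rid] = (pages, count + 1)
```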


Adaptive Regions Adjustment
---------------------------

At the beginning of the monitoring, DAMON constructs the initial regions by
evenly splitting the memory mapped address space of the process into the
user-specified minimum number of regions.  In this initial state, the
assumption is normally not kept and thus the quality could be low.  To keep
the assumption as much as possible, DAMON adaptively merges and splits each
region.  For each ``aggregation interval``, it compares the access frequencies
of adjacent regions and merges those if the frequency difference is small.
Then, after it reports and clears the aggregated access frequency of each
region, it splits each region into two if the total number of regions is
smaller than half of the user-specified maximum number of regions.

In this way, DAMON provides its best-effort quality and minimal overhead while
keeping the bounds users set for their trade-off.
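
The merge and split steps can be sketched as below.  This is a hypothetical
Python sketch of the idea only, not the kernel code; the merge policy for the
combined frequency is a made-up simplification:

```python
def merge_regions(regions, threshold):
    """Merge adjacent regions whose access frequencies differ little.

    `regions` is a list of (start, end, nr_accesses) tuples sorted by
    address.  Only contiguous regions with a frequency difference at most
    `threshold` are merged.
    """
    merged = [regions[0]]
    for start, end, freq in regions[1:]:
        pstart, pend, pfreq = merged[-1]
        if pend == start and abs(freq - pfreq) <= threshold:
            merged[-1] = (pstart, end, max(freq, pfreq))
        else:
            merged.append((start, end, freq))
    return merged

def split_regions(regions, max_nr_regions):
    """Split each region in two while well under the user-set maximum."""
    if len(regions) >= max_nr_regions // 2:
        return regions
    out = []
    for start, end, freq in regions:
        mid = (start + end) // 2
        out.extend([(start, mid, freq), (mid, end, freq)])
    return out
```

Splitting lets later sampling discover different frequencies inside what was
one region, and the next merge pass undoes splits that turned out useless.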


Applying Dynamic Memory Mappings
--------------------------------

Only a number of small parts in the super-huge virtual address space of the
processes are mapped to physical memory and accessed.  Thus, tracking the
unmapped address regions is just wasteful.  However, tracking every memory
mapping change might incur high overhead.  For this reason, DAMON applies the
dynamic memory mapping changes to the tracking regions only once per a
user-specified time interval (``regions update interval``).
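
The core of the update can be sketched as an interval intersection, as below.
This is a rough hypothetical Python sketch of the idea only; the real update
handles more cases (e.g., newly mapped areas) than shown here:

```python
def apply_mapping_changes(regions, mappings):
    """Keep only the parts of tracked regions that are still mapped.

    `regions` and `mappings` are sorted lists of (start, end) address
    intervals.  Tracked parts that fell out of the mappings are dropped.
    """
    out = []
    for rstart, rend in regions:
        for mstart, mend in mappings:
            # Intersect the tracked region with each current mapping.
            start, end = max(rstart, mstart), min(rend, mend)
            if start < end:
                out.append((start, end))
    return out
```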


Appendix D: Expected Use-cases
==============================

A straightforward use case of DAMON would be program behavior analysis.  With
the DAMON output, users can confirm whether the program is running as intended
or not.  This will be useful for debugging and testing of design points.

The monitoring results can also be useful for counting the dynamic working set
size of workloads.  This will be useful for the administration of memory
overcommitted systems or selection of environments (e.g., containers providing
different amounts of memory) for your workloads.

If you are a programmer, you can optimize your program by managing the memory
based on the actual data access pattern.  For example, you can identify the
dynamic hotness of your data using DAMON and call ``mlock()`` to keep your hot
data in DRAM, or call ``madvise()`` with ``MADV_PAGEOUT`` to proactively
reclaim cold data.  Even if your program is guaranteed not to encounter memory
pressure, you can still improve performance by applying the DAMON outputs to
calls of ``madvise()`` with ``MADV_HUGEPAGE`` and ``MADV_NOHUGEPAGE``.  More
creative optimizations would be possible.  Our evaluations of DAMON include a
straightforward optimization using ``mlock()``.  Please refer to the
'Evaluations' section above for more detail.

As DAMON incurs very low overhead, such optimizations can be applied not only
offline, but also online.  Also, there is no reason to limit such
optimizations to the user space.  Several parts of the kernel's memory
management mechanisms could also be optimized using DAMON.  The reclamation,
the THP (de)promotion decisions, and the compaction would be such candidates.
DAMON will continue its development to be highly optimized for the
online/in-kernel uses.


A Future Plan: Data Access Monitoring-based Operation Schemes
-------------------------------------------------------------

As described in the above section, DAMON could be helpful for actual access
based memory management optimizations.  Nevertheless, users who want to do
such optimizations should run DAMON, read the traced data (either online or
offline), analyze it, plan a new memory management scheme, and apply the new
scheme by themselves.  It must be easier than before, but could still require
some level of effort.  In its next development stage, DAMON will reduce some
of those efforts by allowing users to specify some access based memory
management rules for their specific processes.

Because this is just a plan, the specific interface is not fixed yet, but for
example, users will be allowed to write their desired memory management rules
to a special file in a DAMON specific format.  The rules will be something
like 'if a memory region of a size in a given range keeps a given range of
hotness for more than a given duration, apply a specific memory management
action such as madvise() or mlock() to the region'.  For example, we can
imagine rules like below:

    # format is: <min/max size> <min/max frequency (0-99)> <duration> <action>

    # if a region keeps a very high access frequency for more than 100ms, lock
    # the region in the main memory (call mlock()).  But, if the region is
    # larger than 500 MiB, skip it.  The exception might be helpful if the
    # system has only, say, 600 MiB of DRAM; a region larger than 600 MiB
    # cannot be locked in the DRAM at all.
    na 500M 90 99 100ms mlock

    # if a region keeps a high access frequency for more than 100ms, put the
    # region on the head of the LRU list (call madvise() with MADV_WILLNEED).
    na na 80 90 100ms madv_willneed

    # if a region keeps a low access frequency for more than 100ms, put the
    # region on the tail of the LRU list (call madvise() with MADV_COLD).
    na na 10 20 100ms madv_cold

    # if a region keeps a very low access frequency for more than 100ms, swap
    # out the region immediately (call madvise() with MADV_PAGEOUT).
    na na 0 10 100ms madv_pageout

    # if a region bigger than 2MB keeps a very high access frequency for more
    # than 100ms, let the region use huge pages (call madvise() with
    # MADV_HUGEPAGE).
    2M na 90 99 100ms madv_hugepage

    # if a region bigger than 2MB keeps no high access frequency for more than
    # 100ms, prevent the region from using huge pages (call madvise() with
    # MADV_NOHUGEPAGE).
    2M na 0 25 100ms madv_nohugepage
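
Since the format is only a plan, the sketch below merely shows how such lines
could be parsed.  This is hypothetical Python; the field names and the 'na'
wildcard handling follow the example rules above, and nothing else about the
interface is fixed yet:

```python
def parse_rule(line):
    """Parse one '<min/max size> <min/max freq> <duration> <action>' rule."""
    units = {'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30}

    def size(tok):
        if tok == 'na':
            return None  # 'na' means no bound on this field
        if tok[-1] in units:
            return int(tok[:-1]) * units[tok[-1]]
        return int(tok)

    min_sz, max_sz, min_freq, max_freq, duration, action = line.split()
    return {'min_size': size(min_sz), 'max_size': size(max_sz),
            'min_freq': int(min_freq), 'max_freq': int(max_freq),
            'duration': duration, 'action': action}

rule = parse_rule('na 500M 90 99 100ms mlock')
```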

An RFC patchset for this is available:
https://lore.kernel.org/linux-mm/20200218085309.18346-1-sjpark@amazon.com/



* [PATCH v6 01/14] mm: Introduce Data Access MONitor (DAMON)
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit introduces a kernel module named DAMON.  Note that this
commit implements only the stub for the module load/unload, basic
data structures, and simple manipulation functions for the structures,
to keep the size of the commit small.  The core mechanisms of DAMON
will be implemented one by one in following commits.

Brief Introduction
==================

Memory management decisions can be improved if finer data access
information is available.  However, because such finer information
usually comes with higher overhead, most systems including Linux
forgo the potential improvement and rely only on coarse information
or some light-weight heuristics.  The pseudo-LRU and the aggressive
THP promotions are such examples.

A number of experimental data access pattern aware memory management
optimizations show that the sacrificed improvements are huge.  However,
none of those has been successfully merged to the mainline Linux
kernel, mainly due to the absence of a scalable and efficient data
access monitoring mechanism.

DAMON is a data access monitoring solution for the problem.  It is 1)
accurate enough for DRAM level memory management, 2) light-weight
enough to be applied online, and 3) keeps a predefined upper-bound
overhead regardless of the size of target workloads (thus scalable).

DAMON is implemented as a standalone kernel module and provides several
simple interfaces.  Owing to that, though it is mainly designed for the
kernel's memory management mechanisms, it can also be used by a wide
range of user space programs and people.

Frequently Asked Questions
==========================

Q: Why not integrated with perf?
A: From the perspective of perf-like profilers, DAMON can be thought of
as a data source in the kernel, like tracepoints, pressure stall
information (psi), or idle page tracking.  Thus, it can be easily
integrated with those.  However, this patchset doesn't provide a fancy
perf integration because the current stage of DAMON development focuses
on its core logic only.  That said, DAMON already provides two
interfaces for user space programs, based on debugfs and tracepoints,
respectively.  Using the tracepoint interface, you can use DAMON with
perf.  This patchset also provides a debugfs-based user space tool for
DAMON.  It can be used to record, visualize, and analyze data access
patterns of target processes in a convenient way.

Q: Why a new module, instead of extending perf or other tools?
A: First, DAMON aims to be used by other programs including the kernel.
Therefore, having a dependency on specific tools like perf is not
desirable.  Second, because it needs to be as lightweight as possible
so that it can be used online, any unnecessary overhead such as the
kernel - user space context switching cost should be avoided.  These
are the two biggest reasons why DAMON is implemented in the kernel
space.  The idle page tracking subsystem would be the kernel feature
that seems most similar to DAMON.  However, its interface is not
compatible with DAMON, and its internal implementation has no common
parts that could be reused by DAMON.

Q: Can 'perf mem' provide the data required for DAMON?
A: On systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
bits at a low level.  Other H/W or S/W features usable for the purpose
could be used as well.  However, as explained in the above question,
DAMON needs to be implemented in the kernel space.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/Kconfig  |  12 +++
 mm/Makefile |   1 +
 mm/damon.c  | 224 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 237 insertions(+)
 create mode 100644 mm/damon.c

diff --git a/mm/Kconfig b/mm/Kconfig
index ab80933be65f..387d469f40ec 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -739,4 +739,16 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
         bool
 
+config DAMON
+	tristate "Data Access Monitor"
+	depends on MMU
+	default n
+	help
+	  Provides data access monitoring.
+
+	  DAMON is a kernel module that allows users to monitor the actual
+	  memory access pattern of specific user-space processes.  It aims to
+	  be 1) accurate enough to be useful for performance-centric domains,
+	  and 2) sufficiently light-weight so that it can be applied online.
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 1937cc251883..2911b3832c90 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -108,3 +108,4 @@ obj-$(CONFIG_ZONE_DEVICE) += memremap.o
 obj-$(CONFIG_HMM_MIRROR) += hmm.o
 obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
+obj-$(CONFIG_DAMON) += damon.o
diff --git a/mm/damon.c b/mm/damon.c
new file mode 100644
index 000000000000..aafdca35b7b8
--- /dev/null
+++ b/mm/damon.c
@@ -0,0 +1,224 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Data Access Monitor
+ *
+ * Copyright 2019 Amazon.com, Inc. or its affiliates.  All rights reserved.
+ *
+ * Author: SeongJae Park <sjpark@amazon.de>
+ */
+
+#define pr_fmt(fmt) "damon: " fmt
+
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+
+#define damon_get_task_struct(t) \
+	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
+
+#define damon_next_region(r) \
+	(container_of(r->list.next, struct damon_region, list))
+
+#define damon_prev_region(r) \
+	(container_of(r->list.prev, struct damon_region, list))
+
+#define damon_for_each_region(r, t) \
+	list_for_each_entry(r, &t->regions_list, list)
+
+#define damon_for_each_region_safe(r, next, t) \
+	list_for_each_entry_safe(r, next, &t->regions_list, list)
+
+#define damon_for_each_task(ctx, t) \
+	list_for_each_entry(t, &(ctx)->tasks_list, list)
+
+#define damon_for_each_task_safe(ctx, t, next) \
+	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
+
+/* Represents a monitoring target region on the virtual address space */
+struct damon_region {
+	unsigned long vm_start;
+	unsigned long vm_end;
+	unsigned long sampling_addr;
+	unsigned int nr_accesses;
+	struct list_head list;
+};
+
+/* Represents a monitoring target task */
+struct damon_task {
+	unsigned long pid;
+	struct list_head regions_list;
+	struct list_head list;
+};
+
+struct damon_ctx {
+	struct rnd_state rndseed;
+
+	struct list_head tasks_list;	/* 'damon_task' objects */
+};
+
+/* Get a random number in [l, r) */
+#define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
+
+/*
+ * Construct a damon_region struct
+ *
+ * Returns the pointer to the new struct if success, or NULL otherwise
+ */
+static struct damon_region *damon_new_region(struct damon_ctx *ctx,
+				unsigned long vm_start, unsigned long vm_end)
+{
+	struct damon_region *ret;
+
+	ret = kmalloc(sizeof(struct damon_region), GFP_KERNEL);
+	if (!ret)
+		return NULL;
+	ret->vm_start = vm_start;
+	ret->vm_end = vm_end;
+	ret->nr_accesses = 0;
+	ret->sampling_addr = damon_rand(ctx, vm_start, vm_end);
+	INIT_LIST_HEAD(&ret->list);
+
+	return ret;
+}
+
+/*
+ * Add a region between two other regions
+ */
+static inline void damon_add_region(struct damon_region *r,
+		struct damon_region *prev, struct damon_region *next)
+{
+	__list_add(&r->list, &prev->list, &next->list);
+}
+
+/*
+ * Append a region to a task's list of regions
+ */
+static void damon_add_region_tail(struct damon_region *r, struct damon_task *t)
+{
+	list_add_tail(&r->list, &t->regions_list);
+}
+
+/*
+ * Delete a region from its list
+ */
+static void damon_del_region(struct damon_region *r)
+{
+	list_del(&r->list);
+}
+
+/*
+ * De-allocate a region
+ */
+static void damon_free_region(struct damon_region *r)
+{
+	kfree(r);
+}
+
+static void damon_destroy_region(struct damon_region *r)
+{
+	damon_del_region(r);
+	damon_free_region(r);
+}
+
+/*
+ * Construct a damon_task struct
+ *
+ * Returns the pointer to the new struct if success, or NULL otherwise
+ */
+static struct damon_task *damon_new_task(unsigned long pid)
+{
+	struct damon_task *t;
+
+	t = kmalloc(sizeof(struct damon_task), GFP_KERNEL);
+	if (!t)
+		return NULL;
+	t->pid = pid;
+	INIT_LIST_HEAD(&t->regions_list);
+
+	return t;
+}
+
+/* Returns n-th damon_region of the given task */
+struct damon_region *damon_nth_region_of(struct damon_task *t, unsigned int n)
+{
+	struct damon_region *r;
+	unsigned int i;
+
+	i = 0;
+	damon_for_each_region(r, t) {
+		if (i++ == n)
+			return r;
+	}
+	return NULL;
+}
+
+static void damon_add_task_tail(struct damon_ctx *ctx, struct damon_task *t)
+{
+	list_add_tail(&t->list, &ctx->tasks_list);
+}
+
+static void damon_del_task(struct damon_task *t)
+{
+	list_del(&t->list);
+}
+
+static void damon_free_task(struct damon_task *t)
+{
+	struct damon_region *r, *next;
+
+	damon_for_each_region_safe(r, next, t)
+		damon_free_region(r);
+	kfree(t);
+}
+
+static void damon_destroy_task(struct damon_task *t)
+{
+	damon_del_task(t);
+	damon_free_task(t);
+}
+
+/*
+ * Returns number of monitoring target tasks
+ */
+static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	unsigned int ret = 0;
+
+	damon_for_each_task(ctx, t)
+		ret++;
+	return ret;
+}
+
+/*
+ * Returns the number of target regions for a given target task
+ */
+static unsigned int nr_damon_regions(struct damon_task *t)
+{
+	struct damon_region *r;
+	unsigned int ret = 0;
+
+	damon_for_each_region(r, t)
+		ret++;
+	return ret;
+}
+
+static int __init damon_init(void)
+{
+	pr_info("init\n");
+
+	return 0;
+}
+
+static void __exit damon_exit(void)
+{
+	pr_info("exit\n");
+}
+
+module_init(damon_init);
+module_exit(damon_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("SeongJae Park <sjpark@amazon.de>");
+MODULE_DESCRIPTION("DAMON: Data Access MONitor");
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 01/14] mm: " SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  8:57   ` Jonathan Cameron
  2020-03-13 17:29   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 03/14] mm/damon: Adaptively adjust regions SeongJae Park
                   ` (13 subsequent siblings)
  15 siblings, 2 replies; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit implements DAMON's basic access check and region based
sampling mechanisms.  This change may seem incomplete on its own,
mainly because it implements only a part of DAMON's logic.  The
following two commits will complete the picture.

This commit also exports `lookup_page_ext()` to GPL modules because
DAMON uses the function but also supports being built as a module.

Basic Access Check
------------------

DAMON basically reports which pages are accessed how frequently.  Note
that the frequency is not an absolute number of accesses, but a relative
frequency among the pages of the target workloads.

Users can control the resolution of the reports by setting two time
intervals, ``sampling interval`` and ``aggregation interval``.  In
detail, DAMON checks access to each page per ``sampling interval``,
aggregates the results (counts the number of the accesses to each page),
and reports the aggregated results per ``aggregation interval``.  For
the access check of each page, DAMON uses the Accessed bits of PTEs.

This is thus similar to common periodic access check based access
tracking mechanisms, whose overhead increases as the size of the
target process grows.

Region Based Sampling
---------------------

To avoid the unbounded increase of the overhead, DAMON groups a number
of adjacent pages that are assumed to have the same access frequencies
into a region.  As long as the assumption (pages in a region have the
same access frequencies) holds, only one page in the region needs to be
checked.  Thus, for each ``sampling interval``, DAMON randomly picks
one page in each region and clears its Accessed bit.  After one more
``sampling interval``, DAMON reads the Accessed bit of the page and
increases the access frequency of the region if the bit has been set
meanwhile.  Therefore, the monitoring overhead is controllable by
setting the number of regions.

Nonetheless, this scheme cannot preserve the quality of the output if
the assumption does not hold.  The following commit will introduce how
the assumption can be kept with best effort.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c    | 509 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/page_ext.c |   1 +
 2 files changed, 510 insertions(+)

diff --git a/mm/damon.c b/mm/damon.c
index aafdca35b7b8..6bdeb84d89af 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,9 +9,14 @@
 
 #define pr_fmt(fmt) "damon: " fmt
 
+#include <linux/delay.h>
+#include <linux/kthread.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/page_idle.h>
 #include <linux/random.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
 #include <linux/slab.h>
 
 #define damon_get_task_struct(t) \
@@ -51,7 +56,24 @@ struct damon_task {
 	struct list_head list;
 };
 
+/*
+ * For each 'sample_interval', DAMON checks whether each region is accessed or
+ * not.  It aggregates and keeps the access information (number of accesses to
+ * each region) for each 'aggr_interval' time.
+ *
+ * All time intervals are in micro-seconds.
+ */
 struct damon_ctx {
+	unsigned long sample_interval;
+	unsigned long aggr_interval;
+	unsigned long min_nr_regions;
+
+	struct timespec64 last_aggregation;
+
+	struct task_struct *kdamond;
+	bool kdamond_stop;
+	spinlock_t kdamond_lock;
+
 	struct rnd_state rndseed;
 
 	struct list_head tasks_list;	/* 'damon_task' objects */
@@ -204,6 +226,493 @@ static unsigned int nr_damon_regions(struct damon_task *t)
 	return ret;
 }
 
+/*
+ * Get the mm_struct of the given task
+ *
+ * Caller should put the mm_struct after use, unless it is NULL.
+ *
+ * Returns the mm_struct of the task on success, NULL on failure
+ */
+static struct mm_struct *damon_get_mm(struct damon_task *t)
+{
+	struct task_struct *task;
+	struct mm_struct *mm;
+
+	task = damon_get_task_struct(t);
+	if (!task)
+		return NULL;
+
+	mm = get_task_mm(task);
+	put_task_struct(task);
+	return mm;
+}
+
+/*
+ * Size-evenly split a region into 'nr_pieces' small regions
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_split_region_evenly(struct damon_ctx *ctx,
+		struct damon_region *r, unsigned int nr_pieces)
+{
+	unsigned long sz_orig, sz_piece, orig_end;
+	struct damon_region *piece = NULL, *next;
+	unsigned long start;
+
+	if (!r || !nr_pieces)
+		return -EINVAL;
+
+	orig_end = r->vm_end;
+	sz_orig = r->vm_end - r->vm_start;
+	sz_piece = sz_orig / nr_pieces;
+
+	if (!sz_piece)
+		return -EINVAL;
+
+	r->vm_end = r->vm_start + sz_piece;
+	next = damon_next_region(r);
+	for (start = r->vm_end; start + sz_piece <= orig_end;
+			start += sz_piece) {
+		piece = damon_new_region(ctx, start, start + sz_piece);
+		damon_add_region(piece, r, next);
+		r = piece;
+	}
+	if (piece)
+		piece->vm_end = orig_end;
+	return 0;
+}
+
+struct region {
+	unsigned long start;
+	unsigned long end;
+};
+
+static unsigned long sz_region(struct region *r)
+{
+	return r->end - r->start;
+}
+
+static void swap_regions(struct region *r1, struct region *r2)
+{
+	struct region tmp;
+
+	tmp = *r1;
+	*r1 = *r2;
+	*r2 = tmp;
+}
+
+/*
+ * Find the three regions in an address space
+ *
+ * vma		the head vma of the target address space
+ * regions	an array of three 'struct region's in which results will be saved
+ *
+ * This function receives an address space and finds three regions in it that are
+ * separated by the two biggest unmapped regions in the space.  Please refer to
+ * below comments of 'damon_init_regions_of()' function to know why this is
+ * necessary.
+ *
+ * Returns 0 if success, or negative error code otherwise.
+ */
+static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
+		struct region regions[3])
+{
+	struct region gap = {0,}, first_gap = {0,}, second_gap = {0,};
+	struct vm_area_struct *last_vma = NULL;
+	unsigned long start = 0;
+
+	/* Find two biggest gaps so that first_gap > second_gap > others */
+	for (; vma; vma = vma->vm_next) {
+		if (!last_vma) {
+			start = vma->vm_start;
+			last_vma = vma;
+			continue;
+		}
+		gap.start = last_vma->vm_end;
+		gap.end = vma->vm_start;
+		if (sz_region(&gap) > sz_region(&second_gap)) {
+			swap_regions(&gap, &second_gap);
+			if (sz_region(&second_gap) > sz_region(&first_gap))
+				swap_regions(&second_gap, &first_gap);
+		}
+		last_vma = vma;
+	}
+
+	if (!sz_region(&second_gap) || !sz_region(&first_gap))
+		return -EINVAL;
+
+	/* Sort the two biggest gaps by address */
+	if (first_gap.start > second_gap.start)
+		swap_regions(&first_gap, &second_gap);
+
+	/* Store the result */
+	regions[0].start = start;
+	regions[0].end = first_gap.start;
+	regions[1].start = first_gap.end;
+	regions[1].end = second_gap.start;
+	regions[2].start = second_gap.end;
+	regions[2].end = last_vma->vm_end;
+
+	return 0;
+}
+
+/*
+ * Get the three regions in the given task
+ *
+ * Returns 0 on success, negative error code otherwise.
+ */
+static int damon_three_regions_of(struct damon_task *t,
+				struct region regions[3])
+{
+	struct mm_struct *mm;
+	int ret;
+
+	mm = damon_get_mm(t);
+	if (!mm)
+		return -EINVAL;
+
+	down_read(&mm->mmap_sem);
+	ret = damon_three_regions_in_vmas(mm->mmap, regions);
+	up_read(&mm->mmap_sem);
+
+	mmput(mm);
+	return ret;
+}
+
+/*
+ * Initialize the monitoring target regions for the given task
+ *
+ * t	the given target task
+ *
+ * Because only a number of small portions of the entire address space
+ * are actually mapped to the memory and accessed, monitoring the unmapped
+ * regions is wasteful.  That said, because we can deal with small noises,
+ * tracking every mapping is not strictly required but could even incur a high
+ * overhead if the mapping frequently changes or the number of mappings is
+ * high.  Nonetheless, this may seems very weird.  DAMON's dynamic regions
+ * adjustment mechanism, which will be implemented with following commit will
+ * make this more sense.
+ *
+ * For the reason, we convert the complex mappings to three distinct regions
+ * that cover every mapped areas of the address space.  Also the two gaps
+ * between the three regions are the two biggest unmapped areas in the given
+ * address space.  In detail, this function first identifies the start and the
+ * end of the mappings and the two biggest unmapped areas of the address space.
+ * Then, it constructs the three regions as below:
+ *
+ *     [mappings[0]->start, big_two_unmapped_areas[0]->start)
+ *     [big_two_unmapped_areas[0]->end, big_two_unmapped_areas[1]->start)
+ *     [big_two_unmapped_areas[1]->end, mappings[nr_mappings - 1]->end)
+ *
+ * As the usual memory map of processes is as below, the gap between the heap
+ * and the uppermost mmap()-ed region, and the gap between the lowermost
+ * mmap()-ed region and the stack, will be the two biggest unmapped regions.
+ * Because these gaps are exceptionally huge in a usual address space,
+ * excluding these two biggest unmapped regions is a sufficient trade-off.
+ *
+ *   <heap>
+ *   <BIG UNMAPPED REGION 1>
+ *   <uppermost mmap()-ed region>
+ *   (other mmap()-ed regions and small unmapped regions)
+ *   <lowermost mmap()-ed region>
+ *   <BIG UNMAPPED REGION 2>
+ *   <stack>
+ */
+static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
+{
+	struct damon_region *r;
+	struct region regions[3];
+	int i;
+
+	if (damon_three_regions_of(t, regions)) {
+		pr_err("Failed to get three regions of task %lu\n", t->pid);
+		return;
+	}
+
+	/* Set the initial three regions of the task */
+	for (i = 0; i < 3; i++) {
+		r = damon_new_region(c, regions[i].start, regions[i].end);
+		damon_add_region_tail(r, t);
+	}
+
+	/* Split the middle region into 'min_nr_regions - 2' regions */
+	r = damon_nth_region_of(t, 1);
+	if (damon_split_region_evenly(c, r, c->min_nr_regions - 2))
+		pr_warn("Init middle region failed to be split\n");
+}
+
+/* Initialize '->regions_list' of every task */
+static void kdamond_init_regions(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+
+	damon_for_each_task(ctx, t)
+		damon_init_regions_of(ctx, t);
+}
+
+/*
+ * Check whether the given region has accessed since the last check
+ *
+ * mm	'mm_struct' for the given virtual address space
+ * r	the region to be checked
+ */
+static void kdamond_check_access(struct damon_ctx *ctx,
+			struct mm_struct *mm, struct damon_region *r)
+{
+	pte_t *pte = NULL;
+	pmd_t *pmd = NULL;
+	spinlock_t *ptl;
+
+	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
+		goto mkold;
+
+	/* Read the page table access bit of the page */
+	if (pte && pte_young(*pte))
+		r->nr_accesses++;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	else if (pmd && pmd_young(*pmd))
+		r->nr_accesses++;
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	spin_unlock(ptl);
+
+mkold:
+	/* mkold next target */
+	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
+
+	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
+		return;
+
+	if (pte) {
+		if (pte_young(*pte)) {
+			clear_page_idle(pte_page(*pte));
+			set_page_young(pte_page(*pte));
+		}
+		*pte = pte_mkold(*pte);
+	}
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	else if (pmd) {
+		if (pmd_young(*pmd)) {
+			clear_page_idle(pmd_page(*pmd));
+			set_page_young(pmd_page(*pmd));
+		}
+		*pmd = pmd_mkold(*pmd);
+	}
+#endif
+
+	spin_unlock(ptl);
+}
+
+/*
+ * Check whether a time interval is elapsed
+ *
+ * baseline	the time to check whether the interval has elapsed since
+ * interval	the time interval (microseconds)
+ *
+ * See whether the given time interval has passed since the given baseline
+ * time.  If so, it also updates the baseline to current time for next check.
+ *
+ * Returns true if the time interval has passed, or false otherwise.
+ */
+static bool damon_check_reset_time_interval(struct timespec64 *baseline,
+		unsigned long interval)
+{
+	struct timespec64 now;
+
+	ktime_get_coarse_ts64(&now);
+	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) <
+			interval * 1000)
+		return false;
+	*baseline = now;
+	return true;
+}
+
+/*
+ * Check whether it is time to flush the aggregated information
+ */
+static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
+{
+	return damon_check_reset_time_interval(&ctx->last_aggregation,
+			ctx->aggr_interval);
+}
+
+/*
+ * Reset the aggregated monitoring results
+ */
+static void kdamond_flush_aggregated(struct damon_ctx *c)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(c, t) {
+		damon_for_each_region(r, t)
+			r->nr_accesses = 0;
+	}
+}
+
+/*
+ * Check whether current monitoring should be stopped
+ *
+ * If users asked to stop, we should stop.  Even though no user has asked to
+ * stop, we should stop if every target task is dead.
+ *
+ * Returns true if need to stop current monitoring.
+ */
+static bool kdamond_need_stop(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct task_struct *task;
+	bool stop;
+
+	spin_lock(&ctx->kdamond_lock);
+	stop = ctx->kdamond_stop;
+	spin_unlock(&ctx->kdamond_lock);
+	if (stop)
+		return true;
+
+	damon_for_each_task(ctx, t) {
+		task = damon_get_task_struct(t);
+		if (task) {
+			put_task_struct(task);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * The monitoring daemon that runs as a kernel thread
+ */
+static int kdamond_fn(void *data)
+{
+	struct damon_ctx *ctx = (struct damon_ctx *)data;
+	struct damon_task *t;
+	struct damon_region *r, *next;
+	struct mm_struct *mm;
+
+	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
+	kdamond_init_regions(ctx);
+	while (!kdamond_need_stop(ctx)) {
+		damon_for_each_task(ctx, t) {
+			mm = damon_get_mm(t);
+			if (!mm)
+				continue;
+			damon_for_each_region(r, t)
+				kdamond_check_access(ctx, mm, r);
+			mmput(mm);
+		}
+
+		if (kdamond_aggregate_interval_passed(ctx))
+			kdamond_flush_aggregated(ctx);
+
+		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
+	}
+	damon_for_each_task(ctx, t) {
+		damon_for_each_region_safe(r, next, t)
+			damon_destroy_region(r);
+	}
+	pr_info("kdamond (%d) finishes\n", ctx->kdamond->pid);
+	spin_lock(&ctx->kdamond_lock);
+	ctx->kdamond = NULL;
+	spin_unlock(&ctx->kdamond_lock);
+	return 0;
+}
+
+/*
+ * Controller functions
+ */
+
+/*
+ * Start or stop the kdamond
+ *
+ * Returns 0 if success, negative error code otherwise.
+ */
+static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
+{
+	spin_lock(&ctx->kdamond_lock);
+	ctx->kdamond_stop = !on;
+	if (!ctx->kdamond && on) {
+		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
+		if (!ctx->kdamond)
+			goto fail;
+		goto success;
+	}
+	if (ctx->kdamond && !on) {
+		spin_unlock(&ctx->kdamond_lock);
+		while (true) {
+			spin_lock(&ctx->kdamond_lock);
+			if (!ctx->kdamond)
+				goto success;
+			spin_unlock(&ctx->kdamond_lock);
+
+			usleep_range(ctx->sample_interval,
+					ctx->sample_interval * 2);
+		}
+	}
+
+	/* tried to turn on while turned on, or turn off while turned off */
+
+fail:
+	spin_unlock(&ctx->kdamond_lock);
+	return -EINVAL;
+
+success:
+	spin_unlock(&ctx->kdamond_lock);
+	return 0;
+}
+
+/*
+ * This function should not be called while the kdamond is running.
+ */
+static int damon_set_pids(struct damon_ctx *ctx,
+			unsigned long *pids, ssize_t nr_pids)
+{
+	ssize_t i;
+	struct damon_task *t, *next;
+
+	damon_for_each_task_safe(ctx, t, next)
+		damon_destroy_task(t);
+
+	for (i = 0; i < nr_pids; i++) {
+		t = damon_new_task(pids[i]);
+		if (!t) {
+			pr_err("Failed to alloc damon_task\n");
+			return -ENOMEM;
+		}
+		damon_add_task_tail(ctx, t);
+	}
+
+	return 0;
+}
+
+/*
+ * Set attributes for the monitoring
+ *
+ * sample_int		time interval between samplings
+ * aggr_int		time interval between aggregations
+ * min_nr_reg		minimal number of regions
+ *
+ * This function should not be called while the kdamond is running.
+ * Every time interval is in micro-seconds.
+ *
+ * Returns 0 on success, negative error code otherwise.
+ */
+static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long min_nr_reg)
+{
+	if (min_nr_reg < 3) {
+		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
+				min_nr_reg);
+		return -EINVAL;
+	}
+
+	ctx->sample_interval = sample_int;
+	ctx->aggr_interval = aggr_int;
+	ctx->min_nr_regions = min_nr_reg;
+	return 0;
+}
+
 static int __init damon_init(void)
 {
 	pr_info("init\n");
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 4ade843ff588..71169b45bba9 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -131,6 +131,7 @@ struct page_ext *lookup_page_ext(const struct page *page)
 					MAX_ORDER_NR_PAGES);
 	return get_entry(base, index);
 }
+EXPORT_SYMBOL_GPL(lookup_page_ext);
 
 static int __init alloc_node_page_ext(int nid)
 {
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 03/14] mm/damon: Adaptively adjust regions
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 01/14] mm: " SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 02/14] mm/damon: Implement region based sampling SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  8:57   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes SeongJae Park
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

At the beginning of the monitoring, DAMON constructs the initial regions
by evenly splitting the memory mapped address space of the process into
the user-specified minimal number of regions.  In this initial state,
the assumption for the regions (pages in the same region have similar
access frequencies) normally does not hold, and thus the monitoring
quality could be low.  To keep the assumption as much as possible, DAMON
adaptively merges and splits each region.

For each ``aggregation interval``, it compares the access frequencies of
adjacent regions and merges those whose frequency difference is small.
Then, after it reports and clears the aggregated access frequency of
each region, it splits each region into two regions if the total number
of regions is smaller than half of the user-specified maximum number
of regions.

In this way, DAMON provides its best-effort quality and minimal overhead
while keeping the bounds users set for their trade-off.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 144 insertions(+), 7 deletions(-)

diff --git a/mm/damon.c b/mm/damon.c
index 6bdeb84d89af..1c8bb71bbce9 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -67,6 +67,7 @@ struct damon_ctx {
 	unsigned long sample_interval;
 	unsigned long aggr_interval;
 	unsigned long min_nr_regions;
+	unsigned long max_nr_regions;
 
 	struct timespec64 last_aggregation;
 
@@ -389,9 +390,12 @@ static int damon_three_regions_of(struct damon_task *t,
  * regions is wasteful.  That said, because we can deal with small noises,
  * tracking every mapping is not strictly required but could even incur a high
  * overhead if the mapping frequently changes or the number of mappings is
- * high.  Nonetheless, this may seems very weird.  DAMON's dynamic regions
- * adjustment mechanism, which will be implemented with following commit will
- * make this more sense.
+ * high.  The adaptive regions adjustment mechanism will further help to deal
+ * with the noises by simply identifying the unmapped areas as a region that
+ * has no access.  Moreover, applying the real mappings that would have many
+ * unmapped areas inside will make the adaptive mechanism quite complex.  That
+ * said, too huge unmapped areas inside the monitoring target should be removed
+ * to not take the time for the adaptive mechanism.
  *
  * For the reason, we convert the complex mappings to three distinct regions
  * that cover every mapped areas of the address space.  Also the two gaps
@@ -550,6 +554,123 @@ static void kdamond_flush_aggregated(struct damon_ctx *c)
 	}
 }
 
+#define sz_damon_region(r) (r->vm_end - r->vm_start)
+
+/*
+ * Merge two adjacent regions into one region
+ */
+static void damon_merge_two_regions(struct damon_region *l,
+				struct damon_region *r)
+{
+	l->nr_accesses = (l->nr_accesses * sz_damon_region(l) +
+			r->nr_accesses * sz_damon_region(r)) /
+			(sz_damon_region(l) + sz_damon_region(r));
+	l->vm_end = r->vm_end;
+	damon_destroy_region(r);
+}
+
+#define diff_of(a, b) (a > b ? a - b : b - a)
+
+/*
+ * Merge adjacent regions having similar access frequencies
+ *
+ * t		task that merge operation will make change
+ * thres	merge regions having '->nr_accesses' diff smaller than this
+ */
+static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
+{
+	struct damon_region *r, *prev = NULL, *next;
+
+	damon_for_each_region_safe(r, next, t) {
+		if (!prev || prev->vm_end != r->vm_start)
+			goto next;
+		if (diff_of(prev->nr_accesses, r->nr_accesses) > thres)
+			goto next;
+		damon_merge_two_regions(prev, r);
+		continue;
+next:
+		prev = r;
+	}
+}
+
+/*
+ * Merge adjacent regions having similar access frequencies
+ *
+ * threshold	merge regions having nr_accesses diff smaller than this
+ *
+ * This function merges monitoring target regions which are adjacent and their
+ * access frequencies are similar.  This is for minimizing the monitoring
+ * overhead under the dynamically changeable access pattern.  If a merge was
+ * unnecessarily made, later 'kdamond_split_regions()' will revert it.
+ */
+static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold)
+{
+	struct damon_task *t;
+
+	damon_for_each_task(c, t)
+		damon_merge_regions_of(t, threshold);
+}
+
+/*
+ * Split a region into two small regions
+ *
+ * r		the region to be split
+ * sz_r		size of the first sub-region that will be made
+ */
+static void damon_split_region_at(struct damon_ctx *ctx,
+		struct damon_region *r, unsigned long sz_r)
+{
+	struct damon_region *new;
+
+	new = damon_new_region(ctx, r->vm_start + sz_r, r->vm_end);
+	r->vm_end = new->vm_start;
+
+	damon_add_region(new, r, damon_next_region(r));
+}
+
+static void damon_split_regions_of(struct damon_ctx *ctx, struct damon_task *t)
+{
+	struct damon_region *r, *next;
+	unsigned long sz_left_region;
+
+	damon_for_each_region_safe(r, next, t) {
+		/*
+		 * Randomly select size of left sub-region to be at least
+		 * 10 percent and at most 90% of original region
+		 */
+		sz_left_region = (prandom_u32_state(&ctx->rndseed) % 9 + 1) *
+			(r->vm_end - r->vm_start) / 10;
+		/* Do not allow blank region */
+		if (sz_left_region == 0)
+			continue;
+		damon_split_region_at(ctx, r, sz_left_region);
+	}
+}
+
+/*
+ * Split every target region into two randomly-sized regions
+ *
+ * This function splits every target region into two random-sized regions if
+ * the current total number of regions is smaller than half of the
+ * user-specified maximum number of regions.  This is for maximizing the
+ * monitoring accuracy under the dynamically changeable access patterns.  If a
+ * split was unnecessarily made, later 'kdamond_merge_regions()' will revert
+ * it.
+ */
+static void kdamond_split_regions(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	unsigned int nr_regions = 0;
+
+	damon_for_each_task(ctx, t)
+		nr_regions += nr_damon_regions(t);
+	if (nr_regions > ctx->max_nr_regions / 2)
+		return;
+
+	damon_for_each_task(ctx, t)
+		damon_split_regions_of(ctx, t);
+}
+
 /*
  * Check whether current monitoring should be stopped
  *
@@ -590,21 +711,29 @@ static int kdamond_fn(void *data)
 	struct damon_task *t;
 	struct damon_region *r, *next;
 	struct mm_struct *mm;
+	unsigned long max_nr_accesses;
 
 	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
 	kdamond_init_regions(ctx);
 	while (!kdamond_need_stop(ctx)) {
+		max_nr_accesses = 0;
 		damon_for_each_task(ctx, t) {
 			mm = damon_get_mm(t);
 			if (!mm)
 				continue;
-			damon_for_each_region(r, t)
+			damon_for_each_region(r, t) {
 				kdamond_check_access(ctx, mm, r);
+				if (r->nr_accesses > max_nr_accesses)
+					max_nr_accesses = r->nr_accesses;
+			}
 			mmput(mm);
 		}
 
-		if (kdamond_aggregate_interval_passed(ctx))
+		if (kdamond_aggregate_interval_passed(ctx)) {
+			kdamond_merge_regions(ctx, max_nr_accesses / 10);
 			kdamond_flush_aggregated(ctx);
+			kdamond_split_regions(ctx);
+		}
 
 		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
 	}
@@ -692,24 +821,32 @@ static int damon_set_pids(struct damon_ctx *ctx,
  * sample_int		time interval between samplings
  * aggr_int		time interval between aggregations
  * min_nr_reg		minimal number of regions
+ * max_nr_reg		maximum number of regions
  *
  * This function should not be called while the kdamond is running.
  * Every time interval is in micro-seconds.
  *
  * Returns 0 on success, negative error code otherwise.
  */
-static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
-		unsigned long aggr_int, unsigned long min_nr_reg)
+static int damon_set_attrs(struct damon_ctx *ctx,
+			unsigned long sample_int, unsigned long aggr_int,
+			unsigned long min_nr_reg, unsigned long max_nr_reg)
 {
 	if (min_nr_reg < 3) {
 		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
 				min_nr_reg);
 		return -EINVAL;
 	}
+	if (min_nr_reg >= max_nr_reg) {
+		pr_err("invalid nr_regions.  min (%lu) >= max (%lu)\n",
+				min_nr_reg, max_nr_reg);
+		return -EINVAL;
+	}
 
 	ctx->sample_interval = sample_int;
 	ctx->aggr_interval = aggr_int;
 	ctx->min_nr_regions = min_nr_reg;
+	ctx->max_nr_regions = max_nr_reg;
 	return 0;
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (2 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 03/14] mm/damon: Adaptively adjust regions SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:00   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 05/14] mm/damon: Implement callbacks SeongJae Park
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

Only a number of small parts in the virtual address space of a process
are mapped to physical memory and accessed, so tracking the unmapped
address regions is just wasteful.  However, tracking every memory
mapping change might incur a high overhead.  For this reason, DAMON
applies the dynamic memory mapping changes to its tracking regions only
once per user-specified time interval (``regions update interval``).
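
The per-interval checks described above boil down to a simple elapsed-time test.  Below is a user-space sketch of the idea behind 'damon_check_reset_time_interval()', using microsecond ticks instead of timespec64 and illustrative names ('usec_clock', 'interval_elapsed' are not the kernel's):

```c
#include <stdbool.h>

/* A fake clock so the check is testable without real time. */
struct usec_clock { unsigned long long now_us; };

/* Return true (and reset the baseline) if 'interval_us' microseconds
 * have elapsed since '*baseline_us'. */
static bool interval_elapsed(struct usec_clock *clk,
			     unsigned long long *baseline_us,
			     unsigned long long interval_us)
{
	if (clk->now_us - *baseline_us < interval_us)
		return false;
	*baseline_us = clk->now_us;
	return true;
}
```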

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 95 insertions(+), 4 deletions(-)

diff --git a/mm/damon.c b/mm/damon.c
index 1c8bb71bbce9..6a17408e83c2 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -59,17 +59,22 @@ struct damon_task {
 /*
  * For each 'sample_interval', DAMON checks whether each region is accessed or
  * not.  It aggregates and keeps the access information (number of accesses to
- * each region) for each 'aggr_interval' time.
+ * each region) for each 'aggr_interval' time.  And for each
+ * 'regions_update_interval', damon checks whether the memory mapping of the
+ * target tasks has changed (e.g., by mmap() calls from the applications) and
+ * applies the changes.
  *
  * All time intervals are in micro-seconds.
  */
 struct damon_ctx {
 	unsigned long sample_interval;
 	unsigned long aggr_interval;
+	unsigned long regions_update_interval;
 	unsigned long min_nr_regions;
 	unsigned long max_nr_regions;
 
 	struct timespec64 last_aggregation;
+	struct timespec64 last_regions_update;
 
 	struct task_struct *kdamond;
 	bool kdamond_stop;
@@ -671,6 +676,87 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
 		damon_split_regions_of(ctx, t);
 }
 
+/*
+ * Check whether it is time to check and apply the dynamic mmap changes
+ *
+ * Returns true if it is.
+ */
+static bool kdamond_need_update_regions(struct damon_ctx *ctx)
+{
+	return damon_check_reset_time_interval(&ctx->last_regions_update,
+			ctx->regions_update_interval);
+}
+
+static bool damon_intersect(struct damon_region *r, struct region *re)
+{
+	return !(r->vm_end <= re->start || re->end <= r->vm_start);
+}
+
+/*
+ * Update damon regions for the three big regions of the given task
+ *
+ * t		the given task
+ * bregions	the three big regions of the task
+ */
+static void damon_apply_three_regions(struct damon_ctx *ctx,
+		struct damon_task *t, struct region bregions[3])
+{
+	struct damon_region *r, *next;
+	unsigned int i = 0;
+
+	/* Remove regions which aren't in the three big regions now */
+	damon_for_each_region_safe(r, next, t) {
+		for (i = 0; i < 3; i++) {
+			if (damon_intersect(r, &bregions[i]))
+				break;
+		}
+		if (i == 3)
+			damon_destroy_region(r);
+	}
+
+	/* Adjust intersecting regions to fit in the three big regions */
+	for (i = 0; i < 3; i++) {
+		struct damon_region *first = NULL, *last;
+		struct damon_region *newr;
+		struct region *br;
+
+		br = &bregions[i];
+		/* Get the first and last regions which intersect with br */
+		damon_for_each_region(r, t) {
+			if (damon_intersect(r, br)) {
+				if (!first)
+					first = r;
+				last = r;
+			}
+			if (r->vm_start >= br->end)
+				break;
+		}
+		if (!first) {
+			/* no damon_region intersects with this big region */
+			newr = damon_new_region(ctx, br->start, br->end);
+			damon_add_region(newr, damon_prev_region(r), r);
+		} else {
+			first->vm_start = br->start;
+			last->vm_end = br->end;
+		}
+	}
+}
+
+/*
+ * Update regions for current memory mappings
+ */
+static void kdamond_update_regions(struct damon_ctx *ctx)
+{
+	struct region three_regions[3];
+	struct damon_task *t;
+
+	damon_for_each_task(ctx, t) {
+		if (damon_three_regions_of(t, three_regions))
+			continue;
+		damon_apply_three_regions(ctx, t, three_regions);
+	}
+}
+
 /*
  * Check whether current monitoring should be stopped
  *
@@ -735,6 +821,9 @@ static int kdamond_fn(void *data)
 			kdamond_split_regions(ctx);
 		}
 
+		if (kdamond_need_update_regions(ctx))
+			kdamond_update_regions(ctx);
+
 		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
 	}
 	damon_for_each_task(ctx, t) {
@@ -820,6 +909,7 @@ static int damon_set_pids(struct damon_ctx *ctx,
  *
  * sample_int		time interval between samplings
  * aggr_int		time interval between aggregations
+ * regions_update_int	time interval between vma update checks
  * min_nr_reg		minimal number of regions
  * max_nr_reg		maximum number of regions
  *
@@ -828,9 +918,9 @@ static int damon_set_pids(struct damon_ctx *ctx,
  *
  * Returns 0 on success, negative error code otherwise.
  */
-static int damon_set_attrs(struct damon_ctx *ctx,
-			unsigned long sample_int, unsigned long aggr_int,
-			unsigned long min_nr_reg, unsigned long max_nr_reg)
+static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long regions_update_int,
+		unsigned long min_nr_reg, unsigned long max_nr_reg)
 {
 	if (min_nr_reg < 3) {
 		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
@@ -845,6 +935,7 @@ static int damon_set_attrs(struct damon_ctx *ctx,
 
 	ctx->sample_interval = sample_int;
 	ctx->aggr_interval = aggr_int;
+	ctx->regions_update_interval = regions_update_int;
 	ctx->min_nr_regions = min_nr_reg;
 	ctx->max_nr_regions = max_nr_reg;
 	return 0;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 05/14] mm/damon: Implement callbacks
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (3 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:01   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 06/14] mm/damon: Implement access pattern recording SeongJae Park
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit implements callbacks for DAMON.  Using these, DAMON users
can install their own callbacks for each step of the access monitoring
so that they can do something interesting with the monitored access
patterns online.  For example, callbacks can report the monitored
patterns to users or make access pattern based memory management
decisions such as proactive reclamation or THP promotion/demotion.
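
The callback hooks can be exercised with ordinary user-space C.  The sketch below models the pattern only (a context struct carrying optional function pointers that the monitoring loop invokes when installed); 'mon_ctx' and 'mon_iterate' are illustrative names, not the kernel's:

```c
/* Context with optional per-sample and per-aggregation callbacks. */
struct mon_ctx {
	int nr_samples;
	int nr_aggregations;
	void (*sample_cb)(struct mon_ctx *ctx);
	void (*aggregate_cb)(struct mon_ctx *ctx);
};

static int sample_events;

/* An example callback that just counts its invocations. */
static void count_samples(struct mon_ctx *ctx)
{
	(void)ctx;
	sample_events++;
}

/* One iteration of a simplified monitoring loop: always sample,
 * aggregate only when the caller says the aggregation interval passed. */
static void mon_iterate(struct mon_ctx *ctx, int aggregate_now)
{
	ctx->nr_samples++;
	if (ctx->sample_cb)
		ctx->sample_cb(ctx);
	if (aggregate_now) {
		ctx->nr_aggregations++;
		if (ctx->aggregate_cb)
			ctx->aggregate_cb(ctx);
	}
}
```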

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/damon.c b/mm/damon.c
index 6a17408e83c2..554720778e8a 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -83,6 +83,10 @@ struct damon_ctx {
 	struct rnd_state rndseed;
 
 	struct list_head tasks_list;	/* 'damon_task' objects */
+
+	/* callbacks */
+	void (*sample_cb)(struct damon_ctx *context);
+	void (*aggregate_cb)(struct damon_ctx *context);
 };
 
 /* Get a random number in [l, r) */
@@ -814,9 +818,13 @@ static int kdamond_fn(void *data)
 			}
 			mmput(mm);
 		}
+		if (ctx->sample_cb)
+			ctx->sample_cb(ctx);
 
 		if (kdamond_aggregate_interval_passed(ctx)) {
 			kdamond_merge_regions(ctx, max_nr_accesses / 10);
+			if (ctx->aggregate_cb)
+				ctx->aggregate_cb(ctx);
 			kdamond_flush_aggregated(ctx);
 			kdamond_split_regions(ctx);
 		}
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 06/14] mm/damon: Implement access pattern recording
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (4 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 05/14] mm/damon: Implement callbacks SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:01   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 07/14] mm/damon: Implement kernel space API SeongJae Park
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit implements the recording feature of DAMON.  If this feature
is enabled, DAMON writes the monitored access patterns in its binary
format into a file specified by the user.  Users could already implement
this on their own using the callbacks.  However, as recording is
expected to be widely used, this commit implements the feature inside
DAMON itself, for more convenience and efficiency.
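
The buffered-write pattern the recording uses (accumulate results in a buffer, drain it when a write would not fit) can be sketched in user-space C.  'rbuf_write'/'rbuf_flush' below mirror 'damon_write_rbuf()'/'damon_flush_rbuffer()' in simplified form, with the file write replaced by a byte counter; names and sizes are illustrative:

```c
#include <string.h>

/* Result buffer: writes accumulate in 'buf'; a flush drains it. */
struct rbuf {
	unsigned char buf[64];
	unsigned int len;	/* capacity in use */
	unsigned int offset;	/* bytes currently buffered */
	unsigned int flushed;	/* total bytes drained so far */
};

static void rbuf_flush(struct rbuf *rb)
{
	rb->flushed += rb->offset;	/* stand-in for the file write */
	rb->offset = 0;
}

/* Buffer 'size' bytes, flushing first if they would not fit. */
static void rbuf_write(struct rbuf *rb, const void *data, unsigned int size)
{
	if (rb->offset + size > rb->len)
		rbuf_flush(rb);
	memcpy(&rb->buf[rb->offset], data, size);
	rb->offset += size;
}
```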

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 126 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 123 insertions(+), 3 deletions(-)

diff --git a/mm/damon.c b/mm/damon.c
index 554720778e8a..a7edb2dfa700 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -76,6 +76,11 @@ struct damon_ctx {
 	struct timespec64 last_aggregation;
 	struct timespec64 last_regions_update;
 
+	unsigned char *rbuf;
+	unsigned int rbuf_len;
+	unsigned int rbuf_offset;
+	char *rfile_path;
+
 	struct task_struct *kdamond;
 	bool kdamond_stop;
 	spinlock_t kdamond_lock;
@@ -89,6 +94,8 @@ struct damon_ctx {
 	void (*aggregate_cb)(struct damon_ctx *context);
 };
 
+#define MAX_RFILE_PATH_LEN	256
+
 /* Get a random number in [l, r) */
 #define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
 
@@ -550,16 +557,81 @@ static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
 }
 
 /*
- * Reset the aggregated monitoring results
+ * Flush the content in the result buffer to the result file
+ */
+static void damon_flush_rbuffer(struct damon_ctx *ctx)
+{
+	ssize_t sz;
+	loff_t pos;
+	struct file *rfile;
+
+	while (ctx->rbuf_offset) {
+		pos = 0;
+		rfile = filp_open(ctx->rfile_path, O_CREAT | O_RDWR | O_APPEND,
+				0644);
+		if (IS_ERR(rfile)) {
+			pr_err("Cannot open the result file %s\n",
+					ctx->rfile_path);
+			return;
+		}
+
+		sz = kernel_write(rfile, ctx->rbuf, ctx->rbuf_offset, &pos);
+		filp_close(rfile, NULL);
+
+		ctx->rbuf_offset -= sz;
+	}
+}
+
+/*
+ * Write a data into the result buffer
+ */
+static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
+{
+	if (!ctx->rbuf_len || !ctx->rbuf)
+		return;
+	if (ctx->rbuf_offset + size > ctx->rbuf_len)
+		damon_flush_rbuffer(ctx);
+
+	memcpy(&ctx->rbuf[ctx->rbuf_offset], data, size);
+	ctx->rbuf_offset += size;
+}
+
+/*
+ * Flush the aggregated monitoring results to the result buffer
+ *
+ * Stores current tracking results to the result buffer and resets 'nr_accesses'
+ * of each region.  The format for the result buffer is as below:
+ *
+ *   <time> <number of tasks> <array of task infos>
+ *
+ *   task info: <pid> <number of regions> <array of region infos>
+ *   region info: <start address> <end address> <nr_accesses>
  */
 static void kdamond_flush_aggregated(struct damon_ctx *c)
 {
 	struct damon_task *t;
-	struct damon_region *r;
+	struct timespec64 now;
+	unsigned int nr;
+
+	ktime_get_coarse_ts64(&now);
+
+	damon_write_rbuf(c, &now, sizeof(struct timespec64));
+	nr = nr_damon_tasks(c);
+	damon_write_rbuf(c, &nr, sizeof(nr));
 
 	damon_for_each_task(c, t) {
-		damon_for_each_region(r, t)
+		struct damon_region *r;
+
+		damon_write_rbuf(c, &t->pid, sizeof(t->pid));
+		nr = nr_damon_regions(t);
+		damon_write_rbuf(c, &nr, sizeof(nr));
+		damon_for_each_region(r, t) {
+			damon_write_rbuf(c, &r->vm_start, sizeof(r->vm_start));
+			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
+			damon_write_rbuf(c, &r->nr_accesses,
+					sizeof(r->nr_accesses));
 			r->nr_accesses = 0;
+		}
 	}
 }
 
@@ -834,6 +906,7 @@ static int kdamond_fn(void *data)
 
 		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
 	}
+	damon_flush_rbuffer(ctx);
 	damon_for_each_task(ctx, t) {
 		damon_for_each_region_safe(r, next, t)
 			damon_destroy_region(r);
@@ -912,6 +985,53 @@ static int damon_set_pids(struct damon_ctx *ctx,
 	return 0;
 }
 
+/*
+ * Set attributes for the recording
+ *
+ * ctx		target kdamond context
+ * rbuf_len	length of the result buffer
+ * rfile_path	path to the monitor result files
+ *
+ * Setting 'rbuf_len' 0 disables recording.
+ *
+ * This function should not be called while the kdamond is running.
+ *
+ * Returns 0 on success, negative error code otherwise.
+ */
+static int damon_set_recording(struct damon_ctx *ctx,
+				unsigned int rbuf_len, char *rfile_path)
+{
+	size_t rfile_path_len;
+
+	if (rbuf_len > 4 * 1024 * 1024) {
+		pr_err("too long (>%d) result buffer length\n",
+				4 * 1024 * 1024);
+		return -EINVAL;
+	}
+	rfile_path_len = strnlen(rfile_path, MAX_RFILE_PATH_LEN);
+	if (rfile_path_len >= MAX_RFILE_PATH_LEN) {
+		pr_err("too long (>%d) result file path %s\n",
+				MAX_RFILE_PATH_LEN, rfile_path);
+		return -EINVAL;
+	}
+	ctx->rbuf_len = rbuf_len;
+	kvfree(ctx->rbuf);
+	kfree(ctx->rfile_path);
+	ctx->rfile_path = NULL;
+	if (!rbuf_len) {
+		ctx->rbuf = NULL;
+	} else {
+		ctx->rbuf = kvmalloc(rbuf_len, GFP_KERNEL);
+		if (!ctx->rbuf)
+			return -ENOMEM;
+	}
+	ctx->rfile_path = kmalloc(rfile_path_len + 1, GFP_KERNEL);
+	if (!ctx->rfile_path)
+		return -ENOMEM;
+	strncpy(ctx->rfile_path, rfile_path, rfile_path_len + 1);
+	return 0;
+}
+
 /*
  * Set attributes for the monitoring
  *
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 07/14] mm/damon: Implement kernel space API
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (5 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 06/14] mm/damon: Implement access pattern recording SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:01   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 08/14] mm/damon: Add debugfs interface SeongJae Park
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit implements the kernel space API of DAMON.  Other kernel code
can use DAMON by calling damon_start() and damon_stop() with their own
'struct damon_ctx'.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 include/linux/damon.h | 71 +++++++++++++++++++++++++++++++++++++++++++
 mm/damon.c            | 71 +++++++++----------------------------------
 2 files changed, 85 insertions(+), 57 deletions(-)
 create mode 100644 include/linux/damon.h

diff --git a/include/linux/damon.h b/include/linux/damon.h
new file mode 100644
index 000000000000..78785cb88d42
--- /dev/null
+++ b/include/linux/damon.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DAMON api
+ *
+ * Copyright 2019 Amazon.com, Inc. or its affiliates.  All rights reserved.
+ *
+ * Author: SeongJae Park <sjpark@amazon.de>
+ */
+
+#ifndef _DAMON_H_
+#define _DAMON_H_
+
+#include <linux/random.h>
+#include <linux/spinlock_types.h>
+#include <linux/time64.h>
+#include <linux/types.h>
+
+/* Represents a monitoring target region on the virtual address space */
+struct damon_region {
+	unsigned long vm_start;
+	unsigned long vm_end;
+	unsigned long sampling_addr;
+	unsigned int nr_accesses;
+	struct list_head list;
+};
+
+/* Represents a monitoring target task */
+struct damon_task {
+	unsigned long pid;
+	struct list_head regions_list;
+	struct list_head list;
+};
+
+struct damon_ctx {
+	unsigned long sample_interval;
+	unsigned long aggr_interval;
+	unsigned long regions_update_interval;
+	unsigned long min_nr_regions;
+	unsigned long max_nr_regions;
+
+	struct timespec64 last_aggregation;
+	struct timespec64 last_regions_update;
+
+	unsigned char *rbuf;
+	unsigned int rbuf_len;
+	unsigned int rbuf_offset;
+	char *rfile_path;
+
+	struct task_struct *kdamond;
+	bool kdamond_stop;
+	spinlock_t kdamond_lock;
+
+	struct rnd_state rndseed;
+
+	struct list_head tasks_list;	/* 'damon_task' objects */
+
+	/* callbacks */
+	void (*sample_cb)(struct damon_ctx *context);
+	void (*aggregate_cb)(struct damon_ctx *context);
+};
+
+int damon_set_pids(struct damon_ctx *ctx,
+			unsigned long *pids, ssize_t nr_pids);
+int damon_set_recording(struct damon_ctx *ctx,
+			unsigned int rbuf_len, char *rfile_path);
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long s, unsigned long a,
+			unsigned long r, unsigned long min, unsigned long max);
+int damon_start(struct damon_ctx *ctx);
+int damon_stop(struct damon_ctx *ctx);
+
+#endif
diff --git a/mm/damon.c b/mm/damon.c
index a7edb2dfa700..b3e9b9da5720 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,6 +9,7 @@
 
 #define pr_fmt(fmt) "damon: " fmt
 
+#include <linux/damon.h>
 #include <linux/delay.h>
 #include <linux/kthread.h>
 #include <linux/mm.h>
@@ -40,60 +41,6 @@
 #define damon_for_each_task_safe(ctx, t, next) \
 	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
 
-/* Represents a monitoring target region on the virtual address space */
-struct damon_region {
-	unsigned long vm_start;
-	unsigned long vm_end;
-	unsigned long sampling_addr;
-	unsigned int nr_accesses;
-	struct list_head list;
-};
-
-/* Represents a monitoring target task */
-struct damon_task {
-	unsigned long pid;
-	struct list_head regions_list;
-	struct list_head list;
-};
-
-/*
- * For each 'sample_interval', DAMON checks whether each region is accessed or
- * not.  It aggregates and keeps the access information (number of accesses to
- * each region) for each 'aggr_interval' time.  And for each
- * 'regions_update_interval', damon checks whether the memory mapping of the
- * target tasks has changed (e.g., by mmap() calls from the applications) and
- * applies the changes.
- *
- * All time intervals are in micro-seconds.
- */
-struct damon_ctx {
-	unsigned long sample_interval;
-	unsigned long aggr_interval;
-	unsigned long regions_update_interval;
-	unsigned long min_nr_regions;
-	unsigned long max_nr_regions;
-
-	struct timespec64 last_aggregation;
-	struct timespec64 last_regions_update;
-
-	unsigned char *rbuf;
-	unsigned int rbuf_len;
-	unsigned int rbuf_offset;
-	char *rfile_path;
-
-	struct task_struct *kdamond;
-	bool kdamond_stop;
-	spinlock_t kdamond_lock;
-
-	struct rnd_state rndseed;
-
-	struct list_head tasks_list;	/* 'damon_task' objects */
-
-	/* callbacks */
-	void (*sample_cb)(struct damon_ctx *context);
-	void (*aggregate_cb)(struct damon_ctx *context);
-};
-
 #define MAX_RFILE_PATH_LEN	256
 
 /* Get a random number in [l, r) */
@@ -961,10 +908,20 @@ static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
 	return 0;
 }
 
+int damon_start(struct damon_ctx *ctx)
+{
+	return damon_turn_kdamond(ctx, true);
+}
+
+int damon_stop(struct damon_ctx *ctx)
+{
+	return damon_turn_kdamond(ctx, false);
+}
+
 /*
  * This function should not be called while the kdamond is running.
  */
-static int damon_set_pids(struct damon_ctx *ctx,
+int damon_set_pids(struct damon_ctx *ctx,
 			unsigned long *pids, ssize_t nr_pids)
 {
 	ssize_t i;
@@ -998,7 +955,7 @@ static int damon_set_pids(struct damon_ctx *ctx,
  *
  * Returns 0 on success, negative error code otherwise.
  */
-static int damon_set_recording(struct damon_ctx *ctx,
+int damon_set_recording(struct damon_ctx *ctx,
 				unsigned int rbuf_len, char *rfile_path)
 {
 	size_t rfile_path_len;
@@ -1046,7 +1003,7 @@ static int damon_set_recording(struct damon_ctx *ctx,
  *
  * Returns 0 on success, negative error code otherwise.
  */
-static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 		unsigned long aggr_int, unsigned long regions_update_int,
 		unsigned long min_nr_reg, unsigned long max_nr_reg)
 {
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 08/14] mm/damon: Add debugfs interface
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (6 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 07/14] mm/damon: Implement kernel space API SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:02   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing SeongJae Park
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds a debugfs interface for DAMON.

DAMON exports four files, ``attrs``, ``pids``, ``record``, and
``monitor_on`` under its debugfs directory, ``<debugfs>/damon/``.

Attributes
----------

Users can read and write the ``sampling interval``, ``aggregation
interval``, ``regions update interval``, and min/max number of
monitoring target regions by reading from and writing to the ``attrs``
file.  For example, the commands below set those values to 5 ms, 100 ms,
1,000 ms, 10, and 1,000, then check the result::

    # cd <debugfs>/damon
    # echo 5000 100000 1000000 10 1000 > attrs
    # cat attrs
    5000 100000 1000000 10 1000

Target PIDs
-----------

Users can read and write the pids of current monitoring target processes
by reading from and writing to the ``pids`` file.  For example, the
commands below set the processes having pids 42 and 4242 as the
monitoring targets and check the result::

    # cd <debugfs>/damon
    # echo 42 4242 > pids
    # cat pids
    42 4242

Note that setting the pids doesn't start the monitoring.
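
Internally, the written string is parsed into an array of pids.  A user-space model of that parsing, using sscanf's `%n` to advance through the string (the name 'parse_pids' is illustrative; the kernel helper additionally allocates the array and caps it at 32 entries):

```c
#include <stdio.h>

/* Scan space-separated decimal pids out of 'str' into 'out', up to
 * 'max' entries.  Returns the number of pids parsed. */
static int parse_pids(const char *str, unsigned long *out, int max)
{
	int nr = 0, pos = 0, parsed;
	unsigned long pid;

	while (nr < max &&
	       sscanf(&str[pos], "%lu%n", &pid, &parsed) == 1) {
		pos += parsed;
		out[nr++] = pid;
	}
	return nr;
}
```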

Record
------

DAMON supports direct recording of the monitoring results.  The recorded
results are first written to a buffer and flushed to a file in batch.
Users can set the size of the buffer and the path to the result file by
reading from and writing to the ``record`` file.  For example, the
commands below set the buffer to be 4 KiB and the result to be saved in
'/damon.data'::

    # cd <debugfs>/damon
    # echo 4096 /damon.data > record
    # cat record
    4096 /damon.data
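
A recorded file can be decoded off-line following the format documented in the recording patch (one timestamp, a task count, then per-task pid and region records).  The sketch below is a hypothetical user-space parser; the field widths (16-byte timespec64, 8-byte pids, 8-byte addresses, 4-byte counters, native endianness) assume a 64-bit build and are not a stable ABI:

```c
#include <stdint.h>
#include <string.h>

/* Cursor over an in-memory copy of a record file. */
struct rec_cursor { const unsigned char *p; };

static void rec_read(struct rec_cursor *c, void *out, size_t sz)
{
	memcpy(out, c->p, sz);
	c->p += sz;
}

/* Parse one snapshot; returns the total number of regions seen.
 * Assumed layout: 16B timestamp, u32 nr_tasks, then per task a u64
 * pid and u32 nr_regions, then per region u64 start, u64 end and
 * u32 nr_accesses. */
static unsigned int parse_snapshot(struct rec_cursor *c)
{
	uint64_t ts[2], pid, start, end;
	uint32_t nr_tasks, nr_regions, nr_accesses, total = 0;
	uint32_t i, j;

	rec_read(c, ts, sizeof(ts));
	rec_read(c, &nr_tasks, sizeof(nr_tasks));
	for (i = 0; i < nr_tasks; i++) {
		rec_read(c, &pid, sizeof(pid));
		rec_read(c, &nr_regions, sizeof(nr_regions));
		for (j = 0; j < nr_regions; j++) {
			rec_read(c, &start, sizeof(start));
			rec_read(c, &end, sizeof(end));
			rec_read(c, &nr_accesses, sizeof(nr_accesses));
			total++;
		}
	}
	return total;
}
```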

Turning On/Off
--------------

You can check the current status, and start or stop the monitoring, by
reading from and writing to the ``monitor_on`` file.  Writing ``on`` to
the file makes DAMON start monitoring the target processes with the
configured attributes.  Writing ``off`` to the file stops DAMON.  DAMON
also stops if every target process is terminated.  The example commands
below turn DAMON on and off and check its status::

    # cd <debugfs>/damon
    # echo on > monitor_on
    # echo off > monitor_on
    # cat monitor_on
    off

Please note that you cannot write to the ``attrs`` and ``pids`` files
while the monitoring is turned on.  If you write to the files while
DAMON is running, ``-EINVAL`` will be returned.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 377 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 376 insertions(+), 1 deletion(-)

diff --git a/mm/damon.c b/mm/damon.c
index b3e9b9da5720..facb1d7f121b 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -10,6 +10,7 @@
 #define pr_fmt(fmt) "damon: " fmt
 
 #include <linux/damon.h>
+#include <linux/debugfs.h>
 #include <linux/delay.h>
 #include <linux/kthread.h>
 #include <linux/mm.h>
@@ -46,6 +47,24 @@
 /* Get a random number in [l, r) */
 #define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
 
+/*
+ * For each 'sample_interval', DAMON checks whether each region is accessed or
+ * not.  It aggregates and keeps the access information (number of accesses to
+ * each region) for 'aggr_interval' and then flushes it to the result buffer if
+ * an 'aggr_interval' has passed.  And for each 'regions_update_interval', DAMON
+ * checks whether the memory mapping of the target tasks has changed (e.g., by
+ * mmap() calls from the applications) and applies the changes.
+ *
+ * All time intervals are in micro-seconds.
+ */
+static struct damon_ctx damon_user_ctx = {
+	.sample_interval = 5 * 1000,
+	.aggr_interval = 100 * 1000,
+	.regions_update_interval = 1000 * 1000,
+	.min_nr_regions = 10,
+	.max_nr_regions = 1000,
+};
+
 /*
  * Construct a damon_region struct
  *
@@ -1026,15 +1045,371 @@ int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 	return 0;
 }
 
+/*
+ * debugfs functions
+ */
+
+static ssize_t debugfs_monitor_on_read(struct file *file,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char monitor_on_buf[5];
+	bool monitor_on;
+	int ret;
+
+	spin_lock(&ctx->kdamond_lock);
+	monitor_on = ctx->kdamond != NULL;
+	spin_unlock(&ctx->kdamond_lock);
+
+	ret = snprintf(monitor_on_buf, 5, monitor_on ? "on\n" : "off\n");
+
+	return simple_read_from_buffer(buf, count, ppos, monitor_on_buf, ret);
+}
+
+static ssize_t debugfs_monitor_on_write(struct file *file,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	ssize_t ret;
+	bool on = false;
+	char cmdbuf[5];
+
+	ret = simple_write_to_buffer(cmdbuf, 5, ppos, buf, count);
+	if (ret < 0)
+		return ret;
+
+	if (sscanf(cmdbuf, "%s", cmdbuf) != 1)
+		return -EINVAL;
+	if (!strncmp(cmdbuf, "on", 5))
+		on = true;
+	else if (!strncmp(cmdbuf, "off", 5))
+		on = false;
+	else
+		return -EINVAL;
+
+	if (damon_turn_kdamond(ctx, on))
+		return -EINVAL;
+
+	return ret;
+}
+
+static ssize_t damon_sprint_pids(struct damon_ctx *ctx, char *buf, ssize_t len)
+{
+	struct damon_task *t;
+	int written = 0;
+	int rc;
+
+	damon_for_each_task(ctx, t) {
+		rc = snprintf(&buf[written], len - written, "%lu ", t->pid);
+		if (!rc)
+			return -ENOMEM;
+		written += rc;
+	}
+	if (written)
+		written -= 1;
+	written += snprintf(&buf[written], len - written, "\n");
+	return written;
+}
+
+static ssize_t debugfs_pids_read(struct file *file,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	ssize_t len;
+	char pids_buf[320];
+
+	len = damon_sprint_pids(ctx, pids_buf, 320);
+	if (len < 0)
+		return len;
+
+	return simple_read_from_buffer(buf, count, ppos, pids_buf, len);
+}
+
+/*
+ * Converts a string into an array of unsigned long integers
+ *
+ * Returns an array of unsigned long integers if the conversion succeeds, or
+ * NULL otherwise.
+ */
+static unsigned long *str_to_pids(const char *str, ssize_t len,
+				ssize_t *nr_pids)
+{
+	unsigned long *pids;
+	const int max_nr_pids = 32;
+	unsigned long pid;
+	int pos = 0, parsed, ret;
+
+	*nr_pids = 0;
+	pids = kmalloc_array(max_nr_pids, sizeof(unsigned long), GFP_KERNEL);
+	if (!pids)
+		return NULL;
+	while (*nr_pids < max_nr_pids && pos < len) {
+		ret = sscanf(&str[pos], "%lu%n", &pid, &parsed);
+		pos += parsed;
+		if (ret != 1)
+			break;
+		pids[*nr_pids] = pid;
+		*nr_pids += 1;
+	}
+	if (*nr_pids == 0) {
+		kfree(pids);
+		pids = NULL;
+	}
+
+	return pids;
+}
+
+static ssize_t debugfs_pids_write(struct file *file,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char *kbuf;
+	unsigned long *targets;
+	ssize_t nr_targets;
+	ssize_t ret;
+
+	kbuf = kzalloc(count + 1, GFP_KERNEL);
+	if (!kbuf)
+		return -ENOMEM;
+
+	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
+	if (ret < 0)
+		goto out;
+
+	targets = str_to_pids(kbuf, ret, &nr_targets);
+	if (!targets) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	spin_lock(&ctx->kdamond_lock);
+	if (ctx->kdamond)
+		goto monitor_running;
+
+	damon_set_pids(ctx, targets, nr_targets);
+	spin_unlock(&ctx->kdamond_lock);
+
+	goto free_targets_out;
+
+monitor_running:
+	spin_unlock(&ctx->kdamond_lock);
+	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
+	ret = -EINVAL;
+free_targets_out:
+	kfree(targets);
+out:
+	kfree(kbuf);
+	return ret;
+}
+
+static ssize_t debugfs_record_read(struct file *file,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char record_buf[20 + MAX_RFILE_PATH_LEN];
+	int ret;
+
+	ret = snprintf(record_buf, ARRAY_SIZE(record_buf), "%u %s\n",
+			ctx->rbuf_len, ctx->rfile_path);
+	return simple_read_from_buffer(buf, count, ppos, record_buf, ret);
+}
+
+static ssize_t debugfs_record_write(struct file *file,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char *kbuf;
+	unsigned int rbuf_len;
+	char rfile_path[MAX_RFILE_PATH_LEN];
+	ssize_t ret;
+
+	kbuf = kmalloc_array(count + 1, sizeof(char), GFP_KERNEL);
+	if (!kbuf)
+		return -ENOMEM;
+	kbuf[count] = '\0';
+
+	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
+	if (ret < 0)
+		goto out;
+	if (sscanf(kbuf, "%u %s",
+				&rbuf_len, rfile_path) != 2) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	spin_lock(&ctx->kdamond_lock);
+	if (ctx->kdamond)
+		goto monitor_running;
+
+	damon_set_recording(ctx, rbuf_len, rfile_path);
+	spin_unlock(&ctx->kdamond_lock);
+
+	goto out;
+
+monitor_running:
+	spin_unlock(&ctx->kdamond_lock);
+	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
+	ret = -EINVAL;
+out:
+	kfree(kbuf);
+	return ret;
+}
+
+
+static ssize_t debugfs_attrs_read(struct file *file,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char kbuf[128];
+	int ret;
+
+	ret = snprintf(kbuf, ARRAY_SIZE(kbuf), "%lu %lu %lu %lu %lu\n",
+			ctx->sample_interval, ctx->aggr_interval,
+			ctx->regions_update_interval, ctx->min_nr_regions,
+			ctx->max_nr_regions);
+
+	return simple_read_from_buffer(buf, count, ppos, kbuf, ret);
+}
+
+static ssize_t debugfs_attrs_write(struct file *file,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	unsigned long s, a, r, minr, maxr;
+	char *kbuf;
+	ssize_t ret;
+
+	kbuf = kzalloc(count + 1, GFP_KERNEL);
+	if (!kbuf)
+		return -ENOMEM;
+
+	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
+	if (ret < 0)
+		goto out;
+
+	if (sscanf(kbuf, "%lu %lu %lu %lu %lu",
+				&s, &a, &r, &minr, &maxr) != 5) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	spin_lock(&ctx->kdamond_lock);
+	if (ctx->kdamond)
+		goto monitor_running;
+
+	damon_set_attrs(ctx, s, a, r, minr, maxr);
+	spin_unlock(&ctx->kdamond_lock);
+
+	goto out;
+
+monitor_running:
+	spin_unlock(&ctx->kdamond_lock);
+	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
+	ret = -EINVAL;
+out:
+	kfree(kbuf);
+	return ret;
+}
+
+static const struct file_operations monitor_on_fops = {
+	.owner = THIS_MODULE,
+	.read = debugfs_monitor_on_read,
+	.write = debugfs_monitor_on_write,
+};
+
+static const struct file_operations pids_fops = {
+	.owner = THIS_MODULE,
+	.read = debugfs_pids_read,
+	.write = debugfs_pids_write,
+};
+
+static const struct file_operations record_fops = {
+	.owner = THIS_MODULE,
+	.read = debugfs_record_read,
+	.write = debugfs_record_write,
+};
+
+static const struct file_operations attrs_fops = {
+	.owner = THIS_MODULE,
+	.read = debugfs_attrs_read,
+	.write = debugfs_attrs_write,
+};
+
+static struct dentry *debugfs_root;
+
+static int __init debugfs_init(void)
+{
+	const char * const file_names[] = {"attrs", "record",
+		"pids", "monitor_on"};
+	const struct file_operations *fops[] = {&attrs_fops, &record_fops,
+		&pids_fops, &monitor_on_fops};
+	int i;
+
+	debugfs_root = debugfs_create_dir("damon", NULL);
+	if (!debugfs_root) {
+		pr_err("failed to create the debugfs dir\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(file_names); i++) {
+		if (!debugfs_create_file(file_names[i], 0600, debugfs_root,
+					NULL, fops[i])) {
+			pr_err("failed to create %s file\n", file_names[i]);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static int __init damon_init_user_ctx(void)
+{
+	int rc;
+
+	struct damon_ctx *ctx = &damon_user_ctx;
+
+	ktime_get_coarse_ts64(&ctx->last_aggregation);
+	ctx->last_regions_update = ctx->last_aggregation;
+
+	ctx->rbuf_offset = 0;
+	rc = damon_set_recording(ctx, 1024 * 1024, "/damon.data");
+	if (rc)
+		return rc;
+
+	ctx->kdamond = NULL;
+	ctx->kdamond_stop = false;
+	spin_lock_init(&ctx->kdamond_lock);
+
+	prandom_seed_state(&ctx->rndseed, 42);
+	INIT_LIST_HEAD(&ctx->tasks_list);
+
+	ctx->sample_cb = NULL;
+	ctx->aggregate_cb = NULL;
+
+	return 0;
+}
+
 static int __init damon_init(void)
 {
+	int rc;
+
 	pr_info("init\n");
 
-	return 0;
+	rc = damon_init_user_ctx();
+	if (rc)
+		return rc;
+
+	return debugfs_init();
 }
 
 static void __exit damon_exit(void)
 {
+	damon_turn_kdamond(&damon_user_ctx, false);
+	debugfs_remove_recursive(debugfs_root);
+
+	kfree(damon_user_ctx.rbuf);
+	kfree(damon_user_ctx.rfile_path);
+
 	pr_info("exit\n");
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (7 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 08/14] mm/damon: Add debugfs interface SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:03   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 10/14] tools: Add a minimal user-space tool for DAMON SeongJae Park
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds a tracepoint for DAMON's result buffer writing.  It is
called for each write of the DAMON results and prints the result data.
Therefore, it can easily be integrated with tracepoint-supporting
tracers such as perf.
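For example, one could capture the events with perf and reassemble the
payloads into the record format offline.  The concatenated payloads follow
the binary layout written by the record buffer in this series (a 16-byte
timestamp, a u32 number of tasks, and per task a u64 pid, a u32 number of
regions, and 20-byte region entries).  The sketch below is an illustration
only; the `parse_record_chunk` helper and the explicit little-endian field
sizes are assumptions, not part of this patch:

```python
import struct

def parse_record_chunk(buf):
    # Decode one record chunk, assuming the layout described above:
    # <s64 sec, s64 nsec> <u32 nr_tasks>
    # then per task: <u64 pid> <u32 nr_regions>
    # then per region: <u64 start> <u64 end> <u32 nr_accesses>
    sec, nsec = struct.unpack_from('<qq', buf, 0)
    pos = 16
    nr_tasks = struct.unpack_from('<I', buf, pos)[0]
    pos += 4
    tasks = []
    for _ in range(nr_tasks):
        pid = struct.unpack_from('<Q', buf, pos)[0]
        pos += 8
        nr_regions = struct.unpack_from('<I', buf, pos)[0]
        pos += 4
        regions = []
        for _ in range(nr_regions):
            start, end, nr_accesses = struct.unpack_from('<QQI', buf, pos)
            pos += 20
            regions.append((start, end, nr_accesses))
        tasks.append((pid, regions))
    return sec * 1000000000 + nsec, tasks

# Synthesize one chunk and decode it back.
chunk = struct.pack('<qqI', 1, 500, 1)            # time (1s 500ns), nr_tasks
chunk += struct.pack('<QI', 1234, 1)              # pid, nr_regions
chunk += struct.pack('<QQI', 0x7000, 0x8000, 3)   # one region
print(parse_record_chunk(chunk))
# → (1000000500, [(1234, [(28672, 32768, 3)])])
```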

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 include/trace/events/damon.h | 32 ++++++++++++++++++++++++++++++++
 mm/damon.c                   |  4 ++++
 2 files changed, 36 insertions(+)
 create mode 100644 include/trace/events/damon.h

diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
new file mode 100644
index 000000000000..fb33993620ce
--- /dev/null
+++ b/include/trace/events/damon.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM damon
+
+#if !defined(_TRACE_DAMON_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_DAMON_H
+
+#include <linux/types.h>
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(damon_write_rbuf,
+
+	TP_PROTO(void *buf, const ssize_t sz),
+
+	TP_ARGS(buf, sz),
+
+	TP_STRUCT__entry(
+		__dynamic_array(char, buf, sz)
+	),
+
+	TP_fast_assign(
+		memcpy(__get_dynamic_array(buf), buf, sz);
+	),
+
+	TP_printk("dat=%s", __print_hex(__get_dynamic_array(buf),
+			__get_dynamic_array_len(buf)))
+);
+
+#endif /* _TRACE_DAMON_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/damon.c b/mm/damon.c
index facb1d7f121b..8faf3879f99e 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,6 +9,8 @@
 
 #define pr_fmt(fmt) "damon: " fmt
 
+#define CREATE_TRACE_POINTS
+
 #include <linux/damon.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
@@ -20,6 +22,7 @@
 #include <linux/sched/mm.h>
 #include <linux/sched/task.h>
 #include <linux/slab.h>
+#include <trace/events/damon.h>
 
 #define damon_get_task_struct(t) \
 	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
@@ -553,6 +556,7 @@ static void damon_flush_rbuffer(struct damon_ctx *ctx)
  */
 static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
 {
+	trace_damon_write_rbuf(data, size);
 	if (!ctx->rbuf_len || !ctx->rbuf)
 		return;
 	if (ctx->rbuf_offset + size > ctx->rbuf_len)
-- 
2.17.1




* [PATCH v6 10/14] tools: Add a minimal user-space tool for DAMON
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (8 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document " SeongJae Park
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds a shallow wrapper python script, ``tools/damon/damo``,
that provides a more convenient interface.  Note that it is aimed only
to serve as a minimal reference for DAMON's debugfs interfaces and as a
tool for debugging DAMON itself.
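The heatmap reporting included here works by spreading each monitored
region's access frequency over fixed-size address buckets, weighted by how
many bytes of the region fall into each bucket.  The core bucketing idea
can be sketched as follows (a simplified illustration, not the exact
heats.py implementation; the `distribute_heat` helper and its parameters
are assumptions for this sketch):

```python
def distribute_heat(saddr, eaddr, nr_accesses, amin, aunit, nr_buckets):
    # Spread a region's access count over nr_buckets address buckets of
    # aunit bytes each, starting at amin.  Each bucket receives the access
    # count scaled by the fraction of the bucket the region overlaps.
    heat = [0.0] * nr_buckets
    for idx in range(nr_buckets):
        bucket_start = amin + idx * aunit
        bucket_end = bucket_start + aunit
        overlap = min(eaddr, bucket_end) - max(saddr, bucket_start)
        if overlap > 0:
            heat[idx] += nr_accesses * overlap / aunit
    return heat

# A region covering all of bucket 1 and half of bucket 2:
print(distribute_heat(100, 250, 4, 0, 100, 3))
# → [0.0, 4.0, 2.0]
```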

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 tools/damon/.gitignore    |   1 +
 tools/damon/_dist.py      |  36 ++++
 tools/damon/bin2txt.py    |  64 +++++++
 tools/damon/damo          |  37 ++++
 tools/damon/heats.py      | 358 ++++++++++++++++++++++++++++++++++++++
 tools/damon/nr_regions.py |  89 ++++++++++
 tools/damon/record.py     | 212 ++++++++++++++++++++++
 tools/damon/report.py     |  45 +++++
 tools/damon/wss.py        |  95 ++++++++++
 9 files changed, 937 insertions(+)
 create mode 100644 tools/damon/.gitignore
 create mode 100644 tools/damon/_dist.py
 create mode 100644 tools/damon/bin2txt.py
 create mode 100755 tools/damon/damo
 create mode 100644 tools/damon/heats.py
 create mode 100644 tools/damon/nr_regions.py
 create mode 100644 tools/damon/record.py
 create mode 100644 tools/damon/report.py
 create mode 100644 tools/damon/wss.py

diff --git a/tools/damon/.gitignore b/tools/damon/.gitignore
new file mode 100644
index 000000000000..96403d36ff93
--- /dev/null
+++ b/tools/damon/.gitignore
@@ -0,0 +1 @@
+__pycache__/*
diff --git a/tools/damon/_dist.py b/tools/damon/_dist.py
new file mode 100644
index 000000000000..9851ec964e5c
--- /dev/null
+++ b/tools/damon/_dist.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import os
+import struct
+import subprocess
+
+def access_patterns(f):
+    nr_regions = struct.unpack('I', f.read(4))[0]
+
+    patterns = []
+    for r in range(nr_regions):
+        saddr = struct.unpack('L', f.read(8))[0]
+        eaddr = struct.unpack('L', f.read(8))[0]
+        nr_accesses = struct.unpack('I', f.read(4))[0]
+        patterns.append([eaddr - saddr, nr_accesses])
+    return patterns
+
+def plot_dist(data_file, output_file, xlabel, ylabel):
+    terminal = output_file.split('.')[-1]
+    if not terminal in ['pdf', 'jpeg', 'png', 'svg']:
+        os.remove(data_file)
+        print("Unsupported plot output type.")
+        exit(-1)
+
+    gnuplot_cmd = """
+    set term %s;
+    set output '%s';
+    set key off;
+    set xlabel '%s';
+    set ylabel '%s';
+    plot '%s' with linespoints;""" % (terminal, output_file, xlabel, ylabel,
+            data_file)
+    subprocess.call(['gnuplot', '-e', gnuplot_cmd])
+    os.remove(data_file)
+
diff --git a/tools/damon/bin2txt.py b/tools/damon/bin2txt.py
new file mode 100644
index 000000000000..d5ffac60e02c
--- /dev/null
+++ b/tools/damon/bin2txt.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import argparse
+import os
+import struct
+import sys
+
+def parse_time(bindat):
+    "bindat should be 16 bytes"
+    sec = struct.unpack('l', bindat[0:8])[0]
+    nsec = struct.unpack('l', bindat[8:16])[0]
+    return sec * 1000000000 + nsec
+
+def pr_region(f):
+    saddr = struct.unpack('L', f.read(8))[0]
+    eaddr = struct.unpack('L', f.read(8))[0]
+    nr_accesses = struct.unpack('I', f.read(4))[0]
+    print("%012x-%012x(%10d):\t%d" %
+            (saddr, eaddr, eaddr - saddr, nr_accesses))
+
+def pr_task_info(f):
+    pid = struct.unpack('L', f.read(8))[0]
+    print("pid: ", pid)
+    nr_regions = struct.unpack('I', f.read(4))[0]
+    print("nr_regions: ", nr_regions)
+    for r in range(nr_regions):
+        pr_region(f)
+
+def set_argparser(parser):
+    parser.add_argument('--input', '-i', type=str, metavar='<file>',
+            default='damon.data', help='input file name')
+
+def main(args=None):
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    file_path = args.input
+
+    if not os.path.isfile(file_path):
+        print('input file (%s) does not exist' % file_path)
+        exit(1)
+
+    with open(file_path, 'rb') as f:
+        start_time = None
+        while True:
+            timebin = f.read(16)
+            if len(timebin) != 16:
+                break
+            time = parse_time(timebin)
+            if not start_time:
+                start_time = time
+                print("start_time: ", start_time)
+            print("rel time: %16d" % (time - start_time))
+            nr_tasks = struct.unpack('I', f.read(4))[0]
+            print("nr_tasks: ", nr_tasks)
+            for t in range(nr_tasks):
+                pr_task_info(f)
+                print("")
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/damon/damo b/tools/damon/damo
new file mode 100755
index 000000000000..58e1099ae5fc
--- /dev/null
+++ b/tools/damon/damo
@@ -0,0 +1,37 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import argparse
+
+import record
+import report
+
+class SubCmdHelpFormatter(argparse.RawDescriptionHelpFormatter):
+    def _format_action(self, action):
+        parts = super(argparse.RawDescriptionHelpFormatter,
+                self)._format_action(action)
+        # skip sub parsers help
+        if action.nargs == argparse.PARSER:
+            parts = '\n'.join(parts.split('\n')[1:])
+        return parts
+
+parser = argparse.ArgumentParser(formatter_class=SubCmdHelpFormatter)
+
+subparser = parser.add_subparsers(title='command', dest='command',
+        metavar='<command>')
+subparser.required = True
+
+parser_record = subparser.add_parser('record',
+        help='record data accesses of the given target processes')
+record.set_argparser(parser_record)
+
+parser_report = subparser.add_parser('report',
+        help='report the recorded data accesses in the specified form')
+report.set_argparser(parser_report)
+
+args = parser.parse_args()
+
+if args.command == 'record':
+    record.main(args)
+elif args.command == 'report':
+    report.main(args)
diff --git a/tools/damon/heats.py b/tools/damon/heats.py
new file mode 100644
index 000000000000..48e966c5ca02
--- /dev/null
+++ b/tools/damon/heats.py
@@ -0,0 +1,358 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"""
+Transform binary trace data into human readable text that can be used for
+heatmap drawing, or directly plot the data in a heatmap format.
+
+Format of the text is:
+
+    <time> <space> <heat>
+    ...
+
+"""
+
+import argparse
+import os
+import struct
+import subprocess
+import sys
+import tempfile
+
+class HeatSample:
+    space_idx = None
+    sz_time_space = None
+    heat = None
+
+    def __init__(self, space_idx, sz_time_space, heat):
+        if sz_time_space < 0:
+            raise RuntimeError()
+        self.space_idx = space_idx
+        self.sz_time_space = sz_time_space
+        self.heat = heat
+
+    def total_heat(self):
+        return self.heat * self.sz_time_space
+
+    def merge(self, sample):
+        "sample must have the same space index as self"
+        heat_sum = self.total_heat() + sample.total_heat()
+        self.heat = heat_sum / (self.sz_time_space + sample.sz_time_space)
+        self.sz_time_space += sample.sz_time_space
+
+def pr_samples(samples, time_idx, time_unit, region_unit):
+    display_time = time_idx * time_unit
+    for idx, sample in enumerate(samples):
+        display_addr = idx * region_unit
+        if not sample:
+            print("%s\t%s\t%s" % (display_time, display_addr, 0.0))
+            continue
+        print("%s\t%s\t%s" % (display_time, display_addr, sample.total_heat() /
+            time_unit / region_unit))
+
+def to_idx(value, min_, unit):
+    return (value - min_) // unit
+
+def read_task_heats(f, pid, aunit, amin, amax):
+    pid_ = struct.unpack('L', f.read(8))[0]
+    nr_regions = struct.unpack('I', f.read(4))[0]
+    if pid_ != pid:
+        f.read(20 * nr_regions)
+        return None
+    samples = []
+    for i in range(nr_regions):
+        saddr = struct.unpack('L', f.read(8))[0]
+        eaddr = struct.unpack('L', f.read(8))[0]
+        eaddr = min(eaddr, amax - 1)
+        heat = struct.unpack('I', f.read(4))[0]
+
+        if eaddr <= amin:
+            continue
+        if saddr >= amax:
+            continue
+        saddr = max(amin, saddr)
+        eaddr = min(amax, eaddr)
+
+        sidx = to_idx(saddr, amin, aunit)
+        eidx = to_idx(eaddr - 1, amin, aunit)
+        for idx in range(sidx, eidx + 1):
+            sa = max(amin + idx * aunit, saddr)
+            ea = min(amin + (idx + 1) * aunit, eaddr)
+            sample = HeatSample(idx, (ea - sa), heat)
+            samples.append(sample)
+    return samples
+
+def parse_time(bindat):
+    sec = struct.unpack('l', bindat[0:8])[0]
+    nsec = struct.unpack('l', bindat[8:16])[0]
+    return sec * 1000000000 + nsec
+
+def apply_samples(target_samples, samples, start_time, end_time, aunit, amin):
+    for s in samples:
+        sample = HeatSample(s.space_idx,
+                s.sz_time_space * (end_time - start_time), s.heat)
+        idx = sample.space_idx
+        if not target_samples[idx]:
+            target_samples[idx] = sample
+        else:
+            target_samples[idx].merge(sample)
+
+def __pr_heats(f, pid, tunit, tmin, tmax, aunit, amin, amax):
+    heat_samples = [None] * ((amax - amin) // aunit)
+
+    start_time = 0
+    end_time = 0
+    last_flushed = -1
+    while True:
+        start_time = end_time
+        timebin = f.read(16)
+        if (len(timebin)) != 16:
+            break
+        end_time = parse_time(timebin)
+        nr_tasks = struct.unpack('I', f.read(4))[0]
+        samples_set = {}
+        for t in range(nr_tasks):
+            samples = read_task_heats(f, pid, aunit, amin, amax)
+            if samples:
+                samples_set[pid] = samples
+        if not pid in samples_set:
+            continue
+        if start_time >= tmax:
+            continue
+        if end_time <= tmin:
+            continue
+        start_time = max(start_time, tmin)
+        end_time = min(end_time, tmax)
+
+        sidx = to_idx(start_time, tmin, tunit)
+        eidx = to_idx(end_time - 1, tmin, tunit)
+        for idx in range(sidx, eidx + 1):
+            if idx != last_flushed:
+                pr_samples(heat_samples, idx, tunit, aunit)
+                heat_samples = [None] * ((amax - amin) // aunit)
+                last_flushed = idx
+            st = max(start_time, tmin + idx * tunit)
+            et = min(end_time, tmin + (idx + 1) * tunit)
+            apply_samples(heat_samples, samples_set[pid], st, et, aunit, amin)
+
+def pr_heats(args):
+    binfile = args.input
+    pid = args.pid
+    tres = args.tres
+    tmin = args.tmin
+    ares = args.ares
+    amin = args.amin
+
+    tunit = (args.tmax - tmin) // tres
+    aunit = (args.amax - amin) // ares
+
+    # Adjust tmax/amax so the ranges are exact multiples of the resolution
+    tmax = tmin + tunit * tres
+    amax = amin + aunit * ares
+
+    with open(binfile, 'rb') as f:
+        __pr_heats(f, pid, tunit, tmin, tmax, aunit, amin, amax)
+
+class GuideInfo:
+    pid = None
+    start_time = None
+    end_time = None
+    lowest_addr = None
+    highest_addr = None
+    gaps = None
+
+    def __init__(self, pid, start_time):
+        self.pid = pid
+        self.start_time = start_time
+        self.gaps = []
+
+    def regions(self):
+        regions = []
+        region = [self.lowest_addr]
+        for gap in self.gaps:
+            for idx, point in enumerate(gap):
+                if idx == 0:
+                    region.append(point)
+                    regions.append(region)
+                else:
+                    region = [point]
+        region.append(self.highest_addr)
+        regions.append(region)
+        return regions
+
+    def total_space(self):
+        ret = 0
+        for r in self.regions():
+            ret += r[1] - r[0]
+        return ret
+
+    def __str__(self):
+        lines = ['pid:%d' % self.pid]
+        lines.append('time: %d-%d (%d)' % (self.start_time, self.end_time,
+                    self.end_time - self.start_time))
+        for idx, region in enumerate(self.regions()):
+            lines.append('region\t%2d: %020d-%020d (%d)' %
+                    (idx, region[0], region[1], region[1] - region[0]))
+        return '\n'.join(lines)
+
+def is_overlap(region1, region2):
+    if region1[1] < region2[0]:
+        return False
+    if region2[1] < region1[0]:
+        return False
+    return True
+
+def overlap_region_of(region1, region2):
+    return [max(region1[0], region2[0]), min(region1[1], region2[1])]
+
+def overlapping_regions(regions1, regions2):
+    overlap_regions = []
+    for r1 in regions1:
+        for r2 in regions2:
+            if is_overlap(r1, r2):
+                r1 = overlap_region_of(r1, r2)
+        if r1:
+            overlap_regions.append(r1)
+    return overlap_regions
+
+def get_guide_info(binfile):
+    "Read file, return the set of guide information objects of the data"
+    guides = {}
+    with open(binfile, 'rb') as f:
+        while True:
+            timebin = f.read(16)
+            if len(timebin) != 16:
+                break
+            monitor_time = parse_time(timebin)
+            nr_tasks = struct.unpack('I', f.read(4))[0]
+            for t in range(nr_tasks):
+                pid = struct.unpack('L', f.read(8))[0]
+                nr_regions = struct.unpack('I', f.read(4))[0]
+                if not pid in guides:
+                    guides[pid] = GuideInfo(pid, monitor_time)
+                guide = guides[pid]
+                guide.end_time = monitor_time
+
+                last_addr = None
+                gaps = []
+                for r in range(nr_regions):
+                    saddr = struct.unpack('L', f.read(8))[0]
+                    eaddr = struct.unpack('L', f.read(8))[0]
+                    f.read(4)
+
+                    if not guide.lowest_addr or saddr < guide.lowest_addr:
+                        guide.lowest_addr = saddr
+                    if not guide.highest_addr or eaddr > guide.highest_addr:
+                        guide.highest_addr = eaddr
+
+                    if not last_addr:
+                        last_addr = eaddr
+                        continue
+                    if last_addr != saddr:
+                        gaps.append([last_addr, saddr])
+                    last_addr = eaddr
+
+                if not guide.gaps:
+                    guide.gaps = gaps
+                else:
+                    guide.gaps = overlapping_regions(guide.gaps, gaps)
+    return sorted(list(guides.values()), key=lambda x: x.total_space(),
+                    reverse=True)
+
+def pr_guide(binfile):
+    for guide in get_guide_info(binfile):
+        print(guide)
+
+def region_sort_key(region):
+    return region[1] - region[0]
+
+def set_missed_args(args):
+    if args.pid and args.tmin and args.tmax and args.amin and args.amax:
+        return
+    guides = get_guide_info(args.input)
+    guide = guides[0]
+    if not args.pid:
+        args.pid = guide.pid
+    for g in guides:
+        if g.pid == args.pid:
+            guide = g
+            break
+
+    if not args.tmin:
+        args.tmin = guide.start_time
+    if not args.tmax:
+        args.tmax = guide.end_time
+
+    if not args.amin or not args.amax:
+        region = sorted(guide.regions(), key=lambda x: x[1] - x[0],
+                reverse=True)[0]
+        args.amin = region[0]
+        args.amax = region[1]
+
+def plot_heatmap(data_file, output_file):
+    terminal = output_file.split('.')[-1]
+    if not terminal in ['pdf', 'jpeg', 'png', 'svg']:
+        os.remove(data_file)
+        print("Unsupported plot output type.")
+        exit(-1)
+
+    gnuplot_cmd = """
+    set term %s;
+    set output '%s';
+    set key off;
+    set xrange [0:];
+    set yrange [0:];
+    set xlabel 'Time (ns)';
+    set ylabel 'Virtual Address (bytes)';
+    plot '%s' using 1:2:3 with image;""" % (terminal, output_file, data_file)
+    subprocess.call(['gnuplot', '-e', gnuplot_cmd])
+    os.remove(data_file)
+
+def set_argparser(parser):
+    parser.add_argument('--input', '-i', type=str, metavar='<file>',
+            default='damon.data', help='input file name')
+    parser.add_argument('--pid', metavar='<pid>', type=int,
+            help='pid of target task')
+    parser.add_argument('--tres', metavar='<resolution>', type=int,
+            default=500, help='time resolution of the output')
+    parser.add_argument('--tmin', metavar='<time>', type=lambda x: int(x,0),
+            help='minimal time of the output')
+    parser.add_argument('--tmax', metavar='<time>', type=lambda x: int(x,0),
+            help='maximum time of the output')
+    parser.add_argument('--ares', metavar='<resolution>', type=int, default=500,
+            help='space address resolution of the output')
+    parser.add_argument('--amin', metavar='<address>', type=lambda x: int(x,0),
+            help='minimal space address of the output')
+    parser.add_argument('--amax', metavar='<address>', type=lambda x: int(x,0),
+            help='maximum space address of the output')
+    parser.add_argument('--guide', action='store_true',
+            help='print a guidance for the min/max/resolution settings')
+    parser.add_argument('--heatmap', metavar='<file>', type=str,
+            help='heatmap image file to create')
+
+def main(args=None):
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    if args.guide:
+        pr_guide(args.input)
+    else:
+        set_missed_args(args)
+        orig_stdout = sys.stdout
+        if args.heatmap:
+            tmp_path = tempfile.mkstemp()[1]
+            tmp_file = open(tmp_path, 'w')
+            sys.stdout = tmp_file
+
+        pr_heats(args)
+
+        if args.heatmap:
+            sys.stdout = orig_stdout
+            tmp_file.flush()
+            tmp_file.close()
+            plot_heatmap(tmp_path, args.heatmap)
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/damon/nr_regions.py b/tools/damon/nr_regions.py
new file mode 100644
index 000000000000..fcc2ce13e5f5
--- /dev/null
+++ b/tools/damon/nr_regions.py
@@ -0,0 +1,89 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"Print out distribution of the number of regions in the given record"
+
+import argparse
+import struct
+import sys
+import tempfile
+
+import _dist
+
+def set_argparser(parser):
+    parser.add_argument('--input', '-i', type=str, metavar='<file>',
+            default='damon.data', help='input file name')
+    parser.add_argument('--range', '-r', type=int, nargs=3,
+            metavar=('<start>', '<stop>', '<step>'),
+            help='range of percentiles to print')
+    parser.add_argument('--sortby', '-s', choices=['time', 'size'],
+            help='the metric to be used for sorting the number of regions')
+    parser.add_argument('--plot', '-p', type=str, metavar='<file>',
+            help='plot the distribution to an image file')
+
+def main(args=None):
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    percentiles = [0, 25, 50, 75, 100]
+
+    file_path = args.input
+    if args.range:
+        percentiles = range(args.range[0], args.range[1], args.range[2])
+    nr_regions_sort = True
+    if args.sortby == 'time':
+        nr_regions_sort = False
+
+    pid_pattern_map = {}
+    with open(file_path, 'rb') as f:
+        start_time = None
+        while True:
+            timebin = f.read(16)
+            if len(timebin) != 16:
+                break
+            nr_tasks = struct.unpack('I', f.read(4))[0]
+            for t in range(nr_tasks):
+                pid = struct.unpack('L', f.read(8))[0]
+                if not pid in pid_pattern_map:
+                    pid_pattern_map[pid] = []
+                pid_pattern_map[pid].append(_dist.access_patterns(f))
+
+    orig_stdout = sys.stdout
+    if args.plot:
+        tmp_path = tempfile.mkstemp()[1]
+        tmp_file = open(tmp_path, 'w')
+        sys.stdout = tmp_file
+
+    print('# <percentile> <# regions>')
+    for pid in pid_pattern_map.keys():
+        # Skip first 20 regions as those would not be adaptively adjusted
+        snapshots = pid_pattern_map[pid][20:]
+        nr_regions_dist = []
+        for snapshot in snapshots:
+            nr_regions_dist.append(len(snapshot))
+        if nr_regions_sort:
+            nr_regions_dist.sort(reverse=False)
+
+        print('# pid\t%s' % pid)
+        print('# avr:\t%d' % (sum(nr_regions_dist) / len(nr_regions_dist)))
+        for percentile in percentiles:
+            thres_idx = int(percentile / 100.0 * len(nr_regions_dist))
+            if thres_idx == len(nr_regions_dist):
+                thres_idx -= 1
+            threshold = nr_regions_dist[thres_idx]
+            print('%d\t%d' % (percentile, threshold))
+
+    if args.plot:
+        sys.stdout = orig_stdout
+        tmp_file.flush()
+        tmp_file.close()
+        xlabel = 'runtime (percent)'
+        if nr_regions_sort:
+            xlabel = 'percentile'
+        _dist.plot_dist(tmp_path, args.plot, xlabel,
+                'number of monitoring target regions')
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/damon/record.py b/tools/damon/record.py
new file mode 100644
index 000000000000..a547d479a103
--- /dev/null
+++ b/tools/damon/record.py
@@ -0,0 +1,212 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"""
+Record data access patterns of the target process.
+"""
+
+import argparse
+import copy
+import os
+import signal
+import subprocess
+import time
+
+debugfs_attrs = None
+debugfs_record = None
+debugfs_pids = None
+debugfs_monitor_on = None
+
+def set_target_pid(pid):
+    return subprocess.call('echo %s > %s' % (pid, debugfs_pids), shell=True,
+            executable='/bin/bash')
+
+def turn_damon(on_off):
+    return subprocess.call("echo %s > %s" % (on_off, debugfs_monitor_on),
+            shell=True, executable="/bin/bash")
+
+def is_damon_running():
+    with open(debugfs_monitor_on, 'r') as f:
+        return f.read().strip() == 'on'
+
+def do_record(target, is_target_cmd, attrs, old_attrs):
+    if os.path.isfile(attrs.rfile_path):
+        os.rename(attrs.rfile_path, attrs.rfile_path + '.old')
+
+    if attrs.apply():
+        print('attributes (%s) failed to be applied' % attrs)
+        cleanup_exit(old_attrs, -1)
+    print('# damon attrs: %s' % attrs)
+    if is_target_cmd:
+        p = subprocess.Popen(target, shell=True, executable='/bin/bash')
+        target = p.pid
+    if set_target_pid(target):
+        print('pid setting (%s) failed' % target)
+        cleanup_exit(old_attrs, -2)
+    if turn_damon('on'):
+        print('could not turn on damon')
+        cleanup_exit(old_attrs, -3)
+    if is_target_cmd:
+        p.wait()
+    while True:
+        # damon will turn it off by itself if the target tasks are terminated.
+        if not is_damon_running():
+            break
+        time.sleep(1)
+
+    cleanup_exit(old_attrs, 0)
+
+class Attrs:
+    sample_interval = None
+    aggr_interval = None
+    regions_update_interval = None
+    min_nr_regions = None
+    max_nr_regions = None
+    rbuf_len = None
+    rfile_path = None
+
+    def __init__(self, s, a, r, n, x, l, f):
+        self.sample_interval = s
+        self.aggr_interval = a
+        self.regions_update_interval = r
+        self.min_nr_regions = n
+        self.max_nr_regions = x
+        self.rbuf_len = l
+        self.rfile_path = f
+
+    def __str__(self):
+        return "%s %s %s %s %s %s %s" % (self.sample_interval, self.aggr_interval,
+                self.regions_update_interval, self.min_nr_regions,
+                self.max_nr_regions, self.rbuf_len, self.rfile_path)
+
+    def attr_str(self):
+        return "%s %s %s %s %s " % (self.sample_interval, self.aggr_interval,
+                self.regions_update_interval, self.min_nr_regions,
+                self.max_nr_regions)
+
+    def record_str(self):
+        return '%s %s ' % (self.rbuf_len, self.rfile_path)
+
+    def apply(self):
+        ret = subprocess.call('echo %s > %s' % (self.attr_str(), debugfs_attrs),
+                shell=True, executable='/bin/bash')
+        if ret:
+            return ret
+        return subprocess.call('echo %s > %s' % (self.record_str(),
+            debugfs_record), shell=True, executable='/bin/bash')
+
+def current_attrs():
+    with open(debugfs_attrs, 'r') as f:
+        attrs = f.read().split()
+    attrs = [int(x) for x in attrs]
+
+    with open(debugfs_record, 'r') as f:
+        rattrs = f.read().split()
+    attrs.append(int(rattrs[0]))
+    attrs.append(rattrs[1])
+    return Attrs(*attrs)
+
+def cmd_args_to_attrs(args):
+    "Generate attributes with specified arguments"
+    sample_interval = args.sample
+    aggr_interval = args.aggr
+    regions_update_interval = args.updr
+    min_nr_regions = args.minr
+    max_nr_regions = args.maxr
+    rbuf_len = args.rbuf
+    if not os.path.isabs(args.out):
+        args.out = os.path.join(os.getcwd(), args.out)
+    rfile_path = args.out
+    return Attrs(sample_interval, aggr_interval, regions_update_interval,
+            min_nr_regions, max_nr_regions, rbuf_len, rfile_path)
+
+def cleanup_exit(orig_attrs, exit_code):
+    if is_damon_running():
+        if turn_damon('off'):
+            print('failed to turn damon off!')
+    if orig_attrs:
+        if orig_attrs.apply():
+            print('original attributes (%s) restoration failed!' % orig_attrs)
+    exit(exit_code)
+
+def sighandler(signum, frame):
+    print('\nsignal %s received' % signum)
+    cleanup_exit(orig_attrs, signum)
+
+def chk_update_debugfs(debugfs):
+    global debugfs_attrs
+    global debugfs_record
+    global debugfs_pids
+    global debugfs_monitor_on
+
+    debugfs_damon = os.path.join(debugfs, 'damon')
+    debugfs_attrs = os.path.join(debugfs_damon, 'attrs')
+    debugfs_record = os.path.join(debugfs_damon, 'record')
+    debugfs_pids = os.path.join(debugfs_damon, 'pids')
+    debugfs_monitor_on = os.path.join(debugfs_damon, 'monitor_on')
+
+    if not os.path.isdir(debugfs_damon):
+        print("damon debugfs dir (%s) not found" % debugfs_damon)
+        exit(1)
+
+    for f in [debugfs_attrs, debugfs_record, debugfs_pids, debugfs_monitor_on]:
+        if not os.path.isfile(f):
+            print("damon debugfs file (%s) not found" % f)
+            exit(1)
+
+def chk_permission():
+    if os.geteuid() != 0:
+        print("Run as root")
+        exit(1)
+
+def set_argparser(parser):
+    parser.add_argument('target', type=str, metavar='<target>',
+            help='the target command or the pid to record')
+    parser.add_argument('-s', '--sample', metavar='<interval>', type=int,
+            default=5000, help='sampling interval')
+    parser.add_argument('-a', '--aggr', metavar='<interval>', type=int,
+            default=100000, help='aggregate interval')
+    parser.add_argument('-u', '--updr', metavar='<interval>', type=int,
+            default=1000000, help='regions update interval')
+    parser.add_argument('-n', '--minr', metavar='<# regions>', type=int,
+            default=10, help='minimal number of regions')
+    parser.add_argument('-m', '--maxr', metavar='<# regions>', type=int,
+            default=1000, help='maximum number of regions')
+    parser.add_argument('-l', '--rbuf', metavar='<len>', type=int,
+            default=1024*1024, help='length of record result buffer')
+    parser.add_argument('-o', '--out', metavar='<file path>', type=str,
+            default='damon.data', help='output file path')
+    parser.add_argument('-d', '--debugfs', metavar='<debugfs>', type=str,
+            default='/sys/kernel/debug', help='debugfs mounted path')
+
+def main(args=None):
+    global orig_attrs
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    chk_permission()
+    chk_update_debugfs(args.debugfs)
+
+    signal.signal(signal.SIGINT, sighandler)
+    signal.signal(signal.SIGTERM, sighandler)
+    orig_attrs = current_attrs()
+
+    new_attrs = cmd_args_to_attrs(args)
+    target = args.target
+
+    target_fields = target.split()
+    if not subprocess.call('which %s > /dev/null' % target_fields[0],
+            shell=True, executable='/bin/bash'):
+        do_record(target, True, new_attrs, orig_attrs)
+    else:
+        try:
+            pid = int(target)
+        except ValueError:
+            print('target \'%s\' is neither a command, nor a pid' % target)
+            exit(1)
+        do_record(pid, False, new_attrs, orig_attrs)
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/damon/report.py b/tools/damon/report.py
new file mode 100644
index 000000000000..c661c7b2f1af
--- /dev/null
+++ b/tools/damon/report.py
@@ -0,0 +1,45 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import argparse
+
+import bin2txt
+import heats
+import nr_regions
+import wss
+
+def set_argparser(parser):
+    subparsers = parser.add_subparsers(title='report type', dest='report_type',
+            metavar='<report type>', help='the type of the report to generate')
+    subparsers.required = True
+
+    parser_raw = subparsers.add_parser('raw', help='human readable raw data')
+    bin2txt.set_argparser(parser_raw)
+
+    parser_heats = subparsers.add_parser('heats', help='heats of regions')
+    heats.set_argparser(parser_heats)
+
+    parser_wss = subparsers.add_parser('wss', help='working set size')
+    wss.set_argparser(parser_wss)
+
+    parser_nr_regions = subparsers.add_parser('nr_regions',
+            help='number of regions')
+    nr_regions.set_argparser(parser_nr_regions)
+
+def main(args=None):
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    if args.report_type == 'raw':
+        bin2txt.main(args)
+    elif args.report_type == 'heats':
+        heats.main(args)
+    elif args.report_type == 'wss':
+        wss.main(args)
+    elif args.report_type == 'nr_regions':
+        nr_regions.main(args)
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/damon/wss.py b/tools/damon/wss.py
new file mode 100644
index 000000000000..890deee5b9be
--- /dev/null
+++ b/tools/damon/wss.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"Print out the distribution of the working set sizes of the given trace"
+
+import argparse
+import struct
+import sys
+import tempfile
+
+import _dist
+
+def set_argparser(parser):
+    parser.add_argument('--input', '-i', type=str, metavar='<file>',
+            default='damon.data', help='input file name')
+    parser.add_argument('--range', '-r', type=int, nargs=3,
+            metavar=('<start>', '<stop>', '<step>'),
+            help='range of wss percentiles to print')
+    parser.add_argument('--sortby', '-s', choices=['time', 'size'],
+            help='the metric to be used for the sort of the working set sizes')
+    parser.add_argument('--plot', '-p', type=str, metavar='<file>',
+            help='plot the distribution to an image file')
+
+def main(args=None):
+    if not args:
+        parser = argparse.ArgumentParser()
+        set_argparser(parser)
+        args = parser.parse_args()
+
+    percentiles = [0, 25, 50, 75, 100]
+
+    file_path = args.input
+    if args.range:
+        percentiles = range(args.range[0], args.range[1], args.range[2])
+    wss_sort = True
+    if args.sortby == 'time':
+        wss_sort = False
+
+    pid_pattern_map = {}
+    with open(file_path, 'rb') as f:
+        start_time = None
+        while True:
+            timebin = f.read(16)
+            if len(timebin) != 16:
+                break
+            nr_tasks = struct.unpack('I', f.read(4))[0]
+            for t in range(nr_tasks):
+                pid = struct.unpack('L', f.read(8))[0]
+                if pid not in pid_pattern_map:
+                    pid_pattern_map[pid] = []
+                pid_pattern_map[pid].append(_dist.access_patterns(f))
+
+    orig_stdout = sys.stdout
+    if args.plot:
+        tmp_path = tempfile.mkstemp()[1]
+        tmp_file = open(tmp_path, 'w')
+        sys.stdout = tmp_file
+
+    print('# <percentile> <wss>')
+    for pid in pid_pattern_map.keys():
+        # Skip first 20 snapshots as the regions may not be adjusted yet.
+        snapshots = pid_pattern_map[pid][20:]
+        wss_dist = []
+        for snapshot in snapshots:
+            wss = 0
+            for p in snapshot:
+                # Ignore regions not accessed
+                if p[1] <= 0:
+                    continue
+                wss += p[0]
+            wss_dist.append(wss)
+        if wss_sort:
+            wss_dist.sort()
+
+        print('# pid\t%s' % pid)
+        print('# avr:\t%d' % (sum(wss_dist) / len(wss_dist)))
+        for percentile in percentiles:
+            thres_idx = int(percentile / 100.0 * len(wss_dist))
+            if thres_idx == len(wss_dist):
+                thres_idx -= 1
+            threshold = wss_dist[thres_idx]
+            print('%d\t%d' % (percentile, threshold))
+
+    if args.plot:
+        sys.stdout = orig_stdout
+        tmp_file.flush()
+        tmp_file.close()
+        xlabel = 'runtime (percent)'
+        if wss_sort:
+            xlabel = 'percentile'
+        _dist.plot_dist(tmp_path, args.plot, xlabel,
+                'working set size (bytes)')
+
+if __name__ == '__main__':
+    main()
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document for DAMON
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (9 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 10/14] tools: Add a minimal user-space tool for DAMON SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-10  9:03   ` Jonathan Cameron
  2020-02-24 12:30 ` [PATCH v6 12/14] mm/damon: Add kunit tests SeongJae Park
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds a simple document for DAMON under
`Documentation/admin-guide/mm`.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 .../admin-guide/mm/data_access_monitor.rst    | 414 ++++++++++++++++++
 Documentation/admin-guide/mm/index.rst        |   1 +
 2 files changed, 415 insertions(+)
 create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst

diff --git a/Documentation/admin-guide/mm/data_access_monitor.rst b/Documentation/admin-guide/mm/data_access_monitor.rst
new file mode 100644
index 000000000000..4d836c3866e2
--- /dev/null
+++ b/Documentation/admin-guide/mm/data_access_monitor.rst
@@ -0,0 +1,414 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+DAMON: Data Access MONitor
+==========================
+
+Introduction
+============
+
+Memory management decisions can normally be more efficient if finer data
+access information is available.  However, because finer information usually
+comes with higher overhead, most systems including Linux have made a tradeoff:
+forgo some wise decisions and rely on coarse information and/or light-weight
+heuristics.
+
+A number of experimental data access pattern aware memory management
+optimizations say the sacrifice is huge (up to 2.55x slowdown).  However, none
+of those has been successfully adopted into the Linux kernel, mainly due to
+the absence of a scalable and efficient data access monitoring mechanism.
+
+DAMON is a data access monitoring solution for the problem.  It is 1) accurate
+enough for the DRAM level memory management, 2) light-weight enough to be
+applied online, and 3) keeps predefined upper-bound overhead regardless of the
+size of target workloads (thus scalable).
+
+DAMON is implemented as a standalone kernel module and provides several simple
+interfaces.  Owing to that, though it has mainly been designed for the
+kernel's memory management mechanisms, it can also be used by a wide range of
+user space programs and users.
+
+
+Frequently Asked Questions
+==========================
+
+Q: Why not integrated with perf?
+A: From the perspective of perf-like profilers, DAMON can be thought of as a
+data source in the kernel, like tracepoints, pressure stall information (psi),
+or idle page tracking.  Thus, it can be easily integrated with those.
+However, this patchset doesn't provide a fancy perf integration because the
+current stage of DAMON development is focused on its core logic only.  That
+said, DAMON already provides two interfaces for user space programs, based on
+debugfs and tracepoints, respectively.  Using the tracepoint interface, you
+can use DAMON with perf.  This patchset also provides a debugfs interface
+based user space tool for DAMON.  It can be used to record, visualize, and
+analyze the data access patterns of target processes in a convenient way.
+
+Q: Why a new module, instead of extending perf or other tools?
+A: First, DAMON aims to be used by other programs including the kernel.
+Therefore, having a dependency on specific tools like perf is not desirable.
+Second, because it needs to be as lightweight as possible so that it can be
+used online, any unnecessary overhead such as the kernel - user space context
+switching cost should be avoided.  These are the two biggest reasons why DAMON
+is implemented in the kernel space.  The idle page tracking subsystem would be
+the kernel feature that seems most similar to DAMON.  However, its own
+interface is not compatible with DAMON, and its internal implementation has no
+common part to be reused by DAMON.
+
+Q: Can 'perf mem' provide the data required for DAMON?
+A: On the systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
+bits at a low level.  Other H/W or S/W features that can be used for the
+purpose could also be used.  However, as explained in the answer to the above
+question, DAMON needs to be implemented in the kernel space.
+
+
+Expected Use-cases
+==================
+
+A straightforward use case of DAMON would be program behavior analysis.  With
+the DAMON output, users can confirm whether the program is running as intended
+or not.  This will be useful for debugging and testing of design points.
+
+The monitoring results can also be useful for counting the dynamic working set
+size of workloads.  This will be useful for the administration of memory
+overcommitted systems or the selection of environments (e.g., containers
+providing different amounts of memory) for your workloads.
+
+If you are a programmer, you can optimize your program by managing its memory
+based on the actual data access pattern.  For example, you can identify the
+dynamic hotness of your data using DAMON and call ``mlock()`` to keep your hot
+data in DRAM, or call ``madvise()`` with ``MADV_PAGEOUT`` to proactively
+reclaim cold data.  Even if your program is guaranteed not to encounter memory
+pressure, you can still improve performance by applying the DAMON outputs to
+``madvise()`` calls with ``MADV_HUGEPAGE`` and ``MADV_NOHUGEPAGE``.  More
+creative optimizations would be possible.  Our evaluations of DAMON include a
+straightforward optimization using ``mlock()``.  Please refer to the below
+Evaluation section for more detail.
+
+As DAMON incurs very low overhead, such optimizations can be applied not only
+offline, but also online.  Also, there is no reason to limit such
+optimizations to the user space.  Several parts of the kernel's memory
+management mechanisms could also be optimized using DAMON.  Reclamation, THP
+(de)promotion decisions, and compaction would be such candidates.
+
+
+Mechanisms of DAMON
+===================
+
+
+Basic Access Check
+------------------
+
+DAMON basically reports which pages are accessed how frequently.  The report
+is passed to users in a binary format via a ``result file``, whose path users
+can set.  Note that the frequency is not an absolute number of accesses, but a
+relative frequency among the pages of the target workloads.
+
+Users can also control the resolution of the reports by setting two time
+intervals, ``sampling interval`` and ``aggregation interval``.  In detail,
+DAMON checks access to each page per ``sampling interval``, aggregates the
+results (counts the number of the accesses to each page), and reports the
+aggregated results per ``aggregation interval``.  For the access check of each
+page, DAMON uses the Accessed bits of PTEs.
+
+This is thus similar to the previously mentioned periodic access check based
+mechanisms, whose overhead increases as the size of the target process grows.
+
+
+Region Based Sampling
+---------------------
+
+To avoid the unbounded increase of the overhead, DAMON groups a number of
+adjacent pages that are assumed to have the same access frequencies into a
+region.  As long as the assumption (pages in a region have the same access
+frequencies) is kept, only one page in the region needs to be checked.  Thus,
+for each ``sampling interval``, DAMON randomly picks one page in each region
+and clears its Accessed bit.  After one more ``sampling interval``, DAMON
+reads the Accessed bit of the page and increases the access frequency of the
+region if the bit has been set meanwhile.  Therefore, the monitoring overhead
+is controllable by setting the number of regions.  DAMON allows users to set
+the minimal and maximum numbers of regions for the trade-off.
+
+Except for the assumption, this is almost the same as the above-mentioned
+miniature-like static region based sampling.  In other words, this scheme
+cannot preserve the quality of the output if the assumption is not guaranteed.
+
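The per-region check described above can be sketched in user space as below.  This is a minimal illustrative sketch only, not the kernel's implementation; the ``Region`` structure and the ``page_accessed()`` callback (standing in for reading and re-clearing a PTE Accessed bit, which only the kernel can do) are hypothetical:

```python
import random

PAGE_SIZE = 4096

# Hypothetical, simplified stand-in for DAMON's region bookkeeping.
class Region:
    def __init__(self, start, end):
        self.start = start      # start address of the region
        self.end = end          # end address of the region (exclusive)
        self.nr_accesses = 0    # accesses observed in this aggregation window

def sample(regions, page_accessed):
    """One sampling pass: check a single randomly picked page per region.

    'page_accessed' stands in for the kernel reading (and re-clearing)
    the PTE Accessed bit of the chosen page.
    """
    for r in regions:
        page = random.randrange(r.start, r.end, PAGE_SIZE)
        if page_accessed(page):
            r.nr_accesses += 1

# The cost of each pass is proportional to the number of regions,
# regardless of how much memory the regions cover.
regions = [Region(0, 1 << 20), Region(1 << 20, 1 << 30)]
sample(regions, lambda page: True)
```

Because only one page per region is touched in each interval, bounding the number of regions bounds the monitoring overhead, which is the core of the scalability property described above.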
+
+Adaptive Regions Adjustment
+---------------------------
+
+At the beginning of the monitoring, DAMON constructs the initial regions by
+evenly splitting the memory mapped address space of the process into the
+user-specified minimal number of regions.  In this initial state, the
+assumption is normally not kept and thus the quality could be low.  To keep the
+assumption as much as possible, DAMON adaptively merges and splits each region.
+For each ``aggregation interval``, it compares the access frequencies of
+adjacent regions and merges those if the frequency difference is small.  Then,
+after it reports and clears the aggregated access frequency of each region, it
+splits each region into two regions if the total number of regions is smaller
+than half of the user-specified maximum number of regions.
+
+In this way, DAMON provides its best-effort quality and minimal overhead while
+keeping the bounds users set for their trade-off.
+
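The merge/split cycle can be sketched roughly as below.  This is an illustrative simplification under an assumed data layout (``(start, end, nr_accesses)`` tuples); the kernel's actual thresholds and bookkeeping differ in detail:

```python
def merge_regions(regions, thres):
    """Merge adjacent regions whose access frequencies differ by <= thres."""
    merged = [regions[0]]
    for start, end, freq in regions[1:]:
        m_start, m_end, m_freq = merged[-1]
        if m_end == start and abs(m_freq - freq) <= thres:
            merged[-1] = (m_start, end, max(m_freq, freq))
        else:
            merged.append((start, end, freq))
    return merged

def split_regions(regions, max_nr_regions):
    """Split each region in two, but only while under half the maximum."""
    if len(regions) >= max_nr_regions // 2:
        return regions
    out = []
    for start, end, freq in regions:
        mid = (start + end) // 2
        out.append((start, mid, freq))
        out.append((mid, end, freq))
    return out

# One aggregation cycle: merge similar neighbors, then split so that
# differently accessed sub-parts can be discovered in the next window.
regions = [(0, 100, 3), (100, 200, 4), (200, 300, 9)]
regions = split_regions(merge_regions(regions, thres=1), max_nr_regions=10)
```

Merging keeps the region count (and thus the overhead) low where the access frequencies are uniform, while splitting gives regions with internally diverse access patterns a chance to be separated.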
+
+Applying Dynamic Memory Mappings
+--------------------------------
+
+Only a number of small parts in the super-huge virtual address space of a
+process are mapped to physical memory and accessed.  Thus, tracking the
+unmapped address regions is just wasteful.  However, tracking every memory
+mapping change might incur an overhead.  For that reason, DAMON applies the
+dynamic memory mapping changes to the tracking regions only once in each
+user-specified time interval (``regions update interval``).
+
+
+``debugfs`` Interface
+=====================
+
+DAMON exports four files, ``attrs``, ``pids``, ``record``, and ``monitor_on``
+under its debugfs directory, ``<debugfs>/damon/``.
+
+Attributes
+----------
+
+Users can read and write the ``sampling interval``, ``aggregation interval``,
+``regions update interval``, and min/max number of monitoring target regions by
+reading from and writing to the ``attrs`` file.  For example, the below
+commands set those values to 5 ms, 100 ms, 1,000 ms, 10, and 1000, and then
+check them again::
+
+    # cd <debugfs>/damon
+    # echo 5000 100000 1000000 10 1000 > attrs
+    # cat attrs
+    5000 100000 1000000 10 1000
+
+Target PIDs
+-----------
+
+Users can read and write the pids of current monitoring target processes by
+reading from and writing to the ``pids`` file.  For example, the below
+commands set the processes having pids 42 and 4242 as the monitoring targets
+and check them again::
+
+    # cd <debugfs>/damon
+    # echo 42 4242 > pids
+    # cat pids
+    42 4242
+
+Note that setting the pids doesn't start the monitoring.
+
+Record
+------
+
+DAMON supports direct recording of the monitoring results.  The recorded
+results are first written to a buffer and flushed to a file in batches.  Users
+can set the size of the buffer and the path to the result file by reading from
+and writing to the ``record`` file.  For example, the below commands set the
+buffer to be 4 KiB and the results to be saved in ``/damon.data``::
+
+    # cd <debugfs>/damon
+    # echo "4096 /damon.data" > record
+    # cat record
+    4096 /damon.data
+
+Turning On/Off
+--------------
+
+You can check the current status, and start or stop the monitoring, by reading
+from and writing to the ``monitor_on`` file.  Writing ``on`` to the file
+starts DAMON monitoring of the target processes with the given attributes.
+Writing ``off`` to the file stops DAMON.  DAMON also stops if every target
+process is terminated.  The below example commands turn DAMON on and off, and
+check its status::
+
+    # cd <debugfs>/damon
+    # echo on > monitor_on
+    # echo off > monitor_on
+    # cat monitor_on
+    off
+
+Please note that you cannot write to the ``attrs`` and ``pids`` files while the
+monitoring is turned on.  If you write to the files while DAMON is running,
+``-EINVAL`` will be returned.
+
+
+User Space Tool for DAMON
+=========================
+
+There is a user space tool for DAMON, ``/tools/damon/damo``.  It provides
+another user interface which is more convenient than the debugfs interface.
+Nevertheless, note that it is only aimed to be used as a minimal reference for
+DAMON's debugfs interfaces and for tests of DAMON itself.  Based on the
+debugfs interface, you can create other cool and more convenient user space
+tools.
+
+The interface of the tool is basically subcommand based.  You can almost
+always use the ``-h`` option to get help on the use of each subcommand.
+Currently, it supports two subcommands, ``record`` and ``report``.
+
+
+Recording Data Access Pattern
+-----------------------------
+
+The ``record`` subcommand records the data access pattern of a target process
+in a file (``./damon.data`` by default) using DAMON.  You can specify the
+target as either a pid or a command to be executed as a new process.  The
+below example shows a command target usage::
+
+    # cd <kernel>/tools/damon/
+    # ./damo record "sleep 5"
+
+The tool will execute ``sleep 5`` by itself and record the data access patterns
+of the process.  The below example shows a pid target usage::
+
+    # sleep 5 &
+    # ./damo record `pidof sleep`
+
+You can set more detailed attributes and path to the recorded data file using
+optional arguments to the subcommand.  Use the ``-h`` option for more help.
+
+
+Analyzing Data Access Pattern
+-----------------------------
+
+The ``report`` subcommand reads a data access pattern record file (if not
+explicitly specified, it reads the ``./damon.data`` file if it exists) and
+generates reports of various types.  You can specify the type of report you
+want using a sub-subcommand of the ``report`` subcommand.  For the supported
+types, pass the ``-h`` option to the ``report`` subcommand.
+
+
+raw
+~~~
+
+The ``raw`` sub-subcommand simply transforms the record, which stores the
+data access patterns in a binary format, into human readable text.  For
+example::
+
+    $ ./damo report raw
+    start_time:  193485829398
+    rel time:                0
+    nr_tasks:  1
+    pid:  1348
+    nr_regions:  4
+    560189609000-56018abce000(  22827008):  0
+    7fbdff59a000-7fbdffaf1a00(   5601792):  0
+    7fbdffaf1a00-7fbdffbb5000(    800256):  1
+    7ffea0dc0000-7ffea0dfd000(    249856):  0
+
+    rel time:        100000731
+    nr_tasks:  1
+    pid:  1348
+    nr_regions:  6
+    560189609000-56018abce000(  22827008):  0
+    7fbdff59a000-7fbdff8ce933(   3361075):  0
+    7fbdff8ce933-7fbdffaf1a00(   2240717):  1
+    7fbdffaf1a00-7fbdffb66d99(    480153):  0
+    7fbdffb66d99-7fbdffbb5000(    320103):  1
+    7ffea0dc0000-7ffea0dfd000(    249856):  0
+
+The first line shows the timestamp at which the recording started (in
+nanoseconds).  Records of the data access patterns follow.  Each record is
+separated by a blank line.  Each record first specifies the recorded time
+(``rel time``) and the number of monitored tasks in the record (``nr_tasks``).
+A data access pattern for each monitored task then follows.  Each data access
+pattern first shows the task's pid (``pid``) and the number of monitored
+virtual address regions in the pattern (``nr_regions``).  After that, each
+line shows the start/end address, size, and the number of monitored accesses
+to each of the regions.
+
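A reader for this record format can be sketched as below.  The binary layout here is inferred from the bundled tools (16-byte timestamp, ``u32`` task count, per-task ``u64`` pid, then the regions); it is an assumption for illustration, not a documented stable ABI.  The tools use the native ``'L'`` struct format for the 64-bit fields, which this sketch spells as ``'Q'`` (8 bytes) for portability:

```python
import io
import struct

def read_record(f):
    """Read one aggregation snapshot from a DAMON record stream.

    Assumed layout (inferred from the bundled tools): 16-byte timestamp,
    u32 nr_tasks, then per task a u64 pid, u32 nr_regions, and per region
    u64 start, u64 end, u32 nr_accesses, all in native endianness.
    """
    timebin = f.read(16)
    if len(timebin) != 16:
        return None  # end of file
    nr_tasks = struct.unpack('I', f.read(4))[0]
    tasks = []
    for _ in range(nr_tasks):
        pid = struct.unpack('Q', f.read(8))[0]
        nr_regions = struct.unpack('I', f.read(4))[0]
        regions = []
        for _ in range(nr_regions):
            start, end = struct.unpack('QQ', f.read(16))
            nr_accesses = struct.unpack('I', f.read(4))[0]
            regions.append((start, end, nr_accesses))
        tasks.append((pid, regions))
    return timebin, tasks

# Round-trip a hand-built snapshot to demonstrate the assumed layout.
raw = (b'\x00' * 16 + struct.pack('I', 1) + struct.pack('Q', 1348) +
       struct.pack('I', 1) + struct.pack('QQ', 0, 4096) + struct.pack('I', 2))
rec = read_record(io.BytesIO(raw))
```

A loop over ``read_record()`` until it returns ``None`` yields the sequence of snapshots that the ``raw`` output above prints one by one.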
+
+heats
+~~~~~
+
+The ``raw`` type shows detailed information, but it is exhausting to read and
+analyze manually.  For that reason, the ``heats`` type plots the data in a
+heatmap form, using time as the x-axis, virtual address as the y-axis, and
+access frequency as the z-axis.  Users can also set the resolution and the
+start/end point of each axis via optional arguments.  For example::
+
+    $ ./damo report heats --tres 3 --ares 3
+    0               0               0.0
+    0               7609002         0.0
+    0               15218004        0.0
+    66112620851     0               0.0
+    66112620851     7609002         0.0
+    66112620851     15218004        0.0
+    132225241702    0               0.0
+    132225241702    7609002         0.0
+    132225241702    15218004        0.0
+
+This command shows the recorded access pattern of the ``sleep`` command using
+3 data points for each of the time and address axes.  Therefore, it shows 9
+data points in total.
+
+Users can easily convert this text output into a heatmap image or other 3D
+representations using various tools such as 'gnuplot'.  The ``heats``
+sub-subcommand also provides 'gnuplot' based heatmap image creation.  For
+this, you can use the ``--heatmap`` option.  Note that because it uses
+'gnuplot' internally, it will fail if 'gnuplot' is not installed on your
+system.  For example::
+
+    $ ./damo report heats --heatmap heatmap.png
+
+This creates a ``heatmap.png`` file containing the heatmap image.  The
+``pdf``, ``png``, ``jpeg``, and ``svg`` formats are supported.
+
+For proper zooming in and out, you need to know the layout of the record.  For
+that, use the ``--guide`` option.  If the option is given, it provides useful
+information about the records in the record file.  For example::
+
+    $ ./damo report heats --guide
+    pid:1348
+    time: 193485829398-198337863555 (4852034157)
+    region   0: 00000094564599762944-00000094564622589952 (22827008)
+    region   1: 00000140454009610240-00000140454016012288 (6402048)
+    region   2: 00000140731597193216-00000140731597443072 (249856)
+
+The output shows the monitored regions (start and end addresses in bytes) and
+the monitored time duration (start and end times in nanoseconds) of each
+target task.  Because the gaps between the regions are huge, it would be wise
+to plot each region separately rather than plotting the entire address space
+in one heatmap.
+
+
+wss
+~~~
+
+The ``wss`` type shows the distribution or the time-varying working set sizes
+of the recorded workload.  For example::
+
+    $ ./damo report wss
+    # <percentile> <wss>
+    # pid   1348
+    # avr:  66228
+    0       0
+    25      0
+    50      0
+    75      0
+    100     1920615
+
+Without any option, it shows the distribution of the working set sizes as
+above.  It shows the 0th, 25th, 50th, 75th, and 100th percentiles and the
+average of the measured working set sizes in the access pattern records.  In
+this case, the working set size was zero up to the 75th percentile, but
+1,920,615 bytes at maximum and 66,228 bytes on average.
+
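The percentile table above can be reproduced with a few lines.  This sketch mirrors the indexing the bundled ``wss`` tool uses (sort the per-snapshot working set sizes, then index at ``percentile/100`` of the list length, clamped to the last element):

```python
def wss_percentiles(wss_dist, percentiles=(0, 25, 50, 75, 100)):
    """Map each percentile to a value from the sorted distribution."""
    dist = sorted(wss_dist)
    out = []
    for p in percentiles:
        # Clamp so the 100th percentile indexes the largest element.
        idx = min(int(p / 100.0 * len(dist)), len(dist) - 1)
        out.append((p, dist[idx]))
    return out

# Four idle snapshots and one busy one: zero up to the 75th percentile,
# as in the example output above.
table = wss_percentiles([0, 0, 0, 0, 1920615])
```

A mostly zero distribution with a large maximum, as here, indicates a workload that touches a big working set only in short bursts.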
+By setting the sort key of the percentiles using ``--sortby``, you can also
+see how the working set size changed chronologically.  For example::
+
+    $ ./damo report wss --sortby time
+    # <percentile> <wss>
+    # pid   1348
+    # avr:  66228
+    0       0
+    25      0
+    50      0
+    75      0
+    100     0
+
+The average is still 66,228.  However, because we sorted the working set sizes
+by the recorded time and the accesses were made in only a short period, we
+cannot see when the accesses were made from this output.
+
+Users can specify the resolution of the distribution (``--range``).  It also
+supports 'gnuplot' based simple visualization (``--plot``) of the distribution.
diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
index 11db46448354..d3d0ba373eb6 100644
--- a/Documentation/admin-guide/mm/index.rst
+++ b/Documentation/admin-guide/mm/index.rst
@@ -27,6 +27,7 @@ the Linux memory management.
 
    concepts
    cma_debugfs
+   data_access_monitor
    hugetlbpage
    idle_page_tracking
    ksm
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v6 12/14] mm/damon: Add kunit tests
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (10 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document " SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 13/14] mm/damon: Add user selftests SeongJae Park
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds KUnit based unit tests for DAMON.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
---
 mm/Kconfig      |  11 +
 mm/damon-test.h | 604 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/damon.c      |   2 +
 3 files changed, 617 insertions(+)
 create mode 100644 mm/damon-test.h

diff --git a/mm/Kconfig b/mm/Kconfig
index 387d469f40ec..1a745ce0cbcb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -751,4 +751,15 @@ config DAMON
 	  be 1) accurate enough to be useful for performance-centric domains,
 	  and 2) sufficiently light-weight so that it can be applied online.
 
+config DAMON_KUNIT_TEST
+	bool "Test for damon"
+	depends on DAMON=y && KUNIT
+	help
+	  This builds the DAMON Kunit test suite.
+
+	  For more information on KUnit and unit tests in general, please refer
+	  to the KUnit documentation.
+
+	  If unsure, say N.
+
 endmenu
diff --git a/mm/damon-test.h b/mm/damon-test.h
new file mode 100644
index 000000000000..c7dc21325c77
--- /dev/null
+++ b/mm/damon-test.h
@@ -0,0 +1,604 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Data Access Monitor Unit Tests
+ *
+ * Copyright 2019 Amazon.com, Inc. or its affiliates.  All rights reserved.
+ *
+ * Author: SeongJae Park <sjpark@amazon.de>
+ */
+
+#ifdef CONFIG_DAMON_KUNIT_TEST
+
+#ifndef _DAMON_TEST_H
+#define _DAMON_TEST_H
+
+#include <kunit/test.h>
+
+static void damon_test_str_to_pids(struct kunit *test)
+{
+	char *question;
+	unsigned long *answers;
+	unsigned long expected[] = {12, 35, 46};
+	ssize_t nr_integers = 0, i;
+
+	question = "123";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)1, nr_integers);
+	KUNIT_EXPECT_EQ(test, 123ul, answers[0]);
+	kfree(answers);
+
+	question = "123abc";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)1, nr_integers);
+	KUNIT_EXPECT_EQ(test, 123ul, answers[0]);
+	kfree(answers);
+
+	question = "a123";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
+	KUNIT_EXPECT_PTR_EQ(test, answers, (unsigned long *)NULL);
+
+	question = "12 35";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)2, nr_integers);
+	for (i = 0; i < nr_integers; i++)
+		KUNIT_EXPECT_EQ(test, expected[i], answers[i]);
+	kfree(answers);
+
+	question = "12 35 46";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)3, nr_integers);
+	for (i = 0; i < nr_integers; i++)
+		KUNIT_EXPECT_EQ(test, expected[i], answers[i]);
+	kfree(answers);
+
+	question = "12 35 abc 46";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)2, nr_integers);
+	for (i = 0; i < 2; i++)
+		KUNIT_EXPECT_EQ(test, expected[i], answers[i]);
+	kfree(answers);
+
+	question = "";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
+	KUNIT_EXPECT_PTR_EQ(test, (unsigned long *)NULL, answers);
+	kfree(answers);
+
+	question = "\n";
+	answers = str_to_pids(question, strnlen(question, 128), &nr_integers);
+	KUNIT_EXPECT_EQ(test, (ssize_t)0, nr_integers);
+	KUNIT_EXPECT_PTR_EQ(test, (unsigned long *)NULL, answers);
+	kfree(answers);
+}
+
+static void damon_test_regions(struct kunit *test)
+{
+	struct damon_region *r;
+	struct damon_task *t;
+
+	r = damon_new_region(&damon_user_ctx, 1, 2);
+	KUNIT_EXPECT_EQ(test, 1ul, r->vm_start);
+	KUNIT_EXPECT_EQ(test, 2ul, r->vm_end);
+	KUNIT_EXPECT_EQ(test, 0u, r->nr_accesses);
+	KUNIT_EXPECT_TRUE(test, r->sampling_addr >= r->vm_start);
+	KUNIT_EXPECT_TRUE(test, r->sampling_addr < r->vm_end);
+
+	t = damon_new_task(42);
+	KUNIT_EXPECT_EQ(test, 0u, nr_damon_regions(t));
+
+	damon_add_region_tail(r, t);
+	KUNIT_EXPECT_EQ(test, 1u, nr_damon_regions(t));
+
+	damon_del_region(r);
+	KUNIT_EXPECT_EQ(test, 0u, nr_damon_regions(t));
+
+	damon_free_task(t);
+}
+
+static void damon_test_tasks(struct kunit *test)
+{
+	struct damon_ctx *c = &damon_user_ctx;
+	struct damon_task *t;
+
+	t = damon_new_task(42);
+	KUNIT_EXPECT_EQ(test, 42ul, t->pid);
+	KUNIT_EXPECT_EQ(test, 0u, nr_damon_tasks(c));
+
+	damon_add_task_tail(&damon_user_ctx, t);
+	KUNIT_EXPECT_EQ(test, 1u, nr_damon_tasks(c));
+
+	damon_destroy_task(t);
+	KUNIT_EXPECT_EQ(test, 0u, nr_damon_tasks(c));
+}
+
+static void damon_test_set_pids(struct kunit *test)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	unsigned long pids[] = {1, 2, 3};
+	char buf[64];
+
+	damon_set_pids(ctx, pids, 3);
+	damon_sprint_pids(ctx, buf, 64);
+	KUNIT_EXPECT_STREQ(test, (char *)buf, "1 2 3\n");
+
+	damon_set_pids(ctx, NULL, 0);
+	damon_sprint_pids(ctx, buf, 64);
+	KUNIT_EXPECT_STREQ(test, (char *)buf, "\n");
+
+	damon_set_pids(ctx, (unsigned long []){1, 2}, 2);
+	damon_sprint_pids(ctx, buf, 64);
+	KUNIT_EXPECT_STREQ(test, (char *)buf, "1 2\n");
+
+	damon_set_pids(ctx, (unsigned long []){2}, 1);
+	damon_sprint_pids(ctx, buf, 64);
+	KUNIT_EXPECT_STREQ(test, (char *)buf, "2\n");
+
+	damon_set_pids(ctx, NULL, 0);
+	damon_sprint_pids(ctx, buf, 64);
+	KUNIT_EXPECT_STREQ(test, (char *)buf, "\n");
+}
+
+/*
+ * Test damon_three_regions_in_vmas() function
+ *
+ * DAMON converts the complex and dynamic memory mappings of each target task
+ * to three discontiguous regions which together cover every mapped area.
+ * However, the three regions should exclude the two biggest unmapped areas
+ * in the original mapping, because those are normally the areas between
+ * 1) the heap and the mmap()-ed regions, and 2) the mmap()-ed regions and the
+ * stack.  Because these two unmapped areas are huge but obviously never
+ * accessed, covering them would be a waste.
+ *
+ * 'damon_three_regions_in_vmas()' receives an address space of a process.  It
+ * first identifies the start and the end of the mappings and the two biggest
+ * unmapped areas.  Based on this information, it then constructs and returns
+ * the three regions.  For more detail, refer to the comment of the
+ * 'damon_init_regions_of()' function definition in the 'mm/damon.c' file.
+ *
+ * For example, suppose virtual address ranges of 10-20, 20-25, 200-210,
+ * 210-220, 300-305, and 307-330 (other comments represent these mappings in
+ * the shorter form: 10-20-25, 200-210-220, 300-305, 307-330) of a process are
+ * mapped.  To cover every mapping, the three regions should start at 10 and
+ * end at 330.  The process also has three unmapped areas: 25-200, 220-300,
+ * and 305-307.  Among those, 25-200 and 220-300 are the two biggest, so the
+ * mappings should be converted to the three regions 10-25, 200-220, and
+ * 300-330.
+ */
+static void damon_test_three_regions_in_vmas(struct kunit *test)
+{
+	struct region regions[3] = {0,};
+	/* 10-20-25, 200-210-220, 300-305, 307-330 */
+	struct vm_area_struct vmas[] = {
+		(struct vm_area_struct) {.vm_start = 10, .vm_end = 20},
+		(struct vm_area_struct) {.vm_start = 20, .vm_end = 25},
+		(struct vm_area_struct) {.vm_start = 200, .vm_end = 210},
+		(struct vm_area_struct) {.vm_start = 210, .vm_end = 220},
+		(struct vm_area_struct) {.vm_start = 300, .vm_end = 305},
+		(struct vm_area_struct) {.vm_start = 307, .vm_end = 330},
+	};
+	vmas[0].vm_next = &vmas[1];
+	vmas[1].vm_next = &vmas[2];
+	vmas[2].vm_next = &vmas[3];
+	vmas[3].vm_next = &vmas[4];
+	vmas[4].vm_next = &vmas[5];
+	vmas[5].vm_next = NULL;
+
+	damon_three_regions_in_vmas(&vmas[0], regions);
+
+	KUNIT_EXPECT_EQ(test, 10ul, regions[0].start);
+	KUNIT_EXPECT_EQ(test, 25ul, regions[0].end);
+	KUNIT_EXPECT_EQ(test, 200ul, regions[1].start);
+	KUNIT_EXPECT_EQ(test, 220ul, regions[1].end);
+	KUNIT_EXPECT_EQ(test, 300ul, regions[2].start);
+	KUNIT_EXPECT_EQ(test, 330ul, regions[2].end);
+}
+
+/* Clean up global state of damon */
+static void damon_cleanup_global_state(void)
+{
+	struct damon_task *t, *next;
+
+	damon_for_each_task_safe(&damon_user_ctx, t, next)
+		damon_destroy_task(t);
+
+	damon_user_ctx.rbuf_offset = 0;
+}
+
+/*
+ * Test kdamond_flush_aggregated()
+ *
+ * DAMON checks accesses to each region and aggregates this information as the
+ * access frequency of each region.  In detail, it increases '->nr_accesses' of
+ * regions for which an access has been confirmed.
+ * 'kdamond_flush_aggregated()' flushes the aggregated information
+ * ('->nr_accesses' of each region) to the result buffer.  As a result of the
+ * flushing, '->nr_accesses' of the regions are reset to zero.
+ */
+static void damon_test_aggregate(struct kunit *test)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	unsigned long pids[] = {1, 2, 3};
+	unsigned long saddr[][3] = {{10, 20, 30}, {5, 42, 49}, {13, 33, 55} };
+	unsigned long eaddr[][3] = {{15, 27, 40}, {31, 45, 55}, {23, 44, 66} };
+	unsigned long accesses[][3] = {{42, 95, 84}, {10, 20, 30}, {0, 1, 2} };
+	struct damon_task *t;
+	struct damon_region *r;
+	int it, ir;
+	ssize_t sz, sr, sp;
+
+	damon_set_recording(ctx, 256, "damon.data");
+	damon_set_pids(ctx, pids, 3);
+
+	it = 0;
+	damon_for_each_task(ctx, t) {
+		for (ir = 0; ir < 3; ir++) {
+			r = damon_new_region(ctx,
+					saddr[it][ir], eaddr[it][ir]);
+			r->nr_accesses = accesses[it][ir];
+			damon_add_region_tail(r, t);
+		}
+		it++;
+	}
+	kdamond_flush_aggregated(ctx);
+	it = 0;
+	damon_for_each_task(ctx, t) {
+		ir = 0;
+		/* '->nr_accesses' should be zeroed */
+		damon_for_each_region(r, t) {
+			KUNIT_EXPECT_EQ(test, 0u, r->nr_accesses);
+			ir++;
+		}
+		/* regions should be preserved */
+		KUNIT_EXPECT_EQ(test, 3, ir);
+		it++;
+	}
+	/* tasks also should be preserved */
+	KUNIT_EXPECT_EQ(test, 3, it);
+
+	/* The aggregated information should be written in the buffer */
+	sr = sizeof(r->vm_start) + sizeof(r->vm_end) + sizeof(r->nr_accesses);
+	sp = sizeof(t->pid) + sizeof(unsigned int) + 3 * sr;
+	sz = sizeof(struct timespec64) + sizeof(unsigned int) + 3 * sp;
+	KUNIT_EXPECT_EQ(test, (unsigned int)sz, ctx->rbuf_offset);
+
+	damon_set_recording(ctx, 0, "damon.data");
+	damon_cleanup_global_state();
+}
+
+static void damon_test_write_rbuf(struct kunit *test)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char *data;
+
+	damon_set_recording(&damon_user_ctx, 256, "damon.data");
+
+	data = "hello";
+	damon_write_rbuf(ctx, data, strnlen(data, 256));
+	KUNIT_EXPECT_EQ(test, ctx->rbuf_offset, 5u);
+
+	damon_write_rbuf(ctx, data, 0);
+	KUNIT_EXPECT_EQ(test, ctx->rbuf_offset, 5u);
+
+	KUNIT_EXPECT_STREQ(test, (char *)ctx->rbuf, data);
+	damon_set_recording(&damon_user_ctx, 0, "damon.data");
+}
+
+/*
+ * Test 'damon_apply_three_regions()'
+ *
+ * test			kunit object
+ * regions		an array containing start/end addresses of current
+ *			monitoring target regions
+ * nr_regions		the number of the addresses in 'regions'
+ * three_regions	The three regions that need to be applied now
+ * expected		start/end addresses of monitoring target regions that
+ *			'three_regions' are applied
+ * nr_expected		the number of addresses in 'expected'
+ *
+ * The memory mappings of the target processes change dynamically.  To follow
+ * the changes, DAMON periodically reads the mappings, simplifies them to the
+ * three regions, and updates the monitoring target regions to fit in the three
+ * regions.  Updating the current target regions is the role of
+ * 'damon_apply_three_regions()'.
+ *
+ * This test passes the given target regions and the new three regions that
+ * need to be applied to the function, and checks whether it updates the
+ * regions as expected.
+ */
+static void damon_do_test_apply_three_regions(struct kunit *test,
+				unsigned long *regions, int nr_regions,
+				struct region *three_regions,
+				unsigned long *expected, int nr_expected)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+	int i;
+
+	t = damon_new_task(42);
+	for (i = 0; i < nr_regions / 2; i++) {
+		r = damon_new_region(&damon_user_ctx,
+				regions[i * 2], regions[i * 2 + 1]);
+		damon_add_region_tail(r, t);
+	}
+	damon_add_task_tail(&damon_user_ctx, t);
+
+	damon_apply_three_regions(&damon_user_ctx, t, three_regions);
+
+	for (i = 0; i < nr_expected / 2; i++) {
+		r = damon_nth_region_of(t, i);
+		KUNIT_EXPECT_EQ(test, r->vm_start, expected[i * 2]);
+		KUNIT_EXPECT_EQ(test, r->vm_end, expected[i * 2 + 1]);
+	}
+
+	damon_cleanup_global_state();
+}
+
+/*
+ * This function tests the most common case, where the three big regions are
+ * only slightly changed.  Target regions should adjust their boundaries
+ * (10-20-30, 50-55, 70-80, 90-100) to fit the new big regions, or be removed
+ * (57-59) if they are now out of the three regions.
+ */
+static void damon_test_apply_three_regions1(struct kunit *test)
+{
+	/* 10-20-30, 50-55-57-59, 70-80-90-100 */
+	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
+				70, 80, 80, 90, 90, 100};
+	/* 5-27, 45-55, 73-104 */
+	struct region new_three_regions[3] = {
+		(struct region){.start = 5, .end = 27},
+		(struct region){.start = 45, .end = 55},
+		(struct region){.start = 73, .end = 104} };
+	/* 5-20-27, 45-55, 73-80-90-104 */
+	unsigned long expected[] = {5, 20, 20, 27, 45, 55,
+				73, 80, 80, 90, 90, 104};
+
+	damon_do_test_apply_three_regions(test, regions, ARRAY_SIZE(regions),
+			new_three_regions, expected, ARRAY_SIZE(expected));
+}
+
+/*
+ * Test a slightly bigger change.  Similar to above, but the second big region
+ * now requires two target regions (50-55, 57-59) to be removed.
+ */
+static void damon_test_apply_three_regions2(struct kunit *test)
+{
+	/* 10-20-30, 50-55-57-59, 70-80-90-100 */
+	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
+				70, 80, 80, 90, 90, 100};
+	/* 5-27, 56-57, 65-104 */
+	struct region new_three_regions[3] = {
+		(struct region){.start = 5, .end = 27},
+		(struct region){.start = 56, .end = 57},
+		(struct region){.start = 65, .end = 104} };
+	/* 5-20-27, 56-57, 65-80-90-104 */
+	unsigned long expected[] = {5, 20, 20, 27, 56, 57,
+				65, 80, 80, 90, 90, 104};
+
+	damon_do_test_apply_three_regions(test, regions, ARRAY_SIZE(regions),
+			new_three_regions, expected, ARRAY_SIZE(expected));
+}
+
+/*
+ * Test a big change.  The second big region has been totally freed and mapped
+ * to a different area (50-59 -> 61-63).  The target regions which were in the
+ * old second big region (50-55-57-59) should be removed, and a new target
+ * region covering the new second big region (61-63) should be created.
+ */
+static void damon_test_apply_three_regions3(struct kunit *test)
+{
+	/* 10-20-30, 50-55-57-59, 70-80-90-100 */
+	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
+				70, 80, 80, 90, 90, 100};
+	/* 5-27, 61-63, 65-104 */
+	struct region new_three_regions[3] = {
+		(struct region){.start = 5, .end = 27},
+		(struct region){.start = 61, .end = 63},
+		(struct region){.start = 65, .end = 104} };
+	/* 5-20-27, 61-63, 65-80-90-104 */
+	unsigned long expected[] = {5, 20, 20, 27, 61, 63,
+				65, 80, 80, 90, 90, 104};
+
+	damon_do_test_apply_three_regions(test, regions, ARRAY_SIZE(regions),
+			new_three_regions, expected, ARRAY_SIZE(expected));
+}
+
+/*
+ * Test another big change.  Both the second and third big regions (50-59
+ * and 70-100) have been totally freed and mapped to different areas (30-32
+ * and 65-68).  The target regions which were in the old second and third big
+ * regions should now be removed, and new target regions covering the new
+ * second and third big regions should be created.
+ */
+static void damon_test_apply_three_regions4(struct kunit *test)
+{
+	/* 10-20-30, 50-55-57-59, 70-80-90-100 */
+	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
+				70, 80, 80, 90, 90, 100};
+	/* 5-7, 30-32, 65-68 */
+	struct region new_three_regions[3] = {
+		(struct region){.start = 5, .end = 7},
+		(struct region){.start = 30, .end = 32},
+		(struct region){.start = 65, .end = 68} };
+	/* expect 5-7, 30-32, 65-68 */
+	unsigned long expected[] = {5, 7, 30, 32, 65, 68};
+
+	damon_do_test_apply_three_regions(test, regions, ARRAY_SIZE(regions),
+			new_three_regions, expected, ARRAY_SIZE(expected));
+}
+
+static void damon_test_split_evenly(struct kunit *test)
+{
+	struct damon_ctx *c = &damon_user_ctx;
+	struct damon_task *t;
+	struct damon_region *r;
+	unsigned long i;
+
+	KUNIT_EXPECT_EQ(test, damon_split_region_evenly(c, NULL, 5), -EINVAL);
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 0, 100);
+	KUNIT_EXPECT_EQ(test, damon_split_region_evenly(c, r, 0), -EINVAL);
+
+	damon_add_region_tail(r, t);
+	KUNIT_EXPECT_EQ(test, damon_split_region_evenly(c, r, 10), 0);
+	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 10u);
+
+	i = 0;
+	damon_for_each_region(r, t) {
+		KUNIT_EXPECT_EQ(test, r->vm_start, i++ * 10);
+		KUNIT_EXPECT_EQ(test, r->vm_end, i * 10);
+	}
+	damon_free_task(t);
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 5, 59);
+	damon_add_region_tail(r, t);
+	KUNIT_EXPECT_EQ(test, damon_split_region_evenly(c, r, 5), 0);
+	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 5u);
+
+	i = 0;
+	damon_for_each_region(r, t) {
+		if (i == 4)
+			break;
+		KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i++);
+		KUNIT_EXPECT_EQ(test, r->vm_end, 5 + 10 * i);
+	}
+	KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i);
+	KUNIT_EXPECT_EQ(test, r->vm_end, 59ul);
+	damon_free_task(t);
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 5, 6);
+	damon_add_region_tail(r, t);
+	KUNIT_EXPECT_EQ(test, damon_split_region_evenly(c, r, 2), -EINVAL);
+	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 1u);
+
+	damon_for_each_region(r, t) {
+		KUNIT_EXPECT_EQ(test, r->vm_start, 5ul);
+		KUNIT_EXPECT_EQ(test, r->vm_end, 6ul);
+	}
+	damon_free_task(t);
+}
+
+static void damon_test_split_at(struct kunit *test)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 0, 100);
+	damon_add_region_tail(r, t);
+	damon_split_region_at(&damon_user_ctx, r, 25);
+	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->vm_end, 25ul);
+
+	r = damon_next_region(r);
+	KUNIT_EXPECT_EQ(test, r->vm_start, 25ul);
+	KUNIT_EXPECT_EQ(test, r->vm_end, 100ul);
+
+	damon_free_task(t);
+}
+
+static void damon_test_merge_two(struct kunit *test)
+{
+	struct damon_task *t;
+	struct damon_region *r, *r2, *r3;
+	int i;
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 0, 100);
+	r->nr_accesses = 10;
+	damon_add_region_tail(r, t);
+	r2 = damon_new_region(&damon_user_ctx, 100, 300);
+	r2->nr_accesses = 20;
+	damon_add_region_tail(r2, t);
+
+	damon_merge_two_regions(r, r2);
+	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->vm_end, 300ul);
+	KUNIT_EXPECT_EQ(test, r->nr_accesses, 16u);
+
+	i = 0;
+	damon_for_each_region(r3, t) {
+		KUNIT_EXPECT_PTR_EQ(test, r, r3);
+		i++;
+	}
+	KUNIT_EXPECT_EQ(test, i, 1);
+
+	damon_free_task(t);
+}
+
+static void damon_test_merge_regions_of(struct kunit *test)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+	unsigned long sa[] = {0, 100, 114, 122, 130, 156, 170, 184};
+	unsigned long ea[] = {100, 112, 122, 130, 156, 170, 184, 230};
+	unsigned int nrs[] = {0, 0, 10, 10, 20, 30, 1, 2};
+
+	unsigned long saddrs[] = {0, 114, 130, 156, 170};
+	unsigned long eaddrs[] = {112, 130, 156, 170, 230};
+	int i;
+
+	t = damon_new_task(42);
+	for (i = 0; i < ARRAY_SIZE(sa); i++) {
+		r = damon_new_region(&damon_user_ctx, sa[i], ea[i]);
+		r->nr_accesses = nrs[i];
+		damon_add_region_tail(r, t);
+	}
+
+	damon_merge_regions_of(t, 9);
+	/* 0-112, 114-130, 130-156, 156-170, 170-230 */
+	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 5u);
+	for (i = 0; i < 5; i++) {
+		r = damon_nth_region_of(t, i);
+		KUNIT_EXPECT_EQ(test, r->vm_start, saddrs[i]);
+		KUNIT_EXPECT_EQ(test, r->vm_end, eaddrs[i]);
+	}
+	damon_free_task(t);
+}
+
+static void damon_test_split_regions_of(struct kunit *test)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	t = damon_new_task(42);
+	r = damon_new_region(&damon_user_ctx, 0, 22);
+	damon_add_region_tail(r, t);
+	damon_split_regions_of(&damon_user_ctx, t);
+	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 2u);
+	damon_free_task(t);
+}
+
+static struct kunit_case damon_test_cases[] = {
+	KUNIT_CASE(damon_test_str_to_pids),
+	KUNIT_CASE(damon_test_tasks),
+	KUNIT_CASE(damon_test_regions),
+	KUNIT_CASE(damon_test_set_pids),
+	KUNIT_CASE(damon_test_three_regions_in_vmas),
+	KUNIT_CASE(damon_test_aggregate),
+	KUNIT_CASE(damon_test_write_rbuf),
+	KUNIT_CASE(damon_test_apply_three_regions1),
+	KUNIT_CASE(damon_test_apply_three_regions2),
+	KUNIT_CASE(damon_test_apply_three_regions3),
+	KUNIT_CASE(damon_test_apply_three_regions4),
+	KUNIT_CASE(damon_test_split_evenly),
+	KUNIT_CASE(damon_test_split_at),
+	KUNIT_CASE(damon_test_merge_two),
+	KUNIT_CASE(damon_test_merge_regions_of),
+	KUNIT_CASE(damon_test_split_regions_of),
+	{},
+};
+
+static struct kunit_suite damon_test_suite = {
+	.name = "damon",
+	.test_cases = damon_test_cases,
+};
+kunit_test_suite(damon_test_suite);
+
+#endif /* _DAMON_TEST_H */
+
+#endif	/* CONFIG_DAMON_KUNIT_TEST */
diff --git a/mm/damon.c b/mm/damon.c
index 8faf3879f99e..ff150ae7532a 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -1423,3 +1423,5 @@ module_exit(damon_exit);
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("SeongJae Park <sjpark@amazon.de>");
 MODULE_DESCRIPTION("DAMON: Data Access MONitor");
+
+#include "damon-test.h"
-- 
2.17.1
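The three-regions conversion exercised by `damon_test_three_regions_in_vmas()` above can be modeled outside the kernel with a short Python sketch. This is a hypothetical reimplementation based only on the description in the test's comment (cover all mappings, then cut out the two biggest unmapped gaps), not the kernel code itself:

```python
# Model of DAMON's "three regions" reduction: cover all mappings with
# three regions that exclude the two biggest unmapped gaps.  Sketch
# based on the damon-test.h comments, not the kernel implementation.

def three_regions(vmas):
    """vmas: address-sorted list of (start, end) mapped areas."""
    # Collect the gaps between adjacent mappings.
    gaps = []
    for (s1, e1), (s2, e2) in zip(vmas, vmas[1:]):
        if s2 > e1:
            gaps.append((e1, s2))
    # Keep the two biggest gaps, then order them by address.
    gaps.sort(key=lambda g: g[1] - g[0], reverse=True)
    big_two = sorted(gaps[:2])
    # The three regions span everything except those two gaps.
    start, end = vmas[0][0], vmas[-1][1]
    return [(start, big_two[0][0]),
            (big_two[0][1], big_two[1][0]),
            (big_two[1][1], end)]

# 10-20-25, 200-210-220, 300-305, 307-330, as in the test above:
vmas = [(10, 20), (20, 25), (200, 210), (210, 220), (300, 305), (307, 330)]
print(three_regions(vmas))  # [(10, 25), (200, 220), (300, 330)]
```

With the test's example mappings, the gaps are 25-200, 220-300, and 305-307; dropping the two biggest yields exactly the 10-25, 200-220, 300-330 regions the KUnit test expects.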




* [PATCH v6 13/14] mm/damon: Add user selftests
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (11 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 12/14] mm/damon: Add kunit tests SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-02-24 12:30 ` [PATCH v6 14/14] MAINTAINERS: Update for DAMON SeongJae Park
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit adds simple user space tests for DAMON.  The tests use the
kselftest framework.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 tools/testing/selftests/damon/Makefile        |   7 +
 .../selftests/damon/_chk_dependency.sh        |  28 ++++
 tools/testing/selftests/damon/_chk_record.py  |  89 +++++++++++
 .../testing/selftests/damon/debugfs_attrs.sh  | 139 ++++++++++++++++++
 .../testing/selftests/damon/debugfs_record.sh |  50 +++++++
 5 files changed, 313 insertions(+)
 create mode 100644 tools/testing/selftests/damon/Makefile
 create mode 100644 tools/testing/selftests/damon/_chk_dependency.sh
 create mode 100644 tools/testing/selftests/damon/_chk_record.py
 create mode 100755 tools/testing/selftests/damon/debugfs_attrs.sh
 create mode 100755 tools/testing/selftests/damon/debugfs_record.sh

diff --git a/tools/testing/selftests/damon/Makefile b/tools/testing/selftests/damon/Makefile
new file mode 100644
index 000000000000..cfd5393a4639
--- /dev/null
+++ b/tools/testing/selftests/damon/Makefile
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for damon selftests
+
+TEST_FILES = _chk_dependency.sh _chk_record.py
+TEST_PROGS = debugfs_attrs.sh debugfs_record.sh
+
+include ../lib.mk
diff --git a/tools/testing/selftests/damon/_chk_dependency.sh b/tools/testing/selftests/damon/_chk_dependency.sh
new file mode 100644
index 000000000000..814dcadd5e96
--- /dev/null
+++ b/tools/testing/selftests/damon/_chk_dependency.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+DBGFS=/sys/kernel/debug/damon
+
+if [ $EUID -ne 0 ];
+then
+	echo "Run as root"
+	exit $ksft_skip
+fi
+
+if [ ! -d $DBGFS ]
+then
+	echo "$DBGFS not found"
+	exit $ksft_skip
+fi
+
+for f in attrs record pids monitor_on
+do
+	if [ ! -f "$DBGFS/$f" ]
+	then
+		echo "$f not found"
+		exit 1
+	fi
+done
diff --git a/tools/testing/selftests/damon/_chk_record.py b/tools/testing/selftests/damon/_chk_record.py
new file mode 100644
index 000000000000..ef55f478c2af
--- /dev/null
+++ b/tools/testing/selftests/damon/_chk_record.py
@@ -0,0 +1,89 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"Check whether the DAMON record file is valid"
+
+import argparse
+import struct
+import sys
+
+def err_percent(val, expected):
+    return abs(val - expected) / expected * 100
+
+def chk_task_info(f):
+    pid = struct.unpack('L', f.read(8))[0]
+    nr_regions = struct.unpack('I', f.read(4))[0]
+
+    if nr_regions > max_nr_regions:
+        print('too many regions: %d > %d' % (nr_regions, max_nr_regions))
+        exit(1)
+
+    nr_gaps = 0
+    eaddr = 0
+    for r in range(nr_regions):
+        saddr = struct.unpack('L', f.read(8))[0]
+        if eaddr and saddr != eaddr:
+            nr_gaps += 1
+        eaddr = struct.unpack('L', f.read(8))[0]
+        nr_accesses = struct.unpack('I', f.read(4))[0]
+
+        if saddr >= eaddr:
+            print('wrong region [%d,%d)' % (saddr, eaddr))
+            exit(1)
+
+        max_nr_accesses = aint / sint
+        if nr_accesses > max_nr_accesses:
+            if err_percent(nr_accesses, max_nr_accesses) > 15:
+                print('too high nr_accesses: expected at most %d but %d' %
+                        (max_nr_accesses, nr_accesses))
+                exit(1)
+    if nr_gaps != 2:
+        print('number of gaps is not two but %d' % nr_gaps)
+        exit(1)
+
+def parse_time_us(bindat):
+    sec = struct.unpack('l', bindat[0:8])[0]
+    nsec = struct.unpack('l', bindat[8:16])[0]
+    return (sec * 1000000000 + nsec) / 1000
+
+def main():
+    global sint
+    global aint
+    global min_nr
+    global max_nr_regions
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument('file', metavar='<file>',
+            help='path to the record file')
+    parser.add_argument('--attrs', metavar='<attrs>',
+            default='5000 100000 1000000 10 1000',
+            help='content of debugfs attrs file')
+    args = parser.parse_args()
+    file_path = args.file
+    attrs = [int(x) for x in args.attrs.split()]
+    sint, aint, rint, min_nr, max_nr_regions = attrs
+
+    with open(file_path, 'rb') as f:
+        last_aggr_time = None
+        while True:
+            timebin = f.read(16)
+            if len(timebin) != 16:
+                break
+
+            now = parse_time_us(timebin)
+            if not last_aggr_time:
+                last_aggr_time = now
+            else:
+                error = err_percent(now - last_aggr_time, aint)
+                if error > 15:
+                    print('wrong aggr interval: expected %d, but %d' %
+                            (aint, now - last_aggr_time))
+                    exit(1)
+                last_aggr_time = now
+
+            nr_tasks = struct.unpack('I', f.read(4))[0]
+            for t in range(nr_tasks):
+                chk_task_info(f)
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/testing/selftests/damon/debugfs_attrs.sh b/tools/testing/selftests/damon/debugfs_attrs.sh
new file mode 100755
index 000000000000..d5188b0f71b1
--- /dev/null
+++ b/tools/testing/selftests/damon/debugfs_attrs.sh
@@ -0,0 +1,139 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./_chk_dependency.sh
+
+# Test attrs file
+file="$DBGFS/attrs"
+
+ORIG_CONTENT=$(cat $file)
+
+echo 1 2 3 4 5 > $file
+if [ $? -ne 0 ]
+then
+	echo "$file write failed"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo 1 2 3 4 > $file
+if [ $? -eq 0 ]
+then
+	echo "$file write success (should failed)"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+CONTENT=$(cat $file)
+if [ "$CONTENT" != "1 2 3 4 5" ]
+then
+	echo "$file not written"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo $ORIG_CONTENT > $file
+
+# Test record file
+file="$DBGFS/record"
+
+ORIG_CONTENT=$(cat $file)
+
+echo "4242 foo.bar" > $file
+if [ $? -ne 0 ]
+then
+	echo "$file writing sane input failed"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo abc 2 3 > $file
+if [ $? -eq 0 ]
+then
+	echo "$file writing insane input 1 success (should failed)"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo 123 > $file
+if [ $? -eq 0 ]
+then
+	echo "$file writing insane input 2 success (should failed)"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+CONTENT=$(cat $file)
+if [ "$CONTENT" != "4242 foo.bar" ]
+then
+	echo "$file not written"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo "0 null" > $file
+if [ $? -ne 0 ]
+then
+	echo "$file disabling write fail"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+CONTENT=$(cat $file)
+if [ "$CONTENT" != "0 null" ]
+then
+	echo "$file not disabled"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo "4242 foo.bar" > $file
+if [ $? -ne 0 ]
+then
+	echo "$file writing sane data again fail"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo $ORIG_CONTENT > $file
+
+# Test pids file
+file="$DBGFS/pids"
+
+ORIG_CONTENT=$(cat $file)
+
+echo "1 2 3 4" > $file
+if [ $? -ne 0 ]
+then
+	echo "$file write fail"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo "1 2 abc 4" > $file
+if [ $? -ne 0 ]
+then
+	echo "$file write fail"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo abc 2 3 > $file
+if [ $? -eq 0 ]
+then
+	echo "$file write success (should failed)"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+CONTENT=$(cat $file)
+if [ "$CONTENT" != "1 2" ]
+then
+	echo "$file not written"
+	echo $ORIG_CONTENT > $file
+	exit 1
+fi
+
+echo $ORIG_CONTENT > $file
+
+echo "PASS"
diff --git a/tools/testing/selftests/damon/debugfs_record.sh b/tools/testing/selftests/damon/debugfs_record.sh
new file mode 100755
index 000000000000..fa9e07eea258
--- /dev/null
+++ b/tools/testing/selftests/damon/debugfs_record.sh
@@ -0,0 +1,50 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./_chk_dependency.sh
+
+restore_attrs()
+{
+	echo $ORIG_ATTRS > $DBGFS/attrs
+	echo $ORIG_PIDS > $DBGFS/pids
+	echo $ORIG_RECORD > $DBGFS/record
+}
+
+ORIG_ATTRS=$(cat $DBGFS/attrs)
+ORIG_PIDS=$(cat $DBGFS/pids)
+ORIG_RECORD=$(cat $DBGFS/record)
+
+rfile=$(pwd)/damon.data
+
+rm -f $rfile
+ATTRS="5000 100000 1000000 10 1000"
+echo $ATTRS > $DBGFS/attrs
+echo 4096 $rfile > $DBGFS/record
+sleep 5 &
+echo $(pidof sleep) > $DBGFS/pids
+echo on > $DBGFS/monitor_on
+sleep 0.5
+killall sleep
+echo off > $DBGFS/monitor_on
+
+sync
+
+if [ ! -f $rfile ]
+then
+	echo "record file not made"
+	restore_attrs
+
+	exit 1
+fi
+
+python3 ./_chk_record.py $rfile --attrs "$ATTRS"
+if [ $? -ne 0 ]
+then
+	echo "record file is wrong"
+	restore_attrs
+	exit 1
+fi
+
+rm -f $rfile
+restore_attrs
+echo "PASS"
-- 
2.17.1
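The binary record layout that `_chk_record.py` above parses — a 16-byte timestamp, a 4-byte task count, then per task an 8-byte pid, a 4-byte region count, and 8+8+4 bytes per region — can be illustrated with a small pack/unpack round-trip. The field sizes here are inferred from the script and assume a 64-bit Linux host (native `struct` sizes); this is a sketch, not a normative format definition:

```python
import struct

# Round-trip one DAMON record snapshot in the layout that _chk_record.py
# expects.  Field sizes assume 64-bit native 'l'/'L'/'I' (x86_64 Linux).

def pack_snapshot(sec, nsec, tasks):
    """tasks: list of (pid, [(saddr, eaddr, nr_accesses), ...]) tuples."""
    out = struct.pack('l', sec) + struct.pack('l', nsec)  # 16-byte time
    out += struct.pack('I', len(tasks))                   # nr_tasks
    for pid, regions in tasks:
        out += struct.pack('L', pid) + struct.pack('I', len(regions))
        for saddr, eaddr, nr_accesses in regions:
            out += (struct.pack('L', saddr) + struct.pack('L', eaddr) +
                    struct.pack('I', nr_accesses))
    return out

def unpack_snapshot(buf):
    pos = 0
    def take(fmt, size):
        nonlocal pos
        val = struct.unpack(fmt, buf[pos:pos + size])[0]
        pos += size
        return val
    sec, nsec = take('l', 8), take('l', 8)
    tasks = []
    for _ in range(take('I', 4)):
        pid = take('L', 8)
        regions = [(take('L', 8), take('L', 8), take('I', 4))
                   for _ in range(take('I', 4))]
        tasks.append((pid, regions))
    return sec, nsec, tasks

buf = pack_snapshot(12, 500, [(1348, [(10, 25, 3)])])
print(unpack_snapshot(buf))  # (12, 500, [(1348, [(10, 25, 3)])])
```

Note that this matches the size arithmetic in the `damon_test_aggregate()` KUnit test of the previous patch, where one region record is `sizeof(vm_start) + sizeof(vm_end) + sizeof(nr_accesses)` and each task record prepends a pid and a region count.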




* [PATCH v6 14/14] MAINTAINERS: Update for DAMON
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (12 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 13/14] mm/damon: Add user selftests SeongJae Park
@ 2020-02-24 12:30 ` SeongJae Park
  2020-03-02 11:35 ` [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
  2020-03-10 17:21 ` Shakeel Butt
  15 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-02-24 12:30 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit updates the MAINTAINERS file for DAMON related files.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 56765f542244..422c86f64cdd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4611,6 +4611,18 @@ F:	net/ax25/ax25_out.c
 F:	net/ax25/ax25_timer.c
 F:	net/ax25/sysctl_net_ax25.c
 
+DATA ACCESS MONITOR
+M:	SeongJae Park <sjpark@amazon.de>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	Documentation/admin-guide/mm/data_access_monitor.rst
+F:	include/linux/damon.h
+F:	include/trace/events/damon.h
+F:	mm/damon-test.h
+F:	mm/damon.c
+F:	tools/damon/*
+F:	tools/testing/selftests/damon/*
+
 DAVICOM FAST ETHERNET (DMFE) NETWORK DRIVER
 L:	netdev@vger.kernel.org
 S:	Orphan
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (13 preceding siblings ...)
  2020-02-24 12:30 ` [PATCH v6 14/14] MAINTAINERS: Update for DAMON SeongJae Park
@ 2020-03-02 11:35 ` SeongJae Park
  2020-03-09 10:23   ` SeongJae Park
  2020-03-10 17:21 ` Shakeel Butt
  15 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-02 11:35 UTC (permalink / raw)
  To: akpm, SeongJae Park
  Cc: SeongJae Park, aarcange, yang.shi, acme, alexander.shishkin,
	amit, brendan.d.gregg, brendanhiggins, cai, colin.king, corbet,
	dwmw, jolsa, kirill, mark.rutland, mgorman, minchan, mingo,
	namhyung, peterz, rdunlap, rientjes, rostedt, shuah, sj38.park,
	vbabka, vdavydov.dev, linux-mm, linux-doc, linux-kernel

Hello,

On Mon, 24 Feb 2020 13:30:33 +0100 SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> Introduction
> ============
> 
> Memory management decisions can be improved if finer data access information is
> available.  However, because such finer information usually comes with higher
> overhead, most systems including Linux forgo the potential improvement and
> rely on only coarse information or some light-weight heuristics.  The
> pseudo-LRU and the aggressive THP promotions are such examples.
> 
> A number of experimental data access pattern aware memory management
> optimizations (refer to 'Appendix A' for more details) say the sacrifices are
> huge.  However, none of those has been successfully adopted into the Linux
> kernel, mainly due to the absence of a scalable and efficient data access
> monitoring mechanism.  Refer to 'Appendix B' to see the limitations of
> existing memory monitoring mechanisms.
> 
> DAMON is a data access monitoring subsystem for the problem.  It is 1) accurate
> enough to be used for the DRAM level memory management (a straightforward
> DAMON-based optimization achieved up to 2.55x speedup), 2) light-weight enough
> to be applied online (compared to a straightforward access monitoring scheme,
> DAMON is up to 94,242.42x lighter) and 3) keeps a predefined upper-bound
> overhead regardless of the size of target workloads (thus scalable).  Refer to
> 'Appendix C' if you are interested in how this is possible.
> 
> DAMON has mainly been designed for the kernel's memory management mechanisms.
> However, because it is implemented as a standalone kernel module and provides
> several interfaces, it can be used by a wide range of users including kernel
> space programs, user space programs, programmers, and administrators.  DAMON
> currently supports monitoring only, but it will also provide simple and
> convenient data access pattern aware memory management by itself.  Refer to
> 'Appendix D' for more detailed expected usages of DAMON.

I have been posting this patchset once per week, but I am skipping this week
because there were no comments last week and therefore no changes were made to
the patchset.

I think I have answered all previous comments and fixed all previously found
bugs.  May I ask for some more comments or reviews?  If I missed something or
am doing something wrong, please let me know.


Thanks,
SeongJae Park

[...]


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-02 11:35 ` [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
@ 2020-03-09 10:23   ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-09 10:23 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, riel, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel, SeongJae Park

On Mon, 2 Mar 2020 12:35:12 +0100 SeongJae Park <sjpark@amazon.com> wrote:

> Hello,
> 
> On Mon, 24 Feb 2020 13:30:33 +0100 SeongJae Park <sjpark@amazon.com> wrote:
> 
> > [...]
> 
> I have been posting this patchset once per week, but I am skipping this week
> because there were no comments last week and therefore no changes were made
> to the patchset.
> 
> I think I have answered all previous comments and fixed all previously found
> bugs.  May I ask for some more comments or reviews?  If I missed something or
> am doing something wrong, please let me know.

There was no review/comment last week, and therefore I made no change to the
patchset.  Instead, I ran more evaluation tests to prove the concepts in a
more formal way.  I am sharing the results with you.

I hope these evaluation results attract more REVIEWS/COMMENTS than my patchsets ;)


Thanks,
SeongJae Park

================================== >8 =========================================


TL;DR
-----

DAMON is lightweight.  It makes target workloads only 0.76% slower and
consumes -0.08% more system memory (i.e., slightly less than the baseline).

DAMON is accurate and useful for memory management optimizations.
An experimental DAMON-based operation scheme for THP removes 83.66% of THP
memory overheads while preserving 40.67% of THP speedup.
Another experimental DAMON-based 'proactive reclamation' implementation
reduced system memory usage by 22.42% and the resident set by 88.86% while
incurring only 3.07% runtime overhead in the best case.

NOTE that the experimental THP optimization and proactive reclamation schemes
are not for production; they are proofs of concept only.


Setup
-----

On my personal QEMU/KVM based virtual machine on an Intel i7 host machine
running Ubuntu 18.04, I measure runtime and consumed system memory while
running various realistic workloads with several configurations.  I use 13 and
12 workloads from the PARSEC3[3] and SPLASH-2X[4] benchmark suites,
respectively.  I use a set of wrapper scripts[5] for setup and running of the
workloads.  On top of this patchset, I also applied the DAMON-based operation
schemes patchset[6] for this evaluation.

Measurement
~~~~~~~~~~~

For the measurement of the amount of consumed memory in system global scope, I
drop caches before starting each of the workloads and monitor 'MemFree' in the
'/proc/meminfo' file.  To make results more stable, I repeat the runs 5 times
and average the results.  You can get the stdev, min, and max of the numbers
among the repeated runs in the appendix below.
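
The aggregation described above can be sketched as follows (a hypothetical
sketch with made-up numbers; the actual measurement scripts are part of the
wrapper scripts[5]):

```python
import statistics

# Hypothetical 'memused' results (KiB) from 5 repeated runs of one workload.
runs = [1822704, 1824100, 1821900, 1823500, 1822300]

avg = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation across the repeats
lo, hi = min(runs), max(runs)

print(f"avg={avg:.1f} stdev={stdev:.1f} min={lo} max={hi}")
```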

Configurations
~~~~~~~~~~~~~~

The configurations I use are as below.

    orig: Linux v5.5 with 'madvise' THP policy
    rec: 'orig' plus DAMON running with record feature
    thp: same with 'orig', but use 'always' THP policy
    ethp: 'orig' plus a DAMON operation scheme[6], 'efficient THP'
    prcl: 'orig' plus a DAMON operation scheme, 'proactive reclaim[7]'

I use 'rec' for measurement of DAMON overheads to target workloads and system
memory.  The remaining configs including 'thp', 'ethp', and 'prcl' are for
measurement of DAMON monitoring accuracy.

'ethp' and 'prcl' are simple DAMON-based operation schemes developed as proofs
of concept for DAMON.  'ethp' reduces the memory space waste of THP by using
DAMON to decide promotion and demotion of huge pages, while 'prcl' is similar
to the original proactive reclamation work[7].  Those are implemented as below:

    # format: <min/max size> <min/max frequency (0-100)> <min/max age> <action>
    # ethp: Use huge pages if a region >2MB shows >5% access rate, use regular
    # pages if a region >2MB shows <5% access rate for >1 second
    2M null    5 null    null null    hugepage
    2M null    null 5    1s null      nohugepage

    # prcl: If a region >4KB shows <5% access rate for >5 seconds, page out.
    4K null    null 5    5s null      pageout

Note that both 'ethp' and 'prcl' are designed based only on straightforward
intuition, because they exist only as proofs of concept and as checks of the
monitoring accuracy of DAMON.  In other words, they are not for production.
For production use, they should be tuned further.
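
For illustration only, a scheme line in the format described in the comment
above could be parsed roughly like this (a hypothetical sketch; the real
parser lives in the operation schemes patchset[6], and its handling of the
fields and unit suffixes may differ):

```python
def parse_scheme(line):
    """Parse '<min/max size> <min/max frequency> <min/max age> <action>'.

    'null' means no limit.  Sizes and ages may carry suffixes such as
    K/M/G or s; a real parser would convert them to numbers, but this
    sketch keeps them as strings.
    """
    fields = line.split()
    if len(fields) != 7:
        raise ValueError("expected 7 fields: 6 limits and an action")
    keys = ("min_sz", "max_sz", "min_freq", "max_freq", "min_age", "max_age")
    limits = {k: (None if v == "null" else v) for k, v in zip(keys, fields[:6])}
    return limits, fields[6]

limits, action = parse_scheme("2M null    5 null    null null    hugepage")
print(action)                                # hugepage
print(limits["min_sz"], limits["max_freq"])  # 2M None
```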


[1] "Redis latency problems troubleshooting", https://redis.io/topics/latency
[2] "Disable Transparent Huge Pages (THP)",
    https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/
[3] "The PARSEC Benchmark Suite", https://parsec.cs.princeton.edu/index.htm
[4] "SPLASH-2x", https://parsec.cs.princeton.edu/parsec3-doc.htm#splash2x
[5] "parsec3_on_ubuntu", https://github.com/sjp38/parsec3_on_ubuntu
[6] "[RFC v4 0/7] Implement Data Access Monitoring-based Memory Operation
    Schemes",
    https://lore.kernel.org/linux-mm/20200303121406.20954-1-sjpark@amazon.com/
[7] "Proactively reclaiming idle memory", https://lwn.net/Articles/787611/


Results
-------

The two tables below show the measurement results.  The runtimes are in
seconds while the memory usages are in KiB.  Each configuration except 'orig'
shows its overhead relative to 'orig' in percent within parentheses.

runtime                 orig     rec      (overhead) thp      (overhead) ethp     (overhead) prcl     (overhead)
parsec3/blackscholes    106.586  107.160  (0.54)     106.535  (-0.05)    107.393  (0.76)     114.543  (7.47)    
parsec3/bodytrack       78.621   79.220   (0.76)     78.678   (0.07)     79.169   (0.70)     80.793   (2.76)    
parsec3/canneal         138.951  142.258  (2.38)     123.555  (-11.08)   133.588  (-3.86)    143.239  (3.09)    
parsec3/dedup           11.876   11.918   (0.35)     11.767   (-0.92)    11.957   (0.68)     13.235   (11.44)   
parsec3/facesim         207.761  208.159  (0.19)     204.735  (-1.46)    207.172  (-0.28)    208.663  (0.43)    
parsec3/ferret          190.694  192.004  (0.69)     190.345  (-0.18)    190.453  (-0.13)    192.081  (0.73)    
parsec3/fluidanimate    210.189  212.511  (1.10)     208.695  (-0.71)    210.843  (0.31)     213.379  (1.52)    
parsec3/freqmine        289.000  289.483  (0.17)     287.724  (-0.44)    289.761  (0.26)     297.878  (3.07)    
parsec3/raytrace        118.482  119.346  (0.73)     118.861  (0.32)     119.151  (0.56)     136.566  (15.26)   
parsec3/streamcluster   323.338  328.431  (1.58)     285.039  (-11.85)   296.830  (-8.20)    331.670  (2.58)    
parsec3/swaptions       155.853  156.826  (0.62)     154.089  (-1.13)    156.332  (0.31)     155.422  (-0.28)   
parsec3/vips            58.864   59.408   (0.92)     58.450   (-0.70)    58.976   (0.19)     61.068   (3.74)    
parsec3/x264            69.201   69.208   (0.01)     68.795   (-0.59)    71.501   (3.32)     71.766   (3.71)    
splash2x/barnes         81.140   80.869   (-0.33)    74.734   (-7.90)    79.859   (-1.58)    108.875  (34.18)   
splash2x/fft            33.442   33.579   (0.41)     22.949   (-31.38)   27.055   (-19.10)   40.261   (20.39)   
splash2x/lu_cb          85.064   85.441   (0.44)     84.688   (-0.44)    85.868   (0.95)     88.949   (4.57)    
splash2x/lu_ncb         92.606   93.615   (1.09)     90.484   (-2.29)    93.368   (0.82)     93.279   (0.73)    
splash2x/ocean_cp       44.672   44.826   (0.34)     43.024   (-3.69)    43.671   (-2.24)    45.889   (2.72)    
splash2x/ocean_ncp      81.360   81.434   (0.09)     51.157   (-37.12)   66.711   (-18.00)   91.611   (12.60)   
splash2x/radiosity      91.374   91.568   (0.21)     90.406   (-1.06)    91.609   (0.26)     103.790  (13.59)   
splash2x/radix          31.330   31.509   (0.57)     25.145   (-19.74)   26.296   (-16.07)   31.835   (1.61)    
splash2x/raytrace       84.715   85.274   (0.66)     82.034   (-3.16)    84.458   (-0.30)    84.967   (0.30)    
splash2x/volrend        86.625   87.844   (1.41)     86.206   (-0.48)    87.851   (1.42)     87.809   (1.37)    
splash2x/water_nsquared 231.661  233.817  (0.93)     221.024  (-4.59)    228.020  (-1.57)    236.306  (2.01)    
splash2x/water_spatial  89.101   89.616   (0.58)     88.845   (-0.29)    89.710   (0.68)     103.370  (16.01)   
total                   2992.490 3015.330 (0.76)     2857.950 (-4.50)    2937.610 (-1.83)    3137.260 (4.84)    


memused.avg             orig         rec          (overhead) thp          (overhead) ethp         (overhead) prcl         (overhead)
parsec3/blackscholes    1822704.400  1833697.600  (0.60)     1826160.400  (0.19)     1833316.800  (0.58)     1657871.000  (-9.04)   
parsec3/bodytrack       1417677.600  1434893.200  (1.21)     1420652.200  (0.21)     1431637.000  (0.98)     1433359.800  (1.11)    
parsec3/canneal         1044807.000  1056496.200  (1.12)     1037582.400  (-0.69)    1050845.200  (0.58)     1051668.200  (0.66)    
parsec3/dedup           2408896.200  2433019.000  (1.00)     2403343.200  (-0.23)    2421191.800  (0.51)     2461284.400  (2.17)    
parsec3/facesim         541808.200   554404.200   (2.32)     545591.600   (0.70)     553669.600   (2.19)     553910.600   (2.23)    
parsec3/ferret          319697.200   331642.400   (3.74)     320722.000   (0.32)     332126.000   (3.89)     330581.800   (3.40)    
parsec3/fluidanimate    573267.400   587376.200   (2.46)     574660.200   (0.24)     596108.600   (3.98)     538974.600   (-5.98)   
parsec3/freqmine        986872.400   998956.200   (1.22)     992037.800   (0.52)     989680.800   (0.28)     765626.800   (-22.42)  
parsec3/raytrace        1749641.800  1761473.200  (0.68)     1743617.800  (-0.34)    1753105.600  (0.20)     1580514.800  (-9.67)   
parsec3/streamcluster   125165.400   149479.600   (19.43)    122082.000   (-2.46)    140484.200   (12.24)    132027.000   (5.48)    
parsec3/swaptions       15515.400    29577.200    (90.63)    15692.000    (1.14)     26733.200    (72.30)    28423.000    (83.19)   
parsec3/vips            2954233.800  2970852.400  (0.56)     2954338.800  (0.00)     2959100.200  (0.16)     2951979.600  (-0.08)   
parsec3/x264            3174959.000  3191900.200  (0.53)     3192736.200  (0.56)     3201927.200  (0.85)     3194867.400  (0.63)    
splash2x/barnes         1215064.400  1209725.600  (-0.44)    1215945.600  (0.07)     1212294.600  (-0.23)    937605.800   (-22.83)  
splash2x/fft            9429331.600  9187727.600  (-2.56)    9290976.600  (-1.47)    9036430.800  (-4.17)    9409815.800  (-0.21)   
splash2x/lu_cb          512744.800   521964.600   (1.80)     521795.800   (1.77)     522445.600   (1.89)     346352.200   (-32.45)  
splash2x/lu_ncb         516623.000   523673.200   (1.36)     520129.200   (0.68)     522398.800   (1.12)     522246.200   (1.09)    
splash2x/ocean_cp       3325422.200  3287326.200  (-1.15)    3381646.400  (1.69)     3294803.400  (-0.92)    3287401.800  (-1.14)   
splash2x/ocean_ncp      3894128.600  3868638.400  (-0.65)    7065137.400  (81.43)    4844981.600  (24.42)    3811968.400  (-2.11)   
splash2x/radiosity      1471464.000  1470680.800  (-0.05)    1481054.600  (0.65)     1472332.200  (0.06)     521064.000   (-64.59)  
splash2x/radix          1698164.400  1707518.400  (0.55)     1385276.800  (-18.42)   1415885.000  (-16.62)   1717103.600  (1.12)    
splash2x/raytrace       45334.200    59478.400    (31.20)    52893.400    (16.67)    62366.000    (37.57)    53765.800    (18.60)   
splash2x/volrend        151118.400   167429.800   (10.79)    151600.000   (0.32)     163950.800   (8.49)     162873.800   (7.78)    
splash2x/water_nsquared 46839.000    61947.000    (32.26)    49173.600    (4.98)     58301.200    (24.47)    56678.400    (21.01)   
splash2x/water_spatial  666960.000   674851.600   (1.18)     668957.600   (0.30)     673287.400   (0.95)     463938.800   (-30.44)  
total                   40108199.000 40074800.000 (-0.08)    42933800.000 (7.04)     40569300.000 (1.15)     37972000.000 (-5.33)   


DAMON Overheads
~~~~~~~~~~~~~~~

In total, the DAMON recording feature incurs 0.76% runtime overhead (up to
2.38% in the worst case, with 'parsec3/canneal') and -0.08% memory space
overhead.

For convenient test runs of 'rec', I use a Python wrapper.  The wrapper
constantly consumes about 10-15MB of memory.  This becomes a high memory
overhead if the target workload has a small memory footprint.  In detail, 19%,
90%, 31%, 10%, and 32% overheads are shown for parsec3/streamcluster (125
MiB), parsec3/swaptions (15 MiB), splash2x/raytrace (45 MiB), splash2x/volrend
(151 MiB), and splash2x/water_nsquared (46 MiB), respectively.  Nonetheless,
the overheads come not from DAMON but from the wrapper, and thus should be
ignored.  This fake memory overhead continues in 'ethp' and 'prcl', as those
configurations also use the Python wrapper.
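
A quick check against the table supports this: for the smallest workload, the
absolute 'rec'-vs-'orig' memory gap lands within the wrapper's stated 10-15MB
footprint (numbers taken from the memused.avg table above):

```python
# parsec3/swaptions 'memused.avg' figures from the table above, in KiB.
orig_kib, rec_kib = 15515.4, 29577.2

gap_mib = (rec_kib - orig_kib) / 1024    # absolute 'rec' overhead in MiB
overhead_pct = (rec_kib / orig_kib - 1) * 100

print(round(gap_mib, 1))       # ~13.7, within the wrapper's 10-15MB footprint
print(round(overhead_pct, 2))  # 90.63, matching the table
```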


Efficient THP
~~~~~~~~~~~~~

The THP 'always' policy achieves a 4.50% speedup but incurs a 7.04% memory
overhead.  It achieves a 37.12% speedup in the best case, but an 81.43% memory
overhead in the worst case.  Interestingly, both the best and the worst cases
are with 'splash2x/ocean_ncp'.

The two-line implementation of the data access monitoring based THP version
('ethp') shows a 1.83% speedup and a 1.15% memory overhead.  In other words,
'ethp' removes 83.66% of the THP memory waste while preserving 40.67% of the
THP speedup in total.

In the case of 'splash2x/ocean_ncp', which is the best case for THP speedup
but the worst for its memory overhead, 'ethp' removes 70% of the THP memory
space overhead while preserving 48.49% of the THP speedup.
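
These reduction and preservation figures can be re-derived from the totals in
the two tables above; a quick sanity check:

```python
# Totals from the two tables above, in percent relative to 'orig'.
thp_mem, ethp_mem = 7.04, 1.15           # memused.avg overheads
thp_speedup, ethp_speedup = 4.50, 1.83   # runtime improvements (negated overheads)

waste_removed = (thp_mem - ethp_mem) / thp_mem * 100  # THP memory waste removed
speedup_kept = ethp_speedup / thp_speedup * 100       # THP speedup preserved

print(round(waste_removed, 2))  # 83.66
print(round(speedup_kept, 2))   # 40.67
```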


Proactive Reclamation
~~~~~~~~~~~~~~~~~~~~~

As in the original work, I use a 'zram' swap device for this configuration.

In total, the one-line implementation of proactive reclamation, 'prcl',
incurred a 4.84% runtime overhead while achieving a 5.33% reduction in system
memory usage.

Nonetheless, as the memory usage is calculated from 'MemFree' in
'/proc/meminfo', it includes the SwapCached pages.  As the swap-cached pages
can be easily evicted, I also measured the resident set size of the workloads:

rss.avg                 orig         prcl         (overhead)
parsec3/blackscholes    589633.600   329611.400   (-44.10)
parsec3/bodytrack       32217.600    21652.200    (-32.79)
parsec3/canneal         840411.600   838931.000   (-0.18)
parsec3/dedup           1223907.600  835473.600   (-31.74)
parsec3/facesim         311271.600   311070.200   (-0.06)
parsec3/ferret          99635.600    89290.800    (-10.38)
parsec3/fluidanimate    531760.000   484945.600   (-8.80)
parsec3/freqmine        552609.400   61583.600    (-88.86)
parsec3/raytrace        896446.600   317792.000   (-64.55)
parsec3/streamcluster   110793.600   108061.600   (-2.47)
parsec3/swaptions       5604.600     2694.400     (-51.93)
parsec3/vips            31779.600    28422.200    (-10.56)
parsec3/x264            81943.800    81874.600    (-0.08)
splash2x/barnes         1219389.600  619038.600   (-49.23)
splash2x/fft            9597789.600  7264542.200  (-24.31)
splash2x/lu_cb          510524.000   327813.600   (-35.79)
splash2x/lu_ncb         510131.200   510146.800   (0.00)
splash2x/ocean_cp       3406968.600  3341620.400  (-1.92)
splash2x/ocean_ncp      3919926.800  3670768.800  (-6.36)
splash2x/radiosity      1474387.800  254678.600   (-82.73)
splash2x/radix          1723283.200  1763916.000  (2.36)
splash2x/raytrace       23194.400    17454.000    (-24.75)
splash2x/volrend        43980.000    32524.600    (-26.05)
splash2x/water_nsquared 29327.200    23989.200    (-18.20)
splash2x/water_spatial  656323.200   381068.600   (-41.94)
total                   28423300.000 21719000.000 (-23.59)

In total, the resident sets were reduced by 23.59%.
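
That total follows directly from the table:

```python
# 'rss.avg' totals from the table above, in KiB.
orig_total, prcl_total = 28_423_300, 21_719_000

reduction = (orig_total - prcl_total) / orig_total * 100
print(round(reduction, 2))  # 23.59
```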

With parsec3/freqmine, 'prcl' reduced system memory usage by 22.42% and the
resident set by 88.86% while incurring only a 3.07% runtime overhead.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 01/14] mm: Introduce Data Access MONitor (DAMON)
  2020-02-24 12:30 ` [PATCH v6 01/14] mm: " SeongJae Park
@ 2020-03-10  8:54   ` Jonathan Cameron
  2020-03-10 11:50     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  8:54 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

Apologies if anyone gets these twice. I had an email server throttling
issue yesterday.

On Mon, 24 Feb 2020 13:30:34 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit introduces a kernel module named DAMON.  Note that this
> commit implements only the stub for the module load/unload, basic
> data structures, and simple manipulation functions of the structures, to
> keep the size of the commit small.  The core mechanisms of DAMON will be
> implemented one by one in following commits.

Interesting piece of work.  I'm reviewing this partly as an exercise in
understanding it, but I'll point out minor stuff on the basis I might
as well whilst I'm here. ;)  Note I review bottom up so some comments
won't make much sense read from the top.

> 
> Brief Introduction
> ==================

I'd keep this level of intro for the cover letter / docs.  It's not
particularly useful in the commit message in git.

> 
> Memory management decisions can be improved if finer data access
> information is available.  However, because such finer information
> usually comes with higher overhead, most systems including Linux
> forgo the potential improvement and rely on only coarse information
> or some light-weight heuristics.  The pseudo-LRU and the aggressive THP
> promotions are such examples.
> 
> A number of experimental data access pattern aware memory management
> optimizations say the sacrifices are huge.  However, none of those has been
> successfully adopted into the Linux kernel, mainly due to the absence of a
> scalable and efficient data access monitoring mechanism.
> 
> DAMON is a data access monitoring solution for the problem.  It is 1)
> accurate enough for the DRAM level memory management, 2) light-weight
> enough to be applied online, and 3) keeps predefined upper-bound
> overhead regardless of the size of target workloads (thus scalable).
> 
> DAMON is implemented as a standalone kernel module and provides several
> simple interfaces.  Owing to that, though it has mainly been designed for
> the kernel's memory management mechanisms, it can also be used by a wide
> range of user space programs and users.
> 
> Frequently Asked Questions
> ==========================
> 
> Q: Why not integrated with perf?
> A: From the perspective of perf like profilers, DAMON can be thought of
> as a data source in kernel, like tracepoints, pressure stall information
> (psi), or idle page tracking.  Thus, it can be easily integrated with
> those.  However, this patchset doesn't provide a fancy perf integration
> because current step of DAMON development is focused on its core logic
> only.  That said, DAMON already provides two interfaces for user space
> programs, which are based on debugfs and tracepoints, respectively.  Using
> the tracepoint interface, you can use DAMON with perf.  This patchset
> also provides the debugfs interface based user space tool for DAMON.  It
> can be used to record, visualize, and analyze data access pattern of
> target processes in a convenient way.
> 
> Q: Why a new module, instead of extending perf or other tools?
> A: First, DAMON aims to be used by other programs including the kernel.
> Therefore, having a dependency on specific tools like perf is not
> desirable.  Second, because it needs to be as lightweight as possible so
> that it can be used online, any unnecessary overhead such as kernel - user
> space context switching cost should be avoided.  These are the two biggest
> reasons why DAMON is implemented in kernel space.  The idle page tracking
> subsystem would be the kernel feature that seems most similar to DAMON.
> However, its own interface is not compatible with DAMON.  Also, its
> internal implementation has no common part to be reused by DAMON.
> 
> Q: Can 'perf mem' provide the data required for DAMON?
> A: On systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
> bits at a low level.  Other H/W or S/W features that can serve the purpose
> could also be used.  However, as explained in the above question, DAMON
> needs to be implemented in kernel space.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>
> ---
>  mm/Kconfig  |  12 +++
>  mm/Makefile |   1 +
>  mm/damon.c  | 224 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 237 insertions(+)
>  create mode 100644 mm/damon.c
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index ab80933be65f..387d469f40ec 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -739,4 +739,16 @@ config ARCH_HAS_HUGEPD
>  config MAPPING_DIRTY_HELPERS
>          bool
>  
> +config DAMON
> +	tristate "Data Access Monitor"
> +	depends on MMU
> +	default n

No need to specify a default of n.

> +	help
> +	  Provides data access monitoring.
> +
> +	  DAMON is a kernel module that allows users to monitor the actual
> +	  memory access pattern of specific user-space processes.  It aims to
> +	  be 1) accurate enough to be useful for performance-centric domains,
> +	  and 2) sufficiently light-weight so that it can be applied online.
> +
>  endmenu
> diff --git a/mm/Makefile b/mm/Makefile
> index 1937cc251883..2911b3832c90 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -108,3 +108,4 @@ obj-$(CONFIG_ZONE_DEVICE) += memremap.o
>  obj-$(CONFIG_HMM_MIRROR) += hmm.o
>  obj-$(CONFIG_MEMFD_CREATE) += memfd.o
>  obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
> +obj-$(CONFIG_DAMON) += damon.o
> diff --git a/mm/damon.c b/mm/damon.c
> new file mode 100644
> index 000000000000..aafdca35b7b8
> --- /dev/null
> +++ b/mm/damon.c
> @@ -0,0 +1,224 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Data Access Monitor
> + *
> + * Copyright 2019 Amazon.com, Inc. or its affiliates.  All rights reserved.
> + *
> + * Author: SeongJae Park <sjpark@amazon.de>
> + */
> +
> +#define pr_fmt(fmt) "damon: " fmt
> +
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +#include <linux/random.h>
> +#include <linux/slab.h>
> +
> +#define damon_get_task_struct(t) \
> +	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
> +
> +#define damon_next_region(r) \
> +	(container_of(r->list.next, struct damon_region, list))
> +
> +#define damon_prev_region(r) \
> +	(container_of(r->list.prev, struct damon_region, list))
> +
> +#define damon_for_each_region(r, t) \
> +	list_for_each_entry(r, &t->regions_list, list)
> +
> +#define damon_for_each_region_safe(r, next, t) \
> +	list_for_each_entry_safe(r, next, &t->regions_list, list)
> +
> +#define damon_for_each_task(ctx, t) \
> +	list_for_each_entry(t, &(ctx)->tasks_list, list)
> +
> +#define damon_for_each_task_safe(ctx, t, next) \
> +	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
> +
> +/* Represents a monitoring target region on the virtual address space */
> +struct damon_region {
> +	unsigned long vm_start;
> +	unsigned long vm_end;
> +	unsigned long sampling_addr;
> +	unsigned int nr_accesses;
> +	struct list_head list;
> +};
> +
> +/* Represents a monitoring target task */
> +struct damon_task {
> +	unsigned long pid;
> +	struct list_head regions_list;
> +	struct list_head list;
> +};
> +
> +struct damon_ctx {
> +	struct rnd_state rndseed;
> +
> +	struct list_head tasks_list;	/* 'damon_task' objects */
> +};
> +
> +/* Get a random number in [l, r) */
> +#define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
> +
> +/*
> + * Construct a damon_region struct
> + *
> + * Returns the pointer to the new struct if success, or NULL otherwise
> + */
> +static struct damon_region *damon_new_region(struct damon_ctx *ctx,
> +				unsigned long vm_start, unsigned long vm_end)
> +{
> +	struct damon_region *ret;

I'd give this a different variable name.  Expectation in kernel is often
that ret is simply a magic handle to be passed on.  Don't normally expect
to set elements of it.  I'd go long hand and call it region.

> +
> +	ret = kmalloc(sizeof(struct damon_region), GFP_KERNEL);

sizeof(*ret)

> +	if (!ret)
> +		return NULL;

blank line.

> +	ret->vm_start = vm_start;
> +	ret->vm_end = vm_end;
> +	ret->nr_accesses = 0;
> +	ret->sampling_addr = damon_rand(ctx, vm_start, vm_end);
> +	INIT_LIST_HEAD(&ret->list);
> +
> +	return ret;
> +}
> +
> +/*
> + * Add a region between two other regions
Interestingly even the list.h comments for __list_add call this
function "insert".   No idea why it isn't simply called that..

Perhaps damon_insert_region would be clearer and avoid need
for comment?

> + */
> +static inline void damon_add_region(struct damon_region *r,
> +		struct damon_region *prev, struct damon_region *next)
> +{
> +	__list_add(&r->list, &prev->list, &next->list);
> +}
> +
> +/*
> + * Append a region to a task's list of regions

I'd argue the naming is sufficient that the comment adds little.

> + */
> +static void damon_add_region_tail(struct damon_region *r, struct damon_task *t)
> +{
> +	list_add_tail(&r->list, &t->regions_list);
> +}
> +
> +/*
> + * Delete a region from its list

The list is an implementation detail. I'd not mention that in the comments.

> + */
> +static void damon_del_region(struct damon_region *r)
> +{
> +	list_del(&r->list);
> +}
> +
> +/*
> + * De-allocate a region

Obvious comment - see the rot-risk note below.

> + */
> +static void damon_free_region(struct damon_region *r)
> +{
> +	kfree(r);
> +}
> +
> +static void damon_destroy_region(struct damon_region *r)
> +{
> +	damon_del_region(r);
> +	damon_free_region(r);
> +}
> +
> +/*
> + * Construct a damon_task struct
> + *
> + * Returns the pointer to the new struct if success, or NULL otherwise
> + */
> +static struct damon_task *damon_new_task(unsigned long pid)
> +{
> +	struct damon_task *t;
> +
> +	t = kmalloc(sizeof(struct damon_task), GFP_KERNEL);

sizeof(*t) is probably less error prone if this code is maintained
in the long run.

> +	if (!t)
> +		return NULL;

blank line.

> +	t->pid = pid;
> +	INIT_LIST_HEAD(&t->regions_list);
> +
> +	return t;
> +}
> +
> +/* Returns n-th damon_region of the given task */
> +struct damon_region *damon_nth_region_of(struct damon_task *t, unsigned int n)
> +{
> +	struct damon_region *r;
> +	unsigned int i;
> +
> +	i = 0;
	unsigned int i = 0;

> +	damon_for_each_region(r, t) {
> +		if (i++ == n)
> +			return r;
> +	}

blank line helps readability a little.

> +	return NULL;
> +}
> +
> +static void damon_add_task_tail(struct damon_ctx *ctx, struct damon_task *t)

I'm curious, do we care that it's on the tail?  If not I'd look on that as an
implementation detail and just call this 

damon_add_task()

> +{
> +	list_add_tail(&t->list, &ctx->tasks_list);
> +}
> +
> +static void damon_del_task(struct damon_task *t)
> +{
> +	list_del(&t->list);
> +}
> +
> +static void damon_free_task(struct damon_task *t)
> +{
> +	struct damon_region *r, *next;
> +
> +	damon_for_each_region_safe(r, next, t)
> +		damon_free_region(r);
> +	kfree(t);
> +}
> +
> +static void damon_destroy_task(struct damon_task *t)
> +{
> +	damon_del_task(t);
> +	damon_free_task(t);
> +}
> +
> +/*
> + * Returns number of monitoring target tasks

As below, kind of obvious so just room for rot.

> + */
> +static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +	unsigned int ret = 0;
> +
> +	damon_for_each_task(ctx, t)
> +		ret++;
> +	return ret;
> +}
> +
> +/*
> + * Returns the number of target regions for a given target task

Always a trade-off between useful comments and the possibility of docs
rotting.  I'd certainly drop this comment.
The function name is self-explanatory.

> + */
> +static unsigned int nr_damon_regions(struct damon_task *t)
> +{
> +	struct damon_region *r;
> +	unsigned int ret = 0;
> +
> +	damon_for_each_region(r, t)
> +		ret++;

Blank line here would help readability a tiny bit.
Same in other places where we have something followed by a nice
simple return statement.

> +	return ret;
> +}
> +
> +static int __init damon_init(void)
> +{
> +	pr_info("init\n");

Drop these. They are just noise.

> +
> +	return 0;
> +}
> +
> +static void __exit damon_exit(void)
> +{
> +	pr_info("exit\n");
> +}
> +
> +module_init(damon_init);
> +module_exit(damon_exit);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("SeongJae Park <sjpark@amazon.de>");
> +MODULE_DESCRIPTION("DAMON: Data Access MONitor");




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-02-24 12:30 ` [PATCH v6 02/14] mm/damon: Implement region based sampling SeongJae Park
@ 2020-03-10  8:57   ` Jonathan Cameron
  2020-03-10 11:52     ` SeongJae Park
  2020-03-13 17:29   ` Jonathan Cameron
  1 sibling, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  8:57 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:35 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit implements DAMON's basic access check and region based
> sampling mechanisms.  On its own this change may seem to make little
> sense, because it is only a part of DAMON's logic.  The following two
> commits will make more sense.
> 
> This commit also exports `lookup_page_ext()` to GPL modules because
> DAMON uses the function but also supports the module build.

Do that as a separate patch before this one.  Makes it easy to spot.

> 
> Basic Access Check
> ------------------
> 
> DAMON basically reports what pages are how frequently accessed.  Note
> that the frequency is not an absolute number of accesses, but a relative
> frequency among the pages of the target workloads.
> 
> Users can control the resolution of the reports by setting two time
> intervals, ``sampling interval`` and ``aggregation interval``.  In
> detail, DAMON checks access to each page per ``sampling interval``,
> aggregates the results (counts the number of the accesses to each page),
> and reports the aggregated results per ``aggregation interval``.  For
> the access check of each page, DAMON uses the Accessed bits of PTEs.
> 
> This is thus similar to common periodic access checks based access
> tracking mechanisms, which overhead is increasing as the size of the
> target process grows.
> 
> Region Based Sampling
> ---------------------
> 
> To avoid the unbounded increase of the overhead, DAMON groups a number
> of adjacent pages that assumed to have same access frequencies into a
> region.  As long as the assumption (pages in a region have same access
> frequencies) is kept, only one page in the region is required to be
> checked.  Thus, for each ``sampling interval``, DAMON randomly picks one
> page in each region and clears its Accessed bit.  After one more
> ``sampling interval``, DAMON reads the Accessed bit of the page and
> increases the access frequency of the region if the bit has set
> meanwhile.  Therefore, the monitoring overhead is controllable by
> setting the number of regions.
> 
> Nonetheless, this scheme cannot preserve the quality of the output if
> the assumption is not kept.  Following commit will introduce how we can
> make the guarantee with best effort.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

Various things inline. In particularly can you make use of standard
kthread_stop infrastructure rather than rolling your own?
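
For reference, an untested sketch of that standard pattern (error handling
elided; note kthread_run() returns an ERR_PTR() on failure rather than NULL):

	static int kdamond_fn(void *data)
	{
		struct damon_ctx *ctx = data;

		while (!kthread_should_stop()) {
			/* sample, aggregate, sleep */
		}

		return 0;
	}

	/* to start: */
	ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");

	/* to stop: blocks until kdamond_fn() returns */
	kthread_stop(ctx->kdamond);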

> ---
>  mm/damon.c    | 509 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/page_ext.c |   1 +
>  2 files changed, 510 insertions(+)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index aafdca35b7b8..6bdeb84d89af 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -9,9 +9,14 @@
>  
>  #define pr_fmt(fmt) "damon: " fmt
>  
> +#include <linux/delay.h>
> +#include <linux/kthread.h>
>  #include <linux/mm.h>
>  #include <linux/module.h>
> +#include <linux/page_idle.h>
>  #include <linux/random.h>
> +#include <linux/sched/mm.h>
> +#include <linux/sched/task.h>
>  #include <linux/slab.h>
>  
>  #define damon_get_task_struct(t) \
> @@ -51,7 +56,24 @@ struct damon_task {
>  	struct list_head list;
>  };
>  
> +/*
> + * For each 'sample_interval', DAMON checks whether each region is accessed or
> + * not.  It aggregates and keeps the access information (number of accesses to
> + * each region) for each 'aggr_interval' time.
> + *
> + * All time intervals are in micro-seconds.
> + */
>  struct damon_ctx {
> +	unsigned long sample_interval;
> +	unsigned long aggr_interval;
> +	unsigned long min_nr_regions;
> +
> +	struct timespec64 last_aggregation;
> +
> +	struct task_struct *kdamond;
> +	bool kdamond_stop;
> +	spinlock_t kdamond_lock;
> +
>  	struct rnd_state rndseed;
>  
>  	struct list_head tasks_list;	/* 'damon_task' objects */
> @@ -204,6 +226,493 @@ static unsigned int nr_damon_regions(struct damon_task *t)
>  	return ret;
>  }
>  
> +/*
> + * Get the mm_struct of the given task
> + *
> + * Callser should put the mm_struct after use, unless it is NULL.

Caller 

> + *
> + * Returns the mm_struct of the task on success, NULL on failure
> + */
> +static struct mm_struct *damon_get_mm(struct damon_task *t)
> +{
> +	struct task_struct *task;
> +	struct mm_struct *mm;
> +
> +	task = damon_get_task_struct(t);
> +	if (!task)
> +		return NULL;
> +
> +	mm = get_task_mm(task);
> +	put_task_struct(task);
> +	return mm;
> +}
> +
> +/*
> + * Size-evenly split a region into 'nr_pieces' small regions
> + *
> + * Returns 0 on success, or negative error code otherwise.
> + */
> +static int damon_split_region_evenly(struct damon_ctx *ctx,
> +		struct damon_region *r, unsigned int nr_pieces)
> +{
> +	unsigned long sz_orig, sz_piece, orig_end;
> +	struct damon_region *piece = NULL, *next;
> +	unsigned long start;
> +
> +	if (!r || !nr_pieces)
> +		return -EINVAL;
> +
> +	orig_end = r->vm_end;
> +	sz_orig = r->vm_end - r->vm_start;
> +	sz_piece = sz_orig / nr_pieces;
> +
> +	if (!sz_piece)
> +		return -EINVAL;
> +
> +	r->vm_end = r->vm_start + sz_piece;
> +	next = damon_next_region(r);
> +	for (start = r->vm_end; start + sz_piece <= orig_end;
> +			start += sz_piece) {
> +		piece = damon_new_region(ctx, start, start + sz_piece);
> +		damon_add_region(piece, r, next);
> +		r = piece;
> +	}

I'd add a comment here. I think this next bit is to catch any rounding error
holes, but I'm not 100% sure.

> +	if (piece)
> +		piece->vm_end = orig_end;

blank line here.

> +	return 0;
> +}
> +
> +struct region {
> +	unsigned long start;
> +	unsigned long end;
> +};
> +
> +static unsigned long sz_region(struct region *r)
> +{
> +	return r->end - r->start;
> +}
> +
> +static void swap_regions(struct region *r1, struct region *r2)
> +{
> +	struct region tmp;
> +
> +	tmp = *r1;
> +	*r1 = *r2;
> +	*r2 = tmp;
> +}
> +
> +/*
> + * Find the three regions in an address space
> + *
> + * vma		the head vma of the target address space
> + * regions	an array of three 'struct region's that results will be saved
> + *
> + * This function receives an address space and finds three regions in it which
> + * separated by the two biggest unmapped regions in the space.  Please refer to
> + * below comments of 'damon_init_regions_of()' function to know why this is
> + * necessary.
> + *
> + * Returns 0 if success, or negative error code otherwise.
> + */
> +static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
> +		struct region regions[3])
> +{
> +	struct region gap = {0,}, first_gap = {0,}, second_gap = {0,};
> +	struct vm_area_struct *last_vma = NULL;
> +	unsigned long start = 0;
> +
> +	/* Find two biggest gaps so that first_gap > second_gap > others */
> +	for (; vma; vma = vma->vm_next) {
> +		if (!last_vma) {
> +			start = vma->vm_start;
> +			last_vma = vma;
> +			continue;
> +		}
> +		gap.start = last_vma->vm_end;
> +		gap.end = vma->vm_start;
> +		if (sz_region(&gap) > sz_region(&second_gap)) {
> +			swap_regions(&gap, &second_gap);
> +			if (sz_region(&second_gap) > sz_region(&first_gap))
> +				swap_regions(&second_gap, &first_gap);
> +		}
> +		last_vma = vma;
> +	}
> +
> +	if (!sz_region(&second_gap) || !sz_region(&first_gap))
> +		return -EINVAL;
> +
> +	/* Sort the two biggest gaps by address */
> +	if (first_gap.start > second_gap.start)
> +		swap_regions(&first_gap, &second_gap);
> +
> +	/* Store the result */
> +	regions[0].start = start;
> +	regions[0].end = first_gap.start;
> +	regions[1].start = first_gap.end;
> +	regions[1].end = second_gap.start;
> +	regions[2].start = second_gap.end;
> +	regions[2].end = last_vma->vm_end;
> +
> +	return 0;
> +}
> +
> +/*
> + * Get the three regions in the given task
> + *
> + * Returns 0 on success, negative error code otherwise.
> + */
> +static int damon_three_regions_of(struct damon_task *t,
> +				struct region regions[3])
> +{
> +	struct mm_struct *mm;
> +	int ret;
> +
> +	mm = damon_get_mm(t);
> +	if (!mm)
> +		return -EINVAL;
> +
> +	down_read(&mm->mmap_sem);
> +	ret = damon_three_regions_in_vmas(mm->mmap, regions);
> +	up_read(&mm->mmap_sem);
> +
> +	mmput(mm);
> +	return ret;
> +}
> +
> +/*
> + * Initialize the monitoring target regions for the given task
> + *
> + * t	the given target task
> + *
> + * Because only a number of small portions of the entire address space
> + * is acutally mapped to the memory and accessed, monitoring the unmapped

actually

> + * regions is wasteful.  That said, because we can deal with small noises,
> + * tracking every mapping is not strictly required but could even incur a high
> + * overhead if the mapping frequently changes or the number of mappings is
> + * high.  Nonetheless, this may seems very weird.  DAMON's dynamic regions
> + * adjustment mechanism, which will be implemented with following commit will
> + * make this more sense.
> + *
> + * For the reason, we convert the complex mappings to three distinct regions
> + * that cover every mapped areas of the address space.  Also the two gaps
> + * between the three regions are the two biggest unmapped areas in the given
> + * address space.  In detail, this function first identifies the start and the
> + * end of the mappings and the two biggest unmapped areas of the address space.
> + * Then, it constructs the three regions as below:
> + *
> + *     [mappings[0]->start, big_two_unmapped_areas[0]->start)
> + *     [big_two_unmapped_areas[0]->end, big_two_unmapped_areas[1]->start)
> + *     [big_two_unmapped_areas[1]->end, mappings[nr_mappings - 1]->end)
> + *
> + * As usual memory map of processes is as below, the gap between the heap and
> + * the uppermost mmap()-ed region, and the gap between the lowermost mmap()-ed
> + * region and the stack will be two biggest unmapped regions.  Because these
> + * gaps are exceptionally huge areas in usual address space, excluding these
> + * two biggest unmapped regions will be sufficient to make a trade-off.
> + *
> + *   <heap>
> + *   <BIG UNMAPPED REGION 1>
> + *   <uppermost mmap()-ed region>
> + *   (other mmap()-ed regions and small unmapped regions)
> + *   <lowermost mmap()-ed region>
> + *   <BIG UNMAPPED REGION 2>
> + *   <stack>
> + */
> +static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
> +{
> +	struct damon_region *r;
> +	struct region regions[3];
> +	int i;
> +
> +	if (damon_three_regions_of(t, regions)) {
> +		pr_err("Failed to get three regions of task %lu\n", t->pid);
> +		return;
> +	}
> +
> +	/* Set the initial three regions of the task */
> +	for (i = 0; i < 3; i++) {
> +		r = damon_new_region(c, regions[i].start, regions[i].end);
> +		damon_add_region_tail(r, t);
> +	}
> +
> +	/* Split the middle region into 'min_nr_regions - 2' regions */
> +	r = damon_nth_region_of(t, 1);
> +	if (damon_split_region_evenly(c, r, c->min_nr_regions - 2))
> +		pr_warn("Init middle region failed to be split\n");
> +}
> +
> +/* Initialize '->regions_list' of every task */
> +static void kdamond_init_regions(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +
> +	damon_for_each_task(ctx, t)
> +		damon_init_regions_of(ctx, t);
> +}
> +
> +/*
> + * Check whether the given region has accessed since the last check

Should also make clear that this sets us up for the next access check at
a different memory address in the region.

Given the lack of connection between activities perhaps just split this into
two functions that are always called next to each other.
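
E.g. (the function names here are just illustrative, untested):

	/* Read and account the Accessed bit of the current sampling_addr */
	static void damon_check_access(struct damon_ctx *ctx,
			struct mm_struct *mm, struct damon_region *r);

	/* Pick the next sampling_addr and clear its Accessed bit */
	static void damon_prepare_access_check(struct damon_ctx *ctx,
			struct mm_struct *mm, struct damon_region *r);

with the two called back to back from the kdamond main loop.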

> + *
> + * mm	'mm_struct' for the given virtual address space
> + * r	the region to be checked
> + */
> +static void kdamond_check_access(struct damon_ctx *ctx,
> +			struct mm_struct *mm, struct damon_region *r)
> +{
> +	pte_t *pte = NULL;
> +	pmd_t *pmd = NULL;
> +	spinlock_t *ptl;
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		goto mkold;
> +
> +	/* Read the page table access bit of the page */
> +	if (pte && pte_young(*pte))
> +		r->nr_accesses++;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE

Is it worth having this protection?  Seems likely to have only a very small
influence on performance and makes it a little harder to reason about the code.

> +	else if (pmd && pmd_young(*pmd))
> +		r->nr_accesses++;
> +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +	spin_unlock(ptl);
> +
> +mkold:
> +	/* mkold next target */
> +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		return;
> +
> +	if (pte) {
> +		if (pte_young(*pte)) {
> +			clear_page_idle(pte_page(*pte));
> +			set_page_young(pte_page(*pte));
> +		}
> +		*pte = pte_mkold(*pte);
> +	}
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	else if (pmd) {
> +		if (pmd_young(*pmd)) {
> +			clear_page_idle(pmd_page(*pmd));
> +			set_page_young(pmd_page(*pmd));
> +		}
> +		*pmd = pmd_mkold(*pmd);
> +	}
> +#endif
> +
> +	spin_unlock(ptl);
> +}
> +
> +/*
> + * Check whether a time interval is elapsed

Another comment block that would be clearer if it was kernel-doc rather
than nearly kernel-doc
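
i.e. something like:

	/**
	 * damon_check_reset_time_interval() - Check if a time interval is elapsed.
	 * @baseline:	the time to check whether the interval has elapsed since
	 * @interval:	the time interval (microseconds)
	 *
	 * See whether the given time interval has passed since the given baseline
	 * time.  If so, it also updates the baseline to the current time for the
	 * next check.
	 *
	 * Return: true if the time interval has passed, or false otherwise.
	 */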

> + *
> + * baseline	the time to check whether the interval has elapsed since
> + * interval	the time interval (microseconds)
> + *
> + * See whether the given time interval has passed since the given baseline
> + * time.  If so, it also updates the baseline to current time for next check.
> + *
> + * Returns true if the time interval has passed, or false otherwise.
> + */
> +static bool damon_check_reset_time_interval(struct timespec64 *baseline,
> +		unsigned long interval)
> +{
> +	struct timespec64 now;
> +
> +	ktime_get_coarse_ts64(&now);
> +	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) <
> +			interval * 1000)
> +		return false;
> +	*baseline = now;
> +	return true;
> +}
> +
> +/*
> + * Check whether it is time to flush the aggregated information
> + */
> +static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
> +{
> +	return damon_check_reset_time_interval(&ctx->last_aggregation,
> +			ctx->aggr_interval);
> +}
> +
> +/*
> + * Reset the aggregated monitoring results
> + */
> +static void kdamond_flush_aggregated(struct damon_ctx *c)

I wouldn't expect a reset function to be called flush.

> +{
> +	struct damon_task *t;
> +	struct damon_region *r;
> +
> +	damon_for_each_task(c, t) {
> +		damon_for_each_region(r, t)
> +			r->nr_accesses = 0;
> +	}
> +}
> +
> +/*
> + * Check whether current monitoring should be stopped
> + *
> + * If users asked to stop, we need to stop.  Even if no user has asked to
> + * stop, we need to stop if every target task has died.
> + *
> + * Returns true if need to stop current monitoring.
> + */
> +static bool kdamond_need_stop(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +	struct task_struct *task;
> +	bool stop;
> +

As below comment asks, can you use kthread_should_stop?

> +	spin_lock(&ctx->kdamond_lock);
> +	stop = ctx->kdamond_stop;
> +	spin_unlock(&ctx->kdamond_lock);
> +	if (stop)
> +		return true;
> +
> +	damon_for_each_task(ctx, t) {
> +		task = damon_get_task_struct(t);
> +		if (task) {
> +			put_task_struct(task);
> +			return false;
> +		}
> +	}
> +
> +	return true;
> +}
> +
> +/*
> + * The monitoring daemon that runs as a kernel thread
> + */
> +static int kdamond_fn(void *data)
> +{
> +	struct damon_ctx *ctx = (struct damon_ctx *)data;

Never any need to explicitly cast a void * to some other pointer type.
(C spec)

	struct damon_ctx *ctx = data;
> +	struct damon_task *t;
> +	struct damon_region *r, *next;
> +	struct mm_struct *mm;
> +
> +	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
> +	kdamond_init_regions(ctx);
> +	while (!kdamond_need_stop(ctx)) {
> +		damon_for_each_task(ctx, t) {
> +			mm = damon_get_mm(t);
> +			if (!mm)
> +				continue;
> +			damon_for_each_region(r, t)
> +				kdamond_check_access(ctx, mm, r);
> +			mmput(mm);
> +		}
> +
> +		if (kdamond_aggregate_interval_passed(ctx))
> +			kdamond_flush_aggregated(ctx);
> +
> +		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);

Is there any purpose in using a range for such a narrow window?

> +	}
> +	damon_for_each_task(ctx, t) {
> +		damon_for_each_region_safe(r, next, t)
> +			damon_destroy_region(r);
> +	}
> +	pr_info("kdamond (%d) finishes\n", ctx->kdamond->pid);

Feels like noise.  I'd drop this to pr_debug.

> +	spin_lock(&ctx->kdamond_lock);
> +	ctx->kdamond = NULL;
> +	spin_unlock(&ctx->kdamond_lock);

blank line.

> +	return 0;
> +}
> +
> +/*
> + * Controller functions
> + */
> +
> +/*
> + * Start or stop the kdamond
> + *
> + * Returns 0 if success, negative error code otherwise.
> + */
> +static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
> +{
> +	spin_lock(&ctx->kdamond_lock);
> +	ctx->kdamond_stop = !on;

Can't use the kthread_stop / kthread_should_stop approach?

> +	if (!ctx->kdamond && on) {
> +		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
> +		if (!ctx->kdamond)
> +			goto fail;
> +		goto success;

cleaner as 
int ret = 0; above then

		if (!ctx->kdamond)
			ret = -EINVAL;
		goto unlock;

with

unlock:
	spin_unlock(&ctx->dmanond_lock);
	return ret;

> +	}
> +	if (ctx->kdamond && !on) {
> +		spin_unlock(&ctx->kdamond_lock);
> +		while (true) {

An unbounded loop is probably a bad idea.
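
If the kthread_stop() route suggested earlier is taken, the loop disappears
entirely, since kthread_stop() blocks until the thread has exited.  Untested
sketch, assuming kdamond_fn() no longer clears ctx->kdamond itself:

	if (ctx->kdamond && !on) {
		spin_unlock(&ctx->kdamond_lock);
		kthread_stop(ctx->kdamond);
		spin_lock(&ctx->kdamond_lock);
		ctx->kdamond = NULL;
		goto success;
	}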

> +			spin_lock(&ctx->kdamond_lock);
> +			if (!ctx->kdamond)
> +				goto success;
> +			spin_unlock(&ctx->kdamond_lock);
> +
> +			usleep_range(ctx->sample_interval,
> +					ctx->sample_interval * 2);
> +		}
> +	}
> +
> +	/* tried to turn on while turned on, or turn off while turned off */
> +
> +fail:
> +	spin_unlock(&ctx->kdamond_lock);
> +	return -EINVAL;
> +
> +success:
> +	spin_unlock(&ctx->kdamond_lock);
> +	return 0;
> +}
> +
> +/*
> + * This function should not be called while the kdamond is running.
> + */
> +static int damon_set_pids(struct damon_ctx *ctx,
> +			unsigned long *pids, ssize_t nr_pids)
> +{
> +	ssize_t i;
> +	struct damon_task *t, *next;
> +
> +	damon_for_each_task_safe(ctx, t, next)
> +		damon_destroy_task(t);
> +
> +	for (i = 0; i < nr_pids; i++) {
> +		t = damon_new_task(pids[i]);
> +		if (!t) {
> +			pr_err("Failed to alloc damon_task\n");
> +			return -ENOMEM;
> +		}
> +		damon_add_task_tail(ctx, t);
> +	}
> +
> +	return 0;
> +}
> +
> +/*

This is kind of similar to kernel-doc formatting.  Might as well just make
it kernel-doc!

> + * Set attributes for the monitoring
> + *
> + * sample_int		time interval between samplings
> + * aggr_int		time interval between aggregations
> + * min_nr_reg		minimal number of regions
> + *
> + * This function should not be called while the kdamond is running.
> + * Every time interval is in micro-seconds.
> + *
> + * Returns 0 on success, negative error code otherwise.
> + */
> +static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> +		unsigned long aggr_int, unsigned long min_nr_reg)
> +{
> +	if (min_nr_reg < 3) {
> +		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
> +				min_nr_reg);
> +		return -EINVAL;
> +	}
> +
> +	ctx->sample_interval = sample_int;
> +	ctx->aggr_interval = aggr_int;
> +	ctx->min_nr_regions = min_nr_reg;

blank line helps readability a tiny little bit.

> +	return 0;
> +}
> +
>  static int __init damon_init(void)
>  {
>  	pr_info("init\n");
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 4ade843ff588..71169b45bba9 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -131,6 +131,7 @@ struct page_ext *lookup_page_ext(const struct page *page)
>  					MAX_ORDER_NR_PAGES);
>  	return get_entry(base, index);
>  }
> +EXPORT_SYMBOL_GPL(lookup_page_ext);
>  
>  static int __init alloc_node_page_ext(int nid)
>  {





* Re: [PATCH v6 03/14] mm/damon: Adaptively adjust regions
  2020-02-24 12:30 ` [PATCH v6 03/14] mm/damon: Adaptively adjust regions SeongJae Park
@ 2020-03-10  8:57   ` Jonathan Cameron
  2020-03-10 11:53     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  8:57 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:36 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> At the beginning of the monitoring, DAMON constructs the initial regions
> by evenly splitting the memory mapped address space of the process into
> the user-specified minimal number of regions.  In this initial state,
> the assumption of the regions (pages in same region have similar access
> frequencies) is normally not kept and thus the monitoring quality could
> be low.  To keep the assumption as much as possible, DAMON adaptively
> merges and splits each region.
> 
> For each ``aggregation interval``, it compares the access frequencies of
> adjacent regions and merges those if the frequency difference is small.
> Then, after it reports and clears the aggregated access frequency of
> each region, it splits each region into two regions if the total number
> of regions is smaller than the half of the user-specified maximum number
> of regions.
> 
> In this way, DAMON provides its best-effort quality and minimal overhead
> while keeping the bounds users set for their trade-off.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

Really minor comments inline.

> ---
>  mm/damon.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 144 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index 6bdeb84d89af..1c8bb71bbce9 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -67,6 +67,7 @@ struct damon_ctx {
>  	unsigned long sample_interval;
>  	unsigned long aggr_interval;
>  	unsigned long min_nr_regions;
> +	unsigned long max_nr_regions;
>  
>  	struct timespec64 last_aggregation;
>  
> @@ -389,9 +390,12 @@ static int damon_three_regions_of(struct damon_task *t,
>   * regions is wasteful.  That said, because we can deal with small noises,
>   * tracking every mapping is not strictly required but could even incur a high
>   * overhead if the mapping frequently changes or the number of mappings is
> - * high.  Nonetheless, this may seems very weird.  DAMON's dynamic regions
> - * adjustment mechanism, which will be implemented with following commit will
> - * make this more sense.
> + * high.  The adaptive regions adjustment mechanism will further help to deal
> + * with the noises by simply identifying the unmapped areas as a region that
> + * has no access.  Moreover, applying the real mappings that would have many
> + * unmapped areas inside will make the adaptive mechanism quite complex.  That
> + * said, too huge unmapped areas inside the monitoring target should be removed
> + * to not take the time for the adaptive mechanism.
>   *
>   * For the reason, we convert the complex mappings to three distinct regions
>   * that cover every mapped areas of the address space.  Also the two gaps
> @@ -550,6 +554,123 @@ static void kdamond_flush_aggregated(struct damon_ctx *c)
>  	}
>  }
>  
> +#define sz_damon_region(r) (r->vm_end - r->vm_start)
> +
> +/*
> + * Merge two adjacent regions into one region
> + */
> +static void damon_merge_two_regions(struct damon_region *l,
> +				struct damon_region *r)
> +{
> +	l->nr_accesses = (l->nr_accesses * sz_damon_region(l) +
> +			r->nr_accesses * sz_damon_region(r)) /
> +			(sz_damon_region(l) + sz_damon_region(r));
> +	l->vm_end = r->vm_end;
> +	damon_destroy_region(r);
> +}
> +
> +#define diff_of(a, b) (a > b ? a - b : b - a)
> +
> +/*
> + * Merge adjacent regions having similar access frequencies
> + *
> + * t		task that merge operation will make change
> + * thres	merge regions having '->nr_accesses' diff smaller than this
> + */
> +static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
> +{
> +	struct damon_region *r, *prev = NULL, *next;
> +
> +	damon_for_each_region_safe(r, next, t) {
> +		if (!prev || prev->vm_end != r->vm_start)
> +			goto next;
> +		if (diff_of(prev->nr_accesses, r->nr_accesses) > thres) 
> +			goto next;

		if (!prev || prev->vm_end != r->vm_start ||
		    diff_of(prev->nr_accesses, r->nr_accesses) > thres) {
			prev = r;
			continue;
		}

Seems more logical to my head.  Maybe it's just me though.  A goto inside a
loop isn't pretty to my mind.

> +		damon_merge_two_regions(prev, r);
> +		continue;
> +next:
> +		prev = r;
> +	}
> +}
> +
> +/*
> + * Merge adjacent regions having similar access frequencies
> + *
> + * threshold	merge regions having nr_accesses diff smaller than this
> + *
> + * This function merges monitoring target regions which are adjacent and their
> + * access frequencies are similar.  This is for minimizing the monitoring
> + * overhead under the dynamically changeable access pattern.  If a merge was
> + * unnecessarily made, later 'kdamond_split_regions()' will revert it.
> + */
> +static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold)
> +{
> +	struct damon_task *t;
> +
> +	damon_for_each_task(c, t)
> +		damon_merge_regions_of(t, threshold);
> +}
> +
> +/*
> + * Split a region into two small regions
> + *
> + * r		the region to be split
> + * sz_r		size of the first sub-region that will be made
> + */
> +static void damon_split_region_at(struct damon_ctx *ctx,
> +		struct damon_region *r, unsigned long sz_r)
> +{
> +	struct damon_region *new;
> +
> +	new = damon_new_region(ctx, r->vm_start + sz_r, r->vm_end);
> +	r->vm_end = new->vm_start;
> +
> +	damon_add_region(new, r, damon_next_region(r));
> +}
> +
> +static void damon_split_regions_of(struct damon_ctx *ctx, struct damon_task *t)
> +{
> +	struct damon_region *r, *next;
> +	unsigned long sz_left_region;
> +
> +	damon_for_each_region_safe(r, next, t) {
> +		/*
> +		 * Randomly select size of left sub-region to be at least
> +		 * 10 percent and at most 90% of original region
> +		 */
> +		sz_left_region = (prandom_u32_state(&ctx->rndseed) % 9 + 1) *
> +			(r->vm_end - r->vm_start) / 10;
> +		/* Do not allow blank region */
> +		if (sz_left_region == 0)
> +			continue;
> +		damon_split_region_at(ctx, r, sz_left_region);
> +	}
> +}
> +
> +/*
> + * splits every target regions into two randomly-sized regions
> + *
> + * This function splits every target regions into two random-sized regions if
> + * current total number of the regions is smaller than the half of the
> + * user-specified maximum number of regions.  This is for maximizing the
> + * monitoring accuracy under the dynamically changeable access patterns.  If a
> + * split was unnecessarily made, later 'kdamond_merge_regions()' will revert
> + * it.
> + */
> +static void kdamond_split_regions(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +	unsigned int nr_regions = 0;
> +
> +	damon_for_each_task(ctx, t)
> +		nr_regions += nr_damon_regions(t);
> +	if (nr_regions > ctx->max_nr_regions / 2)
> +		return;
> +
> +	damon_for_each_task(ctx, t)
> +		damon_split_regions_of(ctx, t);
> +}
> +
>  /*
>   * Check whether current monitoring should be stopped
>   *
> @@ -590,21 +711,29 @@ static int kdamond_fn(void *data)
>  	struct damon_task *t;
>  	struct damon_region *r, *next;
>  	struct mm_struct *mm;
> +	unsigned long max_nr_accesses;
>  
>  	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
>  	kdamond_init_regions(ctx);
>  	while (!kdamond_need_stop(ctx)) {
> +		max_nr_accesses = 0;
>  		damon_for_each_task(ctx, t) {
>  			mm = damon_get_mm(t);
>  			if (!mm)
>  				continue;
> -			damon_for_each_region(r, t)
> +			damon_for_each_region(r, t) {
>  				kdamond_check_access(ctx, mm, r);
> +				if (r->nr_accesses > max_nr_accesses)
> +					max_nr_accesses = r->nr_accesses;

max_nr_accesses = max(r->nr_accesses, max_nr_accesses)

> +			}
>  			mmput(mm);
>  		}
>  
> -		if (kdamond_aggregate_interval_passed(ctx))
> +		if (kdamond_aggregate_interval_passed(ctx)) {
> +			kdamond_merge_regions(ctx, max_nr_accesses / 10);
>  			kdamond_flush_aggregated(ctx);
> +			kdamond_split_regions(ctx);
> +		}
>  
>  		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
>  	}
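The `max_nr_accesses / 10` passed to kdamond_merge_regions() above acts as a merge threshold: adjacent regions whose access counts differ by at most 10% of the hottest count seen in this aggregation window are treated as behaving alike. A sketch of that predicate (hypothetical helper, not part of the patch):

```c
#include <assert.h>

/* Two adjacent regions are merge candidates when their access counts
 * differ by no more than the threshold, here 10% of the largest
 * nr_accesses observed in the current aggregation window. */
static int access_counts_similar(unsigned int a, unsigned int b,
				 unsigned int max_nr_accesses)
{
	unsigned int diff = a > b ? a - b : b - a;

	return diff <= max_nr_accesses / 10;
}
```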
> @@ -692,24 +821,32 @@ static int damon_set_pids(struct damon_ctx *ctx,
>   * sample_int		time interval between samplings
>   * aggr_int		time interval between aggregations
>   * min_nr_reg		minimal number of regions
> + * max_nr_reg		maximum number of regions
>   *
>   * This function should not be called while the kdamond is running.
>   * Every time interval is in micro-seconds.
>   *
>   * Returns 0 on success, negative error code otherwise.
>   */
> -static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> -		unsigned long aggr_int, unsigned long min_nr_reg)
> +static int damon_set_attrs(struct damon_ctx *ctx,
> +			unsigned long sample_int, unsigned long aggr_int,
> +			unsigned long min_nr_reg, unsigned long max_nr_reg)
>  {
>  	if (min_nr_reg < 3) {
>  		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
>  				min_nr_reg);
>  		return -EINVAL;
>  	}
> +	if (min_nr_reg >= max_nr_reg) {
> +		pr_err("invalid nr_regions.  min (%lu) >= max (%lu)\n",
> +				min_nr_reg, max_nr_reg);
> +		return -EINVAL;
> +	}
>  
>  	ctx->sample_interval = sample_int;
>  	ctx->aggr_interval = aggr_int;
>  	ctx->min_nr_regions = min_nr_reg;
> +	ctx->max_nr_regions = max_nr_reg;
>  	return 0;
>  }
>  




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes
  2020-02-24 12:30 ` [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes SeongJae Park
@ 2020-03-10  9:00   ` Jonathan Cameron
  2020-03-10 11:53     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:00 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:37 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> Only a portion of the virtual address space of a process is mapped to
> physical memory and accessed.  Thus, tracking the unmapped address
> regions is just wasteful.  However, tracking every memory mapping change
> might incur an overhead.  For this reason, DAMON applies the dynamic
> memory mapping changes to the tracking regions only once per a
> user-specified time interval (``regions update interval``).
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>
Trivial inline. Otherwise makes sense to me.

> ---
>  mm/damon.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 95 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index 1c8bb71bbce9..6a17408e83c2 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -59,17 +59,22 @@ struct damon_task {
>  /*
>   * For each 'sample_interval', DAMON checks whether each region is accessed or
>   * not.  It aggregates and keeps the access information (number of accesses to
> - * each region) for each 'aggr_interval' time.
> + * each region) for each 'aggr_interval' time.  And for each
> + * 'regions_update_interval', damon checks whether the memory mapping of the
> + * target tasks has changed (e.g., by mmap() calls from the applications) and
> + * applies the changes.
>   *
>   * All time intervals are in micro-seconds.
>   */
>  struct damon_ctx {
>  	unsigned long sample_interval;
>  	unsigned long aggr_interval;
> +	unsigned long regions_update_interval;
>  	unsigned long min_nr_regions;
>  	unsigned long max_nr_regions;
>  
>  	struct timespec64 last_aggregation;
> +	struct timespec64 last_regions_update;
>  
>  	struct task_struct *kdamond;
>  	bool kdamond_stop;
> @@ -671,6 +676,87 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
>  		damon_split_regions_of(ctx, t);
>  }
>  
> +/*
> + * Check whether it is time to check and apply the dynamic mmap changes
> + *
> + * Returns true if it is.
> + */
> +static bool kdamond_need_update_regions(struct damon_ctx *ctx)
> +{
> +	return damon_check_reset_time_interval(&ctx->last_regions_update,
> +			ctx->regions_update_interval);
> +}
> +
> +static bool damon_intersect(struct damon_region *r, struct region *re)
> +{
> +	return !(r->vm_end <= re->start || re->end <= r->vm_start);
> +}
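damon_intersect() above is the standard half-open interval overlap test: the ranges [vm_start, vm_end) and [start, end) overlap unless one ends at or before the other begins. In isolation:

```c
#include <assert.h>

/* Half-open ranges [a_start, a_end) and [b_start, b_end) intersect
 * unless one range ends at or before the other begins. */
static int ranges_intersect(unsigned long a_start, unsigned long a_end,
			    unsigned long b_start, unsigned long b_end)
{
	return !(a_end <= b_start || b_end <= a_start);
}
```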
> +
> +/*
> + * Update damon regions for the three big regions of the given task
> + *
> + * t		the given task
> + * bregions	the three big regions of the task
> + */
> +static void damon_apply_three_regions(struct damon_ctx *ctx,
> +		struct damon_task *t, struct region bregions[3])
> +{
> +	struct damon_region *r, *next;
> +	unsigned int i = 0;
> +
> +	/* Remove regions which aren't in the three big regions now */
> +	damon_for_each_region_safe(r, next, t) {
> +		for (i = 0; i < 3; i++) {
> +			if (damon_intersect(r, &bregions[i]))
> +				break;
> +		}
> +		if (i == 3)
> +			damon_destroy_region(r);
> +	}
> +
> +	/* Adjust intersecting regions to fit with the threee big regions */

three

> +	for (i = 0; i < 3; i++) {
> +		struct damon_region *first = NULL, *last;
> +		struct damon_region *newr;
> +		struct region *br;
> +
> +		br = &bregions[i];
> +		/* Get the first and last regions which intersects with br */
> +		damon_for_each_region(r, t) {
> +			if (damon_intersect(r, br)) {
> +				if (!first)
> +					first = r;
> +				last = r;
> +			}
> +			if (r->vm_start >= br->end)
> +				break;
> +		}
> +		if (!first) {
> +			/* no damon_region intersects with this big region */
> +			newr = damon_new_region(ctx, br->start, br->end);
> +			damon_add_region(newr, damon_prev_region(r), r);
> +		} else {
> +			first->vm_start = br->start;
> +			last->vm_end = br->end;
> +		}
> +	}
> +}
> +
> +/*
> + * Update regions for current memory mappings
> + */
> +static void kdamond_update_regions(struct damon_ctx *ctx)
> +{
> +	struct region three_regions[3];
> +	struct damon_task *t;
> +
> +	damon_for_each_task(ctx, t) {
> +		if (damon_three_regions_of(t, three_regions))
> +			continue;
> +		damon_apply_three_regions(ctx, t, three_regions);
> +	}
> +}
> +
>  /*
>   * Check whether current monitoring should be stopped
>   *
> @@ -735,6 +821,9 @@ static int kdamond_fn(void *data)
>  			kdamond_split_regions(ctx);
>  		}
>  
> +		if (kdamond_need_update_regions(ctx))
> +			kdamond_update_regions(ctx);
> +
>  		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
>  	}
>  	damon_for_each_task(ctx, t) {
> @@ -820,6 +909,7 @@ static int damon_set_pids(struct damon_ctx *ctx,
>   *
>   * sample_int		time interval between samplings
>   * aggr_int		time interval between aggregations
> + * regions_update_int	time interval between vma update checks
>   * min_nr_reg		minimal number of regions
>   * max_nr_reg		maximum number of regions
>   *
> @@ -828,9 +918,9 @@ static int damon_set_pids(struct damon_ctx *ctx,
>   *
>   * Returns 0 on success, negative error code otherwise.
>   */
> -static int damon_set_attrs(struct damon_ctx *ctx,
> -			unsigned long sample_int, unsigned long aggr_int,
> -			unsigned long min_nr_reg, unsigned long max_nr_reg)
> +static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> +		unsigned long aggr_int, unsigned long regions_update_int,
> +		unsigned long min_nr_reg, unsigned long max_nr_reg)
>  {
>  	if (min_nr_reg < 3) {
>  		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
> @@ -845,6 +935,7 @@ static int damon_set_attrs(struct damon_ctx *ctx,
>  
>  	ctx->sample_interval = sample_int;
>  	ctx->aggr_interval = aggr_int;
> +	ctx->regions_update_interval = regions_update_int;
>  	ctx->min_nr_regions = min_nr_reg;
>  	ctx->max_nr_regions = max_nr_reg;
>  	return 0;



* Re: [PATCH v6 05/14] mm/damon: Implement callbacks
  2020-02-24 12:30 ` [PATCH v6 05/14] mm/damon: Implement callbacks SeongJae Park
@ 2020-03-10  9:01   ` Jonathan Cameron
  2020-03-10 11:55     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:01 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:38 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit implements callbacks for DAMON.  Using this, DAMON users can
> install their callbacks for each step of the access monitoring so that
> they can do something interesting with the monitored access pattrns

patterns

> online.  For example, callbacks can report the monitored patterns to
> users or do some access pattern based memory management such as
> proactive reclamations or access pattern based THP promotions/demotions
> decision makings.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>
> ---
>  mm/damon.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index 6a17408e83c2..554720778e8a 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -83,6 +83,10 @@ struct damon_ctx {
>  	struct rnd_state rndseed;
>  
>  	struct list_head tasks_list;	/* 'damon_task' objects */
> +
> +	/* callbacks */
> +	void (*sample_cb)(struct damon_ctx *context);
> +	void (*aggregate_cb)(struct damon_ctx *context);
>  };
>  
>  /* Get a random number in [l, r) */
> @@ -814,9 +818,13 @@ static int kdamond_fn(void *data)
>  			}
>  			mmput(mm);
>  		}
> +		if (ctx->sample_cb)
> +			ctx->sample_cb(ctx);
>  
>  		if (kdamond_aggregate_interval_passed(ctx)) {
>  			kdamond_merge_regions(ctx, max_nr_accesses / 10);
> +			if (ctx->aggregate_cb)
> +				ctx->aggregate_cb(ctx);
>  			kdamond_flush_aggregated(ctx);
>  			kdamond_split_regions(ctx);
>  		}



* Re: [PATCH v6 06/14] mm/damon: Implement access pattern recording
  2020-02-24 12:30 ` [PATCH v6 06/14] mm/damon: Implement access pattern recording SeongJae Park
@ 2020-03-10  9:01   ` Jonathan Cameron
  2020-03-10 11:55     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:01 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:39 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit implements the recording feature of DAMON.  If this feature
> is enabled, DAMON writes the monitored access patterns in its binary
> format into a file specified by the user.  Each user could already
> implement this using the callbacks.  However, as the recording is
> expected to be used widely, this commit implements the feature in DAMON
> itself, for more convenience and efficiency.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

I guess this works whilst you are still developing, but I'm not convinced
writing to a file should be a standard feature...

> ---
>  mm/damon.c | 126 +++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 123 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index 554720778e8a..a7edb2dfa700 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -76,6 +76,11 @@ struct damon_ctx {
>  	struct timespec64 last_aggregation;
>  	struct timespec64 last_regions_update;
>  
> +	unsigned char *rbuf;
> +	unsigned int rbuf_len;
> +	unsigned int rbuf_offset;
> +	char *rfile_path;
> +
>  	struct task_struct *kdamond;
>  	bool kdamond_stop;
>  	spinlock_t kdamond_lock;
> @@ -89,6 +94,8 @@ struct damon_ctx {
>  	void (*aggregate_cb)(struct damon_ctx *context);
>  };
>  
> +#define MAX_RFILE_PATH_LEN	256
> +
>  /* Get a random number in [l, r) */
>  #define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
>  
> @@ -550,16 +557,81 @@ static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
>  }
>  
>  /*
> - * Reset the aggregated monitoring results
> + * Flush the content in the result buffer to the result file
> + */
> +static void damon_flush_rbuffer(struct damon_ctx *ctx)
> +{
> +	ssize_t sz;
> +	loff_t pos;
> +	struct file *rfile;
> +
> +	while (ctx->rbuf_offset) {
> +		pos = 0;
> +		rfile = filp_open(ctx->rfile_path, O_CREAT | O_RDWR | O_APPEND,
> +				0644);
> +		if (IS_ERR(rfile)) {
> +			pr_err("Cannot open the result file %s\n",
> +					ctx->rfile_path);
> +			return;
> +		}
> +
> +		sz = kernel_write(rfile, ctx->rbuf, ctx->rbuf_offset, &pos);
> +		filp_close(rfile, NULL);
> +
> +		ctx->rbuf_offset -= sz;
> +	}
> +}
> +
> +/*
> + * Write data into the result buffer
> + */
> +static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
> +{
> +	if (!ctx->rbuf_len || !ctx->rbuf)
> +		return;
> +	if (ctx->rbuf_offset + size > ctx->rbuf_len)
> +		damon_flush_rbuffer(ctx);
> +
> +	memcpy(&ctx->rbuf[ctx->rbuf_offset], data, size);
> +	ctx->rbuf_offset += size;
> +}
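A userspace sketch of the batching behavior above (struct and function names are illustrative; the kernel version drains the buffer to the result file, which is modeled here as a simple reset):

```c
#include <assert.h>
#include <string.h>

/* Records accumulate in a fixed buffer; the buffer is drained only
 * when the next record would overflow it. */
struct rbuf {
	unsigned char buf[64];
	unsigned int len;	/* configured capacity; 0 disables recording */
	unsigned int off;	/* bytes currently buffered */
};

/* Returns 1 if a drain (flush) happened before copying, 0 otherwise. */
static int rbuf_write(struct rbuf *r, const void *data, unsigned int size)
{
	int flushed = 0;

	if (!r->len)
		return 0;		/* recording disabled */
	if (r->off + size > r->len) {
		r->off = 0;		/* stands in for damon_flush_rbuffer() */
		flushed = 1;
	}
	memcpy(&r->buf[r->off], data, size);
	r->off += size;
	return flushed;
}
```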
> +
> +/*
> + * Flush the aggregated monitoring results to the result buffer
> + *
> + * Stores current tracking results to the result buffer and resets
> + * 'nr_accesses' of each region.  The format of the result buffer is as below:
> + *
> + *   <time> <number of tasks> <array of task infos>
> + *
> + *   task info: <pid> <number of regions> <array of region infos>
> + *   region info: <start address> <end address> <nr_accesses>
>   */
>  static void kdamond_flush_aggregated(struct damon_ctx *c)
>  {
>  	struct damon_task *t;
> -	struct damon_region *r;
> +	struct timespec64 now;
> +	unsigned int nr;
> +
> +	ktime_get_coarse_ts64(&now);
> +
> +	damon_write_rbuf(c, &now, sizeof(struct timespec64));
> +	nr = nr_damon_tasks(c);
> +	damon_write_rbuf(c, &nr, sizeof(nr));
>  
>  	damon_for_each_task(c, t) {
> -		damon_for_each_region(r, t)
> +		struct damon_region *r;
> +
> +		damon_write_rbuf(c, &t->pid, sizeof(t->pid));
> +		nr = nr_damon_regions(t);
> +		damon_write_rbuf(c, &nr, sizeof(nr));
> +		damon_for_each_region(r, t) {
> +			damon_write_rbuf(c, &r->vm_start, sizeof(r->vm_start));
> +			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
> +			damon_write_rbuf(c, &r->nr_accesses,
> +					sizeof(r->nr_accesses));
>  			r->nr_accesses = 0;
> +		}
>  	}
>  }
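Given the format above, one aggregation record has a computable size. Assuming the field widths this patch writes on a typical 64-bit build (a 16-byte struct timespec64, 4-byte 'unsigned int' counters, 8-byte 'unsigned long' pids and addresses — an assumption, since the format is not self-describing), a sketch of the arithmetic:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Size in bytes of one aggregation record:
 *   <time> <number of tasks>, then per task <pid> <number of regions>,
 *   then per region <start address> <end address> <nr_accesses>.
 * nr_regions[] holds the region count of each task. */
static size_t record_size(unsigned int nr_tasks,
			  const unsigned int *nr_regions)
{
	size_t sz = 16 + sizeof(uint32_t);	/* timespec64 + task count */
	unsigned int i;

	for (i = 0; i < nr_tasks; i++)
		sz += sizeof(uint64_t) + sizeof(uint32_t) +	/* pid + count */
			nr_regions[i] * (2 * sizeof(uint64_t) +	/* start/end */
					 sizeof(uint32_t));	/* nr_accesses */
	return sz;
}
```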
>  
> @@ -834,6 +906,7 @@ static int kdamond_fn(void *data)
>  
>  		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
>  	}
> +	damon_flush_rbuffer(ctx);
>  	damon_for_each_task(ctx, t) {
>  		damon_for_each_region_safe(r, next, t)
>  			damon_destroy_region(r);
> @@ -912,6 +985,53 @@ static int damon_set_pids(struct damon_ctx *ctx,
>  	return 0;
>  }
>  
> +/*
> + * Set attributes for the recording
> + *
> + * ctx		target kdamond context
> + * rbuf_len	length of the result buffer
> + * rfile_path	path to the monitor result files
> + *
> + * Setting 'rbuf_len' to 0 disables recording.
> + *
> + * This function should not be called while the kdamond is running.
> + *
> + * Returns 0 on success, negative error code otherwise.
> + */
> +static int damon_set_recording(struct damon_ctx *ctx,
> +				unsigned int rbuf_len, char *rfile_path)
> +{
> +	size_t rfile_path_len;
> +
> +	if (rbuf_len > 4 * 1024 * 1024) {
> +		pr_err("too long (>%d) result buffer length\n",
> +				4 * 1024 * 1024);
> +		return -EINVAL;
> +	}
> +	rfile_path_len = strnlen(rfile_path, MAX_RFILE_PATH_LEN);
> +	if (rfile_path_len >= MAX_RFILE_PATH_LEN) {
> +		pr_err("too long (>%d) result file path %s\n",
> +				MAX_RFILE_PATH_LEN, rfile_path);
> +		return -EINVAL;
> +	}
> +	ctx->rbuf_len = rbuf_len;
> +	kfree(ctx->rbuf);
> +	kfree(ctx->rfile_path);
> +	ctx->rfile_path = NULL;
> +	if (!rbuf_len) {
> +		ctx->rbuf = NULL;
> +	} else {
> +		ctx->rbuf = kvmalloc(rbuf_len, GFP_KERNEL);
> +		if (!ctx->rbuf)
> +			return -ENOMEM;
> +	}
> +	ctx->rfile_path = kmalloc(rfile_path_len + 1, GFP_KERNEL);
> +	if (!ctx->rfile_path)
> +		return -ENOMEM;
> +	strncpy(ctx->rfile_path, rfile_path, rfile_path_len + 1);
> +	return 0;
> +}
> +
>  /*
>   * Set attributes for the monitoring
>   *



* Re: [PATCH v6 07/14] mm/damon: Implement kernel space API
  2020-02-24 12:30 ` [PATCH v6 07/14] mm/damon: Implement kernel space API SeongJae Park
@ 2020-03-10  9:01   ` Jonathan Cameron
  2020-03-10 11:56     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:01 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:40 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit implements the DAMON api for the kernel.  Other kernel code
> can use DAMON by calling damon_start() and damon_stop() with their own
> 'struct damon_ctx'.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

Seems like it would have been easier to create the header as you went along
and avoid the need to have the bits here dropping static.

Or the moves for that matter.

Also, ideally have full kernel-doc for anything that forms part of an
interface that is intended for use by others.

Jonathan

> ---
>  include/linux/damon.h | 71 +++++++++++++++++++++++++++++++++++++++++++
>  mm/damon.c            | 71 +++++++++----------------------------------
>  2 files changed, 85 insertions(+), 57 deletions(-)
>  create mode 100644 include/linux/damon.h
> 
> diff --git a/include/linux/damon.h b/include/linux/damon.h
> new file mode 100644
> index 000000000000..78785cb88d42
> --- /dev/null
> +++ b/include/linux/damon.h
> @@ -0,0 +1,71 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DAMON api
> + *
> + * Copyright 2019 Amazon.com, Inc. or its affiliates.  All rights reserved.
> + *
> + * Author: SeongJae Park <sjpark@amazon.de>
> + */
> +
> +#ifndef _DAMON_H_
> +#define _DAMON_H_
> +
> +#include <linux/random.h>
> +#include <linux/spinlock_types.h>
> +#include <linux/time64.h>
> +#include <linux/types.h>
> +
> +/* Represents a monitoring target region on the virtual address space */
> +struct damon_region {
> +	unsigned long vm_start;
> +	unsigned long vm_end;
> +	unsigned long sampling_addr;
> +	unsigned int nr_accesses;
> +	struct list_head list;
> +};
> +
> +/* Represents a monitoring target task */
> +struct damon_task {
> +	unsigned long pid;
> +	struct list_head regions_list;
> +	struct list_head list;
> +};
> +
> +struct damon_ctx {
> +	unsigned long sample_interval;
> +	unsigned long aggr_interval;
> +	unsigned long regions_update_interval;
> +	unsigned long min_nr_regions;
> +	unsigned long max_nr_regions;
> +
> +	struct timespec64 last_aggregation;
> +	struct timespec64 last_regions_update;
> +
> +	unsigned char *rbuf;
> +	unsigned int rbuf_len;
> +	unsigned int rbuf_offset;
> +	char *rfile_path;
> +
> +	struct task_struct *kdamond;
> +	bool kdamond_stop;
> +	spinlock_t kdamond_lock;
> +
> +	struct rnd_state rndseed;
> +
> +	struct list_head tasks_list;	/* 'damon_task' objects */
> +
> +	/* callbacks */
> +	void (*sample_cb)(struct damon_ctx *context);
> +	void (*aggregate_cb)(struct damon_ctx *context);
> +};
> +
> +int damon_set_pids(struct damon_ctx *ctx,
> +			unsigned long *pids, ssize_t nr_pids);
> +int damon_set_recording(struct damon_ctx *ctx,
> +			unsigned int rbuf_len, char *rfile_path);
> +int damon_set_attrs(struct damon_ctx *ctx, unsigned long s, unsigned long a,
> +			unsigned long r, unsigned long min, unsigned long max);
> +int damon_start(struct damon_ctx *ctx);
> +int damon_stop(struct damon_ctx *ctx);
> +
> +#endif
> diff --git a/mm/damon.c b/mm/damon.c
> index a7edb2dfa700..b3e9b9da5720 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -9,6 +9,7 @@
>  
>  #define pr_fmt(fmt) "damon: " fmt
>  
> +#include <linux/damon.h>
>  #include <linux/delay.h>
>  #include <linux/kthread.h>
>  #include <linux/mm.h>
> @@ -40,60 +41,6 @@
>  #define damon_for_each_task_safe(ctx, t, next) \
>  	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
>  
> -/* Represents a monitoring target region on the virtual address space */
> -struct damon_region {
> -	unsigned long vm_start;
> -	unsigned long vm_end;
> -	unsigned long sampling_addr;
> -	unsigned int nr_accesses;
> -	struct list_head list;
> -};
> -
> -/* Represents a monitoring target task */
> -struct damon_task {
> -	unsigned long pid;
> -	struct list_head regions_list;
> -	struct list_head list;
> -};
> -
> -/*
> - * For each 'sample_interval', DAMON checks whether each region is accessed or
> - * not.  It aggregates and keeps the access information (number of accesses to
> - * each region) for each 'aggr_interval' time.  And for each
> - * 'regions_update_interval', damon checks whether the memory mapping of the
> - * target tasks has changed (e.g., by mmap() calls from the applications) and
> - * applies the changes.
> - *
> - * All time intervals are in micro-seconds.
> - */
> -struct damon_ctx {
> -	unsigned long sample_interval;
> -	unsigned long aggr_interval;
> -	unsigned long regions_update_interval;
> -	unsigned long min_nr_regions;
> -	unsigned long max_nr_regions;
> -
> -	struct timespec64 last_aggregation;
> -	struct timespec64 last_regions_update;
> -
> -	unsigned char *rbuf;
> -	unsigned int rbuf_len;
> -	unsigned int rbuf_offset;
> -	char *rfile_path;
> -
> -	struct task_struct *kdamond;
> -	bool kdamond_stop;
> -	spinlock_t kdamond_lock;
> -
> -	struct rnd_state rndseed;
> -
> -	struct list_head tasks_list;	/* 'damon_task' objects */
> -
> -	/* callbacks */
> -	void (*sample_cb)(struct damon_ctx *context);
> -	void (*aggregate_cb)(struct damon_ctx *context);
> -};
> -
>  #define MAX_RFILE_PATH_LEN	256
>  
>  /* Get a random number in [l, r) */
> @@ -961,10 +908,20 @@ static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
>  	return 0;
>  }
>  
> +int damon_start(struct damon_ctx *ctx)
> +{
> +	return damon_turn_kdamond(ctx, true);
> +}
> +
> +int damon_stop(struct damon_ctx *ctx)
> +{
> +	return damon_turn_kdamond(ctx, false);
> +}
> +
>  /*
>   * This function should not be called while the kdamond is running.
>   */
> -static int damon_set_pids(struct damon_ctx *ctx,
> +int damon_set_pids(struct damon_ctx *ctx,
>  			unsigned long *pids, ssize_t nr_pids)
>  {
>  	ssize_t i;
> @@ -998,7 +955,7 @@ static int damon_set_pids(struct damon_ctx *ctx,
>   *
>   * Returns 0 on success, negative error code otherwise.
>   */
> -static int damon_set_recording(struct damon_ctx *ctx,
> +int damon_set_recording(struct damon_ctx *ctx,
>  				unsigned int rbuf_len, char *rfile_path)
>  {
>  	size_t rfile_path_len;
> @@ -1046,7 +1003,7 @@ static int damon_set_recording(struct damon_ctx *ctx,
>   *
>   * Returns 0 on success, negative error code otherwise.
>   */
> -static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> +int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
>  		unsigned long aggr_int, unsigned long regions_update_int,
>  		unsigned long min_nr_reg, unsigned long max_nr_reg)
>  {



* Re: [PATCH v6 08/14] mm/damon: Add debugfs interface
  2020-02-24 12:30 ` [PATCH v6 08/14] mm/damon: Add debugfs interface SeongJae Park
@ 2020-03-10  9:02   ` Jonathan Cameron
  2020-03-10 11:56     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:02 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:41 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit adds a debugfs interface for DAMON.
> 
> DAMON exports four files, ``attrs``, ``pids``, ``record``, and
> ``monitor_on`` under its debugfs directory, ``<debugfs>/damon/``.
> 
> Attributes
> ----------
> 
> Users can read and write the ``sampling interval``, ``aggregation
> interval``, ``regions update interval``, and min/max number of
> monitoring target regions by reading from and writing to the ``attrs``
> file.  For example, below commands set those values to 5 ms, 100 ms,
> 1,000 ms, 10, 1000 and check it again::
> 
>     # cd <debugfs>/damon
>     # echo 5000 100000 1000000 10 1000 > attrs
>     # cat attrs
>     5000 100000 1000000 10 1000
> 
> Target PIDs
> -----------
> 
> Users can read and write the pids of current monitoring target processes
> by reading from and writing to the ``pids`` file.  For example, below
> commands set processes having pids 42 and 4242 as the processes to be
> monitored and check it again::
> 
>     # cd <debugfs>/damon
>     # echo 42 4242 > pids
>     # cat pids
>     42 4242
> 
> Note that setting the pids doesn't start the monitoring.
> 
> Record
> ------
> 
> DAMON supports a direct monitoring result recording feature.  The
> recorded results are first written to a buffer and flushed to a file in
> batch.  Users can set the size of the buffer and the path to the result
> file by reading from and writing to the ``record`` file.  For example,
> below commands set the buffer to be 4 KiB and the results to be saved in
> ``/damon.data``::
> 
>     # cd <debugfs>/damon
>     # echo 4096 /damon.data > record
>     # cat record
>     4096 /damon.data
> 
> Turning On/Off
> --------------
> 
> You can check current status, start and stop the monitoring by reading
> from and writing to the ``monitor_on`` file.  Writing ``on`` to the file
> starts DAMON monitoring the target processes with the given attributes.
> Writing ``off`` to the file stops DAMON.  DAMON also stops if every
> target process is terminated.  Below example commands turn DAMON on and
> off, and check its status::
> 
>     # cd <debugfs>/damon
>     # echo on > monitor_on
>     # echo off > monitor_on
>     # cat monitor_on
>     off
> 
> Please note that you cannot write to the ``attrs`` and ``pids`` files
> while the monitoring is turned on.  If you write to the files while
> DAMON is running, ``-EINVAL`` will be returned.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

Some of the code in here seems a bit fragile and convoluted.

> ---
>  mm/damon.c | 377 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 376 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index b3e9b9da5720..facb1d7f121b 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -10,6 +10,7 @@
>  #define pr_fmt(fmt) "damon: " fmt
>  
>  #include <linux/damon.h>
> +#include <linux/debugfs.h>
>  #include <linux/delay.h>
>  #include <linux/kthread.h>
>  #include <linux/mm.h>
> @@ -46,6 +47,24 @@
>  /* Get a random number in [l, r) */
>  #define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l))
>  
> +/*
> + * For each 'sample_interval', DAMON checks whether each region is accessed or
> + * not.  It aggregates and keeps the access information (number of accesses to
> + * each region) for 'aggr_interval' and then flushes it to the result buffer
> + * once an 'aggr_interval' has passed.  And for each 'regions_update_interval', damon
> + * checks whether the memory mapping of the target tasks has changed (e.g., by
> + * mmap() calls from the applications) and applies the changes.
> + *
> + * All time intervals are in micro-seconds.
> + */
> +static struct damon_ctx damon_user_ctx = {
> +	.sample_interval = 5 * 1000,
> +	.aggr_interval = 100 * 1000,
> +	.regions_update_interval = 1000 * 1000,
> +	.min_nr_regions = 10,
> +	.max_nr_regions = 1000,
> +};
> +
>  /*
>   * Construct a damon_region struct
>   *
> @@ -1026,15 +1045,371 @@ int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
>  	return 0;
>  }
>  
> +/*
> + * debugfs functions

Seems unnecessary when their naming makes this clear.

> + */
> +
> +static ssize_t debugfs_monitor_on_read(struct file *file,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char monitor_on_buf[5];
> +	bool monitor_on;
> +	int ret;
> +
> +	spin_lock(&ctx->kdamond_lock);
> +	monitor_on = ctx->kdamond != NULL;
> +	spin_unlock(&ctx->kdamond_lock);
> +
> +	ret = snprintf(monitor_on_buf, 5, monitor_on ? "on\n" : "off\n");
> +
> +	return simple_read_from_buffer(buf, count, ppos, monitor_on_buf, ret);
> +}
> +
> +static ssize_t debugfs_monitor_on_write(struct file *file,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	ssize_t ret;
> +	bool on = false;
> +	char cmdbuf[5];
> +
> +	ret = simple_write_to_buffer(cmdbuf, 5, ppos, buf, count);
> +	if (ret < 0)
> +		return ret;
> +
> +	if (sscanf(cmdbuf, "%s", cmdbuf) != 1)
> +		return -EINVAL;
> +	if (!strncmp(cmdbuf, "on", 5))
> +		on = true;
> +	else if (!strncmp(cmdbuf, "off", 5))
> +		on = false;
> +	else
> +		return -EINVAL;
> +
> +	if (damon_turn_kdamond(ctx, on))
> +		return -EINVAL;
> +
> +	return ret;
> +}
> +
> +static ssize_t damon_sprint_pids(struct damon_ctx *ctx, char *buf, ssize_t len)
> +{
> +	struct damon_task *t;
> +	int written = 0;
> +	int rc;
> +
> +	damon_for_each_task(ctx, t) {
> +		rc = snprintf(&buf[written], len - written, "%lu ", t->pid);
> +		if (!rc)
> +			return -ENOMEM;
> +		written += rc;
> +	}
> +	if (written)
> +		written -= 1;
> +	written += snprintf(&buf[written], len - written, "\n");
> +	return written;
> +}
> +
> +static ssize_t debugfs_pids_read(struct file *file,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	ssize_t len;
> +	char pids_buf[320];
> +
> +	len = damon_sprint_pids(ctx, pids_buf, 320);
> +	if (len < 0)
> +		return len;
> +
> +	return simple_read_from_buffer(buf, count, ppos, pids_buf, len);
> +}
> +
> +/*
> + * Converts a string into an array of unsigned long integers
> + *
> + * Returns an array of unsigned long integers if the conversion succeeds, or
> + * NULL otherwise.
> + */
> +static unsigned long *str_to_pids(const char *str, ssize_t len,
> +				ssize_t *nr_pids)
> +{
> +	unsigned long *pids;
> +	const int max_nr_pids = 32;
> +	unsigned long pid;
> +	int pos = 0, parsed, ret;
> +
> +	*nr_pids = 0;
> +	pids = kmalloc_array(max_nr_pids, sizeof(unsigned long), GFP_KERNEL);
> +	if (!pids)
> +		return NULL;
> +	while (*nr_pids < max_nr_pids && pos < len) {
> +		ret = sscanf(&str[pos], "%lu%n", &pid, &parsed);
> +		pos += parsed;
> +		if (ret != 1)
> +			break;
> +		pids[*nr_pids] = pid;
> +		*nr_pids += 1;
> +	}
> +	if (*nr_pids == 0) {
> +		kfree(pids);
> +		pids = NULL;
> +	}
> +
> +	return pids;
> +}
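A userspace model of the parsing loop above, with one ordering fix worth noting: 'pos' should be advanced only after sscanf() reports a successful match, since '%n' leaves 'parsed' unset when the conversion fails:

```c
#include <assert.h>
#include <stdio.h>

/* Repeatedly consume one unsigned long with sscanf(), using %n to
 * learn how many characters were eaten so the scan can resume after
 * each number.  Returns how many pids were parsed, at most max. */
static int parse_pids(const char *str, int len,
		      unsigned long *pids, int max)
{
	unsigned long pid;
	int nr = 0, pos = 0, parsed;

	while (nr < max && pos < len) {
		if (sscanf(&str[pos], "%lu%n", &pid, &parsed) != 1)
			break;		/* check before using 'parsed' */
		pos += parsed;
		pids[nr++] = pid;
	}
	return nr;
}
```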
> +
> +static ssize_t debugfs_pids_write(struct file *file,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char *kbuf;
> +	unsigned long *targets;
> +	ssize_t nr_targets;
> +	ssize_t ret;
> +
> +	kbuf = kmalloc_array(count, sizeof(char), GFP_KERNEL);
> +	if (!kbuf)
> +		return -ENOMEM;
> +
> +	ret = simple_write_to_buffer(kbuf, 512, ppos, buf, count);

Why only 512?

> +	if (ret < 0)
> +		goto out;
> +
> +	targets = str_to_pids(kbuf, ret, &nr_targets);
> +	if (!targets) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	spin_lock(&ctx->kdamond_lock);
> +	if (ctx->kdamond)
> +		goto monitor_running;
> +
> +	damon_set_pids(ctx, targets, nr_targets);
> +	spin_unlock(&ctx->kdamond_lock);
> +
> +	goto free_targets_out;
> +
> +monitor_running:
> +	spin_unlock(&ctx->kdamond_lock);
> +	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
> +	ret = -EINVAL;
> +free_targets_out:
> +	kfree(targets);
> +out:
> +	kfree(kbuf);
> +	return ret;
> +}
> +
> +static ssize_t debugfs_record_read(struct file *file,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char record_buf[20 + MAX_RFILE_PATH_LEN];
> +	int ret;
> +
> +	ret = snprintf(record_buf, ARRAY_SIZE(record_buf), "%u %s\n",
> +			ctx->rbuf_len, ctx->rfile_path);
> +	return simple_read_from_buffer(buf, count, ppos, record_buf, ret);
> +}
> +
> +static ssize_t debugfs_record_write(struct file *file,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char *kbuf;
> +	unsigned int rbuf_len;
> +	char rfile_path[MAX_RFILE_PATH_LEN];
> +	ssize_t ret;
> +
> +	kbuf = kmalloc_array(count + 1, sizeof(char), GFP_KERNEL);
> +	if (!kbuf)
> +		return -ENOMEM;
> +	kbuf[count] = '\0';
> +
> +	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
> +	if (ret < 0)
> +		goto out;
> +	if (sscanf(kbuf, "%u %s",
> +				&rbuf_len, rfile_path) != 2) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	spin_lock(&ctx->kdamond_lock);
> +	if (ctx->kdamond)
> +		goto monitor_running;
> +
> +	damon_set_recording(ctx, rbuf_len, rfile_path);
> +	spin_unlock(&ctx->kdamond_lock);
> +
> +	goto out;
> +
> +monitor_running:
> +	spin_unlock(&ctx->kdamond_lock);
> +	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
> +	ret = -EINVAL;
> +out:
> +	kfree(kbuf);
> +	return ret;
> +}
> +
> +
> +static ssize_t debugfs_attrs_read(struct file *file,
> +		char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	char kbuf[128];
> +	int ret;
> +
> +	ret = snprintf(kbuf, ARRAY_SIZE(kbuf), "%lu %lu %lu %lu %lu\n",
> +			ctx->sample_interval, ctx->aggr_interval,
> +			ctx->regions_update_interval, ctx->min_nr_regions,
> +			ctx->max_nr_regions);
> +
> +	return simple_read_from_buffer(buf, count, ppos, kbuf, ret);
> +}
> +
> +static ssize_t debugfs_attrs_write(struct file *file,
> +		const char __user *buf, size_t count, loff_t *ppos)
> +{
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +	unsigned long s, a, r, minr, maxr;
> +	char *kbuf;
> +	ssize_t ret;
> +
> +	kbuf = kmalloc_array(count, sizeof(char), GFP_KERNEL);

Plain kmalloc() is fine for an array of characters.  The overflow checks of
kmalloc_array() cannot be relevant here.

> +	if (!kbuf)
> +		return -ENOMEM;
> +
> +	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
> +	if (ret < 0)
> +		goto out;
> +
> +	if (sscanf(kbuf, "%lu %lu %lu %lu %lu",
> +				&s, &a, &r, &minr, &maxr) != 5) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	spin_lock(&ctx->kdamond_lock);
> +	if (ctx->kdamond)
> +		goto monitor_running;
> +
> +	damon_set_attrs(ctx, s, a, r, minr, maxr);
> +	spin_unlock(&ctx->kdamond_lock);
> +
> +	goto out;
> +
> +monitor_running:
> +	spin_unlock(&ctx->kdamond_lock);
> +	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
> +	ret = -EINVAL;

This complex exit path is a bad idea from a maintainability point of view...
Just put the pr_err and spin_unlock in the error path above.
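
Something along these lines would do it — a user-space sketch with stand-in
types (`struct damon_ctx`, the lock helpers, and `apply_attrs()` are mocked
here for illustration, not the kernel code):

```c
#include <errno.h>
#include <stdio.h>

/* Stand-ins for the kernel context and its lock (assumptions). */
struct damon_ctx {
	int kdamond;		/* non-zero while the monitor thread runs */
	int lock_held;
};

static void ctx_lock(struct damon_ctx *ctx) { ctx->lock_held = 1; }
static void ctx_unlock(struct damon_ctx *ctx) { ctx->lock_held = 0; }

/*
 * One lock/unlock pair, one exit: the "kdamond is running" case is
 * handled inline instead of via forward-and-back goto labels.
 */
int apply_attrs(struct damon_ctx *ctx)
{
	int ret = 0;

	ctx_lock(ctx);
	if (ctx->kdamond) {
		fprintf(stderr, "kdamond is running. Turn it off first.\n");
		ret = -EINVAL;
	} else {
		/* damon_set_attrs(ctx, ...) would go here */
	}
	ctx_unlock(ctx);
	return ret;
}
```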

> +out:
> +	kfree(kbuf);
> +	return ret;
> +}
> +
> +static const struct file_operations monitor_on_fops = {
> +	.owner = THIS_MODULE,
> +	.read = debugfs_monitor_on_read,
> +	.write = debugfs_monitor_on_write,
> +};
> +
> +static const struct file_operations pids_fops = {
> +	.owner = THIS_MODULE,
> +	.read = debugfs_pids_read,
> +	.write = debugfs_pids_write,
> +};
> +
> +static const struct file_operations record_fops = {
> +	.owner = THIS_MODULE,
> +	.read = debugfs_record_read,
> +	.write = debugfs_record_write,
> +};
> +
> +static const struct file_operations attrs_fops = {
> +	.owner = THIS_MODULE,
> +	.read = debugfs_attrs_read,
> +	.write = debugfs_attrs_write,
> +};
> +
> +static struct dentry *debugfs_root;
> +
> +static int __init debugfs_init(void)

Prefix this function.  The chances of someday pulling in a header that
declares a debugfs_init feel rather too high!

> +{
> +	const char * const file_names[] = {"attrs", "record",
> +		"pids", "monitor_on"};
> +	const struct file_operations *fops[] = {&attrs_fops, &record_fops,
> +		&pids_fops, &monitor_on_fops};
> +	int i;
> +
> +	debugfs_root = debugfs_create_dir("damon", NULL);
> +	if (!debugfs_root) {
> +		pr_err("failed to create the debugfs dir\n");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(file_names); i++) {
> +		if (!debugfs_create_file(file_names[i], 0600, debugfs_root,
> +					NULL, fops[i])) {
> +			pr_err("failed to create %s file\n", file_names[i]);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int __init damon_init_user_ctx(void)
> +{
> +	int rc;
> +
> +	struct damon_ctx *ctx = &damon_user_ctx;
> +
> +	ktime_get_coarse_ts64(&ctx->last_aggregation);
> +	ctx->last_regions_update = ctx->last_aggregation;
> +
> +	ctx->rbuf_offset = 0;
> +	rc = damon_set_recording(ctx, 1024 * 1024, "/damon.data");
> +	if (rc)
> +		return rc;
> +
> +	ctx->kdamond = NULL;
> +	ctx->kdamond_stop = false;
> +	spin_lock_init(&ctx->kdamond_lock);
> +
> +	prandom_seed_state(&ctx->rndseed, 42);

:)

> +	INIT_LIST_HEAD(&ctx->tasks_list);
> +
> +	ctx->sample_cb = NULL;
> +	ctx->aggregate_cb = NULL;

Should already be set to 0.

> +
> +	return 0;
> +}
> +
>  static int __init damon_init(void)
>  {
> +	int rc;
> +
>  	pr_info("init\n");
>  
> -	return 0;
> +	rc = damon_init_user_ctx();
> +	if (rc)
> +		return rc;
> +
> +	return debugfs_init();

In theory no code should ever depend on debugfs succeeding...
There might be other DAMON users, so you should just eat the return
code.


>  }
>  
>  static void __exit damon_exit(void)
>  {
> +	damon_turn_kdamond(&damon_user_ctx, false);
> +	debugfs_remove_recursive(debugfs_root);
> +
> +	kfree(damon_user_ctx.rbuf);
> +	kfree(damon_user_ctx.rfile_path);
> +
>  	pr_info("exit\n");
>  }
>  




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing
  2020-02-24 12:30 ` [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing SeongJae Park
@ 2020-03-10  9:03   ` Jonathan Cameron
  2020-03-10 11:57     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:03 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:42 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit adds a tracepoint for DAMON's result buffer writing.  It is
> called for each write of the DAMON results and prints the result data.
> Therefore, it can be easily integrated with other tracepoint-supporting
> tracers such as perf.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

I'm curious, why at the flush of rbuf rather than using a more structured trace
point for each of the writes into rbuf?

Seems it would make more sense to have a tracepoint for each record written out.
Probably at the level of each task, though might be more elegant to do it at the
level of each region within a task and duplicate the header stuff.

> ---
>  include/trace/events/damon.h | 32 ++++++++++++++++++++++++++++++++
>  mm/damon.c                   |  4 ++++
>  2 files changed, 36 insertions(+)
>  create mode 100644 include/trace/events/damon.h
> 
> diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
> new file mode 100644
> index 000000000000..fb33993620ce
> --- /dev/null
> +++ b/include/trace/events/damon.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM damon
> +
> +#if !defined(_TRACE_DAMON_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_DAMON_H
> +
> +#include <linux/types.h>
> +#include <linux/tracepoint.h>
> +
> +TRACE_EVENT(damon_write_rbuf,
> +
> +	TP_PROTO(void *buf, const ssize_t sz),
> +
> +	TP_ARGS(buf, sz),
> +
> +	TP_STRUCT__entry(
> +		__dynamic_array(char, buf, sz)
> +	),
> +
> +	TP_fast_assign(
> +		memcpy(__get_dynamic_array(buf), buf, sz);
> +	),
> +
> +	TP_printk("dat=%s", __print_hex(__get_dynamic_array(buf),
> +			__get_dynamic_array_len(buf)))
> +);
> +
> +#endif /* _TRACE_DAMON_H */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> diff --git a/mm/damon.c b/mm/damon.c
> index facb1d7f121b..8faf3879f99e 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -9,6 +9,8 @@
>  
>  #define pr_fmt(fmt) "damon: " fmt
>  
> +#define CREATE_TRACE_POINTS
> +
>  #include <linux/damon.h>
>  #include <linux/debugfs.h>
>  #include <linux/delay.h>
> @@ -20,6 +22,7 @@
>  #include <linux/sched/mm.h>
>  #include <linux/sched/task.h>
>  #include <linux/slab.h>
> +#include <trace/events/damon.h>
>  
>  #define damon_get_task_struct(t) \
>  	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
> @@ -553,6 +556,7 @@ static void damon_flush_rbuffer(struct damon_ctx *ctx)
>   */
>  static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
>  {
> +	trace_damon_write_rbuf(data, size);
>  	if (!ctx->rbuf_len || !ctx->rbuf)
>  		return;
>  	if (ctx->rbuf_offset + size > ctx->rbuf_len)





* Re: [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document for DAMON
  2020-02-24 12:30 ` [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document " SeongJae Park
@ 2020-03-10  9:03   ` Jonathan Cameron
  2020-03-10 11:57     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10  9:03 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:44 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit adds a simple document for DAMON under
> `Documentation/admin-guide/mm`.
> 

Nice document to get people started.

Certainly worked for me doing some initial playing around.

In general this is an interesting piece of work.   I can see there are numerous
possible avenues to explore in making the monitoring more flexible, or potentially
better at tracking usage whilst not breaking your fundamental 'bounded overhead'
requirement.   Will be fun perhaps to explore some of those.

I'll do some more exploring and perhaps try some real world workloads.

Thanks,

Jonathan


> Signed-off-by: SeongJae Park <sjpark@amazon.de>
> ---
>  .../admin-guide/mm/data_access_monitor.rst    | 414 ++++++++++++++++++
>  Documentation/admin-guide/mm/index.rst        |   1 +
>  2 files changed, 415 insertions(+)
>  create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst
> 
> diff --git a/Documentation/admin-guide/mm/data_access_monitor.rst b/Documentation/admin-guide/mm/data_access_monitor.rst
> new file mode 100644
> index 000000000000..4d836c3866e2
> --- /dev/null
> +++ b/Documentation/admin-guide/mm/data_access_monitor.rst
> @@ -0,0 +1,414 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==========================
> +DAMON: Data Access MONitor
> +==========================
> +
> +Introduction
> +============
> +
> +Memory management decisions can normally be more efficient if finer data access
> +information is available.  However, because finer information usually comes
> +with higher overhead, most systems including Linux made a tradeoff: Forgive
> +some wise decisions and use coarse information and/or light-weight heuristics.

I'm not sure what "Forgive some wise decisions" means...

> +
> +A number of experimental data access pattern aware memory management
> +optimizations say the sacrifices are huge (2.55x slowdown).  

Good to have a reference.

> However, none of those has successfully adopted to

adopted into the 

> +Linux kernel mainly due to the absence of a scalable and efficient data access
> +monitoring mechanism.
> +
> +DAMON is a data access monitoring solution for the problem.  It is 1) accurate
> +enough for the DRAM level memory management, 2) light-weight enough to be
> +applied online, and 3) keeps predefined upper-bound overhead regardless of the
> +size of target workloads (thus scalable).
> +
> +DAMON is implemented as a standalone kernel module and provides several simple
> +interfaces.  Owing to that, though it is mainly designed for the kernel's
> +memory management mechanisms, it can also be used by a wide range of user
> +space programs and people.
> +
> +
> +Frequently Asked Questions
> +==========================
> +
> +Q: Why not integrated with perf?
> +A: From the perspective of perf-like profilers, DAMON can be thought of as a
> +data source in the kernel, like tracepoints, pressure stall information (psi),
> +or idle page tracking.  Thus, it can be easily integrated with those.  However,
> +this patchset doesn't provide a fancy perf integration because the current
> +stage of DAMON development is focused on its core logic only.  That said, DAMON
> +already provides two interfaces for user space programs, which are based on
> +debugfs and tracepoints, respectively.  Using the tracepoint interface, you can
> +use DAMON with perf.  This patchset also provides a debugfs-based user space
> +tool for DAMON.  It can be used to record, visualize, and analyze the data
> +access patterns of target processes in a convenient way.
> +
> +Q: Why a new module, instead of extending perf or other tools?
> +A: First, DAMON aims to be used by other programs including the kernel.
> +Therefore, having a dependency on specific tools like perf is not desirable.
> +Second, because it needs to be as lightweight as possible so that it can be
> +used online, any unnecessary overhead such as kernel - user space context
> +switching cost should be avoided.  These are the two biggest reasons why
> +DAMON is implemented in the kernel space.  The idle page tracking subsystem
> +would be the kernel module that seems most similar to DAMON.  However, its own
> +interface is not compatible with DAMON.  Also, its internal implementation
> +has no common part to be reused by DAMON.
> +
> +Q: Can 'perf mem' provide the data required for DAMON?
> +A: On the systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
> +bits at a low level.  Other H/W or S/W features that can be used for the
> +purpose could also be used.  However, as explained in the above answer, DAMON
> +needs to be implemented in the kernel space.
> +
> +
> +Expected Use-cases
> +==================
> +
> +A straightforward use case of DAMON would be program behavior analysis.
> +With the DAMON output, users can confirm whether the program is running as
> +intended or not.  This will be useful for debugging and for testing design
> +points.
> +
> +The monitored results can also be useful for measuring the dynamic working set
> +size of workloads.  This will be useful for the administration of memory
> +overcommitted systems, or for selecting the environments (e.g., containers
> +providing different amounts of memory) for your workloads.
> +
> +If you are a programmer, you can optimize your program by managing the memory
> +based on the actual data access pattern.  For example, you can identify the
> +dynamic hotness of your data using DAMON and call ``mlock()`` to keep your hot
> +data in DRAM, or call ``madvise()`` with ``MADV_PAGEOUT`` to proactively
> +reclaim cold data.  Even if your program is guaranteed to never encounter
> +memory pressure, you can still improve performance by applying the DAMON
> +outputs to calls of ``MADV_HUGEPAGE`` and ``MADV_NOHUGEPAGE``.  More creative
> +optimizations would be possible.  Our evaluations of DAMON include a
> +straightforward optimization using ``mlock()``.  Please refer to the below
> +Evaluation section for more detail.
> +
> +As DAMON incurs very low overhead, such optimizations can be applied not only
> +offline, but also online.  Also, there is no reason to limit such optimizations
> +to the user space.  Several parts of the kernel's memory management mechanisms
> +could also be optimized using DAMON.  Reclamation, the THP (de)promotion
> +decisions, and compaction would be such candidates.
> +
> +
> +Mechanisms of DAMON
> +===================
> +
> +
> +Basic Access Check
> +------------------
> +
> +DAMON basically reports which pages are accessed how frequently.  The report
> +is passed to users in binary format via a ``result file`` whose path users can
> +set.  Note that the frequency is not an absolute number of accesses, but a
> +relative frequency among the pages of the target workloads.
> +
> +Users can also control the resolution of the reports by setting two time
> +intervals, ``sampling interval`` and ``aggregation interval``.  In detail,
> +DAMON checks access to each page per ``sampling interval``, aggregates the
> +results (counts the number of the accesses to each page), and reports the
> +aggregated results per ``aggregation interval``.  For the access check of each
> +page, DAMON uses the Accessed bits of PTEs.
> +
> +This is thus similar to the previously mentioned periodic access check based
> +mechanisms, whose overhead increases as the size of the target process
> +grows.
> +
> +
> +Region Based Sampling
> +---------------------
> +
> +To avoid the unbounded increase of the overhead, DAMON groups a number of
> +adjacent pages that are assumed to have the same access frequency into a
> +region.  As long as the assumption (pages in a region have the same access
> +frequency) holds, only one page in the region needs to be checked.  Thus, for
> +each ``sampling interval``, DAMON randomly picks one page in each region and
> +clears its Accessed bit.  After one more ``sampling interval``, DAMON reads the
> +Accessed bit of the page and increases the access frequency of the region if
> +the bit has been set meanwhile.  Therefore, the monitoring overhead is
> +controllable by setting the number of regions.  DAMON allows users to set the
> +minimum and maximum number of regions for the trade-off.
> +
> +Except for the assumption, this is almost the same as the above-mentioned
> +miniature-like static region based sampling.  In other words, this scheme
> +cannot preserve the quality of the output if the assumption is not guaranteed.
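
The sampling scheme described above can be sketched in a few lines of
user-space C.  `pick` stands in for DAMON's random page choice and
`page_accessed` for the PTE Accessed bits; both are illustrative assumptions,
not the kernel interfaces:

```c
#include <stddef.h>

/* A region: [start, end) page indexes plus an access counter. */
struct region {
	size_t start, end;
	unsigned int nr_accesses;
};

/*
 * One sampling step of the region-based idea: check a single page
 * per region and bump the region counter if it was accessed.  The
 * injected 'pick' callback keeps the sketch deterministic where
 * DAMON would pick a random page in the region.
 */
void sample_regions(struct region *regions, size_t nr_regions,
		    const int *page_accessed,
		    size_t (*pick)(const struct region *))
{
	for (size_t i = 0; i < nr_regions; i++)
		if (page_accessed[pick(&regions[i])])
			regions[i].nr_accesses++;
}

/* Trivial deterministic picker for the example. */
size_t pick_first(const struct region *r) { return r->start; }
```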
> +
> +
> +Adaptive Regions Adjustment
> +---------------------------
> +
> +At the beginning of the monitoring, DAMON constructs the initial regions by
> +evenly splitting the memory mapped address space of the process into the
> +user-specified minimum number of regions.  In this initial state, the
> +assumption is normally not kept and thus the quality could be low.  To keep the
> +assumption as much as possible, DAMON adaptively merges and splits each region.
> +For each ``aggregation interval``, it compares the access frequencies of
> +adjacent regions and merges those if the frequency difference is small.  Then,
> +after it reports and clears the aggregated access frequency of each region, it
> +splits each region into two regions if the total number of regions is smaller
> +than half of the user-specified maximum number of regions.
> +
> +In this way, DAMON provides its best-effort quality and minimal overhead while
> +keeping the bounds users set for their trade-off.
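
For illustration, the merge half of this adjustment might look like the
following user-space sketch.  The real code may combine the access counts
differently (e.g., a size-weighted average); this sketch simply keeps the
larger count, as noted in the comment:

```c
#include <stddef.h>

struct damon_region {
	unsigned long start, end;	/* address range [start, end) */
	unsigned int nr_accesses;
};

/*
 * Merge pass of the adaptive adjustment: fold a region into its
 * predecessor when their access frequencies differ by at most
 * 'thres'.  Returns the new number of regions.  This is a
 * user-space sketch, not the kernel implementation.
 */
size_t merge_adjacent(struct damon_region *r, size_t nr, unsigned int thres)
{
	size_t out = 0;

	for (size_t i = 1; i < nr; i++) {
		unsigned int a = r[out].nr_accesses, b = r[i].nr_accesses;

		if ((a > b ? a - b : b - a) <= thres) {
			/* averaging-free sketch: keep the larger count */
			r[out].end = r[i].end;
			r[out].nr_accesses = a > b ? a : b;
		} else {
			r[++out] = r[i];
		}
	}
	return nr ? out + 1 : 0;
}
```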
> +
> +
> +Applying Dynamic Memory Mappings
> +--------------------------------
> +
> +Only a few small parts in the super-huge virtual address space of the
> +processes are mapped to physical memory and accessed.  Thus, tracking the
> +unmapped address regions is just wasteful.  However, tracking every memory
> +mapping change might incur an overhead.  For that reason, DAMON applies the
> +dynamic memory mapping changes to the tracking regions only once per
> +user-specified time interval (``regions update interval``).

One key part of the approach is the three-regions construction.  Perhaps talk
about that here somewhere?

> +
> +
> +``debugfs`` Interface
> +=====================
> +
> +DAMON exports four files, ``attrs``, ``pids``, ``record``, and ``monitor_on``
> +under its debugfs directory, ``<debugfs>/damon/``.
> +
> +Attributes
> +----------
> +
> +Users can read and write the ``sampling interval``, ``aggregation interval``,
> +``regions update interval``, and min/max number of monitoring target regions by
> +reading from and writing to the ``attrs`` file.  For example, the below
> +commands set those values to 5 ms, 100 ms, 1,000 ms, 10, and 1000, and check
> +them again::
> +
> +    # cd <debugfs>/damon
> +    # echo 5000 100000 1000000 10 1000 > attrs

I'm personally a great fan of human readable interfaces.  Could we just
split this into one file per interval?  That way the file naming would
make it self-describing.

> +    # cat attrs
> +    5000 100000 1000000 10 1000
> +
> +Target PIDs
> +-----------
> +
> +Users can read and write the pids of current monitoring target processes by
> +reading from and writing to the ``pids`` file.  For example, the below commands
> +set the processes having pids 42 and 4242 as the monitoring targets, and check
> +them again::
> +
> +    # cd <debugfs>/damon
> +    # echo 42 4242 > pids
> +    # cat pids
> +    42 4242
> +
> +Note that setting the pids doesn't start the monitoring.
> +
> +Record
> +------
> +
> +DAMON supports a direct recording feature for the monitoring results.  The
> +recorded results are first written to a buffer and flushed to a file in batch.
> +Users can set the size of the buffer and the path to the result file by reading
> +from and writing to the ``record`` file.  For example, the below commands set
> +the buffer to be 4 KiB and the result to be saved in ``/damon.data``::
> +
> +    # cd <debugfs>/damon
> +    # echo "4096 /damon.data" > pids

write it to record, not pids.

> +    # cat record
> +    4096 /damon.data
> +
> +Turning On/Off
> +--------------
> +
> +You can check the current status, and start or stop the monitoring, by reading
> +from and writing to the ``monitor_on`` file.  Writing ``on`` to the file makes
> +DAMON start monitoring the target processes with the attributes.  Writing
> +``off`` to the file stops DAMON.  DAMON also stops if every target process is
> +terminated.  The below example commands turn on, turn off, and check the
> +status of DAMON::
> +
> +    # cd <debugfs>/damon
> +    # echo on > monitor_on
> +    # echo off > monitor_on
> +    # cat monitor_on
> +    off
> +
> +Please note that you cannot write to the ``attrs`` and ``pids`` files while the
> +monitoring is turned on.  If you write to the files while DAMON is running,
> +``-EINVAL`` will be returned.

Perhaps -EBUSY would be more informative?  It implies the values might be
fine, but the issue is 'not now'.
> +
> +
> +User Space Tool for DAMON
> +=========================
> +
> +There is a user space tool for DAMON, ``/tools/damon/damo``.  It provides
> +another user interface which is more convenient than the debugfs interface.
> +Nevertheless, note that it is only aimed to be used as a minimal reference for
> +DAMON's debugfs interfaces and for tests of DAMON itself.  Based on the
> +debugfs interface, you can create other cool and more convenient user space
> +tools.
> +
> +The interface of the tool is basically subcommand based.  You can almost always
> +use the ``-h`` option to get help on the use of each subcommand.  Currently, it
> +supports two subcommands, ``record`` and ``report``.
> +
> +
> +Recording Data Access Pattern
> +-----------------------------
> +
> +The ``record`` subcommand records the data access pattern of a target process
> +in a file (``./damon.data`` by default) using DAMON.  You can specify the
> +target as either a pid or a command to execute as a process.  The below example
> +shows a command target usage::
> +
> +    # cd <kernel>/tools/damon/
> +    # ./damo record "sleep 5"
> +
> +The tool will execute ``sleep 5`` by itself and record the data access patterns
> +of the process.  Below example shows a pid target usage::
> +
> +    # sleep 5 &
> +    # ./damo record `pidof sleep`
> +
> +You can set more detailed attributes and path to the recorded data file using
> +optional arguments to the subcommand.  Use the ``-h`` option for more help.
> +
> +
> +Analyzing Data Access Pattern
> +-----------------------------
> +
> +The ``report`` subcommand reads a data access pattern record file (if not
> +explicitly specified, it reads the ``./damon.data`` file if it exists) and
> +generates reports of various types.  You can specify what type of report you
> +want using a sub-subcommand of the ``report`` subcommand.  For supported types,
> +pass the ``-h`` option to the ``report`` subcommand.
> +
> +
> +raw
> +~~~
> +
> +The ``raw`` sub-subcommand simply transforms the record, which stores the data
> +access patterns in binary format, into human readable text.  For example::
> +
> +    $ ./damo report raw
> +    start_time:  193485829398
> +    rel time:                0
> +    nr_tasks:  1
> +    pid:  1348
> +    nr_regions:  4
> +    560189609000-56018abce000(  22827008):  0
> +    7fbdff59a000-7fbdffaf1a00(   5601792):  0
> +    7fbdffaf1a00-7fbdffbb5000(    800256):  1
> +    7ffea0dc0000-7ffea0dfd000(    249856):  0
> +
> +    rel time:        100000731
> +    nr_tasks:  1
> +    pid:  1348
> +    nr_regions:  6
> +    560189609000-56018abce000(  22827008):  0
> +    7fbdff59a000-7fbdff8ce933(   3361075):  0
> +    7fbdff8ce933-7fbdffaf1a00(   2240717):  1
> +    7fbdffaf1a00-7fbdffb66d99(    480153):  0
> +    7fbdffb66d99-7fbdffbb5000(    320103):  1
> +    7ffea0dc0000-7ffea0dfd000(    249856):  0
> +
> +The first line shows the timestamp (in nanoseconds) at which recording started.
> +Records of data access patterns follow.  Each record is separated by a blank
> +line.  Each record first specifies the recorded time (``rel time``) and the
> +number of monitored tasks in this record (``nr_tasks``).  Records of the data
> +access pattern for each task follow.  Each data access pattern for each task
> +first shows its pid (``pid``) and the number of monitored virtual address
> +regions in this access pattern (``nr_regions``).  After that, each line shows
> +the start/end address, size, and number of monitored accesses to the region
> +for each of the regions.
> +
> +
> +heats
> +~~~~~
> +
> +The ``raw`` type shows detailed information, but it is exhausting to read and
> +analyze manually.  For that reason, ``heats`` plots the data in heatmap form,
> +using time as the x-axis, virtual address as the y-axis, and access frequency
> +as the z-axis.  Users can also set the resolution and start/end point of each
> +axis via optional arguments.  For example::
> +
> +    $ ./damo report heats --tres 3 --ares 3
> +    0               0               0.0
> +    0               7609002         0.0
> +    0               15218004        0.0
> +    66112620851     0               0.0
> +    66112620851     7609002         0.0
> +    66112620851     15218004        0.0
> +    132225241702    0               0.0
> +    132225241702    7609002         0.0
> +    132225241702    15218004        0.0
> +
> +This command shows the recorded access pattern of the ``sleep`` command using 3
> +data points for each of time axis and address axis.  Therefore, it shows 9 data
> +points in total.
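
The binning behind those 9 data points can be sketched as follows.  This is a
toy point-sample binning for illustration; damo's actual aggregation over
records is more involved:

```c
#include <stddef.h>

/*
 * Toy version of the 'heats' binning: accumulate an access
 * frequency sample at (time t, address a) into a tres x ares
 * grid spanning [t_min, t_max] x [a_min, a_max].
 */
void bin_heat(double *grid, int tres, int ares,
	      unsigned long t, unsigned long t_min, unsigned long t_max,
	      unsigned long a, unsigned long a_min, unsigned long a_max,
	      double freq)
{
	int ti = (int)((t - t_min) * tres / (t_max - t_min + 1));
	int ai = (int)((a - a_min) * ares / (a_max - a_min + 1));

	grid[ti * ares + ai] += freq;
}
```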
> +
> +Users can easily convert this text output into a heatmap image or another 3D
> +representation using various tools such as 'gnuplot'.  The ``heats``
> +sub-subcommand also provides 'gnuplot' based heatmap image creation.  For this,
> +you can use the ``--heatmap`` option.  Also, note that because it uses
> +'gnuplot' internally, it will fail if 'gnuplot' is not installed on your
> +system.  For example::
> +
> +    $ ./damo report heats --heatmap heatmap.png
> +
> +This creates a ``heatmap.png`` file containing the heatmap image.  It supports
> +the ``pdf``, ``png``, ``jpeg``, and ``svg`` formats.
> +
> +For proper zoom in / zoom out, you need to see the layout of the record.  For
> +that, use the '--guide' option.  If the option is given, it will provide useful
> +information about the records in the record file.  For example::
> +
> +    $ ./damo report heats --guide
> +    pid:1348
> +    time: 193485829398-198337863555 (4852034157)
> +    region   0: 00000094564599762944-00000094564622589952 (22827008)
> +    region   1: 00000140454009610240-00000140454016012288 (6402048)
> +    region   2: 00000140731597193216-00000140731597443072 (249856)
> +
> +The output shows monitored regions (start and end addresses in byte) and
> +monitored time duration (start and end time in nanosecond) of each target task.
> +Therefore, it would be wise to plot each region separately rather than plotting
> +the entire address space in one heatmap, because the gaps between the regions
> +are so huge in this case.
> +
> +
> +wss
> +~~~
> +
> +The ``wss`` type shows the distribution of, or the chronological changes in,
> +the working set size of the recorded workload, using the records.  For
> +example::
> +
> +    $ ./damo report wss
> +    # <percentile> <wss>
> +    # pid   1348
> +    # avr:  66228
> +    0       0
> +    25      0
> +    50      0
> +    75      0
> +    100     1920615
> +
> +Without any option, it shows the distribution of the working set sizes as
> +above.  Basically it shows 0th, 25th, 50th, 75th, and 100th percentile and
> +average of the measured working set sizes in the access pattern records.  In
> +this case, the working set size was zero up to the 75th percentile, but
> +1,920,615 bytes at maximum and 66,228 bytes on average.
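
Those five figures are simple percentiles over the per-record working set
sizes.  A nearest-rank sketch (an illustration; damo's exact interpolation may
differ):

```c
#include <stdlib.h>

static int cmp_ul(const void *a, const void *b)
{
	unsigned long x = *(const unsigned long *)a;
	unsigned long y = *(const unsigned long *)b;

	return x < y ? -1 : x > y;
}

/*
 * Nearest-rank percentile over working set size samples, the way
 * the 0/25/50/75/100th figures in the wss report can be read.
 * Note: sorts 'samples' in place.
 */
unsigned long wss_percentile(unsigned long *samples, size_t n, int pct)
{
	size_t idx;

	qsort(samples, n, sizeof(*samples), cmp_ul);
	idx = (n - 1) * (size_t)pct / 100;
	return samples[idx];
}
```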
> +
> +By setting the sort key of the percentile using '--sortby', you can also see
> +how the working set size changed over time.  For example::
> +
> +    $ ./damo report wss --sortby time
> +    # <percentile> <wss>
> +    # pid   1348
> +    # avr:  66228
> +    0       0
> +    25      0
> +    50      0
> +    75      0
> +    100     0
> +
> +The average is still 66,228.  However, because we sorted the working set sizes
> +by recorded time and the accesses were very short, we cannot see when the
> +accesses were made.
> +
> +Users can specify the resolution of the distribution (``--range``).  It also
> +supports 'gnuplot' based simple visualization (``--plot``) of the distribution.
> diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
> index 11db46448354..d3d0ba373eb6 100644
> --- a/Documentation/admin-guide/mm/index.rst
> +++ b/Documentation/admin-guide/mm/index.rst
> @@ -27,6 +27,7 @@ the Linux memory management.
>  
>     concepts
>     cma_debugfs
> +   data_access_monitor
>     hugetlbpage
>     idle_page_tracking
>     ksm





* Re: Re: [PATCH v6 01/14] mm: Introduce Data Access MONitor (DAMON)
  2020-03-10  8:54   ` Jonathan Cameron
@ 2020-03-10 11:50     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:50 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 08:54:05 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> Apologies if anyone gets these twice. I had an email server throttling
> issue yesterday.
> 
> On Mon, 24 Feb 2020 13:30:34 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit introduces a kernel module named DAMON.  Note that this
> > commit is implementing only the stub for the module load/unload, basic
> > data structures, and simple manipulation functions of the structures to
> > keep the size of commit small.  The core mechanisms of DAMON will be
> > implemented one by one by following commits.
> 
> Interesting piece of work.  I'm reviewing this partly as an exercise in
> understanding it, but I'll point out minor stuff on the basis I might
> as well whilst I'm here. ;)  Note I review bottom up so some comments
> won't make much sense read from the top.

Thanks for the review, Jonathan :)  I added replies inline below, and I agree
with all of your suggestions.  Will apply them in the next spin.

> 
> > 
> > Brief Introduction
> > ==================
> 
> I'd keep this level of intro for the cover letter / docs.  It's not
> particularly useful in commit message it git.

Agreed.

> 
> > 
[...]
> >  
> > +config DAMON
> > +	tristate "Data Access Monitor"
> > +	depends on MMU
> > +	default n
> 
> No need to specify a default of n.

Got it.

> 
> > +	help
> > +	  Provides data access monitoring.
> > +
> > +	  DAMON is a kernel module that allows users to monitor the actual
> > +	  memory access pattern of specific user-space processes.  It aims to
> > +	  be 1) accurate enough to be useful for performance-centric domains,
> > +	  and 2) sufficiently light-weight so that it can be applied online.
> > +
> >  endmenu
[...]
> > +/*
> > + * Construct a damon_region struct
> > + *
> > + * Returns the pointer to the new struct if success, or NULL otherwise
> > + */
> > +static struct damon_region *damon_new_region(struct damon_ctx *ctx,
> > +				unsigned long vm_start, unsigned long vm_end)
> > +{
> > +	struct damon_region *ret;
> 
> I'd give this a different variable name.  Expectation in kernel is often
> that ret is simply an magic handle to be passed on.  Don't normally expect
> to set elements of it.  I'd go long hand and call it region.

Nice point, will change the name to 'region'.

> 
> > +
> > +	ret = kmalloc(sizeof(struct damon_region), GFP_KERNEL);
> 
> sizeof(*ret)

Thanks for catching it!  Will apply to other similar cases.

> 
> > +	if (!ret)
> > +		return NULL;
> 
> blank line.

Good suggestion.

> 
> > +	ret->vm_start = vm_start;
> > +	ret->vm_end = vm_end;
> > +	ret->nr_accesses = 0;
> > +	ret->sampling_addr = damon_rand(ctx, vm_start, vm_end);
> > +	INIT_LIST_HEAD(&ret->list);
> > +
> > +	return ret;
> > +}
> > +
> > +/*
> > + * Add a region between two other regions
> Interestingly even the list.h comments for __list_add call this
> function "insert".   No idea why it isn't simply called that..
> 
> Perhaps damon_insert_region would be clearer and avoid need
> for comment?

I just wanted to make the name consistent with the 'list.h' file, but your
suggestion sounds better.  Will change so.

> 
> > + */
> > +static inline void damon_add_region(struct damon_region *r,
> > +		struct damon_region *prev, struct damon_region *next)
> > +{
> > +	__list_add(&r->list, &prev->list, &next->list);
> > +}
> > +
> > +/*
> > + * Append a region to a task's list of regions
> 
> I'd argue the naming is sufficient that the comment adds little.

Yes, will delete it.

> 
> > + */
> > +static void damon_add_region_tail(struct damon_region *r, struct damon_task *t)
> > +{
> > +	list_add_tail(&r->list, &t->regions_list);
> > +}
> > +
> > +/*
> > + * Delete a region from its list
> 
> The list is an implementation detail. I'd not mention that in the comments.

Nice suggestion.

> 
> > + */
> > +static void damon_del_region(struct damon_region *r)
> > +{
> > +	list_del(&r->list);
> > +}
> > +
> > +/*
> > + * De-allocate a region
> 
> Obvious comment - seem rot risk note below.

Agreed.

> 
> > + */
> > +static void damon_free_region(struct damon_region *r)
> > +{
> > +	kfree(r);
> > +}
> > +
> > +static void damon_destroy_region(struct damon_region *r)
> > +{
> > +	damon_del_region(r);
> > +	damon_free_region(r);
> > +}
> > +
> > +/*
> > + * Construct a damon_task struct
> > + *
> > + * Returns the pointer to the new struct if success, or NULL otherwise
> > + */
> > +static struct damon_task *damon_new_task(unsigned long pid)
> > +{
> > +	struct damon_task *t;
> > +
> > +	t = kmalloc(sizeof(struct damon_task), GFP_KERNEL);
> 
> sizeof(*t) is probably less error prone if this code is maintained
> in the long run.

Good point, will apply this to the other cases as well.

> 
> > +	if (!t)
> > +		return NULL;
> 
> blank line.

Will add it.

> 
> > +	t->pid = pid;
> > +	INIT_LIST_HEAD(&t->regions_list);
> > +
> > +	return t;
> > +}
> > +
> > +/* Returns n-th damon_region of the given task */
> > +struct damon_region *damon_nth_region_of(struct damon_task *t, unsigned int n)
> > +{
> > +	struct damon_region *r;
> > +	unsigned int i;
> > +
> > +	i = 0;
> 	unsigned int i = 0;

Yes, that is much better.

> 
> > +	damon_for_each_region(r, t) {
> > +		if (i++ == n)
> > +			return r;
> > +	}
> 
> blank line helps readability a little.

Yes, indeed.

> 
> > +	return NULL;
> > +}
> > +
> > +static void damon_add_task_tail(struct damon_ctx *ctx, struct damon_task *t)
> 
> I'm curious, do we care that it's on the tail?  If not I'd look on that as an
> implementation detail and just call this 
> 
> damon_add_task()

I named it to be consistent with the 'damon_add_region[_tail]()' functions,
but since you suggested renaming 'damon_add_region()', it no longer needs the
suffix.  Will change the name.

> 
> > +{
> > +	list_add_tail(&t->list, &ctx->tasks_list);
> > +}
> > +
> > +static void damon_del_task(struct damon_task *t)
> > +{
> > +	list_del(&t->list);
> > +}
> > +
> > +static void damon_free_task(struct damon_task *t)
> > +{
> > +	struct damon_region *r, *next;
> > +
> > +	damon_for_each_region_safe(r, next, t)
> > +		damon_free_region(r);
> > +	kfree(t);
> > +}
> > +
> > +static void damon_destroy_task(struct damon_task *t)
> > +{
> > +	damon_del_task(t);
> > +	damon_free_task(t);
> > +}
> > +
> > +/*
> > + * Returns number of monitoring target tasks
> 
> As below, kind of obvious so just room for rot.

Agreed.

> 
> > + */
> > +static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
> > +{
> > +	struct damon_task *t;
> > +	unsigned int ret = 0;
> > +
> > +	damon_for_each_task(ctx, t)
> > +		ret++;
> > +	return ret;
> > +}
> > +
> > +/*
> > + * Returns the number of target regions for a given target task
> 
> Always a trade off between useful comments and possibility of docs
> rotting.  I'd drop this comment certainly.
> The function name is self explanatory.

Agreed!

> 
> > + */
> > +static unsigned int nr_damon_regions(struct damon_task *t)
> > +{
> > +	struct damon_region *r;
> > +	unsigned int ret = 0;
> > +
> > +	damon_for_each_region(r, t)
> > +		ret++;
> 
> Blank line here would help readability a tiny bit.
> Same in other places where we have something followed by a nice
> simple return statement.

Yes, indeed.

> 
> > +	return ret;
> > +}
> > +
> > +static int __init damon_init(void)
> > +{
> > +	pr_info("init\n");
> 
> Drop these. They are just noise.

Right, it's just noise, will remove.


Thank you again for kind review, Jonathan!


Thanks,
SeongJae Park

> 
> > +
> > +	return 0;
> > +}
> > +
> > +static void __exit damon_exit(void)
> > +{
> > +	pr_info("exit\n");
> > +}
> > +
> > +module_init(damon_init);
> > +module_exit(damon_exit);
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_AUTHOR("SeongJae Park <sjpark@amazon.de>");
> > +MODULE_DESCRIPTION("DAMON: Data Access MONitor");
> 



* Re: Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-10  8:57   ` Jonathan Cameron
@ 2020-03-10 11:52     ` SeongJae Park
  2020-03-10 15:55       ` Jonathan Cameron
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:52 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

Added replies to every one of your comments inline below.  I agree with all of
your points and will apply them in the next spin! :)

On Tue, 10 Mar 2020 08:57:21 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:35 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit implements DAMON's basic access check and region based
> > sampling mechanisms.  This change would seems make no sense, mainly
> > because it is only a part of the DAMON's logics.  Following two commits
> > will make more sense.
> > 
> > This commit also exports `lookup_page_ext()` to GPL modules because
> > DAMON uses the function but also supports the module build.
> 
> Do that as a separate patch before this one.  Makes it easy to spot.

Agreed, will do so.

> 
> > 
[...]
> 
> Various things inline. In particularly can you make use of standard
> kthread_stop infrastructure rather than rolling your own?

Nice suggestion!  That will be much better, will use it.

> 
> > ---
> >  mm/damon.c    | 509 ++++++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/page_ext.c |   1 +
> >  2 files changed, 510 insertions(+)
> > 
> > diff --git a/mm/damon.c b/mm/damon.c
> > index aafdca35b7b8..6bdeb84d89af 100644
> > --- a/mm/damon.c
> > +++ b/mm/damon.c
> > @@ -9,9 +9,14 @@
> >  
[...]
> > +/*
> > + * Get the mm_struct of the given task
> > + *
> > + * Callser should put the mm_struct after use, unless it is NULL.
> 
> Caller 

Good eye!  Will fix it.

> 
> > + *
> > + * Returns the mm_struct of the task on success, NULL on failure
> > + */
> > +static struct mm_struct *damon_get_mm(struct damon_task *t)
> > +{
> > +	struct task_struct *task;
> > +	struct mm_struct *mm;
> > +
> > +	task = damon_get_task_struct(t);
> > +	if (!task)
> > +		return NULL;
> > +
> > +	mm = get_task_mm(task);
> > +	put_task_struct(task);
> > +	return mm;
> > +}
> > +
> > +/*
> > + * Size-evenly split a region into 'nr_pieces' small regions
> > + *
> > + * Returns 0 on success, or negative error code otherwise.
> > + */
> > +static int damon_split_region_evenly(struct damon_ctx *ctx,
> > +		struct damon_region *r, unsigned int nr_pieces)
> > +{
> > +	unsigned long sz_orig, sz_piece, orig_end;
> > +	struct damon_region *piece = NULL, *next;
> > +	unsigned long start;
> > +
> > +	if (!r || !nr_pieces)
> > +		return -EINVAL;
> > +
> > +	orig_end = r->vm_end;
> > +	sz_orig = r->vm_end - r->vm_start;
> > +	sz_piece = sz_orig / nr_pieces;
> > +
> > +	if (!sz_piece)
> > +		return -EINVAL;
> > +
> > +	r->vm_end = r->vm_start + sz_piece;
> > +	next = damon_next_region(r);
> > +	for (start = r->vm_end; start + sz_piece <= orig_end;
> > +			start += sz_piece) {
> > +		piece = damon_new_region(ctx, start, start + sz_piece);
> > +		damon_add_region(piece, r, next);
> > +		r = piece;
> > +	}
> 
> I'd add a comment here. I think this next bit is to catch any rounding error
> holes, but I'm not 100% sure.

Yes, will make it clearer.

> 
> > +	if (piece)
> > +		piece->vm_end = orig_end;
> 
> blank line here.

Will add.

> 
> > +	return 0;
> > +}
[...]
> > +/*
> > + * Initialize the monitoring target regions for the given task
> > + *
> > + * t	the given target task
> > + *
> > + * Because only a number of small portions of the entire address space
> > + * is acutally mapped to the memory and accessed, monitoring the unmapped
> 
> actually

Good eye!  Will consider adding these to 'scripts/spelling.txt'.

> 
[...]
> > +/*
> > + * Check whether the given region has accessed since the last check
> 
> Should also make clear that this sets us up for the next access check at
> a different memory address it the region.
> 
> Given the lack of connection between activities perhaps just split this into
> two functions that are always called next to each other.

Will make the description clearer as suggested.

Also, thanks to this comment, I found that I'm not clearing *pte and *pmd
before going to 'mkold'.  Will fix that as well.

> 
> > + *
> > + * mm	'mm_struct' for the given virtual address space
> > + * r	the region to be checked
> > + */
> > +static void kdamond_check_access(struct damon_ctx *ctx,
> > +			struct mm_struct *mm, struct damon_region *r)
> > +{
> > +	pte_t *pte = NULL;
> > +	pmd_t *pmd = NULL;
> > +	spinlock_t *ptl;
> > +
> > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > +		goto mkold;
> > +
> > +	/* Read the page table access bit of the page */
> > +	if (pte && pte_young(*pte))
> > +		r->nr_accesses++;
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 
> Is it worth having this protection?  Seems likely to have only a very small
> influence on performance and makes it a little harder to reason about the code.

It was necessary to address an 'implicit declaration' problem with
'pmd_young()' and 'pmd_mkold()' when building DAMON for several architectures,
including User Mode Linux.

Will modularize the code for better readability.

> 
> > +	else if (pmd && pmd_young(*pmd))
> > +		r->nr_accesses++;
> > +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> > +
> > +	spin_unlock(ptl);
> > +
> > +mkold:
> > +	/* mkold next target */
> > +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> > +
> > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > +		return;
> > +
> > +	if (pte) {
> > +		if (pte_young(*pte)) {
> > +			clear_page_idle(pte_page(*pte));
> > +			set_page_young(pte_page(*pte));
> > +		}
> > +		*pte = pte_mkold(*pte);
> > +	}
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +	else if (pmd) {
> > +		if (pmd_young(*pmd)) {
> > +			clear_page_idle(pmd_page(*pmd));
> > +			set_page_young(pmd_page(*pmd));
> > +		}
> > +		*pmd = pmd_mkold(*pmd);
> > +	}
> > +#endif
> > +
> > +	spin_unlock(ptl);
> > +}
> > +
> > +/*
> > + * Check whether a time interval is elapsed
> 
> Another comment block that would be clearer if it was kernel-doc rather
> than nearly kernel-doc

Will apply the kernel-doc syntax.

> 
> > + *
> > + * baseline	the time to check whether the interval has elapsed since
> > + * interval	the time interval (microseconds)
> > + *
> > + * See whether the given time interval has passed since the given baseline
> > + * time.  If so, it also updates the baseline to current time for next check.
> > + *
> > + * Returns true if the time interval has passed, or false otherwise.
> > + */
> > +static bool damon_check_reset_time_interval(struct timespec64 *baseline,
> > +		unsigned long interval)
> > +{
> > +	struct timespec64 now;
> > +
> > +	ktime_get_coarse_ts64(&now);
> > +	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) <
> > +			interval * 1000)
> > +		return false;
> > +	*baseline = now;
> > +	return true;
> > +}
> > +
> > +/*
> > + * Check whether it is time to flush the aggregated information
> > + */
> > +static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
> > +{
> > +	return damon_check_reset_time_interval(&ctx->last_aggregation,
> > +			ctx->aggr_interval);
> > +}
> > +
> > +/*
> > + * Reset the aggregated monitoring results
> > + */
> > +static void kdamond_flush_aggregated(struct damon_ctx *c)
> 
> I wouldn't expect a reset function to be called flush.

It will actually work as flushing in the next commit, but the name makes no
sense at this point.  Will rename it.

> 
> > +{
> > +	struct damon_task *t;
> > +	struct damon_region *r;
> > +
> > +	damon_for_each_task(c, t) {
> > +		damon_for_each_region(r, t)
> > +			r->nr_accesses = 0;
> > +	}
> > +}
> > +
> > +/*
> > + * Check whether current monitoring should be stopped
> > + *
> > + * If users asked to stop, need stop.  Even though no user has asked to stop,
> > + * need stop if every target task has dead.
> > + *
> > + * Returns true if need to stop current monitoring.
> > + */
> > +static bool kdamond_need_stop(struct damon_ctx *ctx)
> > +{
> > +	struct damon_task *t;
> > +	struct task_struct *task;
> > +	bool stop;
> > +
> 
> As below comment asks, can you use kthread_should_stop?

Yes, I will.

> 
> > +	spin_lock(&ctx->kdamond_lock);
> > +	stop = ctx->kdamond_stop;
> > +	spin_unlock(&ctx->kdamond_lock);
> > +	if (stop)
> > +		return true;
> > +
> > +	damon_for_each_task(ctx, t) {
> > +		task = damon_get_task_struct(t);
> > +		if (task) {
> > +			put_task_struct(task);
> > +			return false;
> > +		}
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +/*
> > + * The monitoring daemon that runs as a kernel thread
> > + */
> > +static int kdamond_fn(void *data)
> > +{
> > +	struct damon_ctx *ctx = (struct damon_ctx *)data;
> 
> Never any need to explicitly cast a void * to some other pointer type.
> (C spec)

Ah, you're right.

> 
> 	struct damon_ctx *ctx = data;
> > +	struct damon_task *t;
> > +	struct damon_region *r, *next;
> > +	struct mm_struct *mm;
> > +
> > +	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
> > +	kdamond_init_regions(ctx);
> > +	while (!kdamond_need_stop(ctx)) {
> > +		damon_for_each_task(ctx, t) {
> > +			mm = damon_get_mm(t);
> > +			if (!mm)
> > +				continue;
> > +			damon_for_each_region(r, t)
> > +				kdamond_check_access(ctx, mm, r);
> > +			mmput(mm);
> > +		}
> > +
> > +		if (kdamond_aggregate_interval_passed(ctx))
> > +			kdamond_flush_aggregated(ctx);
> > +
> > +		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
> 
> Is there any purpose in using a range for such a narrow window?

Actually, it needs to sleep for only 'ctx->sample_interval', which is why I
made the range so narrow.

> 
> > +	}
> > +	damon_for_each_task(ctx, t) {
> > +		damon_for_each_region_safe(r, next, t)
> > +			damon_destroy_region(r);
> > +	}
> > +	pr_info("kdamond (%d) finishes\n", ctx->kdamond->pid);
> 
> Feels like noise.  I'd drop tis to pr_debug.

Agreed, will remove it.

> 
> > +	spin_lock(&ctx->kdamond_lock);
> > +	ctx->kdamond = NULL;
> > +	spin_unlock(&ctx->kdamond_lock);
> 
> blank line.

Yup!

> 
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Controller functions
> > + */
> > +
> > +/*
> > + * Start or stop the kdamond
> > + *
> > + * Returns 0 if success, negative error code otherwise.
> > + */
> > +static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
> > +{
> > +	spin_lock(&ctx->kdamond_lock);
> > +	ctx->kdamond_stop = !on;
> 
> Can't use the kthread_stop / kthread_should_stop approach?

Will use it.

> 
> > +	if (!ctx->kdamond && on) {
> > +		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
> > +		if (!ctx->kdamond)
> > +			goto fail;
> > +		goto success;
> 
> cleaner as 
> int ret = 0; above then
> 
> 		if (!ctx->kdamond)
> 			ret = -EINVAL;
> 		goto unlock;
> 
> with
> 
> unlock:
> 	spin_unlock(&ctx->dmanond_lock);
> 	return ret;

Agreed, will change so.

> 
> > +	}
> > +	if (ctx->kdamond && !on) {
> > +		spin_unlock(&ctx->kdamond_lock);
> > +		while (true) {
> 
> An unbounded loop is probably a bad idea.

Will add a clear termination condition here.

> 
> > +			spin_lock(&ctx->kdamond_lock);
> > +			if (!ctx->kdamond)
> > +				goto success;
> > +			spin_unlock(&ctx->kdamond_lock);
> > +
> > +			usleep_range(ctx->sample_interval,
> > +					ctx->sample_interval * 2);
> > +		}
> > +	}
> > +
> > +	/* tried to turn on while turned on, or turn off while turned off */
> > +
> > +fail:
> > +	spin_unlock(&ctx->kdamond_lock);
> > +	return -EINVAL;
> > +
> > +success:
> > +	spin_unlock(&ctx->kdamond_lock);
> > +	return 0;
> > +}
> > +
> > +/*
> > + * This function should not be called while the kdamond is running.
> > + */
> > +static int damon_set_pids(struct damon_ctx *ctx,
> > +			unsigned long *pids, ssize_t nr_pids)
> > +{
> > +	ssize_t i;
> > +	struct damon_task *t, *next;
> > +
> > +	damon_for_each_task_safe(ctx, t, next)
> > +		damon_destroy_task(t);
> > +
> > +	for (i = 0; i < nr_pids; i++) {
> > +		t = damon_new_task(pids[i]);
> > +		if (!t) {
> > +			pr_err("Failed to alloc damon_task\n");
> > +			return -ENOMEM;
> > +		}
> > +		damon_add_task_tail(ctx, t);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> 
> This is kind of similar to kernel-doc formatting.  Might as well just make
> it kernel-doc!

Agreed, will do so!

> 
> > + * Set attributes for the monitoring
> > + *
> > + * sample_int		time interval between samplings
> > + * aggr_int		time interval between aggregations
> > + * min_nr_reg		minimal number of regions
> > + *
> > + * This function should not be called while the kdamond is running.
> > + * Every time interval is in micro-seconds.
> > + *
> > + * Returns 0 on success, negative error code otherwise.
> > + */
> > +static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> > +		unsigned long aggr_int, unsigned long min_nr_reg)
> > +{
> > +	if (min_nr_reg < 3) {
> > +		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
> > +				min_nr_reg);
> > +		return -EINVAL;
> > +	}
> > +
> > +	ctx->sample_interval = sample_int;
> > +	ctx->aggr_interval = aggr_int;
> > +	ctx->min_nr_regions = min_nr_reg;
> 
> blank line helps readability a tiny little bit.

Agreed!


Thanks,
SeongJae Park

> 
> > +	return 0;
> > +}
> > +
> >  static int __init damon_init(void)
> >  {
> >  	pr_info("init\n");
> > diff --git a/mm/page_ext.c b/mm/page_ext.c
> > index 4ade843ff588..71169b45bba9 100644
> > --- a/mm/page_ext.c
> > +++ b/mm/page_ext.c
> > @@ -131,6 +131,7 @@ struct page_ext *lookup_page_ext(const struct page *page)
> >  					MAX_ORDER_NR_PAGES);
> >  	return get_entry(base, index);
> >  }
> > +EXPORT_SYMBOL_GPL(lookup_page_ext);
> >  
> >  static int __init alloc_node_page_ext(int nid)
> >  {



* Re: Re: [PATCH v6 03/14] mm/damon: Adaptively adjust regions
  2020-03-10  8:57   ` Jonathan Cameron
@ 2020-03-10 11:53     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:53 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 08:57:47 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:36 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > At the beginning of the monitoring, DAMON constructs the initial regions
> > by evenly splitting the memory mapped address space of the process into
> > the user-specified minimal number of regions.  In this initial state,
> > the assumption of the regions (pages in same region have similar access
> > frequencies) is normally not kept and thus the monitoring quality could
> > be low.  To keep the assumption as much as possible, DAMON adaptively
> > merges and splits each region.
> > 
> > For each ``aggregation interval``, it compares the access frequencies of
> > adjacent regions and merges those if the frequency difference is small.
> > Then, after it reports and clears the aggregated access frequency of
> > each region, it splits each region into two regions if the total number
> > of regions is smaller than the half of the user-specified maximum number
> > of regions.
> > 
> > In this way, DAMON provides its best-effort quality and minimal overhead
> > while keeping the bounds users set for their trade-off.
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> 
> Really minor comments inline.

Very helpful comments.  You are indeed making this much better!  Will apply
all of your comments below in the next spin.

> 
> > ---
> >  mm/damon.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 144 insertions(+), 7 deletions(-)
> > 
> > diff --git a/mm/damon.c b/mm/damon.c
> > index 6bdeb84d89af..1c8bb71bbce9 100644
> > --- a/mm/damon.c
> > +++ b/mm/damon.c
[...]
> > +/*
> > + * Merge adjacent regions having similar access frequencies
> > + *
> > + * t		task that merge operation will make change
> > + * thres	merge regions having '->nr_accesses' diff smaller than this
> > + */
> > +static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
> > +{
> > +	struct damon_region *r, *prev = NULL, *next;
> > +
> > +	damon_for_each_region_safe(r, next, t) {
> > +		if (!prev || prev->vm_end != r->vm_start)
> > +			goto next;
> > +		if (diff_of(prev->nr_accesses, r->nr_accesses) > thres) 
> > +			goto next;
> 
> 		if (!prev || prev->vm_end != r->vm_start ||
> 		    diff_of(prev->nr_accesses, r->nr_accesses) > thres) {
> 			prev = r;
> 			continue;
> 		}
> 
> Seems more logical to my head.  Maybe it's just me though.  A goto inside a
> loop isn't pretty to my mind.

Yes, your version looks much prettier to me as well :)

> 
> > +		damon_merge_two_regions(prev, r);
> > +		continue;
> > +next:
> > +		prev = r;
> > +	}
> > +}
> > +
[...]
> > @@ -590,21 +711,29 @@ static int kdamond_fn(void *data)
> >  	struct damon_task *t;
> >  	struct damon_region *r, *next;
> >  	struct mm_struct *mm;
> > +	unsigned long max_nr_accesses;
> >  
> >  	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
> >  	kdamond_init_regions(ctx);
> >  	while (!kdamond_need_stop(ctx)) {
> > +		max_nr_accesses = 0;
> >  		damon_for_each_task(ctx, t) {
> >  			mm = damon_get_mm(t);
> >  			if (!mm)
> >  				continue;
> > -			damon_for_each_region(r, t)
> > +			damon_for_each_region(r, t) {
> >  				kdamond_check_access(ctx, mm, r);
> > +				if (r->nr_accesses > max_nr_accesses)
> > +					max_nr_accesses = r->nr_accesses;
> 
> max_nr_accesses = max(r->nr_accesses, max_nr_accesses)

Good point!


Thanks,
SeongJae Park

[...]



* Re: Re: [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes
  2020-03-10  9:00   ` Jonathan Cameron
@ 2020-03-10 11:53     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:53 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:00:26 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:37 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > Only a number of parts in the virtual address space of the processes is
> > mapped to physical memory and accessed.  Thus, tracking the unmapped
> > address regions is just wasteful.  However, tracking every memory
> > mapping change might incur an overhead.  For the reason, DAMON applies
> > the dynamic memory mapping changes to the tracking regions only for each
> > of a user-specified time interval (``regions update interval``).
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> Trivial inline. Otherwise makes sense to me.
> 
[...]
> > +static void damon_apply_three_regions(struct damon_ctx *ctx,
> > +		struct damon_task *t, struct region bregions[3])
> > +{
> > +	struct damon_region *r, *next;
> > +	unsigned int i = 0;
> > +
> > +	/* Remove regions which isn't in the three big regions now */
> > +	damon_for_each_region_safe(r, next, t) {
> > +		for (i = 0; i < 3; i++) {
> > +			if (damon_intersect(r, &bregions[i]))
> > +				break;
> > +		}
> > +		if (i == 3)
> > +			damon_destroy_region(r);
> > +	}
> > +
> > +	/* Adjust intersecting regions to fit with the threee big regions */
> 
> three

Good eye!  Thanks for finding :)

[...]



* Re: Re: [PATCH v6 05/14] mm/damon: Implement callbacks
  2020-03-10  9:01   ` Jonathan Cameron
@ 2020-03-10 11:55     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:55 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:01:16 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:38 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit implements callbacks for DAMON.  Using this, DAMON users can
> > install their callbacks for each step of the access monitoring so that
> > they can do something interesting with the monitored access pattrns
> 
> patterns

Thank you for finding!

> 
[...]



* Re: Re: [PATCH v6 06/14] mm/damon: Implement access pattern recording
  2020-03-10  9:01   ` Jonathan Cameron
@ 2020-03-10 11:55     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:55 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:01:34 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:39 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit implements the recording feature of DAMON. If this feature
> > is enabled, DAMON writes the monitored access patterns in its binary
> > format into a file which specified by the user. This is already able to
> > be implemented by each user using the callbacks.  However, as the
> > recording is expected to be used widely, this commit implements the
> > feature in the DAMON, for more convenience and efficiency.
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> 
> I guess this work whilst you are still developing, but I'm not convinced
> writing to a file should be a standard feature...

I'm also not sure whether this is the right kind of feature for the kernel,
but it would save a lot of effort in user space.  I also thought this might
not be outside the intended use of 'kernel_write()'.

Nonetheless, this patch could be simply removed, as DAMON supports tracepoints
and the recording can be implemented on user space using it.

Could I ask your other suggestions for this feature?


Thanks,
SeongJae Park

> 
> > ---
[...]



* Re: Re: [PATCH v6 07/14] mm/damon: Implement kernel space API
  2020-03-10  9:01   ` Jonathan Cameron
@ 2020-03-10 11:56     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:56 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:01:52 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:40 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit implements the DAMON api for the kernel.  Other kernel code
> > can use DAMON by calling damon_start() and damon_stop() with their own
> > 'struct damon_ctx'.
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> 
> Seems like it would have been easier to create the header as you went along
> and avoid the need to have the bits here dropping static.

Yes, exporting the API from the beginning would have been much simpler and
easier to review!

> 
> Or the moves for that matter.
> 
> Also, ideally have full kernel-doc for anything that forms part of an
> interface that is intended for use by others.

Agreed, will use the kernel-doc comments!


Thanks,
SeongJae Park

> 
> Jonathan
> 
> > ---
[...]



* Re: Re: [PATCH v6 08/14] mm/damon: Add debugfs interface
  2020-03-10  9:02   ` Jonathan Cameron
@ 2020-03-10 11:56     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:56 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:02:09 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:41 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit adds a debugfs interface for DAMON.
[...]
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> 
> Some of the code in here seems a bit fragile and convoluted.

Indeed, it needs many fixes.

> 
> > ---
> >  mm/damon.c | 377 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 376 insertions(+), 1 deletion(-)
> > 
> > diff --git a/mm/damon.c b/mm/damon.c
> > index b3e9b9da5720..facb1d7f121b 100644
> > --- a/mm/damon.c
> > +++ b/mm/damon.c
> > @@ -10,6 +10,7 @@
[...]
> >  
> > +/*
> > + * debugfs functions
> 
> Seems unnecessary when their naming makes this clear.

Agreed, will remove it.

> 
[...]
> > +static ssize_t debugfs_pids_write(struct file *file,
> > +		const char __user *buf, size_t count, loff_t *ppos)
> > +{
> > +	struct damon_ctx *ctx = &damon_user_ctx;
> > +	char *kbuf;
> > +	unsigned long *targets;
> > +	ssize_t nr_targets;
> > +	ssize_t ret;
> > +
> > +	kbuf = kmalloc_array(count, sizeof(char), GFP_KERNEL);
> > +	if (!kbuf)
> > +		return -ENOMEM;
> > +
> > +	ret = simple_write_to_buffer(kbuf, 512, ppos, buf, count);
> 
> Why only 512?

I might have lost my mind at that time :'(
Good catch, it should be 'count'.

> 
[...]
> > +
> > +static ssize_t debugfs_attrs_write(struct file *file,
> > +		const char __user *buf, size_t count, loff_t *ppos)
> > +{
> > +	struct damon_ctx *ctx = &damon_user_ctx;
> > +	unsigned long s, a, r, minr, maxr;
> > +	char *kbuf;
> > +	ssize_t ret;
> > +
> > +	kbuf = kmalloc_array(count, sizeof(char), GFP_KERNEL);
> 
> malloc fine for array of characters.   The checks on overflow etc cannot be
> relevant here.

You're right, will use 'kmalloc()' instead.

> 
> > +	if (!kbuf)
> > +		return -ENOMEM;
> > +
> > +	ret = simple_write_to_buffer(kbuf, count, ppos, buf, count);
> > +	if (ret < 0)
> > +		goto out;
> > +
> > +	if (sscanf(kbuf, "%lu %lu %lu %lu %lu",
> > +				&s, &a, &r, &minr, &maxr) != 5) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	spin_lock(&ctx->kdamond_lock);
> > +	if (ctx->kdamond)
> > +		goto monitor_running;
> > +
> > +	damon_set_attrs(ctx, s, a, r, minr, maxr);
> > +	spin_unlock(&ctx->kdamond_lock);
> > +
> > +	goto out;
> > +
> > +monitor_running:
> > +	spin_unlock(&ctx->kdamond_lock);
> > +	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
> > +	ret = -EINVAL;
> 
> This complex exit path is a bad idea from maintainability point of view...
> Just put the pr_err and spin_unlock in the error path above.

Agreed, will do so.

> 
> > +out:
> > +	kfree(kbuf);
> > +	return ret;
> > +}
> > +
> > +static const struct file_operations monitor_on_fops = {
> > +	.owner = THIS_MODULE,
> > +	.read = debugfs_monitor_on_read,
> > +	.write = debugfs_monitor_on_write,
> > +};
> > +
> > +static const struct file_operations pids_fops = {
> > +	.owner = THIS_MODULE,
> > +	.read = debugfs_pids_read,
> > +	.write = debugfs_pids_write,
> > +};
> > +
> > +static const struct file_operations record_fops = {
> > +	.owner = THIS_MODULE,
> > +	.read = debugfs_record_read,
> > +	.write = debugfs_record_write,
> > +};
> > +
> > +static const struct file_operations attrs_fops = {
> > +	.owner = THIS_MODULE,
> > +	.read = debugfs_attrs_read,
> > +	.write = debugfs_attrs_write,
> > +};
> > +
> > +static struct dentry *debugfs_root;
> > +
> > +static int __init debugfs_init(void)
> 
> Prefix this function.  Chances of sometime getting a header
> that includes debugfs_init feels rather too high!

That's right, I will rename it.

> 
> > +{
> > +	const char * const file_names[] = {"attrs", "record",
> > +		"pids", "monitor_on"};
> > +	const struct file_operations *fops[] = {&attrs_fops, &record_fops,
> > +		&pids_fops, &monitor_on_fops};
> > +	int i;
> > +
> > +	debugfs_root = debugfs_create_dir("damon", NULL);
> > +	if (!debugfs_root) {
> > +		pr_err("failed to create the debugfs dir\n");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	for (i = 0; i < ARRAY_SIZE(file_names); i++) {
> > +		if (!debugfs_create_file(file_names[i], 0600, debugfs_root,
> > +					NULL, fops[i])) {
> > +			pr_err("failed to create %s file\n", file_names[i]);
> > +			return -ENOMEM;
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int __init damon_init_user_ctx(void)
> > +{
> > +	int rc;
> > +
> > +	struct damon_ctx *ctx = &damon_user_ctx;
> > +
> > +	ktime_get_coarse_ts64(&ctx->last_aggregation);
> > +	ctx->last_regions_update = ctx->last_aggregation;
> > +
> > +	ctx->rbuf_offset = 0;
> > +	rc = damon_set_recording(ctx, 1024 * 1024, "/damon.data");
> > +	if (rc)
> > +		return rc;
> > +
> > +	ctx->kdamond = NULL;
> > +	ctx->kdamond_stop = false;
> > +	spin_lock_init(&ctx->kdamond_lock);
> > +
> > +	prandom_seed_state(&ctx->rndseed, 42);
> 
> :)

You got the answer ;)

> 
> > +	INIT_LIST_HEAD(&ctx->tasks_list);
> > +
> > +	ctx->sample_cb = NULL;
> > +	ctx->aggregate_cb = NULL;
> 
> Should already be set to 0.

Oops, right!

> 
> > +
> > +	return 0;
> > +}
> > +
> >  static int __init damon_init(void)
> >  {
> > +	int rc;
> > +
> >  	pr_info("init\n");
> >  
> > -	return 0;
> > +	rc = damon_init_user_ctx();
> > +	if (rc)
> > +		return rc;
> > +
> > +	return debugfs_init();
> 
> In theory no code should ever be dependent on debugfs succeeding..
> There might be other daemon users so you should just eat the return
> code.

Right!  Thank you for catching this!


Thanks,
SeongJae Park

> 
> 
> >  }
> >  
> >  static void __exit damon_exit(void)
> >  {
> > +	damon_turn_kdamond(&damon_user_ctx, false);
> > +	debugfs_remove_recursive(debugfs_root);
> > +
> > +	kfree(damon_user_ctx.rbuf);
> > +	kfree(damon_user_ctx.rfile_path);
> > +
> >  	pr_info("exit\n");
> >  }
> >  
> 



* Re: Re: [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing
  2020-03-10  9:03   ` Jonathan Cameron
@ 2020-03-10 11:57     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:57 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:03:31 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:42 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit adds a tracepoint for DAMON's result buffer writing.  It is
> > called for each writing of the DAMON results and print the result data.
> > Therefore, it would be used to easily integrated with other tracepoint
> > supporting tracers such as perf.
> > 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> 
> I'm curious, why at the flush of rbuf rather than using a more structured trace
> point for each of the writes into rbuf?
> 
> Seems it would make more sense to have a tracepoint for each record write out.
> Probably at the level of each task, though might be more elegant to do it at the
> level of each region within a task and duplicate the header stuff.

I was worried about the format changing, but I agree your suggestion is the
right way.  Will change it in the next spin.


Thanks,
SeongJae Park

> 
> > ---
> >  include/trace/events/damon.h | 32 ++++++++++++++++++++++++++++++++
> >  mm/damon.c                   |  4 ++++
> >  2 files changed, 36 insertions(+)
> >  create mode 100644 include/trace/events/damon.h
> > 
> > diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
> > new file mode 100644
> > index 000000000000..fb33993620ce
> > --- /dev/null
> > +++ b/include/trace/events/damon.h
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM damon
> > +
> > +#if !defined(_TRACE_DAMON_H) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _TRACE_DAMON_H
> > +
> > +#include <linux/types.h>
> > +#include <linux/tracepoint.h>
> > +
> > +TRACE_EVENT(damon_write_rbuf,
> > +
> > +	TP_PROTO(void *buf, const ssize_t sz),
> > +
> > +	TP_ARGS(buf, sz),
> > +
> > +	TP_STRUCT__entry(
> > +		__dynamic_array(char, buf, sz)
> > +	),
> > +
> > +	TP_fast_assign(
> > +		memcpy(__get_dynamic_array(buf), buf, sz);
> > +	),
> > +
> > +	TP_printk("dat=%s", __print_hex(__get_dynamic_array(buf),
> > +			__get_dynamic_array_len(buf)))
> > +);
> > +
> > +#endif /* _TRACE_DAMON_H */
> > +
> > +/* This part must be outside protection */
> > +#include <trace/define_trace.h>
> > diff --git a/mm/damon.c b/mm/damon.c
> > index facb1d7f121b..8faf3879f99e 100644
> > --- a/mm/damon.c
> > +++ b/mm/damon.c
> > @@ -9,6 +9,8 @@
> >  
> >  #define pr_fmt(fmt) "damon: " fmt
> >  
> > +#define CREATE_TRACE_POINTS
> > +
> >  #include <linux/damon.h>
> >  #include <linux/debugfs.h>
> >  #include <linux/delay.h>
> > @@ -20,6 +22,7 @@
> >  #include <linux/sched/mm.h>
> >  #include <linux/sched/task.h>
> >  #include <linux/slab.h>
> > +#include <trace/events/damon.h>
> >  
> >  #define damon_get_task_struct(t) \
> >  	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
> > @@ -553,6 +556,7 @@ static void damon_flush_rbuffer(struct damon_ctx *ctx)
> >   */
> >  static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
> >  {
> > +	trace_damon_write_rbuf(data, size);
> >  	if (!ctx->rbuf_len || !ctx->rbuf)
> >  		return;
> >  	if (ctx->rbuf_offset + size > ctx->rbuf_len)
> 



* Re: Re: [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document for DAMON
  2020-03-10  9:03   ` Jonathan Cameron
@ 2020-03-10 11:57     ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 11:57 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 09:03:48 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:44 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit adds a simple document for DAMON under
> > `Documentation/admin-guide/mm`.
> > 
> 
> Nice document to get people started.
> 
> Certainly worked for me doing some initial playing around.

Great to hear that :)

> 
> In general this is an interesting piece of work.   I can see there are numerous
> possible avenues to explore in making the monitoring more flexible, or potentially
> better at tracking usage whilst not breaking your fundamental 'bounded overhead'
> requirement.   Will be fun perhaps to explore some of those.
> 
> I'll do some more exploring and perhaps try some real world workloads.
> 
> Thanks,
> 
> Jonathan
> 
> 
> > Signed-off-by: SeongJae Park <sjpark@amazon.de>
> > ---
> >  .../admin-guide/mm/data_access_monitor.rst    | 414 ++++++++++++++++++
> >  Documentation/admin-guide/mm/index.rst        |   1 +
> >  2 files changed, 415 insertions(+)
> >  create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst
> > 
> > diff --git a/Documentation/admin-guide/mm/data_access_monitor.rst b/Documentation/admin-guide/mm/data_access_monitor.rst
> > new file mode 100644
> > index 000000000000..4d836c3866e2
> > --- /dev/null
> > +++ b/Documentation/admin-guide/mm/data_access_monitor.rst
> > @@ -0,0 +1,414 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +==========================
> > +DAMON: Data Access MONitor
> > +==========================
> > +
> > +Introduction
> > +============
> > +
> > +Memory management decisions can normally be more efficient if finer data access
> > +information is available.  However, because finer information usually comes
> > +with higher overhead, most systems including Linux made a tradeoff: Forgive
> > +some wise decisions and use coarse information and/or light-weight heuristics.
> 
> I'm not sure what "Forgive some wise decisions" means...

I meant that the mechanism gives up some optimal decisions.  Will wordsmith it
again.

> 
> > +
> > +A number of experimental data access pattern awared memory management
> > +optimizations say the sacrifices are
> > +huge (2.55x slowdown).  
> 
> Good to have a reference.

:)

> 
> > However, none of those has successfully adopted to
> 
> adopted into the 

Thanks for correcting.

> 
[...]
> > +Applying Dynamic Memory Mappings
> > +--------------------------------
> > +
> > +Only a number of small parts in the super-huge virtual address space of the
> > +processes is mapped to physical memory and accessed.  Thus, tracking the
> > +unmapped address regions is just wasteful.  However, tracking every memory
> > +mapping change might incur an overhead.  For the reason, DAMON applies the
> > +dynamic memory mapping changes to the tracking regions only for each of an
> > +user-specified time interval (``regions update interval``).
> 
> One key part of the approach is the 3 region bit.  Perhaps talk about that here
> somewhere?

I was afraid it is too much of an implementation detail, as this document is
for admin users.  Will add it in the next spin, though.

> 
> > +
> > +
> > +``debugfs`` Interface
> > +=====================
> > +
> > +DAMON exports four files, ``attrs``, ``pids``, ``record``, and ``monitor_on``
> > +under its debugfs directory, ``<debugfs>/damon/``.
> > +
> > +Attributes
> > +----------
> > +
> > +Users can read and write the ``sampling interval``, ``aggregation interval``,
> > +``regions update interval``, and min/max number of monitoring target regions by
> > +reading from and writing to the ``attrs`` file.  For example, below commands
> > +set those values to 5 ms, 100 ms, 1,000 ms, 10, 1000 and check it again::
> > +
> > +    # cd <debugfs>/damon
> > +    # echo 5000 100000 1000000 10 1000 > attrs
> 
> I'm personally a great fan of human readable interfaces.  Could we just
> split this into one file per interval?  That way the file naming would
> make it self describing.

I was worried it would create too many files.  Do you think it's ok?

> 
> > +    # cat attrs
> > +    5000 100000 1000000 10 1000
> > +
> > +Target PIDs
> > +-----------
> > +
> > +Users can read and write the pids of current monitoring target processes by
> > +reading from and writing to the ``pids`` file.  For example, below commands set
> > +processes having pids 42 and 4242 as the processes to be monitored and check it
> > +again::
> > +
> > +    # cd <debugfs>/damon
> > +    # echo 42 4242 > pids
> > +    # cat pids
> > +    42 4242
> > +
> > +Note that setting the pids doesn't starts the monitoring.
> > +
> > +Record
> > +------
> > +
> > +DAMON support direct monitoring result record feature.  The recorded results
> > +are first written to a buffer and flushed to a file in batch.  Users can set
> > +the size of the buffer and the path to the result file by reading from and
> > +writing to the ``record`` file.  For example, below commands set the buffer to
> > +be 4 KiB and the result to be saved in ``/damon.data``.
> > +
> > +    # cd <debugfs>/damon
> > +    # echo "4096 /damon.data" > pids
> 
> write it to record, not pids.

Ah, good eye!

> 
> > +    # cat record
> > +    4096 /damon.data
> > +
> > +Turning On/Off
> > +--------------
> > +
> > +You can check current status, start and stop the monitoring by reading from and
> > +writing to the ``monitor_on`` file.  Writing ``on`` to the file starts DAMON to
> > +monitor the target processes with the attributes.  Writing ``off`` to the file
> > +stops DAMON.  DAMON also stops if every target processes is be terminated.
> > +Below example commands turn on, off, and check status of DAMON::
> > +
> > +    # cd <debugfs>/damon
> > +    # echo on > monitor_on
> > +    # echo off > monitor_on
> > +    # cat monitor_on
> > +    off
> > +
> > +Please note that you cannot write to the ``attrs`` and ``pids`` files while the
> > +monitoring is turned on.  If you write to the files while DAMON is running,
> > +``-EINVAL`` will be returned.
> 
> Perhaps -EBUSY would be more informative?  Implies values might be fine, but
> the issue is 'not now'.

Agreed, will change so!


Thanks,
SeongJae Park

[...]



* Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-10 11:52     ` SeongJae Park
@ 2020-03-10 15:55       ` Jonathan Cameron
  2020-03-10 16:22         ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10 15:55 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 12:52:33 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> Added replies to your every comment in line below.  I agree to your whole
> opinions, will apply those in next spin! :)
> 

One additional question inline that came to mind.  Using a single statistic
to monitor huge page and normal page hits is going to give us problems
I think.

Perhaps I'm missing something?

> > > +/*
> > > + * Check whether the given region has accessed since the last check  
> > 
> > Should also make clear that this sets us up for the next access check at
> > a different memory address it the region.
> > 
> > Given the lack of connection between activities perhaps just split this into
> > two functions that are always called next to each other.  
> 
> Will make the description more clearer as suggested.
> 
> Also, I found that I'm not clearing *pte and *pmd before going 'mkold', thanks
> to this comment.  Will fix it, either.
> 
> >   
> > > + *
> > > + * mm	'mm_struct' for the given virtual address space
> > > + * r	the region to be checked
> > > + */
> > > +static void kdamond_check_access(struct damon_ctx *ctx,
> > > +			struct mm_struct *mm, struct damon_region *r)
> > > +{
> > > +	pte_t *pte = NULL;
> > > +	pmd_t *pmd = NULL;
> > > +	spinlock_t *ptl;
> > > +
> > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > +		goto mkold;
> > > +
> > > +	/* Read the page table access bit of the page */
> > > +	if (pte && pte_young(*pte))
> > > +		r->nr_accesses++;
> > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE  
> > 
> > Is it worth having this protection?  Seems likely to have only a very small
> > influence on performance and makes it a little harder to reason about the code.  
> 
> It was necessary for addressing 'implicit declaration' problem of 'pmd_young()'
> and 'pmd_mkold()' for build of DAMON on several architectures including User
> Mode Linux.
> 
> Will modularize the code for better readability.
> 
> >   
> > > +	else if (pmd && pmd_young(*pmd))
> > > +		r->nr_accesses++;

So we increment a region count by one if we have an access in a huge page, or
in a normal page.

If we get a region that has a mixture of the two, this seems likely to give a
bad approximation.

Assume the region is accessed 'evenly' but each 4k page is only hit 10% of the
time (where a hit means an access within one check period).

If our address is in a 4k page, then we'll hit 10% of the time, but if it is in
a 2M huge page then we'll hit a much higher percentage of the time:
1 - (0.9^512) ~= 1

Should we look to somehow account for this?

> > > +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> > > +
> > > +	spin_unlock(ptl);
> > > +
> > > +mkold:
> > > +	/* mkold next target */
> > > +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> > > +
> > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > +		return;
> > > +
> > > +	if (pte) {
> > > +		if (pte_young(*pte)) {
> > > +			clear_page_idle(pte_page(*pte));
> > > +			set_page_young(pte_page(*pte));
> > > +		}
> > > +		*pte = pte_mkold(*pte);
> > > +	}
> > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > +	else if (pmd) {
> > > +		if (pmd_young(*pmd)) {
> > > +			clear_page_idle(pmd_page(*pmd));
> > > +			set_page_young(pmd_page(*pmd));
> > > +		}
> > > +		*pmd = pmd_mkold(*pmd);
> > > +	}
> > > +#endif
> > > +
> > > +	spin_unlock(ptl);
> > > +}
> > > +






* Re: Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-10 15:55       ` Jonathan Cameron
@ 2020-03-10 16:22         ` SeongJae Park
  2020-03-10 17:39           ` Jonathan Cameron
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-10 16:22 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 15:55:10 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Tue, 10 Mar 2020 12:52:33 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > Added replies to your every comment in line below.  I agree to your whole
> > opinions, will apply those in next spin! :)
> > 
> 
> One additional question inline that came to mind.  Using a single statistic
> to monitor huge page and normal page hits is going to give us problems
> I think.

Ah, you're right!!!  This is indeed a critical bug!

> 
> Perhaps I'm missing something?
> 
> > > > +/*
> > > > + * Check whether the given region has accessed since the last check  
> > > 
> > > Should also make clear that this sets us up for the next access check at
> > > a different memory address it the region.
> > > 
> > > Given the lack of connection between activities perhaps just split this into
> > > two functions that are always called next to each other.  
> > 
> > Will make the description more clearer as suggested.
> > 
> > Also, I found that I'm not clearing *pte and *pmd before going 'mkold', thanks
> > to this comment.  Will fix it, either.
> > 
> > >   
> > > > + *
> > > > + * mm	'mm_struct' for the given virtual address space
> > > > + * r	the region to be checked
> > > > + */
> > > > +static void kdamond_check_access(struct damon_ctx *ctx,
> > > > +			struct mm_struct *mm, struct damon_region *r)
> > > > +{
> > > > +	pte_t *pte = NULL;
> > > > +	pmd_t *pmd = NULL;
> > > > +	spinlock_t *ptl;
> > > > +
> > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > +		goto mkold;
> > > > +
> > > > +	/* Read the page table access bit of the page */
> > > > +	if (pte && pte_young(*pte))
> > > > +		r->nr_accesses++;
> > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE  
> > > 
> > > Is it worth having this protection?  Seems likely to have only a very small
> > > influence on performance and makes it a little harder to reason about the code.  
> > 
> > It was necessary for addressing 'implicit declaration' problem of 'pmd_young()'
> > and 'pmd_mkold()' for build of DAMON on several architectures including User
> > Mode Linux.
> > 
> > Will modularize the code for better readability.
> > 
> > >   
> > > > +	else if (pmd && pmd_young(*pmd))
> > > > +		r->nr_accesses++;
> 
> So we increment a region count by one if we have an access in a huge page, or
> in a normal page.
> 
> If we get a region that has a mixture of the two, this seems likely to give a
> bad approximation.
> 
> Assume the region is accessed 'evenly' but each " 4k page" is only hit 10% of the time
> (where a hit is in one check period)
> 
> If our address in a page, then we'll hit 10% of the time, but if it is in a 2M
> huge page then we'll hit a much higher percentage of the time.
> 1 - (0.9^512) ~= 1
> 
> Should we look to somehow account for this?

Yes, this is a really critical bug and we should fix it!  Thank you so much for
finding it!

> 
> > > > +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> > > > +
> > > > +	spin_unlock(ptl);
> > > > +
> > > > +mkold:
> > > > +	/* mkold next target */
> > > > +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> > > > +
> > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > +		return;
> > > > +
> > > > +	if (pte) {
> > > > +		if (pte_young(*pte)) {
> > > > +			clear_page_idle(pte_page(*pte));
> > > > +			set_page_young(pte_page(*pte));
> > > > +		}
> > > > +		*pte = pte_mkold(*pte);
> > > > +	}
> > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > +	else if (pmd) {
> > > > +		if (pmd_young(*pmd)) {
> > > > +			clear_page_idle(pmd_page(*pmd));
> > > > +			set_page_young(pmd_page(*pmd));
> > > > +		}
> > > > +		*pmd = pmd_mkold(*pmd);
> > > > +	}

This is also very problematic if several regions are backed by a single huge
page, as only one of the regions in the huge page will be seen as accessed.

Will address these problems in next spin!


Thanks,
SeongJae Park

> > > > +#endif
> > > > +
> > > > +	spin_unlock(ptl);
> > > > +}
> > > > +
> 
> 



* Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
                   ` (14 preceding siblings ...)
  2020-03-02 11:35 ` [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
@ 2020-03-10 17:21 ` Shakeel Butt
  2020-03-12 10:07   ` SeongJae Park
  15 siblings, 1 reply; 51+ messages in thread
From: Shakeel Butt @ 2020-03-10 17:21 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Andrew Morton, SeongJae Park, Andrea Arcangeli, Yang Shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins,
	Qian Cai, Colin Ian King, Jonathan Corbet, dwmw, jolsa,
	Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
>
> From: SeongJae Park <sjpark@amazon.de>
>
> Introduction
> ============
>
> Memory management decisions can be improved if finer data access information is
> available.  However, because such finer information usually comes with higher
> overhead, most systems including Linux forgives the potential improvement and
> rely on only coarse information or some light-weight heuristics.  The
> pseudo-LRU and the aggressive THP promotions are such examples.
>
> A number of experimental data access pattern awared memory management

why experimental? [5,8] are deployed across Google fleet.

> optimizations (refer to 'Appendix A' for more details) say the sacrifices are
> huge.

It depends. For servers where stranded CPUs are common, the cost is
not that huge.

> However, none of those has successfully adopted to Linux kernel mainly

adopted? I think you mean accepted or merged

> due to the absence of a scalable and efficient data access monitoring
> mechanism.  Refer to 'Appendix B' to see the limitations of existing memory
> monitoring mechanisms.
>
> DAMON is a data access monitoring subsystem for the problem.  It is 1) accurate
> enough to be used for the DRAM level memory management (a straightforward
> DAMON-based optimization achieved up to 2.55x speedup), 2) light-weight enough
> to be applied online (compared to a straightforward access monitoring scheme,
> DAMON is up to 94.242.42x lighter)

94.242.42x ?

> and 3) keeps predefined upper-bound overhead
> regardless of the size of target workloads (thus scalable).  Refer to 'Appendix
> C' if you interested in how it is possible.
>
> DAMON has mainly designed for the kernel's memory management mechanisms.
> However, because it is implemented as a standalone kernel module and provides
> several interfaces, it can be used by a wide range of users including kernel
> space programs, user space programs, programmers, and administrators.  DAMON
> is now supporting the monitoring only, but it will also provide simple and
> convenient data access pattern awared memory managements by itself.  Refer to
> 'Appendix D' for more detailed expected usages of DAMON.
>
>
> Visualized Outputs of DAMON
> ===========================
>
> For intuitively understanding of DAMON, I made web pages[1-8] showing the
> visualized dynamic data access pattern of various realistic workloads, which I
> picked up from PARSEC3 and SPLASH-2X bechmark suites.  The figures are
> generated using the user space tool in 10th patch of this patchset.
>
> There are pages showing the heatmap format dynamic access pattern of each
> workload for heap area[1], mmap()-ed area[2], and stack[3] area.  I splitted
> the entire address space to the three area because there are huge unmapped
> regions between the areas.
>
> You can also show how the dynamic working set size of each workload is
> distributed[4], and how it is chronologically changing[5].
>
> The most important characteristic of DAMON is its promise of the upperbound of
> the monitoring overhead.  To show whether DAMON keeps the promise well, I
> visualized the number of monitoring operations required for each 5
> milliseconds, which is configured to not exceed 1000.  You can show the
> distribution of the numbers[6] and how it changes chronologically[7].
>
> [1] https://damonitor.github.io/reports/latest/by_image/heatmap.0.png.html
> [2] https://damonitor.github.io/reports/latest/by_image/heatmap.1.png.html
> [3] https://damonitor.github.io/reports/latest/by_image/heatmap.2.png.html
> [4] https://damonitor.github.io/reports/latest/by_image/wss_sz.png.html
> [5] https://damonitor.github.io/reports/latest/by_image/wss_time.png.html
> [6] https://damonitor.github.io/reports/latest/by_image/nr_regions_sz.png.html
> [7] https://damonitor.github.io/reports/latest/by_image/nr_regions_time.png.html
>
>
> Data Access Monitoring-based Operation Schemes
> ==============================================
>
> As 'Appendix D' describes, DAMON can be used for data access monitoring-based
> operation schemes (DAMOS).  RFC patchsets for DAMOS are already available
> (https://lore.kernel.org/linux-mm/20200218085309.18346-1-sjpark@amazon.com/).
>
> By applying a very simple scheme for THP promotion/demotion with the latest
> version of the patchset (not posted yet), DAMON achieved 18x lower memory space
> overhead compared to THP while preserving about 50% of the THP performance
> benefit with the SPLASH-2X benchmark suite.
>
> The detailed setup and numbers will be posted soon with the next RFC patchset
> for DAMOS.  The posting is currently scheduled for tomorrow.
>
>
> Frequently Asked Questions
> ==========================
>
> Q: Why is DAMON not integrated with perf?
> A: From the perspective of profilers such as perf, DAMON can be thought of as a
> data source in the kernel, like the tracepoints, the pressure stall information
> (psi), or the idle page tracking.  Thus, it is easy to integrate DAMON with
> such profilers.  However, this patchset doesn't provide a fancy perf
> integration because the current stage of DAMON development is focused on its
> core logic only.  That said, DAMON already provides two interfaces for user
> space programs, based on debugfs and a tracepoint, respectively.  Using the
> tracepoint interface, you can use DAMON with perf.  This patchset also provides
> a debugfs interface based user space tool for DAMON.  It can be used to record,
> visualize, and analyze data access patterns of target processes in a
> convenient way.

Oh it is monitoring at the process level.

>
> Q: Why a new module, instead of extending perf or other tools?
> A: First, DAMON aims to be used by other programs including the kernel.
> Therefore, having a dependency on a specific tool like perf is not desirable.
> Second, because it needs to be as lightweight as possible so that it can be
> used online, any unnecessary overhead such as kernel - user space context
> switching cost should be avoided.  These are the two biggest reasons why
> DAMON is implemented in the kernel space.  The idle page tracking subsystem
> is the kernel feature that seems most similar to DAMON.  However, its
> interface is not compatible with DAMON, and its internal implementation
> has no common part that could be reused by DAMON.
>
> Q: Can 'perf mem' provide the data required for DAMON?
> A: On systems supporting 'perf mem', yes.  DAMON uses the PTE Accessed
> bits at a low level; other H/W or S/W features usable for the purpose
> could be used instead.  However, as explained for the question above, DAMON
> needs to be implemented in the kernel space.
>
>
> Evaluations
> ===========
>
> A prototype of DAMON was evaluated on an Intel Xeon E7-8837 machine using 20
> benchmarks picked from the SPEC CPU 2006, NAS, Tensorflow Benchmark,
> SPLASH-2X, and PARSEC 3 benchmark suites.  Nonetheless, this section provides
> only a summary of the results.  For more detail, please refer to the slides
> used for the introduction of DAMON at the Linux Plumbers Conference 2019[1] or
> the MIDDLEWARE'19 industrial track paper[2].

The paper [2] is behind a paywall, upload it somewhere for free access.

>
>
> Quality
> -------
>
> We first traced and visualized the data access pattern of each workload.  We
> were able to confirm that the visualized results are reasonably accurate by
> manually comparing them with the source code of the workloads.
>
> To see the usefulness of the monitoring, we optimized 9 memory intensive
> workloads among them for memory pressure situations using the DAMON outputs.
> In detail, we identified frequently accessed memory regions in each workload
> based on the DAMON results and protected them with ``mlock()`` system calls.

Did you change the applications to add mlock() or was it done
dynamically through some new interface? The hot memory / working set
changes, so dynamically m[un]locking makes sense.

> The optimized versions consistently show speedup (2.55x in the best case,
> 1.65x on average) under memory pressure.
>

Do tell more about these 9 workloads and how they were evaluated. How
was memory pressure induced? Did you overcommit the memory? How many
workloads were running concurrently? How was the performance isolation
between the workloads? Is this speedup due to triggering the oom-killer
earlier under memory pressure, or something else?

>
> Overhead
> --------
>
> We also measured the overhead of DAMON.  It was not only under the upper bound
> we set, but much lower (0.6 percent of the bound in the best case, 13.288
> percent of the bound on average).

Why does the upper bound you set matter?

> This reduction of the overhead mainly
> results from its core mechanism, called adaptive regions adjustment.  Refer to
> 'Appendix C' for more detail about the mechanism.  We also compared the
> overhead of DAMON with that of a straightforward periodic access check-based
> monitoring.

What is periodic access check-based monitoring?

> DAMON's overhead was smaller by 94,242.42x in the best case and
> 3,159.61x on average.
>
> The latest version of DAMON running with its default configuration consumes
> only up to 1% of CPU time when applied to realistic workloads in PARSEC3 and
> SPLASH-2X and makes no visible slowdown to the target processes.

What about the number of processes? The alternative mechanisms in [5,8]
are whole machine monitoring. Thousands of processes run on a machine.
How does this work when monitoring thousands of processes, compared to
[5,8]?

Using sampling, the cost/overhead is configurable, but I would like to
know at what cost. Will the accuracy be good enough for the given
use-case?

>
>
> References
> ==========
>
> Prototypes of DAMON have been introduced by an LPC kernel summit track talk[1]
> and two academic papers[2,3].  Please refer to those for more detailed
> information, especially the evaluations.  The latest version of the patchset
> has also been introduced by an LWN article[4].
>
> [1] SeongJae Park, Tracing Data Access Pattern with Bounded Overhead and
>     Best-effort Accuracy. In The Linux Kernel Summit, September 2019.
>     https://linuxplumbersconf.org/event/4/contributions/548/
> [2] SeongJae Park, Yunjae Lee, Heon Y. Yeom, Profiling Dynamic Data Access
>     Patterns with Controlled Overhead and Quality. In 20th ACM/IFIP
>     International Middleware Conference Industry, December 2019.
>     https://dl.acm.org/doi/10.1145/3366626.3368125
> [3] SeongJae Park, Yunjae Lee, Yunhee Kim, Heon Y. Yeom, Profiling Dynamic Data
>     Access Patterns with Bounded Overhead and Accuracy. In IEEE International
>     Workshop on Foundations and Applications of Self-* Systems (FAS* 2019),
>     June 2019.
> [4] Jonathan Corbet, Memory-management optimization with DAMON. In Linux Weekly
>     News (LWN), Feb 2020. https://lwn.net/Articles/812707/
>
>
> Sequence Of Patches
> ===================
>
> The patches are organized in the following sequence.  The first patch
> introduces the DAMON module, its data structures, and data structure related
> common functions.  The following three patches (2nd to 4th) implement the core
> logic of DAMON, namely region based sampling, adaptive regions adjustment,
> and dynamic memory mapping change adoption, one by one.
>
> The following five patches are for low level users of DAMON.  The 5th patch
> implements callbacks for each of the monitoring steps so that users can do
> whatever they want with the access patterns.  The 6th one implements recording
> of access patterns in DAMON for better convenience and efficiency.  Each of the
> next three patches (7th to 9th) respectively adds a programmable interface for
> other kernel code, a debugfs interface for privileged people and/or programs in
> user space, and a tracepoint for tracepoint-supporting tracers such as perf.
>
> Two patches for high level users of DAMON follow.  To provide a minimal
> reference to the debugfs interface and for high level use/tests of DAMON,
> the next patch (10th) implements a user space tool.  The 11th patch adds a
> document for administrators of DAMON.
>
> Next two patches are for tests.  The 12th and 13th patches provide unit tests
> (based on kunit) and user space tests (based on kselftest) respectively.
>
> Finally, the last patch (14th) updates the MAINTAINERS file.
>
> The patches are based on the v5.5.  You can also clone the complete git
> tree:
>
>     $ git clone git://github.com/sjp38/linux -b damon/patches/v6
>
> The web is also available:
> https://github.com/sjp38/linux/releases/tag/damon/patches/v6
>
>
> Patch History
> =============
>
> Changes from v5
> (https://lore.kernel.org/linux-mm/20200217103110.30817-1-sjpark@amazon.com/)
>  - Fix minor bugs (sampling, record attributes, debugfs and user space tool)
>  - selftests: Add debugfs interface tests for the bugs
>  - Modify the user space tool to use its self default values for parameters
>  - Fix pmd huge page access check
>
> Changes from v4
> (https://lore.kernel.org/linux-mm/20200210144812.26845-1-sjpark@amazon.com/)
>  - Add 'Reviewed-by' for the kunit tests patch (Brendan Higgins)
>  - Make the unit test depend on 'DAMON=y' (Randy Dunlap and kbuild bot)
>    Reported-by: kbuild test robot <lkp@intel.com>
>  - Fix m68k module build issue
>    Reported-by: kbuild test robot <lkp@intel.com>
>  - Add selftests
>  - Separate patches for low level users from core logic for better reading
>  - Clean up debugfs interface
>  - Trivial nitpicks
>
> Changes from v3
> (https://lore.kernel.org/linux-mm/20200204062312.19913-1-sj38.park@gmail.com/)
>  - Fix i386 build issue
>    Reported-by: kbuild test robot <lkp@intel.com>
>  - Increase the default size of the monitoring result buffer to 1 MiB
>  - Fix misc bugs in debugfs interface
>
> Changes from v2
> (https://lore.kernel.org/linux-mm/20200128085742.14566-1-sjpark@amazon.com/)
>  - Move MAINTAINERS changes to last commit (Brendan Higgins)
>  - Add descriptions for kunittest: why not only entire mappings and what the 4
>    input sets are trying to test (Brendan Higgins)
>  - Remove 'kdamond_need_stop()' test (Brendan Higgins)
>  - Discuss about the 'perf mem' and DAMON (Peter Zijlstra)
>  - Make CV clearly say what it actually does (Peter Zijlstra)
>  - Answer why new module (Qian Cai)
>  - Disable DAMON by default (Randy Dunlap)
>  - Change the interface: Separate recording attributes
>    (attrs, record, rules) and allow multiple kdamond instances
>  - Implement kernel API interface
>
> Changes from v1
> (https://lore.kernel.org/linux-mm/20200120162757.32375-1-sjpark@amazon.com/)
>  - Rebase on v5.5
>  - Add a tracepoint for integration with other tracers (Kirill A. Shutemov)
>  - document: Add more description for the user space tool (Brendan Higgins)
>  - unittest: Improve readability (Brendan Higgins)
>  - unittest: Use consistent name and helpers function (Brendan Higgins)
>  - Update PG_Young to avoid reclaim logic interference (Yunjae Lee)
>
> Changes from RFC
> (https://lore.kernel.org/linux-mm/20200110131522.29964-1-sjpark@amazon.com/)
>  - Specify an ambiguous plan of access pattern based mm optimizations
>  - Support loadable module build
>  - Cleanup code
>
> SeongJae Park (14):
>   mm: Introduce Data Access MONitor (DAMON)
>   mm/damon: Implement region based sampling
>   mm/damon: Adaptively adjust regions
>   mm/damon: Apply dynamic memory mapping changes
>   mm/damon: Implement callbacks
>   mm/damon: Implement access pattern recording
>   mm/damon: Implement kernel space API
>   mm/damon: Add debugfs interface
>   mm/damon: Add a tracepoint for result writing
>   tools: Add a minimal user-space tool for DAMON
>   Documentation/admin-guide/mm: Add a document for DAMON
>   mm/damon: Add kunit tests
>   mm/damon: Add user selftests
>   MAINTAINERS: Update for DAMON
>
>  .../admin-guide/mm/data_access_monitor.rst    |  414 +++++
>  Documentation/admin-guide/mm/index.rst        |    1 +
>  MAINTAINERS                                   |   12 +
>  include/linux/damon.h                         |   71 +
>  include/trace/events/damon.h                  |   32 +
>  mm/Kconfig                                    |   23 +
>  mm/Makefile                                   |    1 +
>  mm/damon-test.h                               |  604 +++++++
>  mm/damon.c                                    | 1427 +++++++++++++++++
>  mm/page_ext.c                                 |    1 +
>  tools/damon/.gitignore                        |    1 +
>  tools/damon/_dist.py                          |   36 +
>  tools/damon/bin2txt.py                        |   64 +
>  tools/damon/damo                              |   37 +
>  tools/damon/heats.py                          |  358 +++++
>  tools/damon/nr_regions.py                     |   89 +
>  tools/damon/record.py                         |  212 +++
>  tools/damon/report.py                         |   45 +
>  tools/damon/wss.py                            |   95 ++
>  tools/testing/selftests/damon/Makefile        |    7 +
>  .../selftests/damon/_chk_dependency.sh        |   28 +
>  tools/testing/selftests/damon/_chk_record.py  |   89 +
>  .../testing/selftests/damon/debugfs_attrs.sh  |  139 ++
>  .../testing/selftests/damon/debugfs_record.sh |   50 +
>  24 files changed, 3836 insertions(+)
>  create mode 100644 Documentation/admin-guide/mm/data_access_monitor.rst
>  create mode 100644 include/linux/damon.h
>  create mode 100644 include/trace/events/damon.h
>  create mode 100644 mm/damon-test.h
>  create mode 100644 mm/damon.c
>  create mode 100644 tools/damon/.gitignore
>  create mode 100644 tools/damon/_dist.py
>  create mode 100644 tools/damon/bin2txt.py
>  create mode 100755 tools/damon/damo
>  create mode 100644 tools/damon/heats.py
>  create mode 100644 tools/damon/nr_regions.py
>  create mode 100644 tools/damon/record.py
>  create mode 100644 tools/damon/report.py
>  create mode 100644 tools/damon/wss.py
>  create mode 100644 tools/testing/selftests/damon/Makefile
>  create mode 100644 tools/testing/selftests/damon/_chk_dependency.sh
>  create mode 100644 tools/testing/selftests/damon/_chk_record.py
>  create mode 100755 tools/testing/selftests/damon/debugfs_attrs.sh
>  create mode 100755 tools/testing/selftests/damon/debugfs_record.sh
>
> --
> 2.17.1
>
> ============================= 8< ======================================
>
> Appendix A: Related Works
> =========================
>
> There are a number of studies[1,2,3,4,5,6] optimizing memory management
> mechanisms based on actual memory access patterns that show impressive
> results.  However, most of those give little consideration to the monitoring
> of the accesses itself.  Some focused on the overhead of the monitoring, but
> do not consider accuracy scalability[6] or have additional
> dependencies[7].  Indeed, one recent work[5] on proactive
> reclamation was also proposed[8] to the kernel community, but the monitoring
> overhead was considered a main problem.
>
> [1] Subramanya R Dulloor, Amitabha Roy, Zheguang Zhao, Narayanan Sundaram,
>     Nadathur Satish, Rajesh Sankaran, Jeff Jackson, and Karsten Schwan. 2016.
>     Data tiering in heterogeneous memory systems. In Proceedings of the 11th
>     European Conference on Computer Systems (EuroSys). ACM, 15.
> [2] Youngjin Kwon, Hangchen Yu, Simon Peter, Christopher J Rossbach, and Emmett
>     Witchel. 2016. Coordinated and efficient huge page management with ingens.
>     In 12th USENIX Symposium on Operating Systems Design and Implementation
>     (OSDI).  705–721.
> [3] Harald Servat, Antonio J Peña, Germán Llort, Estanislao Mercadal,
>     HansChristian Hoppe, and Jesús Labarta. 2017. Automating the application
>     data placement in hybrid memory systems. In 2017 IEEE International
>     Conference on Cluster Computing (CLUSTER). IEEE, 126–136.
> [4] Vlad Nitu, Boris Teabe, Alain Tchana, Canturk Isci, and Daniel Hagimont.
>     2018. Welcome to zombieland: practical and energy-efficient memory
>     disaggregation in a datacenter. In Proceedings of the 13th European
>     Conference on Computer Systems (EuroSys). ACM, 16.
> [5] Andres Lagar-Cavilla, Junwhan Ahn, Suleiman Souhlal, Neha Agarwal, Radoslaw
>     Burny, Shakeel Butt, Jichuan Chang, Ashwin Chaugule, Nan Deng, Junaid
>     Shahid, Greg Thelen, Kamil Adam Yurtsever, Yu Zhao, and Parthasarathy
>     Ranganathan.  2019. Software-Defined Far Memory in Warehouse-Scale
>     Computers.  In Proceedings of the 24th International Conference on
>     Architectural Support for Programming Languages and Operating Systems
>     (ASPLOS).  ACM, New York, NY, USA, 317–330.
>     DOI:https://doi.org/10.1145/3297858.3304053
> [6] Carl Waldspurger, Trausti Saemundsson, Irfan Ahmad, and Nohhyun Park.
>     2017. Cache Modeling and Optimization using Miniature Simulations. In 2017
>     USENIX Annual Technical Conference (ATC). USENIX Association, Santa
>     Clara, CA, 487–498.
>     https://www.usenix.org/conference/atc17/technical-sessions/
> [7] Haojie Wang, Jidong Zhai, Xiongchao Tang, Bowen Yu, Xiaosong Ma, and
>     Wenguang Chen. 2018. Spindle: Informed Memory Access Monitoring. In 2018
>     USENIX Annual Technical Conference (ATC). USENIX Association, Boston, MA,
>     561–574.  https://www.usenix.org/conference/atc18/presentation/wang-haojie
> [8] Jonathan Corbet. 2019. Proactively reclaiming idle memory. (2019).
>     https://lwn.net/Articles/787611/.
>
>
> Appendix B: Limitations of Other Access Monitoring Techniques
> =============================================================
>
> The memory access instrumentation techniques applied in many tools such as
> Intel PIN are essential for cases requiring correctness, such as memory access
> bug detection or cache level optimizations.  However, they usually incur
> exceptionally high overhead, which is unacceptable here.
>
> Periodic access checks based on access counting features (e.g., PTE Accessed
> bits or PG_Idle flags) can reduce the overhead.  They sacrifice some of the
> quality, but that is still acceptable for many uses in this domain.  However,
> the overhead increases arbitrarily as the size of the target workload grows.
> Miniature-like static region based sampling can set an upper bound on the
> overhead, but it will then decrease the quality of the output as the size of
> the workload grows.
>
> DAMON is another solution that overcomes these limitations.  It is 1) accurate
> enough for this domain, 2) light-weight enough to be applied online, and
> 3) allows users to set an upper bound on the overhead, regardless of the size
> of the target workloads.  It is implemented as a simple and small kernel module
> to support various users in both user space and kernel space.  Refer to the
> 'Evaluations' section below for detailed performance of DAMON.
>
> For these goals, DAMON utilizes its two core mechanisms, which allow low
> overhead and high quality of output, respectively.  To show how DAMON delivers
> those, refer to the 'Mechanisms of DAMON' section below.
>
>
> Appendix C: Mechanisms of DAMON
> ===============================
>
>
> Basic Access Check
> ------------------
>
> DAMON basically reports which pages are accessed how frequently.  The report is
> passed to users in binary format via a ``result file`` whose path users can
> set.  Note that the frequency is not an absolute number of accesses, but a
> relative frequency among the pages of the target workloads.
>
> Users can also control the resolution of the reports by setting two time
> intervals, ``sampling interval`` and ``aggregation interval``.  In detail,
> DAMON checks access to each page per ``sampling interval``, aggregates the
> results (counts the number of the accesses to each page), and reports the
> aggregated results per ``aggregation interval``.

Why is "aggregation interval" important? User space can just poll
after such interval.

> For the access check of each
> page, DAMON uses the Accessed bits of PTEs.
>
> This is thus similar to the previously mentioned periodic access check based
> mechanisms, whose overhead increases as the size of the target process
> grows.
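
For what it's worth, the check-and-aggregate cycle described above can be
sketched in user space.  This is a hypothetical Python simulation for
illustration only (not DAMON's kernel code; the function name and data layout
are made up), with sets of page numbers standing in for the PTE Accessed bits:

```python
# Hypothetical simulation of the periodic access check described above.
# Real DAMON reads and clears PTE Accessed bits in the kernel; here each
# sampling interval is represented by the set of pages accessed in it.

def aggregate_accesses(page_accessed_log, sampling_checks):
    """Count, per page, in how many sampling checks the page was accessed.

    page_accessed_log: list (one entry per sampling interval) of sets of
    page numbers that were accessed during that interval.
    sampling_checks: number of sampling intervals per aggregation interval.
    """
    counts = {}
    for interval in page_accessed_log[:sampling_checks]:
        for page in interval:
            counts[page] = counts.get(page, 0) + 1
    return counts

# Three sampling intervals; pages 0 and 1 are hot, page 7 is touched once.
log = [{0, 1}, {0, 1, 7}, {0}]
print(aggregate_accesses(log, sampling_checks=3))  # {0: 3, 1: 2, 7: 1}
```

Note how the work per sampling interval is proportional to the number of pages
tracked, which is exactly the scalability problem region based sampling
addresses.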
>
>
> Region Based Sampling
> ---------------------
>
> To avoid the unbounded increase of the overhead, DAMON groups a number of
> adjacent pages that are assumed to have the same access frequency into a
> region.  As long as the assumption (pages in a region have the same access
> frequency) holds, only one page in the region needs to be checked.  Thus, for
> each ``sampling interval``, DAMON randomly picks one page in each region and
> clears its Accessed bit.  After one more ``sampling interval``, DAMON reads the
> Accessed bit of the page and increases the access frequency of the region if
> the bit has been set in the meantime.  Therefore, the monitoring overhead is
> controllable by setting the number of regions.  DAMON allows users to set the
> minimum and maximum number of regions for the trade-off.
>
> Except for the assumption, this is almost the same as the above-mentioned
> miniature-like static region based sampling.  In other words, this scheme
> cannot preserve the quality of the output if the assumption does not hold.
>
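
The per-region sampling step reads roughly like the following hypothetical
Python model (``Region`` and ``sample_once`` are made-up names; a set of
accessed page numbers stands in for reading and clearing the PTE Accessed
bits):

```python
import random

# Hypothetical sketch of region based sampling: one randomly chosen page
# per region is checked per sampling interval, so the per-interval work is
# proportional to the number of regions, not the workload size.

class Region:
    def __init__(self, start, end):
        self.start, self.end = start, end   # page-number bounds [start, end)
        self.nr_accesses = 0
        self.sampling_page = None

def sample_once(regions, accessed_pages, rng=random):
    for r in regions:
        # Was the previously chosen page accessed since the last check?
        if r.sampling_page is not None and r.sampling_page in accessed_pages:
            r.nr_accesses += 1
        # Pick (and conceptually "mkold") the next page to watch.
        r.sampling_page = rng.randrange(r.start, r.end)

regions = [Region(0, 100), Region(100, 200)]
sample_once(regions, accessed_pages=set())               # first pass only arms
sample_once(regions, accessed_pages=set(range(0, 100)))  # region 0 fully hot
print([r.nr_accesses for r in regions])                  # [1, 0]
```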

So, the spatial locality is assumed.

>
> Adaptive Regions Adjustment
> ---------------------------
>
> At the beginning of the monitoring, DAMON constructs the initial regions by
> evenly splitting the memory mapped address space of the process into the
> user-specified minimum number of regions.  In this initial state, the
> assumption normally does not hold and thus the quality could be low.  To keep
> the assumption as much as possible, DAMON adaptively merges and splits each
> region.
> For each ``aggregation interval``, it compares the access frequencies of

Oh, the aggregation interval is used for the merge events.

> adjacent regions and merges them if the frequency difference is small.  Then,
> after it reports and clears the aggregated access frequency of each region, it
> splits each region into two if the total number of regions is smaller
> than half of the user-specified maximum number of regions.
>

What's the equilibrium/stable state here?

> In this way, DAMON provides its best-effort quality and minimal overhead while
> keeping the bounds users set for their trade-off.
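
The merge/split cycle above could be modeled roughly as follows.  This is a
hypothetical Python sketch; the tuple layout, the averaging on merge, and the
threshold handling are assumptions for illustration, not DAMON's actual
implementation:

```python
# Hypothetical model of adaptive regions adjustment: per aggregation
# interval, merge adjacent regions with similar access frequencies, then
# split regions while the total stays under half the user-set maximum.

def merge_similar(regions, threshold):
    """regions: address-ordered list of (start, end, nr_accesses) tuples."""
    merged = [regions[0]]
    for start, end, freq in regions[1:]:
        m_start, m_end, m_freq = merged[-1]
        if m_end == start and abs(m_freq - freq) <= threshold:
            merged[-1] = (m_start, end, (m_freq + freq) // 2)
        else:
            merged.append((start, end, freq))
    return merged

def split_regions(regions, max_nr_regions):
    out = []
    for start, end, freq in regions:
        if len(regions) < max_nr_regions // 2 and end - start >= 2:
            mid = (start + end) // 2
            out += [(start, mid, freq), (mid, end, freq)]
        else:
            out.append((start, end, freq))
    return out

r = [(0, 100, 10), (100, 200, 12), (200, 300, 50)]
r = merge_similar(r, threshold=5)    # first two merge: similar frequencies
print(r)                             # [(0, 200, 11), (200, 300, 50)]
r = split_regions(r, max_nr_regions=10)
print(len(r))                        # 4: each region was split into two
```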
>
>
> Applying Dynamic Memory Mappings
> --------------------------------
>
> Only a number of small parts of the super-huge virtual address space of a
> process are mapped to physical memory and accessed.  Thus, tracking the
> unmapped address regions is just wasteful.  However, tracking every memory
> mapping change might incur overhead.  For this reason, DAMON applies the
> dynamic memory mapping changes to the tracking regions only once per
> user-specified time interval (``regions update interval``).
>
>
> Appendix D: Expected Use-cases
> ==============================
>
> A straightforward use case of DAMON would be program behavior analysis.
> With the DAMON output, users can confirm whether the program is running as
> intended or not.  This will be useful for debugging and testing of design
> points.
>
> The monitored results can also be useful for measuring the dynamic working set
> size of workloads.  For the administration of memory overcommitted systems or
> selection of environments (e.g., containers providing different amounts of
> memory) for your workloads, this will be useful.
>
> If you are a programmer, you can optimize your program by managing the memory
> based on the actual data access pattern.  For example, you can identify the
> dynamic hotness of your data using DAMON and call ``mlock()`` to keep your hot
> data in DRAM, or call ``madvise()`` with ``MADV_PAGEOUT`` to proactively
> reclaim cold data.  Even if your program is guaranteed to not encounter
> memory pressure, you can still improve performance by applying the DAMON
> outputs to calls of ``MADV_HUGEPAGE`` and ``MADV_NOHUGEPAGE``.  More creative
> optimizations would be possible.  Our evaluations of DAMON include a
> straightforward optimization using ``mlock()``.  Please refer to the
> 'Evaluations' section above for more detail.
>
> As DAMON incurs very low overhead, such optimizations can be applied not only
> offline, but also online.  Also, there is no reason to limit such optimizations
> to the user space.  Several parts of the kernel's memory management mechanisms
> could also be optimized using DAMON.  Reclamation, THP (de)promotion
> decisions, and compaction would be such candidates.  DAMON will continue
> its development to be highly optimized for online/in-kernel uses.
>
>
> A Future Plan: Data Access Monitoring-based Operation Schemes
> -------------------------------------------------------------
>
> As described in the above section, DAMON could be helpful for actual access
> based memory management optimizations.  Nevertheless, users who want to do such
> optimizations should run DAMON, read the traced data (either online or
> offline), analyze it, plan a new memory management scheme, and apply the new
> scheme by themselves.  This must be easier than before, but could still require
> some level of effort.  In its next development stage, DAMON will reduce some
> of this effort by allowing users to specify access based memory
> management rules for their specific processes.
>
> Because this is just a plan, the specific interface is not fixed yet, but for
> example, users will be allowed to write their desired memory management rules
> to a special file in a DAMON specific format.  The rules will be something like
> 'if a memory region whose size is in a given range keeps a given range of
> hotness for longer than a given duration, apply a specific memory management
> action using madvise() or mlock() to the region'.  For example, we can imagine
> rules like below:
>
>     # format is: <min/max size> <min/max frequency (0-99)> <duration> <action>
>
>     # if a region of a size keeps a very high access frequency for more than
>     # 100ms, lock the region in main memory (call mlock()). But, if the
>     # region is larger than 500 MiB, skip it. The exception might be helpful
>     # if the system has only, say, 600 MiB of DRAM; a region larger
>     # than 600 MiB could not be locked in DRAM at all.
>     na 500M 90 99 100ms mlock
>
>     # if a region keeps a high access frequency for more than 100ms, put the
>     # region on the head of the LRU list (call madvise() with MADV_WILLNEED).
>     na na 80 90 100ms madv_willneed
>
>     # if a region keeps a low access frequency for more than 100ms, put the
>     # region on the tail of the LRU list (call madvise() with MADV_COLD).
>     na na 10 20 100ms madv_cold
>
>     # if a region keeps a very low access frequency for more than 100ms, swap
>     # out the region immediately (call madvise() with MADV_PAGEOUT).
>     na na 0 10 100ms madv_pageout
>
>     # if a region bigger than 2MB keeps a very high access frequency
>     # for more than 100ms, let the region use huge pages (call madvise()
>     # with MADV_HUGEPAGE).
>     2M na 90 99 100ms madv_hugepage
>
>     # if a region bigger than 2MB keeps no high access frequency
>     # for more than 100ms, prevent the region from using huge pages (call
>     # madvise() with MADV_NOHUGEPAGE).
>     2M na 0 25 100ms madv_nohugepage
>
> An RFC patchset for this is available:
> https://lore.kernel.org/linux-mm/20200218085309.18346-1-sjpark@amazon.com/
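
As an illustration only, a hypothetical parser for rule lines in the format
sketched above might look like the following (``parse_scheme`` and its output
layout are invented here; the real DAMOS interface may differ):

```python
# Hypothetical parser for the scheme format sketched above:
# <min/max size> <min/max frequency (0-99)> <duration> <action>
# 'na' means the field is unrestricted.

_UNITS = {'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30}

def parse_size(tok):
    if tok == 'na':
        return None
    if tok[-1] in _UNITS:
        return int(tok[:-1]) * _UNITS[tok[-1]]
    return int(tok)

def parse_scheme(line):
    min_sz, max_sz, min_freq, max_freq, duration, action = line.split()
    return {
        'min_size': parse_size(min_sz),
        'max_size': parse_size(max_sz),
        'min_freq': None if min_freq == 'na' else int(min_freq),
        'max_freq': None if max_freq == 'na' else int(max_freq),
        'duration': duration,
        'action': action,
    }

print(parse_scheme('na 500M 90 99 100ms mlock'))
```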

I do want to question the actual motivation of the design followed by this work.

With the already present Page Idle Tracking feature in the kernel, I
can envision that the region sampling and adaptive region adjustments
can be done in the user space. Due to sampling, the additional
overhead will be very small and configurable.

Additionally the proposed mechanism has inherent assumption of the
presence of spatial locality (for virtual memory) in the monitored
processes which is very workload dependent.

Given that the same mechanism can be implemented in user space
within tolerable overhead and is workload dependent, why should it be
done in the kernel? What exactly is the advantage of implementing this
in the kernel?

thanks,
Shakeel


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-10 16:22         ` SeongJae Park
@ 2020-03-10 17:39           ` Jonathan Cameron
  2020-03-12  9:20             ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-10 17:39 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 17:22:40 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> On Tue, 10 Mar 2020 15:55:10 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> 
> > On Tue, 10 Mar 2020 12:52:33 +0100
> > SeongJae Park <sjpark@amazon.com> wrote:
> >   
> > > Added replies to your every comment in line below.  I agree to your whole
> > > opinions, will apply those in next spin! :)
> > >   
> > 
> > One additional question inline that came to mind.  Using a single statistic
> > to monitor huge page and normal page hits is going to give us problems
> > I think.  
> 
> Ah, you're right!!!  This is indeed a critical bug!
> 
> > 
> > Perhaps I'm missing something?
> >   
> > > > > +/*
> > > > > + * Check whether the given region has accessed since the last check    
> > > > 
> > > > Should also make clear that this sets us up for the next access check at
> > > > a different memory address in the region.
> > > > 
> > > > Given the lack of connection between activities perhaps just split this into
> > > > two functions that are always called next to each other.    
> > > 
> > > Will make the description more clearer as suggested.
> > > 
> > > Also, I found that I'm not clearing *pte and *pmd before going 'mkold', thanks
> > > to this comment.  Will fix it, either.
> > >   
> > > >     
> > > > > + *
> > > > > + * mm	'mm_struct' for the given virtual address space
> > > > > + * r	the region to be checked
> > > > > + */
> > > > > +static void kdamond_check_access(struct damon_ctx *ctx,
> > > > > +			struct mm_struct *mm, struct damon_region *r)
> > > > > +{
> > > > > +	pte_t *pte = NULL;
> > > > > +	pmd_t *pmd = NULL;
> > > > > +	spinlock_t *ptl;
> > > > > +
> > > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > > +		goto mkold;
> > > > > +
> > > > > +	/* Read the page table access bit of the page */
> > > > > +	if (pte && pte_young(*pte))
> > > > > +		r->nr_accesses++;
> > > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE    
> > > > 
> > > > Is it worth having this protection?  Seems likely to have only a very small
> > > > influence on performance and makes it a little harder to reason about the code.    
> > > 
> > > It was necessary for addressing 'implicit declaration' problem of 'pmd_young()'
> > > and 'pmd_mkold()' for build of DAMON on several architectures including User
> > > Mode Linux.
> > > 
> > > Will modularize the code for better readability.
> > >   
> > > >     
> > > > > +	else if (pmd && pmd_young(*pmd))
> > > > > +		r->nr_accesses++;  
> > 
> > So we increment a region count by one if we have an access in a huge page, or
> > in a normal page.
> > 
> > If we get a region that has a mixture of the two, this seems likely to give a
> > bad approximation.
> > 
> > Assume the region is accessed 'evenly' but each " 4k page" is only hit 10% of the time
> > (where a hit is in one check period)
> > 
> > If our address is in a 4k page, then we'll hit 10% of the time, but if it is in a 2M
> > huge page then we'll hit a much higher percentage of the time.
> > 1 - (0.9^512) ~= 1
> > 
> > Should we look to somehow account for this?  
> 
> Yes, this is a really critical bug and we should fix it!  Thank you so much for
> finding this!
> 
> >   
> > > > > +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> > > > > +
> > > > > +	spin_unlock(ptl);
> > > > > +
> > > > > +mkold:
> > > > > +	/* mkold next target */
> > > > > +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> > > > > +
> > > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > > +		return;
> > > > > +
> > > > > +	if (pte) {
> > > > > +		if (pte_young(*pte)) {
> > > > > +			clear_page_idle(pte_page(*pte));
> > > > > +			set_page_young(pte_page(*pte));
> > > > > +		}
> > > > > +		*pte = pte_mkold(*pte);
> > > > > +	}
> > > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > > +	else if (pmd) {
> > > > > +		if (pmd_young(*pmd)) {
> > > > > +			clear_page_idle(pmd_page(*pmd));
> > > > > +			set_page_young(pmd_page(*pmd));
> > > > > +		}
> > > > > +		*pmd = pmd_mkold(*pmd);
> > > > > +	}  
> 
> This is also very problematic if several regions are backed by a single huge
> page, as only one region in the huge page will be checked as accessed.
> 
> Will address these problems in next spin!

Good point.  There is little point in ever having multiple regions including
a single huge page.  Would it be possible to tweak the region splitting algorithm
to not do this?

Jonathan

> 
> 
> Thanks,
> SeongJae Park
> 
> > > > > +#endif
> > > > > +
> > > > > +	spin_unlock(ptl);
> > > > > +}
> > > > > +  
> > 
> >   




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-10 17:39           ` Jonathan Cameron
@ 2020-03-12  9:20             ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-12  9:20 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Tue, 10 Mar 2020 17:39:38 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Tue, 10 Mar 2020 17:22:40 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > On Tue, 10 Mar 2020 15:55:10 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> > 
> > > On Tue, 10 Mar 2020 12:52:33 +0100
> > > SeongJae Park <sjpark@amazon.com> wrote:
> > >   
> > > > Added replies to each of your comments inline below.  I agree with all of your
> > > > opinions, and will apply those in the next spin! :)
> > > >   
> > > 
> > > One additional question inline that came to mind.  Using a single statistic
> > > to monitor huge page and normal page hits is going to give us problems
> > > I think.  
> > 
> > Ah, you're right!!!  This is indeed a critical bug!
> > 
> > > 
> > > Perhaps I'm missing something?
> > >   
> > > > > > +/*
> > > > > > + * Check whether the given region has accessed since the last check    
> > > > > 
> > > > > Should also make clear that this sets us up for the next access check at
> > > > > a different memory address in the region.
> > > > > 
> > > > > Given the lack of connection between activities perhaps just split this into
> > > > > two functions that are always called next to each other.    
> > > > 
> > > > Will make the description clearer as suggested.
> > > > 
> > > > Also, I found that I'm not clearing *pte and *pmd before going 'mkold', thanks
> > > > to this comment.  Will fix that as well.
> > > >   
> > > > >     
> > > > > > + *
> > > > > > + * mm	'mm_struct' for the given virtual address space
> > > > > > + * r	the region to be checked
> > > > > > + */
> > > > > > +static void kdamond_check_access(struct damon_ctx *ctx,
> > > > > > +			struct mm_struct *mm, struct damon_region *r)
> > > > > > +{
> > > > > > +	pte_t *pte = NULL;
> > > > > > +	pmd_t *pmd = NULL;
> > > > > > +	spinlock_t *ptl;
> > > > > > +
> > > > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > > > +		goto mkold;
> > > > > > +
> > > > > > +	/* Read the page table access bit of the page */
> > > > > > +	if (pte && pte_young(*pte))
> > > > > > +		r->nr_accesses++;
> > > > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE    
> > > > > 
> > > > > Is it worth having this protection?  Seems likely to have only a very small
> > > > > influence on performance and makes it a little harder to reason about the code.    
> > > > 
> > > > It was necessary for addressing the 'implicit declaration' problem of
> > > > 'pmd_young()' and 'pmd_mkold()' when building DAMON on several architectures
> > > > including User Mode Linux.
> > > > 
> > > > Will modularize the code for better readability.
> > > >   
> > > > >     
> > > > > > +	else if (pmd && pmd_young(*pmd))
> > > > > > +		r->nr_accesses++;  
> > > 
> > > So we increment a region count by one if we have an access in a huge page, or
> > > in a normal page.
> > > 
> > > If we get a region that has a mixture of the two, this seems likely to give a
> > > bad approximation.
> > > 
> > > Assume the region is accessed 'evenly' but each 4k page is only hit 10% of the time
> > > (where a hit is in one check period)
> > > 
> > > If our address is in a 4k page, then we'll hit 10% of the time, but if it is in a 2M
> > > huge page then we'll hit a much higher percentage of the time.
> > > 1 - (0.9^512) ~= 1
> > > 
> > > Should we look to somehow account for this?  
> > 
> > Yes, this is a really critical bug and we should fix it!  Thank you so much for
> > finding this!
> > 
> > >   
> > > > > > +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> > > > > > +
> > > > > > +	spin_unlock(ptl);
> > > > > > +
> > > > > > +mkold:
> > > > > > +	/* mkold next target */
> > > > > > +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> > > > > > +
> > > > > > +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> > > > > > +		return;
> > > > > > +
> > > > > > +	if (pte) {
> > > > > > +		if (pte_young(*pte)) {
> > > > > > +			clear_page_idle(pte_page(*pte));
> > > > > > +			set_page_young(pte_page(*pte));
> > > > > > +		}
> > > > > > +		*pte = pte_mkold(*pte);
> > > > > > +	}
> > > > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > > > +	else if (pmd) {
> > > > > > +		if (pmd_young(*pmd)) {
> > > > > > +			clear_page_idle(pmd_page(*pmd));
> > > > > > +			set_page_young(pmd_page(*pmd));
> > > > > > +		}
> > > > > > +		*pmd = pmd_mkold(*pmd);
> > > > > > +	}  
> > 
> > This is also very problematic if several regions are backed by a single huge
> > page, as only one region in the huge page will be checked as accessed.
> > 
> > Will address these problems in next spin!
> 
> Good point.  There is little point in ever having multiple regions including
> a single huge page.  Would it be possible to tweak the region splitting algorithm
> to not do this?

Yes, that would be a good solution.  However, I believe this is a problem of
the access checking mechanism, because a region is defined only as a 'memory
area having similar access frequency'.  Adding more rules, such as 'it should
be aligned to the huge page size', might make things more complex.  Also, we
are currently using the page table Accessed bits as the primitive for the
access check, but it could be extended to other primitives in the future.
Therefore, I would like to modify the access checking mechanism to be aware
of the existence of huge pages.

For regions containing both regular pages and huge pages, the huge pages will
report an erroneously high access frequency, as you noted before, but the
adaptive regions adjustment will eventually split them.

If you have other concerns or opinions, please let me know.


Thanks,
SeongJae Park

> 
> Jonathan
> 
> > 
> > 
> > Thanks,
> > SeongJae Park
> > 
> > > > > > +#endif
> > > > > > +
> > > > > > +	spin_unlock(ptl);
> > > > > > +}
> > > > > > +  
> > > 
> > >   
> 
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-10 17:21 ` Shakeel Butt
@ 2020-03-12 10:07   ` SeongJae Park
  2020-03-12 10:43     ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-12 10:07 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: SeongJae Park, Andrew Morton, SeongJae Park, Andrea Arcangeli,
	Yang Shi, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, Qian Cai, Colin Ian King, Jonathan Corbet, dwmw,
	jolsa, Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:

> On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> >
> > From: SeongJae Park <sjpark@amazon.de>
> >
> > Introduction
> > ============
> >
> > Memory management decisions can be improved if finer data access information is
> > available.  However, because such finer information usually comes with higher
> > overhead, most systems including Linux forgives the potential improvement and
> > rely on only coarse information or some light-weight heuristics.  The
> > pseudo-LRU and the aggressive THP promotions are such examples.
> >
> > A number of experimental data access pattern awared memory management
> 
> why experimental? [5,8] are deployed across Google fleet.

Yes, describing all of them as experimental was my mistake.  Will change this
sentence in the next spin.

> 
> > optimizations (refer to 'Appendix A' for more details) say the sacrifices are
> > huge.
> 
> It depends. For servers where stranded CPUs are common, the cost is
> not that huge.

What I wanted to say is that the potential performance benefit of data access
pattern based optimizations such as proactive reclamation is large (a
sacrifice from the perspective of Linux, which is not optimized in that way).
Will wordsmith this sentence in the next spin.

> 
> > However, none of those has successfully adopted to Linux kernel mainly
> 
> adopted? I think you mean accepted or merged

You're right, will correct this.

> 
> > due to the absence of a scalable and efficient data access monitoring
> > mechanism.  Refer to 'Appendix B' to see the limitations of existing memory
> > monitoring mechanisms.
> >
> > DAMON is a data access monitoring subsystem for the problem.  It is 1) accurate
> > enough to be used for the DRAM level memory management (a straightforward
> > DAMON-based optimization achieved up to 2.55x speedup), 2) light-weight enough
> > to be applied online (compared to a straightforward access monitoring scheme,
> > DAMON is up to 94.242.42x lighter)
> 
> 94.242.42x ?

94,242.42x (almost 100 thousand).  Sorry for the confusion.

> 
> > and 3) keeps predefined upper-bound overhead
> > regardless of the size of target workloads (thus scalable).  Refer to 'Appendix
> > C' if you interested in how it is possible.
> >
> > DAMON has mainly designed for the kernel's memory management mechanisms.
> > However, because it is implemented as a standalone kernel module and provides
> > several interfaces, it can be used by a wide range of users including kernel
> > space programs, user space programs, programmers, and administrators.  DAMON
> > is now supporting the monitoring only, but it will also provide simple and
> > convenient data access pattern awared memory managements by itself.  Refer to
> > 'Appendix D' for more detailed expected usages of DAMON.
> >
[...]
> >
> > Frequently Asked Questions
> > ==========================
> >
> > Q: Why DAMON is not integrated with perf?
> > A: From the perspective of perf like profilers, DAMON can be thought of as a
> > data source in kernel, like the tracepoints, the pressure stall information
> > (psi), or the idle page tracking.  Thus, it is easy to integrate DAMON with the
> > profilers.  However, this patchset doesn't provide a fancy perf integration
> > because current step of DAMON development is focused on its core logic only.
> > That said, DAMON already provides two interfaces for user space programs, which
> > based on debugfs and tracepoint, respectively.  Using the tracepoint interface,
> > you can use DAMON with perf.  This patchset also provides a debugfs interface
> > based user space tool for DAMON.  It can be used to record, visualize, and
> > analyze data access patterns of target processes in a convenient way.
> 
> Oh it is monitoring at the process level.

Yes, exactly.

> 
> >
[...]
> >
> >
> > Evaluations
> > ===========
> >
> > A prototype of DAMON has evaluated on an Intel Xeon E7-8837 machine using 20
> > benchmarks that picked from SPEC CPU 2006, NAS, Tensorflow Benchmark,
> > SPLASH-2X, and PARSEC 3 benchmark suite.  Nonethless, this section provides
> > only summary of the results.  For more detail, please refer to the slides used
> > for the introduction of DAMON at the Linux Plumbers Conference 2019[1] or the
> > MIDDLEWARE'19 industrial track paper[2].
> 
> The paper [2] is behind a paywall, upload it somewhere for free access.

It's a shame, sorry.  But isn't it illegal to upload it somewhere?

> 
> >
> >
> > Quality
> > -------
> >
> > We first traced and visualized the data access pattern of each workload.  We
> > were able to confirm that the visualized results are reasonably accurate by
> > manually comparing those with the source code of the workloads.
> >
> > To see the usefulness of the monitoring, we optimized 9 memory intensive
> > workloads among them for memory pressure situations using the DAMON outputs.
> > In detail, we identified frequently accessed memory regions in each workload
> > based on the DAMON results and protected them with ``mlock()`` system calls.
> 
> Did you change the applications to add mlock() or was it done
> dynamically through some new interface? The hot memory / working set
> changes, so, dynamically m[un]locking makes sense.

We manually changed the application source code to add mlock()/munlock().

> 
> > The optimized versions consistently show speedup (2.55x in best case, 1.65x in
> > average) under memory pressure.
> >
> 
> Do tell more about these 9 workloads and how they were evaluated. How
> memory pressure was induced? Did you overcommit the memory? How many
> workloads were running concurrently? How was the performance isolation
> between the workloads? Is this speedup due to triggering oom-killer
> earlier under memory pressure or something else?

The 9 workloads are 433.milc, 462.libquantum and 470.lbm from SPEC CPU 2006,
cg, sp from NPB, and ferret, water_nsquared, fft, and volrend from PARSEC3
benchmark suites.

I isolated them and induced the memory pressure by running them one by one, in
a cgroup providing 30% less memory than their original working set size.

The speedup came from the reduced swap in events, due to the mlock() of hot
objects.

> 
> >
> > Overhead
> > --------
> >
> > We also measured the overhead of DAMON.  It was not only under the upperbound
> > we set, but was much lower (0.6 percent of the bound in best case, 13.288
> > percent of the bound in average).
> 
> Why the upperbound you set matters?

I just wanted to show that the upper-bound setting really works as intended.

> 
> > This reduction of the overhead is mainly
> > resulted from its core mechanism called adaptive regions adjustment.  Refer to
> > 'Appendix D' for more detail about the mechanism.  We also compared the
> > overhead of DAMON with that of a straightforward periodic access check-based
> > monitoring.
> 
> What is periodic access check-based monitoring?

It means periodically checking the 'Accessed bit' of each page of the target
processes.

> 
> > DAMON's overhead was smaller than it by 94,242.42x in best case,
> > 3,159.61x in average.
> >
> > The latest version of DAMON running with its default configuration consumes
> > only up to 1% of CPU time when applied to realistic workloads in PARSEC3 and
> > SPLASH-2X and makes no visible slowdown to the target processes.
> 
> What about the number of processes? The alternative mechanism in [5,8]
> are whole machine monitoring. Thousands of processes run on a machine.
> How does this work monitoring thousands of processes compared to
> [5,8].

DAMON is designed to be able to monitor multiple processes, but the tests were
done on one process at a time.

I am planning to extend DAMON to support the entire physical memory in the
future, though.

> 
> Using sampling the cost/overhead is configurable but I would like to
> know at what cost? Will the accuracy be good enough for the given
> use-case?

The adaptive regions adjustment is exactly for that point.  To show the
correctness, I presented the visualized patterns (which seem reasonable) and
the pattern based (manual and automated) optimizations (which show reasonable
improvements).

> 
> >
> >
[...]
> >
> > Appendix C: Mechanisms of DAMON
> > ===============================
> >
> >
> > Basic Access Check
> > ------------------
> >
> > DAMON basically reports what pages are how frequently accessed.  The report is
> > passed to users in binary format via a ``result file`` which users can set it's
> > path.  Note that the frequency is not an absolute number of accesses, but a
> > relative frequency among the pages of the target workloads.
> >
> > Users can also control the resolution of the reports by setting two time
> > intervals, ``sampling interval`` and ``aggregation interval``.  In detail,
> > DAMON checks access to each page per ``sampling interval``, aggregates the
> > results (counts the number of the accesses to each page), and reports the
> > aggregated results per ``aggregation interval``.
> 
> Why is "aggregation interval" important? User space can just poll
> after such interval.

You already got the answer below, but to add my explanation: it is necessary
to be able to say 'how many times' a region has been accessed during the last
'specific duration'.  Users can of course just poll, but doing the aggregation
inside the kernel avoids a large number of kernel-user context switches.

> 
> > For the access check of each
> > page, DAMON uses the Accessed bits of PTEs.
> >
> > This is thus similar to the previously mentioned periodic access checks based
> > mechanisms, which overhead is increasing as the size of the target process
> > grows.
> >
> >
> > Region Based Sampling
> > ---------------------
> >
> > To avoid the unbounded increase of the overhead, DAMON groups a number of
> > adjacent pages that assumed to have same access frequencies into a region.  As
> > long as the assumption (pages in a region have same access frequencies) is
> > kept, only one page in the region is required to be checked.  Thus, for each
> > ``sampling interval``, DAMON randomly picks one page in each region and clears
> > its Accessed bit.  After one more ``sampling interval``, DAMON reads the
> > Accessed bit of the page and increases the access frequency of the region if
> > the bit has set meanwhile.  Therefore, the monitoring overhead is controllable
> > by setting the number of regions.  DAMON allows users to set the minimal and
> > maximum number of regions for the trade-off.
> >
> > Except the assumption, this is almost same with the above-mentioned
> > miniature-like static region based sampling.  In other words, this scheme
> > cannot preserve the quality of the output if the assumption is not guaranteed.
> >
> 
> So, the spatial locality is assumed.

Yes, exactly.  The definition of a 'region' in DAMON is 'an adjacent memory
area showing similar access frequency'.

> 
> >
> > Adaptive Regions Adjustment
> > ---------------------------
> >
> > At the beginning of the monitoring, DAMON constructs the initial regions by
> > evenly splitting the memory mapped address space of the process into the
> > user-specified minimal number of regions.  In this initial state, the
> > assumption is normally not kept and thus the quality could be low.  To keep the
> > assumption as much as possible, DAMON adaptively merges and splits each region.
> > For each ``aggregation interval``, it compares the access frequencies of
> 
> Oh aggregation interval is used for merging event.

Yes, right!

> 
> > adjacent regions and merges those if the frequency difference is small.  Then,
> > after it reports and clears the aggregated access frequency of each region, it
> > splits each region into two regions if the total number of regions is smaller
> > than the half of the user-specified maximum number of regions.
> >
> 
> What's the equilibrium/stable state here?

Currently, we merge two adjacent regions if the difference between their
'number of accesses' is smaller than 10% of the higher of the two counts.

> 
> > In this way, DAMON provides its best-effort quality and minimal overhead while
> > keeping the bounds users set for their trade-off.
> >
> >
> > Applying Dynamic Memory Mappings
> > --------------------------------
> >
> > Only a number of small parts in the super-huge virtual address space of the
> > processes is mapped to physical memory and accessed.  Thus, tracking the
> > unmapped address regions is just wasteful.  However, tracking every memory
> > mapping change might incur an overhead.  For the reason, DAMON applies the
> > dynamic memory mapping changes to the tracking regions only for each of an
> > user-specified time interval (``regions update interval``).
> >
[...]
> 
> I do want to question the actual motivation of the design followed by this work.
> 
> With the already present Page Idle Tracking feature in the kernel, I
> can envision that the region sampling and adaptive region adjustments
> can be done in the user space. Due to sampling, the additional
> overhead will be very small and configurable.
> 
> Additionally the proposed mechanism has inherent assumption of the
> presence of spatial locality (for virtual memory) in the monitored
> processes which is very workload dependent.
> 
> Given that the the same mechanism can be implemented in the user space
> within tolerable overhead and is workload dependent, why it should be
> done in the kernel? What exactly is the advantage of implementing this
> in kernel?

First of all, DAMON is not only for user space processes, but also for kernel
space core mechanisms.  Many of the core mechanisms will be able to use DAMON
for access pattern based optimizations, with light overhead and reasonable
accuracy.

Implementing DAMON in user space is of course possible, but it would be
inefficient.  Using such a user space implementation from kernel space would
make no sense, and it would incur unnecessarily frequent kernel-user context
switches, which are very expensive nowadays.


Thanks,
SeongJae Park


> 
> thanks,
> Shakeel


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-12 10:07   ` SeongJae Park
@ 2020-03-12 10:43     ` SeongJae Park
  2020-03-18 19:52       ` Shakeel Butt
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-12 10:43 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Shakeel Butt, Andrew Morton, SeongJae Park, Andrea Arcangeli,
	Yang Shi, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, Qian Cai, Colin Ian King, Jonathan Corbet, dwmw,
	jolsa, Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@amazon.com> wrote:

> On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> 
> > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> > >
> > > From: SeongJae Park <sjpark@amazon.de>
> > >
> > > Introduction
> > > ============
> > >
[...]
> > 
> > I do want to question the actual motivation of the design followed by this work.
> > 
> > With the already present Page Idle Tracking feature in the kernel, I
> > can envision that the region sampling and adaptive region adjustments
> > can be done in the user space. Due to sampling, the additional
> > overhead will be very small and configurable.
> > 
> > Additionally the proposed mechanism has inherent assumption of the
> > presence of spatial locality (for virtual memory) in the monitored
> > processes which is very workload dependent.
> > 
> > Given that the the same mechanism can be implemented in the user space
> > within tolerable overhead and is workload dependent, why it should be
> > done in the kernel? What exactly is the advantage of implementing this
> > in kernel?
> 
> First of all, DAMON is not for only user space processes, but also for kernel
> space core mechanisms.  Many of the core mechanisms will be able to use DAMON
> for access pattern based optimizations, with light overhead and reasonable
> accuracy.
> 
> Implementing DAMON in user space is of course possible, but it will be
> inefficient.  Using it from kernel space would make no sense, and it would
> incur unnecessarily frequent kernel-user context switches, which is very
> expensive nowadays.

I forgot to mention the spatial locality.  Yes, it is workload dependent, but
still pervasive in many cases.  Also, many core mechanisms in the kernel, such
as read-ahead or the LRU, already rely on similar assumptions.

If it is so problematic, you could set the maximum number of regions to the
number of pages in the system, so that each region monitors a single page.


Thanks,
SeongJae Park

> 
> 
> Thanks,
> SeongJae Park
> 
> 
> > 
> > thanks,
> > Shakeel
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-02-24 12:30 ` [PATCH v6 02/14] mm/damon: Implement region based sampling SeongJae Park
  2020-03-10  8:57   ` Jonathan Cameron
@ 2020-03-13 17:29   ` Jonathan Cameron
  2020-03-13 20:16     ` SeongJae Park
  1 sibling, 1 reply; 51+ messages in thread
From: Jonathan Cameron @ 2020-03-13 17:29 UTC (permalink / raw)
  To: SeongJae Park
  Cc: akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Mon, 24 Feb 2020 13:30:35 +0100
SeongJae Park <sjpark@amazon.com> wrote:

> From: SeongJae Park <sjpark@amazon.de>
> 
> This commit implements DAMON's basic access check and region based
> sampling mechanisms.  This change might seem to make no sense on its own,
> mainly because it is only a part of DAMON's logic.  The following two
> commits will make more sense.
> 
> This commit also exports `lookup_page_ext()` to GPL modules because
> DAMON uses the function but also supports the module build.
> 
> Basic Access Check
> ------------------
> 
> DAMON basically reports what pages are how frequently accessed.  Note
> that the frequency is not an absolute number of accesses, but a relative
> frequency among the pages of the target workloads.
> 
> Users can control the resolution of the reports by setting two time
> intervals, ``sampling interval`` and ``aggregation interval``.  In
> detail, DAMON checks access to each page per ``sampling interval``,
> aggregates the results (counts the number of the accesses to each page),
> and reports the aggregated results per ``aggregation interval``.  For
> the access check of each page, DAMON uses the Accessed bits of PTEs.
> 
> This is thus similar to common periodic access checks based access
> tracking mechanisms, which overhead is increasing as the size of the
> target process grows.
> 
> Region Based Sampling
> ---------------------
> 
> To avoid the unbounded increase of the overhead, DAMON groups a number
> of adjacent pages that assumed to have same access frequencies into a
> region.  As long as the assumption (pages in a region have same access
> frequencies) is kept, only one page in the region is required to be
> checked.  Thus, for each ``sampling interval``, DAMON randomly picks one
> page in each region and clears its Accessed bit.  After one more
> ``sampling interval``, DAMON reads the Accessed bit of the page and
> increases the access frequency of the region if the bit has set
> meanwhile.  Therefore, the monitoring overhead is controllable by
> setting the number of regions.
> 
> Nonetheless, this scheme cannot preserve the quality of the output if
> the assumption is not kept.  Following commit will introduce how we can
> make the guarantee with best effort.
> 
> Signed-off-by: SeongJae Park <sjpark@amazon.de>

Came across a minor issue inline.  kthread_run calls kthread_create.
That gives a potential sleep while atomic issue given the spin lock.

Can probably be fixed by preallocating the thread then starting it later.
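
Something along these lines, perhaps (untested sketch; 'kdamond_fn' standing in
for the monitoring thread function):

```c
static int damon_start_kdamond(struct damon_ctx *ctx)
{
	struct task_struct *k;

	/* kthread_create() may sleep, so do it before taking the lock. */
	k = kthread_create(kdamond_fn, ctx, "kdamond");
	if (IS_ERR(k))
		return PTR_ERR(k);

	spin_lock(&ctx->kdamond_lock);
	ctx->kdamond = k;
	ctx->kdamond_stop = false;
	spin_unlock(&ctx->kdamond_lock);

	/* Start the preallocated thread, matching kthread_run() behaviour. */
	wake_up_process(k);
	return 0;
}
```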

Jonathan
> ---
>  mm/damon.c    | 509 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/page_ext.c |   1 +
>  2 files changed, 510 insertions(+)
> 
> diff --git a/mm/damon.c b/mm/damon.c
> index aafdca35b7b8..6bdeb84d89af 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -9,9 +9,14 @@
>  
>  #define pr_fmt(fmt) "damon: " fmt
>  
> +#include <linux/delay.h>
> +#include <linux/kthread.h>
>  #include <linux/mm.h>
>  #include <linux/module.h>
> +#include <linux/page_idle.h>
>  #include <linux/random.h>
> +#include <linux/sched/mm.h>
> +#include <linux/sched/task.h>
>  #include <linux/slab.h>
>  
>  #define damon_get_task_struct(t) \
> @@ -51,7 +56,24 @@ struct damon_task {
>  	struct list_head list;
>  };
>  
> +/*
> + * For each 'sample_interval', DAMON checks whether each region is accessed or
> + * not.  It aggregates and keeps the access information (number of accesses to
> + * each region) for each 'aggr_interval' time.
> + *
> + * All time intervals are in micro-seconds.
> + */
>  struct damon_ctx {
> +	unsigned long sample_interval;
> +	unsigned long aggr_interval;
> +	unsigned long min_nr_regions;
> +
> +	struct timespec64 last_aggregation;
> +
> +	struct task_struct *kdamond;
> +	bool kdamond_stop;
> +	spinlock_t kdamond_lock;
> +
>  	struct rnd_state rndseed;
>  
>  	struct list_head tasks_list;	/* 'damon_task' objects */
> @@ -204,6 +226,493 @@ static unsigned int nr_damon_regions(struct damon_task *t)
>  	return ret;
>  }
>  
> +/*
> + * Get the mm_struct of the given task
> + *
> + * Caller should put the mm_struct after use, unless it is NULL.
> + *
> + * Returns the mm_struct of the task on success, NULL on failure
> + */
> +static struct mm_struct *damon_get_mm(struct damon_task *t)
> +{
> +	struct task_struct *task;
> +	struct mm_struct *mm;
> +
> +	task = damon_get_task_struct(t);
> +	if (!task)
> +		return NULL;
> +
> +	mm = get_task_mm(task);
> +	put_task_struct(task);
> +	return mm;
> +}
> +
> +/*
> + * Size-evenly split a region into 'nr_pieces' small regions
> + *
> + * Returns 0 on success, or negative error code otherwise.
> + */
> +static int damon_split_region_evenly(struct damon_ctx *ctx,
> +		struct damon_region *r, unsigned int nr_pieces)
> +{
> +	unsigned long sz_orig, sz_piece, orig_end;
> +	struct damon_region *piece = NULL, *next;
> +	unsigned long start;
> +
> +	if (!r || !nr_pieces)
> +		return -EINVAL;
> +
> +	orig_end = r->vm_end;
> +	sz_orig = r->vm_end - r->vm_start;
> +	sz_piece = sz_orig / nr_pieces;
> +
> +	if (!sz_piece)
> +		return -EINVAL;
> +
> +	r->vm_end = r->vm_start + sz_piece;
> +	next = damon_next_region(r);
> +	for (start = r->vm_end; start + sz_piece <= orig_end;
> +			start += sz_piece) {
> +		piece = damon_new_region(ctx, start, start + sz_piece);
> +		damon_add_region(piece, r, next);
> +		r = piece;
> +	}
> +	if (piece)
> +		piece->vm_end = orig_end;
> +	return 0;
> +}
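
The splitting arithmetic above can be sketched in plain user-space C — a hypothetical model of the size computation only, with the region-list handling elided and all names illustrative rather than taken from the patch:

```c
#include <assert.h>

/* Sketch of the size computation in damon_split_region_evenly(): a
 * [start, end) range is cut into nr_pieces chunks of equal size, and
 * any remainder of the integer division is absorbed by the last
 * piece, mirroring the 'piece->vm_end = orig_end' fixup above. */
static unsigned long piece_size(unsigned long start, unsigned long end,
				unsigned int nr_pieces)
{
	if (!nr_pieces)
		return 0;
	return (end - start) / nr_pieces;
}

/* End address of the idx-th piece (0-based).  The last piece extends
 * to the original end so no part of the region is lost. */
static unsigned long piece_end(unsigned long start, unsigned long end,
			       unsigned int nr_pieces, unsigned int idx)
{
	unsigned long sz = piece_size(start, end, nr_pieces);

	if (idx == nr_pieces - 1)
		return end;
	return start + sz * (idx + 1);
}
```

For example, splitting [0, 103) into four pieces yields three pieces of size 25 and a final piece that ends at 103.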
> +
> +struct region {
> +	unsigned long start;
> +	unsigned long end;
> +};
> +
> +static unsigned long sz_region(struct region *r)
> +{
> +	return r->end - r->start;
> +}
> +
> +static void swap_regions(struct region *r1, struct region *r2)
> +{
> +	struct region tmp;
> +
> +	tmp = *r1;
> +	*r1 = *r2;
> +	*r2 = tmp;
> +}
> +
> +/*
> + * Find the three regions in an address space
> + *
> + * vma		the head vma of the target address space
> + * regions	an array of three 'struct region's that results will be saved
> + *
> + * This function receives an address space and finds three regions in it which
> + * are separated by the two biggest unmapped regions in the space.  Please
> + * refer to the comments on 'damon_init_regions_of()' below to see why this is
> + * necessary.
> + *
> + * Returns 0 if success, or negative error code otherwise.
> + */
> +static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
> +		struct region regions[3])
> +{
> +	struct region gap = {0,}, first_gap = {0,}, second_gap = {0,};
> +	struct vm_area_struct *last_vma = NULL;
> +	unsigned long start = 0;
> +
> +	/* Find two biggest gaps so that first_gap > second_gap > others */
> +	for (; vma; vma = vma->vm_next) {
> +		if (!last_vma) {
> +			start = vma->vm_start;
> +			last_vma = vma;
> +			continue;
> +		}
> +		gap.start = last_vma->vm_end;
> +		gap.end = vma->vm_start;
> +		if (sz_region(&gap) > sz_region(&second_gap)) {
> +			swap_regions(&gap, &second_gap);
> +			if (sz_region(&second_gap) > sz_region(&first_gap))
> +				swap_regions(&second_gap, &first_gap);
> +		}
> +		last_vma = vma;
> +	}
> +
> +	if (!sz_region(&second_gap) || !sz_region(&first_gap))
> +		return -EINVAL;
> +
> +	/* Sort the two biggest gaps by address */
> +	if (first_gap.start > second_gap.start)
> +		swap_regions(&first_gap, &second_gap);
> +
> +	/* Store the result */
> +	regions[0].start = start;
> +	regions[0].end = first_gap.start;
> +	regions[1].start = first_gap.end;
> +	regions[1].end = second_gap.start;
> +	regions[2].start = second_gap.end;
> +	regions[2].end = last_vma->vm_end;
> +
> +	return 0;
> +}
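
The single-pass scan above maintains the invariant first_gap > second_gap >= every other gap.  The same scan can be sketched in hypothetical user-space C over an address-ordered array of mappings (names illustrative, not from the patch):

```c
#include <assert.h>

struct iv { unsigned long start, end; };

/* Sketch of the gap scan in damon_three_regions_in_vmas(): walk the
 * address-ordered mappings once, keeping the two largest gaps between
 * consecutive mappings.  Returns the size of the second-largest gap,
 * or 0 if there are fewer than two gaps. */
static unsigned long two_biggest_gaps(const struct iv *maps, int n,
				      struct iv *first, struct iv *second)
{
	struct iv gap, tmp;
	int i;

	first->start = first->end = 0;
	second->start = second->end = 0;
	for (i = 1; i < n; i++) {
		gap.start = maps[i - 1].end;
		gap.end = maps[i].start;
		if (gap.end - gap.start > second->end - second->start) {
			*second = gap;	/* candidate for second place */
			if (second->end - second->start >
					first->end - first->start) {
				tmp = *first;	/* promote to first place */
				*first = *second;
				*second = tmp;
			}
		}
	}
	return second->end - second->start;
}
```

With mappings [0,10), [100,110), [120,130), [500,510), the gaps are 90, 10, and 370 bytes, so the scan reports the 370-byte gap as first and the 90-byte gap as second.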
> +
> +/*
> + * Get the three regions in the given task
> + *
> + * Returns 0 on success, negative error code otherwise.
> + */
> +static int damon_three_regions_of(struct damon_task *t,
> +				struct region regions[3])
> +{
> +	struct mm_struct *mm;
> +	int ret;
> +
> +	mm = damon_get_mm(t);
> +	if (!mm)
> +		return -EINVAL;
> +
> +	down_read(&mm->mmap_sem);
> +	ret = damon_three_regions_in_vmas(mm->mmap, regions);
> +	up_read(&mm->mmap_sem);
> +
> +	mmput(mm);
> +	return ret;
> +}
> +
> +/*
> + * Initialize the monitoring target regions for the given task
> + *
> + * t	the given target task
> + *
> + * Because only a few small portions of the entire address space are
> + * actually mapped to memory and accessed, monitoring the unmapped regions
> + * is wasteful.  On the other hand, because we can deal with small noise,
> + * tracking every mapping is not strictly required, and could even incur a
> + * high overhead if the mappings frequently change or the number of mappings
> + * is high.  DAMON's dynamic regions adjustment mechanism, which will be
> + * implemented in a following commit, will make this approach more sensible.
> + *
> + * For this reason, we convert the complex mappings to three distinct regions
> + * that cover every mapped area of the address space.  Also, the two gaps
> + * between the three regions are the two biggest unmapped areas in the given
> + * address space.  In detail, this function first identifies the start and the
> + * end of the mappings and the two biggest unmapped areas of the address space.
> + * Then, it constructs the three regions as below:
> + *
> + *     [mappings[0]->start, big_two_unmapped_areas[0]->start)
> + *     [big_two_unmapped_areas[0]->end, big_two_unmapped_areas[1]->start)
> + *     [big_two_unmapped_areas[1]->end, mappings[nr_mappings - 1]->end)
> + *
> + * Because the usual memory map of a process is as below, the gap between the
> + * heap and the uppermost mmap()-ed region, and the gap between the lowermost
> + * mmap()-ed region and the stack, will be the two biggest unmapped regions.
> + * Because these gaps are exceptionally huge in a usual address space,
> + * excluding only these two biggest unmapped regions is a sufficient
> + * trade-off.
> + *
> + *   <heap>
> + *   <BIG UNMAPPED REGION 1>
> + *   <uppermost mmap()-ed region>
> + *   (other mmap()-ed regions and small unmapped regions)
> + *   <lowermost mmap()-ed region>
> + *   <BIG UNMAPPED REGION 2>
> + *   <stack>
> + */
> +static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
> +{
> +	struct damon_region *r;
> +	struct region regions[3];
> +	int i;
> +
> +	if (damon_three_regions_of(t, regions)) {
> +		pr_err("Failed to get three regions of task %lu\n", t->pid);
> +		return;
> +	}
> +
> +	/* Set the initial three regions of the task */
> +	for (i = 0; i < 3; i++) {
> +		r = damon_new_region(c, regions[i].start, regions[i].end);
> +		damon_add_region_tail(r, t);
> +	}
> +
> +	/* Split the middle region into 'min_nr_regions - 2' regions */
> +	r = damon_nth_region_of(t, 1);
> +	if (damon_split_region_evenly(c, r, c->min_nr_regions - 2))
> +		pr_warn("Failed to split the middle region\n");
> +}
> +
> +/* Initialize '->regions_list' of every task */
> +static void kdamond_init_regions(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +
> +	damon_for_each_task(ctx, t)
> +		damon_init_regions_of(ctx, t);
> +}
> +
> +/*
> + * Check whether the given region has been accessed since the last check
> + *
> + * mm	'mm_struct' for the given virtual address space
> + * r	the region to be checked
> + */
> +static void kdamond_check_access(struct damon_ctx *ctx,
> +			struct mm_struct *mm, struct damon_region *r)
> +{
> +	pte_t *pte = NULL;
> +	pmd_t *pmd = NULL;
> +	spinlock_t *ptl;
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		goto mkold;
> +
> +	/* Read the page table access bit of the page */
> +	if (pte && pte_young(*pte))
> +		r->nr_accesses++;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	else if (pmd && pmd_young(*pmd))
> +		r->nr_accesses++;
> +#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +	spin_unlock(ptl);
> +
> +mkold:
> +	/* mkold next target */
> +	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
> +
> +	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
> +		return;
> +
> +	if (pte) {
> +		if (pte_young(*pte)) {
> +			clear_page_idle(pte_page(*pte));
> +			set_page_young(pte_page(*pte));
> +		}
> +		*pte = pte_mkold(*pte);
> +	}
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	else if (pmd) {
> +		if (pmd_young(*pmd)) {
> +			clear_page_idle(pmd_page(*pmd));
> +			set_page_young(pmd_page(*pmd));
> +		}
> +		*pmd = pmd_mkold(*pmd);
> +	}
> +#endif
> +
> +	spin_unlock(ptl);
> +}
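
The check-then-mkold flow above can be modeled in user space with a toy array standing in for per-page Accessed bits — a hypothetical sketch, not code from the patch:

```c
#include <assert.h>

/* Toy model of kdamond_check_access(): each byte in 'accessed' stands
 * for one page's Accessed bit (set by hardware on a real system
 * whenever the page is touched).  A sample first reads the bit at the
 * previously chosen sampling address, then clears ("mkold") the bit
 * at the next randomly chosen address so the next check is
 * meaningful. */
static void sample_region(unsigned char *accessed, unsigned long *addr,
			  unsigned long next_addr, unsigned int *nr_accesses)
{
	/* Was the previously mkold-ed page touched since last check? */
	if (accessed[*addr])
		(*nr_accesses)++;
	/* mkold the next sampling target. */
	accessed[next_addr] = 0;
	*addr = next_addr;
}
```

Because one sample inspects only one address per region, the cost of a sampling pass is independent of the region's size.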
> +
> +/*
> + * Check whether a time interval is elapsed
> + *
> + * baseline	the time from which the elapsed interval is measured
> + * interval	the time interval (microseconds)
> + *
> + * See whether the given time interval has passed since the given baseline
> + * time.  If so, also update the baseline to the current time for the next
> + * check.
> + *
> + * Returns true if the time interval has passed, or false otherwise.
> + */
> +static bool damon_check_reset_time_interval(struct timespec64 *baseline,
> +		unsigned long interval)
> +{
> +	struct timespec64 now;
> +
> +	ktime_get_coarse_ts64(&now);
> +	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) <
> +			interval * 1000)
> +		return false;
> +	*baseline = now;
> +	return true;
> +}
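
The check-and-reset logic above reduces to a comparison of monotonic nanosecond timestamps; a hypothetical user-space sketch (names illustrative, not from the patch):

```c
#include <assert.h>

/* Sketch of damon_check_reset_time_interval(): return 1 and reset the
 * baseline only when at least 'interval_us' microseconds have elapsed
 * between the baseline and 'now', both given in nanoseconds. */
static int interval_elapsed(unsigned long long *baseline_ns,
			    unsigned long long now_ns,
			    unsigned long interval_us)
{
	if (now_ns - *baseline_ns < (unsigned long long)interval_us * 1000)
		return 0;
	*baseline_ns = now_ns;	/* start measuring the next interval */
	return 1;
}
```

Resetting the baseline only on success means short polls never starve the interval: the elapsed time keeps accumulating until the threshold is crossed.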
> +
> +/*
> + * Check whether it is time to flush the aggregated information
> + */
> +static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
> +{
> +	return damon_check_reset_time_interval(&ctx->last_aggregation,
> +			ctx->aggr_interval);
> +}
> +
> +/*
> + * Reset the aggregated monitoring results
> + */
> +static void kdamond_flush_aggregated(struct damon_ctx *c)
> +{
> +	struct damon_task *t;
> +	struct damon_region *r;
> +
> +	damon_for_each_task(c, t) {
> +		damon_for_each_region(r, t)
> +			r->nr_accesses = 0;
> +	}
> +}
> +
> +/*
> + * Check whether current monitoring should be stopped
> + *
> + * If a user has asked to stop, the monitoring should be stopped.  Even if no
> + * user has asked to stop, it should also be stopped if every target task has
> + * died.
> + *
> + * Returns true if the current monitoring should be stopped.
> + */
> +static bool kdamond_need_stop(struct damon_ctx *ctx)
> +{
> +	struct damon_task *t;
> +	struct task_struct *task;
> +	bool stop;
> +
> +	spin_lock(&ctx->kdamond_lock);
> +	stop = ctx->kdamond_stop;
> +	spin_unlock(&ctx->kdamond_lock);
> +	if (stop)
> +		return true;
> +
> +	damon_for_each_task(ctx, t) {
> +		task = damon_get_task_struct(t);
> +		if (task) {
> +			put_task_struct(task);
> +			return false;
> +		}
> +	}
> +
> +	return true;
> +}
> +
> +/*
> + * The monitoring daemon that runs as a kernel thread
> + */
> +static int kdamond_fn(void *data)
> +{
> +	struct damon_ctx *ctx = (struct damon_ctx *)data;
> +	struct damon_task *t;
> +	struct damon_region *r, *next;
> +	struct mm_struct *mm;
> +
> +	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
> +	kdamond_init_regions(ctx);
> +	while (!kdamond_need_stop(ctx)) {
> +		damon_for_each_task(ctx, t) {
> +			mm = damon_get_mm(t);
> +			if (!mm)
> +				continue;
> +			damon_for_each_region(r, t)
> +				kdamond_check_access(ctx, mm, r);
> +			mmput(mm);
> +		}
> +
> +		if (kdamond_aggregate_interval_passed(ctx))
> +			kdamond_flush_aggregated(ctx);
> +
> +		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
> +	}
> +	damon_for_each_task(ctx, t) {
> +		damon_for_each_region_safe(r, next, t)
> +			damon_destroy_region(r);
> +	}
> +	pr_info("kdamond (%d) finishes\n", ctx->kdamond->pid);
> +	spin_lock(&ctx->kdamond_lock);
> +	ctx->kdamond = NULL;
> +	spin_unlock(&ctx->kdamond_lock);
> +	return 0;
> +}
> +
> +/*
> + * Controller functions
> + */
> +
> +/*
> + * Start or stop the kdamond
> + *
> + * Returns 0 if success, negative error code otherwise.
> + */
> +static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
> +{
> +	spin_lock(&ctx->kdamond_lock);
> +	ctx->kdamond_stop = !on;
> +	if (!ctx->kdamond && on) {
> +		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");

Can't do this under a spin lock.

> +		if (!ctx->kdamond)
> +			goto fail;
> +		goto success;
> +	}
> +	if (ctx->kdamond && !on) {
> +		spin_unlock(&ctx->kdamond_lock);
> +		while (true) {
> +			spin_lock(&ctx->kdamond_lock);
> +			if (!ctx->kdamond)
> +				goto success;
> +			spin_unlock(&ctx->kdamond_lock);
> +
> +			usleep_range(ctx->sample_interval,
> +					ctx->sample_interval * 2);
> +		}
> +	}
> +
> +	/* tried to turn on while turned on, or turn off while turned off */
> +
> +fail:
> +	spin_unlock(&ctx->kdamond_lock);
> +	return -EINVAL;
> +
> +success:
> +	spin_unlock(&ctx->kdamond_lock);
> +	return 0;
> +}
> +
> +/*
> + * This function should not be called while the kdamond is running.
> + */
> +static int damon_set_pids(struct damon_ctx *ctx,
> +			unsigned long *pids, ssize_t nr_pids)
> +{
> +	ssize_t i;
> +	struct damon_task *t, *next;
> +
> +	damon_for_each_task_safe(ctx, t, next)
> +		damon_destroy_task(t);
> +
> +	for (i = 0; i < nr_pids; i++) {
> +		t = damon_new_task(pids[i]);
> +		if (!t) {
> +			pr_err("Failed to alloc damon_task\n");
> +			return -ENOMEM;
> +		}
> +		damon_add_task_tail(ctx, t);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Set attributes for the monitoring
> + *
> + * sample_int		time interval between samplings
> + * aggr_int		time interval between aggregations
> + * min_nr_reg		minimal number of regions
> + *
> + * This function should not be called while the kdamond is running.
> + * Every time interval is in micro-seconds.
> + *
> + * Returns 0 on success, negative error code otherwise.
> + */
> +static int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
> +		unsigned long aggr_int, unsigned long min_nr_reg)
> +{
> +	if (min_nr_reg < 3) {
> +		pr_err("min_nr_regions (%lu) should be bigger than 2\n",
> +				min_nr_reg);
> +		return -EINVAL;
> +	}
> +
> +	ctx->sample_interval = sample_int;
> +	ctx->aggr_interval = aggr_int;
> +	ctx->min_nr_regions = min_nr_reg;
> +	return 0;
> +}
> +
>  static int __init damon_init(void)
>  {
>  	pr_info("init\n");
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 4ade843ff588..71169b45bba9 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -131,6 +131,7 @@ struct page_ext *lookup_page_ext(const struct page *page)
>  					MAX_ORDER_NR_PAGES);
>  	return get_entry(base, index);
>  }
> +EXPORT_SYMBOL_GPL(lookup_page_ext);
>  
>  static int __init alloc_node_page_ext(int nid)
>  {




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-13 17:29   ` Jonathan Cameron
@ 2020-03-13 20:16     ` SeongJae Park
  2020-03-17 11:32       ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-13 20:16 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: SeongJae Park, akpm, SeongJae Park, aarcange, yang.shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, brendanhiggins, cai,
	colin.king, corbet, dwmw, jolsa, kirill, mark.rutland, mgorman,
	minchan, mingo, namhyung, peterz, rdunlap, rientjes, rostedt,
	shuah, sj38.park, vbabka, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

On Fri, 13 Mar 2020 17:29:54 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Mon, 24 Feb 2020 13:30:35 +0100
> SeongJae Park <sjpark@amazon.com> wrote:
> 
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > This commit implements DAMON's basic access check and region based
> > sampling mechanisms.  This change alone may seem to make little sense,
> > mainly because it is only a part of DAMON's logic.  The following two
> > commits will make more sense.
> > 
[...]
> 
> Came across a minor issue inline.  kthread_run calls kthread_create.
> That gives a potential sleep while atomic issue given the spin lock.
> 
> Can probably be fixed by preallocating the thread then starting it later.
> 
> Jonathan
[...]
> > +/*
> > + * Start or stop the kdamond
> > + *
> > + * Returns 0 if success, negative error code otherwise.
> > + */
> > +static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
> > +{
> > +	spin_lock(&ctx->kdamond_lock);
> > +	ctx->kdamond_stop = !on;
> > +	if (!ctx->kdamond && on) {
> > +		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
> 
> Can't do this under a spin lock.

Good catch!  And I agree with your suggestion.  I will fix it that way!


Thanks,
SeongJae Park



* Re: Re: Re: [PATCH v6 02/14] mm/damon: Implement region based sampling
  2020-03-13 20:16     ` SeongJae Park
@ 2020-03-17 11:32       ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-17 11:32 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Jonathan Cameron, SeongJae Park, akpm, SeongJae Park, aarcange,
	yang.shi, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, cai, colin.king, corbet, dwmw, jolsa, kirill,
	mark.rutland, mgorman, minchan, mingo, namhyung, peterz, rdunlap,
	rientjes, rostedt, shuah, vbabka, vdavydov.dev, linux-mm,
	linux-doc, linux-kernel

On Fri, 13 Mar 2020 21:16:49 +0100 SeongJae Park <sj38.park@gmail.com> wrote:

> On Fri, 13 Mar 2020 17:29:54 +0000 Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> 
> > On Mon, 24 Feb 2020 13:30:35 +0100
> > SeongJae Park <sjpark@amazon.com> wrote:
> > 
> > > From: SeongJae Park <sjpark@amazon.de>
> > > 
> > > This commit implements DAMON's basic access check and region based
> > > sampling mechanisms.  This change alone may seem to make little sense,
> > > mainly because it is only a part of DAMON's logic.  The following two
> > > commits will make more sense.
> > > 
> [...]
> > 
> > Came across a minor issue inline.  kthread_run calls kthread_create.
> > That gives a potential sleep while atomic issue given the spin lock.
> > 
> > Can probably be fixed by preallocating the thread then starting it later.
> > 
> > Jonathan
> [...]
> > > +/*
> > > + * Start or stop the kdamond
> > > + *
> > > + * Returns 0 if success, negative error code otherwise.
> > > + */
> > > +static int damon_turn_kdamond(struct damon_ctx *ctx, bool on)
> > > +{
> > > +	spin_lock(&ctx->kdamond_lock);
> > > +	ctx->kdamond_stop = !on;
> > > +	if (!ctx->kdamond && on) {
> > > +		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
> > 
> > Can't do this under a spin lock.
> 
> Good catch!  And I agree with your suggestion.  I will fix it that way!

I changed my mind.  I would like to simply use a mutex instead of the
spinlock, as khugepaged also does.  If you have a different opinion, please
let me know.


Thanks,
SeongJae Park

> 
> 
> Thanks,
> SeongJae Park



* Re: Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-12 10:43     ` SeongJae Park
@ 2020-03-18 19:52       ` Shakeel Butt
  2020-03-19  9:03         ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Shakeel Butt @ 2020-03-18 19:52 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Andrew Morton, SeongJae Park, Andrea Arcangeli, Yang Shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, Brendan Higgins,
	Qian Cai, Colin Ian King, Jonathan Corbet, dwmw, jolsa,
	Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Thu, Mar 12, 2020 at 3:44 AM SeongJae Park <sjpark@amazon.com> wrote:
>
> On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@amazon.com> wrote:
>
> > On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> >
> > > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> > > >
> > > > From: SeongJae Park <sjpark@amazon.de>
> > > >
> > > > Introduction
> > > > ============
> > > >
> [...]
> > >
> > > I do want to question the actual motivation of the design followed by this work.
> > >
> > > With the already present Page Idle Tracking feature in the kernel, I
> > > can envision that the region sampling and adaptive region adjustments
> > > can be done in the user space. Due to sampling, the additional
> > > overhead will be very small and configurable.
> > >
> > > Additionally the proposed mechanism has inherent assumption of the
> > > presence of spatial locality (for virtual memory) in the monitored
> > > processes which is very workload dependent.
> > >
> > > Given that the same mechanism can be implemented in the user space
> > > within tolerable overhead and is workload dependent, why it should be
> > > done in the kernel? What exactly is the advantage of implementing this
> > > in kernel?
> >
> > First of all, DAMON is not for only user space processes, but also for kernel
> > space core mechanisms.  Many of the core mechanisms will be able to use DAMON
> > for access pattern based optimizations, with light overhead and reasonable
> > accuracy.

Which kernel space core mechanisms? I can see memory reclaim; do you
envision some other components as well?

Let's discuss how this can interact with memory reclaim and we can see
if there is any benefit to do this in kernel.

> >
> > Implementing DAMON in user space is of course possible, but it will be
> > inefficient.  Using it from kernel space would make no sense, and it would
> > incur unnecessarily frequent kernel-user context switches, which is very
> > expensive nowadays.
>
> Forgot to mention the spatial locality.  Yes, it is workload dependent, but
> still pervasive in many cases.  Also, many core mechanisms in the kernel,
> such as read-ahead or the LRU, already use similar assumptions.
>

Not sure about the LRU but yes read-ahead in several places does
assume spatial locality. However most of those are configurable and
the userspace can enable/disable the read-ahead based on the workload.

>
> If it is so problematic, you could set the maximum number of regions to the
> number of pages in the system so that each region monitors each page.
>

How will this work in the process context? Number of regions equal to
the number of mapped pages?

Basically I am trying to envision the comparison of physical memory
based monitoring (using idle page tracking) vs pid+VA based
monitoring.

Anyways I am not against your proposal. I am trying to see how to make
it more general to be applicable to more use-cases and one such
use-case which I am interested in is monitoring all the user pages on
the system for proactive reclaim purpose.

Shakeel



* Re: Re: Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-18 19:52       ` Shakeel Butt
@ 2020-03-19  9:03         ` SeongJae Park
  2020-03-23 17:29           ` Shakeel Butt
  0 siblings, 1 reply; 51+ messages in thread
From: SeongJae Park @ 2020-03-19  9:03 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: SeongJae Park, Andrew Morton, SeongJae Park, Andrea Arcangeli,
	Yang Shi, acme, alexander.shishkin, amit, brendan.d.gregg,
	Brendan Higgins, Qian Cai, Colin Ian King, Jonathan Corbet, dwmw,
	jolsa, Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Wed, 18 Mar 2020 12:52:48 -0700 Shakeel Butt <shakeelb@google.com> wrote:

> On Thu, Mar 12, 2020 at 3:44 AM SeongJae Park <sjpark@amazon.com> wrote:
> >
> > On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@amazon.com> wrote:
> >
> > > On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> > >
> > > > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> > > > >
> > > > > From: SeongJae Park <sjpark@amazon.de>
> > > > >
> > > > > Introduction
> > > > > ============
> > > > >
> > [...]
> > > >
> > > > I do want to question the actual motivation of the design followed by this work.
> > > >
> > > > With the already present Page Idle Tracking feature in the kernel, I
> > > > can envision that the region sampling and adaptive region adjustments
> > > > can be done in the user space. Due to sampling, the additional
> > > > overhead will be very small and configurable.
> > > >
> > > > Additionally the proposed mechanism has inherent assumption of the
> > > > presence of spatial locality (for virtual memory) in the monitored
> > > > processes which is very workload dependent.
> > > >
> > > > Given that the same mechanism can be implemented in the user space
> > > > within tolerable overhead and is workload dependent, why it should be
> > > > done in the kernel? What exactly is the advantage of implementing this
> > > > in kernel?
> > >
> > > First of all, DAMON is not for only user space processes, but also for kernel
> > > space core mechanisms.  Many of the core mechanisms will be able to use DAMON
> > > for access pattern based optimizations, with light overhead and reasonable
> > > accuracy.
> 
> Which kernel space core mechanisms? I can see memory reclaim, do you
> envision some other component as well.

In addition to reclamation, I am thinking THP promotion/demotion decisions,
page migration among NUMA nodes in tiered-memory configurations, and
on-demand virtual machine live migration mechanisms could benefit from DAMON,
for now.  I also believe more use-cases could be found.

> 
> Let's discuss how this can interact with memory reclaim and we can see
> if there is any benefit to do this in kernel.

For reclaim, I believe we could try the proactive reclamation again using DAMON
(Yes, I'm a fan of the idea of proactive reclamation).  I already implemented
and evaluated a simple form of DAMON-based proactive reclamation for the proof
of the concept (not for production).  In best case (parsec3/freqmine), it
reduces 22.42% of system memory usage and 88.86% of residential sets while
incurring only 3.07% runtime overhead.  Please refer to 'Appendix E' of the v7
patchset[1] of DAMON.  It also describes the implementation and the evaluation
of a data access monitoring-based THP promotion/demotion policy.

The experimental implementation cannot be directly applied to the kernel's
reclamation mechanism, because it requires users to specify the target
applications.  Nonetheless, I think we can also easily adopt it inside the
kernel by modifying kswapd to periodically select processes having huge RSS
as targets, or by creating proactive reclamation type cgroups which select
every process in the cgroup as a target.

Of course, we can extend DAMON to support physical memory address space instead
of virtual memory of specific processes.  Actually, this is in our TODO list.
With the extension, applying DAMON to core memory management mechanisms will be
even easier.

Nonetheless, these are only examples, not concrete plans.  I have not made a
concrete plan yet, but I believe that DAMON will open the gates.

[1] https://lore.kernel.org/linux-mm/20200318112722.30143-1-sjpark@amazon.com/

> 
> > >
> > > Implementing DAMON in user space is of course possible, but it will be
> > > inefficient.  Using it from kernel space would make no sense, and it would
> > > incur unnecessarily frequent kernel-user context switches, which is very
> > > expensive nowadays.
> >
> > Forgot to mention the spatial locality.  Yes, it is workload dependent, but
> > still pervasive in many cases.  Also, many core mechanisms in the kernel,
> > such as read-ahead or the LRU, already use similar assumptions.
> >
> 
> Not sure about the LRU but yes read-ahead in several places does
> assume spatial locality. However most of those are configurable and
> the userspace can enable/disable the read-ahead based on the workload.

Sorry for my ambiguous description.  LRU uses temporal locality, which is
somewhat similar to spatial locality, in terms of workload dependency.

> 
> >
> > If it is so problematic, you could set the maximum number of regions to the
> > number of pages in the system so that each region monitors each page.
> >
> 
> How will this work in the process context? Number of regions equal to
> the number of mapped pages?

Suppose that a process has a working set of 1024 pages and each of the pages
has a totally different access frequency.  If the maximum number of regions
is 1024, the adaptive regions adjustment mechanism of DAMON will create a
region for each page and monitor the access to each page.  So, the output
will be the same as that of straightforward periodic page-granularity access
checking methods, which do not depend on spatial locality.  Nevertheless, the
monitoring overhead will also be similar.

However, if any adjacent pages have similar access frequencies, DAMON will
group those pages into one region.  This will reduce the total number of PTE
Accessed bit checks and thus decrease the overhead.  In other words, DAMON
does its best to minimize the overhead while preserving quality.

Also suppose that the maximum number of regions is smaller than 1024 in this
case.  Pages having different access frequencies will be grouped into the
same region and thus the output quality will decrease.  However, the overhead
will decrease proportionally, as DAMON does one access check per region.
This means that you can easily trade the monitoring quality against the
overhead by adjusting the maximum number of regions.
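
The trade-off above can be put in rough numbers: DAMON performs one Accessed-bit check per region per sampling, so the per-sample cost is bounded by the region cap rather than by the working set size.  A hypothetical sketch of this cost model (illustrative only, not code from the patchset):

```c
#include <assert.h>

/* Per-sample check count under DAMON's region cap: each region costs
 * one Accessed-bit check, and the number of regions never exceeds
 * either the cap or the number of working set pages. */
static unsigned long checks_per_sample(unsigned long working_set_pages,
				       unsigned long max_nr_regions)
{
	return working_set_pages < max_nr_regions ?
			working_set_pages : max_nr_regions;
}
```

Halving the region cap halves the per-sample cost, at the price of coarser per-region access counts.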

> 
> Basically I am trying to envision the comparison of physical memory
> based monitoring (using idle page tracking) vs pid+VA based
> monitoring.

I believe the core mechanisms of DAMON could be easily extended to the physical
memory.  Indeed, it is in our TODO list, and I believe it would make using
DAMON in kernel core mechanisms much easier.

> 
> Anyways I am not against your proposal. I am trying to see how to make
> it more general to be applicable to more use-cases and one such
> use-case which I am interested in is monitoring all the user pages on
> the system for proactive reclaim purpose.

Your questions gave me many insights and shed light on the way DAMON should
go.  I really appreciate it.  If you have any more questions or need my help,
please let me know.


Thanks,
SeongJae Park

> 
> Shakeel
> 



* Re: Re: Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-19  9:03         ` SeongJae Park
@ 2020-03-23 17:29           ` Shakeel Butt
  2020-03-24  8:34             ` SeongJae Park
  0 siblings, 1 reply; 51+ messages in thread
From: Shakeel Butt @ 2020-03-23 17:29 UTC (permalink / raw)
  To: SeongJae Park
  Cc: Andrew Morton, SeongJae Park, Andrea Arcangeli, Yang Shi, acme,
	alexander.shishkin, amit, brendan.d.gregg, Brendan Higgins,
	Qian Cai, Colin Ian King, Jonathan Corbet, dwmw, jolsa,
	Kirill A. Shutemov, mark.rutland, Mel Gorman, Minchan Kim,
	Ingo Molnar, namhyung, peterz, Randy Dunlap, David Rientjes,
	Steven Rostedt, shuah, sj38.park, Vlastimil Babka,
	Vladimir Davydov, Linux MM, linux-doc, LKML

On Thu, Mar 19, 2020 at 2:04 AM SeongJae Park <sjpark@amazon.com> wrote:
>
> On Wed, 18 Mar 2020 12:52:48 -0700 Shakeel Butt <shakeelb@google.com> wrote:
>
> > On Thu, Mar 12, 2020 at 3:44 AM SeongJae Park <sjpark@amazon.com> wrote:
> > >
> > > On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@amazon.com> wrote:
> > >
> > > > On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> > > >
> > > > > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> > > > > >
> > > > > > From: SeongJae Park <sjpark@amazon.de>
> > > > > >
> > > > > > Introduction
> > > > > > ============
> > > > > >
> > > [...]
> > > > >
> > > > > I do want to question the actual motivation of the design followed by this work.
> > > > >
> > > > > With the already present Page Idle Tracking feature in the kernel, I
> > > > > can envision that the region sampling and adaptive region adjustments
> > > > > can be done in the user space. Due to sampling, the additional
> > > > > overhead will be very small and configurable.
> > > > >
> > > > > Additionally the proposed mechanism has inherent assumption of the
> > > > > presence of spatial locality (for virtual memory) in the monitored
> > > > > processes which is very workload dependent.
> > > > >
> > > > > Given that the same mechanism can be implemented in the user space
> > > > > within tolerable overhead and is workload dependent, why it should be
> > > > > done in the kernel? What exactly is the advantage of implementing this
> > > > > in kernel?
> > > >
> > > > First of all, DAMON is not for only user space processes, but also for kernel
> > > > space core mechanisms.  Many of the core mechanisms will be able to use DAMON
> > > > for access pattern based optimizations, with light overhead and reasonable
> > > > accuracy.
> >
> > Which kernel space core mechanisms? I can see memory reclaim, do you
> > envision some other component as well.
>
> In addition to reclamation, I am thinking THP promotion/demotion decision, page
> migration among NUMA nodes on tier-memory configuration, and on-demand virtual
> machine live migration mechanisms could benefit from DAMON, for now.  I also
> believe more use-cases could be found.
>

I am still struggling to see how these use-cases require in-kernel
DAMON. For THP promotion/demotion, madvise(MADV_[NO]HUGEPAGE) can be
used or we can introduce a new MADV_HUGIFY to synchronously convert
small pages to hugepages. Page migration on tier-memory is similar to
proactive reclaim and we already have migrate_pages/move_pages
syscalls. Basically, why can userspace DAMON not perform these
operations?

> >
> > Let's discuss how this can interact with memory reclaim and we can see
> > if there is any benefit to do this in kernel.
>
> For reclaim, I believe we could try the proactive reclamation again using DAMON
> (Yes, I'm a fan of the idea of proactive reclamation).  I already implemented
> and evaluated a simple form of DAMON-based proactive reclamation as a proof
> of concept (not for production).  In the best case (parsec3/freqmine), it
> reduces system memory usage by 22.42% and the resident set by 88.86% while
> incurring only 3.07% runtime overhead.  Please refer to 'Appendix E' of the v7
> patchset[1] of DAMON.  It also describes the implementation and the evaluation
> of a data access monitoring-based THP promotion/demotion policy.
>
> The experimental implementation cannot be directly applied to the kernel
> reclamation mechanism, because it requires users to specify the target
> applications.  Nonetheless, I think we can also easily adopt it inside the
> kernel by modifying kswapd to periodically select processes having huge RSS as
> targets, or by creating proactive-reclamation-type cgroups which select every
> process in the cgroup as a target.

Again I feel like these should be done in user space, as these are more
policies than mechanisms. However, if it can be shown that doing
this in userspace is very expensive compared to an in-kernel solution,
then we can think about it.

>
> Of course, we can extend DAMON to support physical memory address space instead
> of virtual memory of specific processes.  Actually, this is in our TODO list.
> With the extension, applying DAMON to core memory management mechanisms will be
> even easier.

See below on physical memory monitoring.

>
> Nonetheless, this is only an example, not a concrete plan.  I haven't made a
> concrete plan yet, but I believe DAMON will open the gates.
>
> [1] https://lore.kernel.org/linux-mm/20200318112722.30143-1-sjpark@amazon.com/
>
> >
> > > >
> > > > Implementing DAMON in user space is of course possible, but it will be
> > > > inefficient.  A user space implementation could not be used from kernel
> > > > space, and it would incur unnecessarily frequent kernel-user context
> > > > switches, which are very expensive nowadays.
> > >
> > > Forgot to mention the spatial locality.  Yes, it is workload dependent,
> > > but still pervasive in many cases.  Also, many core mechanisms in the
> > > kernel, such as read-ahead or the LRU, already use similar assumptions.
> > >
> >
> > Not sure about the LRU but yes read-ahead in several places does
> > assume spatial locality. However most of those are configurable and
> > the userspace can enable/disable the read-ahead based on the workload.
>
> Sorry for my ambiguous description.  LRU uses temporal locality, which is
> somewhat similar to spatial locality, in terms of workload dependency.
>
> >
> > >
> > > If it is so problematic, you could set the maximum number of regions to the
> > > number of pages in the system so that each region monitors each page.
> > >
> >
> > How will this work in the process context? Number of regions equal to
> > the number of mapped pages?
>
> Suppose that a process has 1024 pages of working set and each of the pages has
> a totally different access frequency.  If the maximum number of regions is 1024,
> the adaptive regions adjustment mechanism of DAMON will create one region for
> each page and monitor the accesses to each page.  So, the output will be the
> same as that of straightforward periodic page-granularity access checking
> methods, which do not depend on spatial locality.  Nevertheless, the monitoring
> overhead will also be similar to theirs.
>
> However, if any adjacent pages have similar access frequencies, DAMON will
> group those pages into one region.  This will reduce the total number of PTE
> Accessed bit checks and thus decrease the overhead.  In other words, DAMON does
> its best to minimize the overhead while preserving quality.
>
> Also suppose that the maximum number of regions is smaller than 1024 in this
> case.  Pages having different access frequencies will be grouped into the same
> region, and thus the output quality will decrease.  However, the overhead will
> drop proportionally, as DAMON does one access check per region.  This means
> that you can easily trade the monitoring quality against the overhead by
> adjusting the maximum number of regions.
>

So, users can select to not merge the regions to keep the monitoring
quality high, right?

> >
> > Basically I am trying to envision the comparison of physical memory
> > based monitoring (using idle page tracking) vs pid+VA based
> > monitoring.
>
> I believe the core mechanisms of DAMON could be easily extended to physical
> memory.  Indeed, it is on our TODO list, and I believe it would make the use
> of DAMON in kernel core mechanisms much easier.
>

How will the sampling and regions representation/resizing work in
physical memory?

> >
> > Anyways I am not against your proposal. I am trying to see how to make
> > it more general to be applicable to more use-cases and one such
> > use-case which I am interested in is monitoring all the user pages on
> > the system for proactive reclaim purpose.
>
> Your questions gave me many insights and shed light on the way DAMON should
> go.  I really appreciate it.  If you have any more questions or need my help,
> please let me know.
>
>

Shakeel


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: Re: Re: Re: Re: [PATCH v6 00/14] Introduce Data Access MONitor (DAMON)
  2020-03-23 17:29           ` Shakeel Butt
@ 2020-03-24  8:34             ` SeongJae Park
  0 siblings, 0 replies; 51+ messages in thread
From: SeongJae Park @ 2020-03-24  8:34 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: SeongJae Park, Andrew Morton, Jonathan.Cameron, SeongJae Park,
	Andrea Arcangeli, Yang Shi, acme, alexander.shishkin, amit,
	brendan.d.gregg, Brendan Higgins, Qian Cai, Colin Ian King,
	Jonathan Corbet, dwmw, jolsa, Kirill A. Shutemov, mark.rutland,
	Mel Gorman, Minchan Kim, Ingo Molnar, namhyung, peterz,
	Randy Dunlap, David Rientjes, Steven Rostedt, shuah, sj38.park,
	Vlastimil Babka, Vladimir Davydov, Linux MM, linux-doc, LKML

On Mon, 23 Mar 2020 10:29:24 -0700 Shakeel Butt <shakeelb@google.com> wrote:

> On Thu, Mar 19, 2020 at 2:04 AM SeongJae Park <sjpark@amazon.com> wrote:
> >
> > On Wed, 18 Mar 2020 12:52:48 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> >
> > > On Thu, Mar 12, 2020 at 3:44 AM SeongJae Park <sjpark@amazon.com> wrote:
> > > >
> > > > On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@amazon.com> wrote:
> > > >
> > > > > On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@google.com> wrote:
> > > > >
> > > > > > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@amazon.com> wrote:
> > > > > > >
> > > > > > > From: SeongJae Park <sjpark@amazon.de>
> > > > > > >
> > > > > > > Introduction
> > > > > > > ============
> > > > > > >
> > > > [...]
> > > > > >
> > > > > > I do want to question the actual motivation of the design followed by this work.
> > > > > >
> > > > > > With the already present Page Idle Tracking feature in the kernel, I
> > > > > > can envision that the region sampling and adaptive region adjustments
> > > > > > can be done in the user space. Due to sampling, the additional
> > > > > > overhead will be very small and configurable.
> > > > > >
> > > > > > Additionally the proposed mechanism has inherent assumption of the
> > > > > > presence of spatial locality (for virtual memory) in the monitored
> > > > > > processes which is very workload dependent.
> > > > > >
> > > > > > Given that the same mechanism can be implemented in the user space
> > > > > > within tolerable overhead and is workload dependent, why should it be
> > > > > > done in the kernel? What exactly is the advantage of implementing this
> > > > > > in the kernel?
> > > > >
> > > > > First of all, DAMON is not only for user space processes, but also for kernel
> > > > > space core mechanisms.  Many of the core mechanisms will be able to use DAMON
> > > > > for access pattern based optimizations, with light overhead and reasonable
> > > > > accuracy.
> > >
> > > Which kernel space core mechanisms? I can see memory reclaim; do you
> > > envision some other components as well?
> >
> > In addition to reclamation, I am thinking THP promotion/demotion decisions, page
> > migration among NUMA nodes on tier-memory configurations, and on-demand virtual
> > machine live migration mechanisms could benefit from DAMON, for now.  I also
> > believe more use-cases could be found.
> >
> 
> I am still struggling to see how these use-cases require in-kernel
> DAMON. For THP promotion/demotion, madvise(MADV_[NO]HUGEPAGE) can be
> used, or we can introduce a new MADV_HUGIFY to synchronously convert
> small pages to hugepages. Page migration on tier-memory is similar to
> proactive reclaim, and we already have the migrate_pages/move_pages
> syscalls. Basically, why can userspace DAMON not perform these
> operations?

You are understanding it right; most of these cases could be implemented in
user space as well.  My point is that DAMON in kernel space could give a
_better_ experience to users.

First, implementing DAMON and DAMON-based optimizations in the kernel lets more
people get the benefits without additional effort.  I think this is important
because the benefits are often under-estimated, while the required additional
effort is over-estimated, by the many application developers who don't concern
themselves with system resources.  Thus, implementing DAMON in userspace could
limit the benefits to early adopters and leave many applications without them.
For this, I would like to quote Jonathan, from his LWN article[1] introducing
DAMON:

    But one might well argue that production systems should Just Work without
    the need for this sort of manual tweaking, even if the tweaking is
    supported by a capable monitoring system. While DAMON looks like a useful
    tool now, users may be forgiven for hoping that it makes itself obsolete
    over time.

Second, a user space implementation would add overhead from user-kernel context
changes.  For example, suppose that DAMON is implemented in user space and
there are 100 regions.  If you want to know only whether the regions are
accessed or not, only one access check per region is required, and thus only
100 context changes.  However, suppose you want to know the access frequency of
each region as a fine-grained score, say, ranging from 0 to 100.  In this case,
the number of required context changes becomes 100x larger.  Further, suppose
that you want to know only the regions consistently keeping a given range of
the access frequency score (e.g., 80-100 or 0-20) for several minutes.  The
context changes become much more frequent still.  Contrarily, if we implement
DAMON and the DAMON-based operations in the kernel, no context change is
required.  Even if only the DAMON-based operations are implemented in user
space, a single context change suffices.


[1] https://lwn.net/Articles/812707/
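
The arithmetic in the second point can be sketched as a toy cost model.  This
is purely illustrative (the `crossings` function and the per-check cost
assumption are mine, not from the thread): it just counts one kernel-user
crossing per access check, as the argument above assumes.

```python
# Hypothetical cost model for user-space monitoring overhead, counting
# kernel-user crossings (e.g., idle-bitmap reads) per aggregation window.
# Assumption: one crossing per region per access check, as in the text.

def crossings(nr_regions: int, checks_per_region: int) -> int:
    """Kernel-user crossings needed for one aggregation window."""
    return nr_regions * checks_per_region

# Binary "accessed or not" answer: one check per region.
binary = crossings(nr_regions=100, checks_per_region=1)

# Fine-grained 0-100 frequency score: ~100 checks per region.
fine_grained = crossings(nr_regions=100, checks_per_region=100)

print(binary, fine_grained, fine_grained // binary)  # 100 10000 100
```

An in-kernel implementation does the same checks without any of these
crossings, which is the scaling difference the paragraph describes.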

> 
> > >
> > > Let's discuss how this can interact with memory reclaim and we can see
> > > if there is any benefit to do this in kernel.
> >
> > For reclaim, I believe we could try the proactive reclamation again using DAMON
> > (Yes, I'm a fan of the idea of proactive reclamation).  I already implemented
> > and evaluated a simple form of DAMON-based proactive reclamation as a proof
> > of concept (not for production).  In the best case (parsec3/freqmine), it
> > reduces system memory usage by 22.42% and the resident set by 88.86% while
> > incurring only 3.07% runtime overhead.  Please refer to 'Appendix E' of the v7
> > patchset[1] of DAMON.  It also describes the implementation and the evaluation
> > of a data access monitoring-based THP promotion/demotion policy.
> >
> > The experimental implementation cannot be directly applied to the kernel
> > reclamation mechanism, because it requires users to specify the target
> > applications.  Nonetheless, I think we can also easily adopt it inside the
> > kernel by modifying kswapd to periodically select processes having huge RSS as
> > targets, or by creating proactive-reclamation-type cgroups which select every
> > process in the cgroup as a target.
> 
> Again I feel like these should be done in user space, as these are more
> policies than mechanisms. However, if it can be shown that doing
> this in userspace is very expensive compared to an in-kernel solution,
> then we can think about it.

I think my answers above cover these.  Implementing it in kernel space will let
more users benefit, and the part of the overhead that comes from the context
changes could be eliminated.

> 
> >
> > Of course, we can extend DAMON to support physical memory address space instead
> > of virtual memory of specific processes.  Actually, this is in our TODO list.
> > With the extension, applying DAMON to core memory management mechanisms will be
> > even easier.
> 
> See below on physical memory monitoring.
> 
> >
> > Nonetheless, this is only an example, not a concrete plan.  I haven't made a
> > concrete plan yet, but I believe DAMON will open the gates.
> >
> > [1] https://lore.kernel.org/linux-mm/20200318112722.30143-1-sjpark@amazon.com/
> >
> > >
> > > > >
> > > > > Implementing DAMON in user space is of course possible, but it will be
> > > > > inefficient.  A user space implementation could not be used from kernel
> > > > > space, and it would incur unnecessarily frequent kernel-user context
> > > > > switches, which are very expensive nowadays.
> > > >
> > > > Forgot to mention the spatial locality.  Yes, it is workload dependent,
> > > > but still pervasive in many cases.  Also, many core mechanisms in the
> > > > kernel, such as read-ahead or the LRU, already use similar assumptions.
> > > >
> > >
> > > Not sure about the LRU but yes read-ahead in several places does
> > > assume spatial locality. However most of those are configurable and
> > > the userspace can enable/disable the read-ahead based on the workload.
> >
> > Sorry for my ambiguous description.  LRU uses temporal locality, which is
> > somewhat similar to spatial locality, in terms of workload dependency.
> >
> > >
> > > >
> > > > If it is so problematic, you could set the maximum number of regions to the
> > > > number of pages in the system so that each region monitors each page.
> > > >
> > >
> > > How will this work in the process context? Number of regions equal to
> > > the number of mapped pages?
> >
> > Suppose that a process has 1024 pages of working set and each of the pages has
> > a totally different access frequency.  If the maximum number of regions is 1024,
> > the adaptive regions adjustment mechanism of DAMON will create one region for
> > each page and monitor the accesses to each page.  So, the output will be the
> > same as that of straightforward periodic page-granularity access checking
> > methods, which do not depend on spatial locality.  Nevertheless, the monitoring
> > overhead will also be similar to theirs.
> >
> > However, if any adjacent pages have similar access frequencies, DAMON will
> > group those pages into one region.  This will reduce the total number of PTE
> > Accessed bit checks and thus decrease the overhead.  In other words, DAMON does
> > its best to minimize the overhead while preserving quality.
> >
> > Also suppose that the maximum number of regions is smaller than 1024 in this
> > case.  Pages having different access frequencies will be grouped into the same
> > region, and thus the output quality will decrease.  However, the overhead will
> > drop proportionally, as DAMON does one access check per region.  This means
> > that you can easily trade the monitoring quality against the overhead by
> > adjusting the maximum number of regions.
> >
> 
> So, users can select to not merge the regions to keep the monitoring
> quality high, right?

No, DAMON provides no such option for now, though we could very easily add such
an option in the future if required.  Nonetheless, setting the maximum number
of regions high enough avoids quality degradation caused by unnecessary merges.
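
To make the trade-off concrete, here is a toy sketch of a DAMON-style merge
pass.  This is my simplified model, not the kernel code: the region tuples,
the `merge_regions` name, and the similarity threshold are all illustrative.
The point is only that merging happens when the region count exceeds the
maximum, so a high `max_nr_regions` effectively prevents lossy merges.

```python
# Simplified sketch of DAMON-style region merging: adjacent regions whose
# access counts differ by at most a threshold are fused, but only while
# the region count exceeds the configured maximum.

def merge_regions(regions, max_nr_regions, threshold):
    """regions: list of (start, end, nr_accesses), sorted by address."""
    regions = list(regions)
    while len(regions) > max_nr_regions:
        merged = []
        i = 0
        did_merge = False
        while i < len(regions):
            if (i + 1 < len(regions)
                    and abs(regions[i][2] - regions[i + 1][2]) <= threshold):
                s, _, a1 = regions[i]
                _, e, a2 = regions[i + 1]
                merged.append((s, e, (a1 + a2) // 2))
                i += 2
                did_merge = True
            else:
                merged.append(regions[i])
                i += 1
        if not did_merge:
            break  # nothing similar enough to merge; stop
        regions = merged
    return regions

regions = [(0, 4096, 10), (4096, 8192, 12), (8192, 12288, 90)]
# With max_nr_regions=2, the two similarly-accessed regions fuse:
print(merge_regions(regions, max_nr_regions=2, threshold=5))
# [(0, 8192, 11), (8192, 12288, 90)]
```

With `max_nr_regions=3` the same input is returned unchanged, which mirrors
the answer above: a large enough maximum avoids unnecessary merges.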

> 
> > >
> > > Basically I am trying to envision the comparison of physical memory
> > > based monitoring (using idle page tracking) vs pid+VA based
> > > monitoring.
> >
> > I believe the core mechanisms of DAMON could be easily extended to the physical
> > memory.  Indeed, it is in our TODO list, and I believe it would make use of
> > DAMON in kernel core mechanisms much easier.
> >
> 
> How will the sampling and regions representation/resizing work in
> physical memory?

Please note that what follows is a vague and possibly erroneous idea.  I have
made no concrete plan for it yet, because I'm currently focusing on getting the
virtual memory version of DAMON merged.

For sampling, we could reuse PAGE_IDLE.  On some architectures we could extend
it to use architecture-specific access monitoring features, but using PAGE_IDLE
might be enough for the first step.
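
For reference, the page idle tracking interface mentioned above exposes
`/sys/kernel/mm/page_idle/bitmap`, documented as an array of 8-byte words
where bit i of word k corresponds to page frame number k*64 + i, with a set
bit meaning the page is idle.  A small sketch of decoding one read from that
file (the `idle_pfns` helper is mine; actually reading the real file requires
root, so this only decodes a buffer in the documented format):

```python
import struct

def idle_pfns(raw: bytes, first_word: int = 0):
    """Decode a chunk read from /sys/kernel/mm/page_idle/bitmap.

    raw must be a multiple of 8 bytes; bit i of 64-bit word k maps to
    page frame number (first_word + k) * 64 + i, set bit = idle.
    """
    pfns = []
    for k, (word,) in enumerate(struct.iter_unpack("=Q", raw)):
        base = (first_word + k) * 64
        for i in range(64):
            if word & (1 << i):
                pfns.append(base + i)
    return pfns

# Word 0 with bits 0 and 3 set: PFNs 0 and 3 are idle.
print(idle_pfns(struct.pack("=Q", 0b1001)))  # [0, 3]
```

A sampling loop would write all-ones to mark pages idle, wait one sampling
interval, then re-read and treat still-set bits as "not accessed".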

For the regions representation, simply use physical addresses instead of
virtual addresses.  Someone could ask whether the physical address space has
enough spatial locality.  I believe it does, as long as the memory management
subsystems related to compaction and NUMA balancing work well.  Further,
DAMON-based optimizations for those subsystems, and for DAMON itself, might be
possible.

Again, I appreciate your questions.  I also learn and am reminded of many
things by answering you. :)  If my answer is insufficient or you have any
further comments, please feel free to let me know.


Thanks,
SeongJae Park

> 
> > >
> > > Anyways I am not against your proposal. I am trying to see how to make
> > > it more general to be applicable to more use-cases and one such
> > > use-case which I am interested in is monitoring all the user pages on
> > > the system for proactive reclaim purpose.
> >
> > Your questions gave me many insights and shed light on the way DAMON should
> > go.  I really appreciate it.  If you have any more questions or need my help,
> > please let me know.
> >
> >
> 
> Shakeel
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2020-03-24  8:35 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-24 12:30 [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
2020-02-24 12:30 ` [PATCH v6 01/14] mm: " SeongJae Park
2020-03-10  8:54   ` Jonathan Cameron
2020-03-10 11:50     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 02/14] mm/damon: Implement region based sampling SeongJae Park
2020-03-10  8:57   ` Jonathan Cameron
2020-03-10 11:52     ` SeongJae Park
2020-03-10 15:55       ` Jonathan Cameron
2020-03-10 16:22         ` SeongJae Park
2020-03-10 17:39           ` Jonathan Cameron
2020-03-12  9:20             ` SeongJae Park
2020-03-13 17:29   ` Jonathan Cameron
2020-03-13 20:16     ` SeongJae Park
2020-03-17 11:32       ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 03/14] mm/damon: Adaptively adjust regions SeongJae Park
2020-03-10  8:57   ` Jonathan Cameron
2020-03-10 11:53     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 04/14] mm/damon: Apply dynamic memory mapping changes SeongJae Park
2020-03-10  9:00   ` Jonathan Cameron
2020-03-10 11:53     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 05/14] mm/damon: Implement callbacks SeongJae Park
2020-03-10  9:01   ` Jonathan Cameron
2020-03-10 11:55     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 06/14] mm/damon: Implement access pattern recording SeongJae Park
2020-03-10  9:01   ` Jonathan Cameron
2020-03-10 11:55     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 07/14] mm/damon: Implement kernel space API SeongJae Park
2020-03-10  9:01   ` Jonathan Cameron
2020-03-10 11:56     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 08/14] mm/damon: Add debugfs interface SeongJae Park
2020-03-10  9:02   ` Jonathan Cameron
2020-03-10 11:56     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 09/14] mm/damon: Add a tracepoint for result writing SeongJae Park
2020-03-10  9:03   ` Jonathan Cameron
2020-03-10 11:57     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 10/14] tools: Add a minimal user-space tool for DAMON SeongJae Park
2020-02-24 12:30 ` [PATCH v6 11/14] Documentation/admin-guide/mm: Add a document " SeongJae Park
2020-03-10  9:03   ` Jonathan Cameron
2020-03-10 11:57     ` SeongJae Park
2020-02-24 12:30 ` [PATCH v6 12/14] mm/damon: Add kunit tests SeongJae Park
2020-02-24 12:30 ` [PATCH v6 13/14] mm/damon: Add user selftests SeongJae Park
2020-02-24 12:30 ` [PATCH v6 14/14] MAINTAINERS: Update for DAMON SeongJae Park
2020-03-02 11:35 ` [PATCH v6 00/14] Introduce Data Access MONitor (DAMON) SeongJae Park
2020-03-09 10:23   ` SeongJae Park
2020-03-10 17:21 ` Shakeel Butt
2020-03-12 10:07   ` SeongJae Park
2020-03-12 10:43     ` SeongJae Park
2020-03-18 19:52       ` Shakeel Butt
2020-03-19  9:03         ` SeongJae Park
2020-03-23 17:29           ` Shakeel Butt
2020-03-24  8:34             ` SeongJae Park
