* [btrfs] 4c468fd7485: +7.8% blogbench.write_score, -5.1% turbostat.Pkg_W
@ 2014-08-16  7:52 ` Fengguang Wu
From: Fengguang Wu @ 2014-08-16  7:52 UTC (permalink / raw)
  To: Chris Mason; +Cc: Dave Hansen, LKML, lkp, linux-btrfs

Hi Chris,

FYI, we noticed increased performance and reduced power consumption on

commit 4c468fd74859d901c0b78b42bef189295e00d74f ("btrfs: disable strict file flushes for renames and truncates")

test case: lkp-sb02/blogbench/1HDD-btrfs

(In the table below, each side shows mean ± relative stddev; the left column
is the base commit, the right column the patched commit.)

0954d74f8f37a47  4c468fd74859d901c0b78b42b 
---------------  ------------------------- 
      1094 ± 1%      +7.8%       1180 ± 2%  TOTAL blogbench.write_score
      1396 ±19%    -100.0%          0 ± 0%  TOTAL slabinfo.btrfs_delalloc_work.active_objs
      1497 ±17%    -100.0%          0 ± 0%  TOTAL slabinfo.btrfs_delalloc_work.num_objs
       426 ±45%    -100.0%          0 ± 0%  TOTAL proc-vmstat.nr_vmscan_write
      1.02 ±38%    +193.1%       2.99 ±37%  TOTAL turbostat.%pc6
      0.12 ±48%    +113.8%       0.25 ±29%  TOTAL turbostat.%pc3
      0.38 ±18%    +117.7%       0.84 ±34%  TOTAL turbostat.%pc2
     19377 ±14%     -50.9%       9520 ±20%  TOTAL proc-vmstat.workingset_refault
        44 ±41%     +68.8%         75 ±28%  TOTAL cpuidle.POLL.usage
     31549 ± 1%     +95.7%      61732 ± 1%  TOTAL softirqs.BLOCK
      4547 ±10%     -38.3%       2804 ± 9%  TOTAL slabinfo.btrfs_ordered_extent.active_objs
      4628 ±10%     -37.1%       2913 ± 9%  TOTAL slabinfo.btrfs_ordered_extent.num_objs
     17597 ± 8%     -30.2%      12291 ±14%  TOTAL proc-vmstat.nr_writeback
     70335 ± 8%     -30.1%      49174 ±14%  TOTAL meminfo.Writeback
      3606 ± 6%     -29.1%       2556 ±10%  TOTAL slabinfo.mnt_cache.active_objs
     14763 ±12%     -29.9%      10350 ± 8%  TOTAL proc-vmstat.nr_dirty
      3766 ± 5%     -27.8%       2720 ±10%  TOTAL slabinfo.mnt_cache.num_objs
      3509 ± 6%     -28.5%       2510 ±11%  TOTAL slabinfo.kmalloc-4096.active_objs
     59201 ±11%     -30.1%      41396 ± 8%  TOTAL meminfo.Dirty
       479 ±13%     -30.5%        333 ±10%  TOTAL slabinfo.kmalloc-4096.num_slabs
       479 ±13%     -30.5%        333 ±10%  TOTAL slabinfo.kmalloc-4096.active_slabs
      3636 ± 6%     -26.6%       2669 ±10%  TOTAL slabinfo.kmalloc-4096.num_objs
      6040 ± 8%     -28.6%       4314 ± 6%  TOTAL slabinfo.kmalloc-96.num_objs
      5358 ± 5%     -25.1%       4011 ± 7%  TOTAL slabinfo.kmalloc-96.active_objs
    757208 ± 4%     -22.1%     589874 ± 4%  TOTAL meminfo.MemFree
    189508 ± 4%     -22.2%     147518 ± 4%  TOTAL proc-vmstat.nr_free_pages
    762781 ± 4%     -21.1%     601525 ± 4%  TOTAL vmstat.memory.free
     10491 ± 2%     -16.8%       8725 ± 2%  TOTAL slabinfo.kmalloc-64.num_objs
      2513 ± 4%     +16.3%       2923 ± 4%  TOTAL slabinfo.kmalloc-128.active_objs
      9768 ± 3%     -15.1%       8298 ± 1%  TOTAL slabinfo.kmalloc-64.active_objs
      2627 ± 4%     +14.0%       2995 ± 4%  TOTAL slabinfo.kmalloc-128.num_objs
     96242 ± 2%     +15.5%     111120 ± 2%  TOTAL slabinfo.btrfs_path.active_objs
      3448 ± 2%     +15.1%       3968 ± 2%  TOTAL slabinfo.btrfs_path.num_slabs
      3448 ± 2%     +15.1%       3968 ± 2%  TOTAL slabinfo.btrfs_path.active_slabs
     96580 ± 2%     +15.1%     111132 ± 2%  TOTAL slabinfo.btrfs_path.num_objs
      2526 ± 2%     +13.5%       2867 ± 1%  TOTAL slabinfo.btrfs_extent_state.num_slabs
      2526 ± 2%     +13.5%       2867 ± 1%  TOTAL slabinfo.btrfs_extent_state.active_slabs
    106133 ± 2%     +13.5%     120434 ± 1%  TOTAL slabinfo.btrfs_extent_state.num_objs
    104326 ± 2%     +12.3%     117189 ± 1%  TOTAL slabinfo.btrfs_extent_state.active_objs
    110759 ± 2%     +13.4%     125640 ± 2%  TOTAL slabinfo.btrfs_inode.active_objs
    110759 ± 2%     +13.4%     125642 ± 2%  TOTAL slabinfo.btrfs_delayed_node.active_objs
      4261 ± 2%     +13.4%       4832 ± 2%  TOTAL slabinfo.btrfs_delayed_node.num_slabs
      4261 ± 2%     +13.4%       4832 ± 2%  TOTAL slabinfo.btrfs_delayed_node.active_slabs
    110797 ± 2%     +13.4%     125663 ± 2%  TOTAL slabinfo.btrfs_delayed_node.num_objs
    110815 ± 2%     +13.4%     125669 ± 2%  TOTAL slabinfo.btrfs_inode.num_objs
      6926 ± 2%     +13.4%       7853 ± 2%  TOTAL slabinfo.btrfs_inode.num_slabs
      6926 ± 2%     +13.4%       7853 ± 2%  TOTAL slabinfo.btrfs_inode.active_slabs
      5607 ± 3%     -11.0%       4991 ± 3%  TOTAL slabinfo.kmalloc-256.active_objs
      6077 ± 2%      -9.9%       5476 ± 3%  TOTAL slabinfo.kmalloc-256.num_objs
     11153 ± 1%      -7.7%      10295 ± 2%  TOTAL proc-vmstat.nr_slab_unreclaimable
    547824 ± 3%     +16.5%     638368 ± 8%  TOTAL meminfo.Inactive(file)
    112124 ± 2%     +11.6%     125105 ± 2%  TOTAL slabinfo.radix_tree_node.active_objs
    112169 ± 2%     +11.6%     125134 ± 2%  TOTAL slabinfo.radix_tree_node.num_objs
      4005 ± 2%     +11.6%       4468 ± 2%  TOTAL slabinfo.radix_tree_node.num_slabs
      4005 ± 2%     +11.6%       4468 ± 2%  TOTAL slabinfo.radix_tree_node.active_slabs
    551119 ± 3%     +16.4%     641663 ± 8%  TOTAL meminfo.Inactive
    285596 ± 2%     +11.4%     318160 ± 2%  TOTAL meminfo.SReclaimable
       156 ± 3%    +118.0%        340 ± 2%  TOTAL iostat.sda.w/s
       282 ± 3%     -43.2%        160 ± 3%  TOTAL iostat.sda.avgrq-sz
      1.45 ±12%     -28.9%       1.03 ±18%  TOTAL iostat.sda.rrqm/s
       633 ± 2%     -26.5%        465 ± 2%  TOTAL iostat.sda.wrqm/s
    154423 ± 5%     +17.4%     181309 ± 3%  TOTAL time.voluntary_context_switches
       536 ± 5%     -11.5%        474 ± 9%  TOTAL iostat.sda.await
    102.71 ± 5%     +10.4%     113.36 ± 6%  TOTAL iostat.sda.avgqu-sz
     20842 ± 2%      -6.5%      19493 ± 2%  TOTAL iostat.sda.wkB/s
     20856 ± 2%      -6.4%      19525 ± 2%  TOTAL vmstat.io.bo
     75.48 ± 4%      -6.9%      70.27 ± 5%  TOTAL turbostat.%c0
       285 ± 4%      -6.6%        266 ± 5%  TOTAL time.percent_of_cpu_this_job_got
     34.58 ± 2%      -5.5%      32.68 ± 3%  TOTAL turbostat.Cor_W
     39.86 ± 2%      -5.1%      37.82 ± 3%  TOTAL turbostat.Pkg_W
      5805 ± 1%      -4.3%       5558 ± 3%  TOTAL vmstat.system.in
  10069454 ± 1%      +6.3%   10699830 ± 1%  TOTAL time.file_system_outputs


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Fengguang

[-- Attachment #2: reproduce --]

echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
mkfs -t btrfs /dev/sda2
mount -t btrfs /dev/sda2 /fs/sda2
./blogbench -d /fs/sda2


* Re: [btrfs] 4c468fd7485: +7.8% blogbench.write_score, -5.1% turbostat.Pkg_W
       [not found] ` <CAKHchJwQcH326Z8otiaXtKoKs6fH9s+e9BqJDi=H1bWKiuybAw@mail.gmail.com>
@ 2014-08-16 13:10     ` Fengguang Wu
From: Fengguang Wu @ 2014-08-16 13:10 UTC (permalink / raw)
  To: Abhay Sachan; +Cc: Chris Mason, Dave Hansen, LKML, lkp, linux-btrfs

Hi Abhay,

On Sat, Aug 16, 2014 at 05:30:35PM +0530, Abhay Sachan wrote:
> Hi Fengguang,
> Sorry for the off-topic question, but what benchmark is this?
> I have heard of blogbench, but it doesn't give output in this format AFAIK.

It is blogbench, run under the lkp-tests framework:

https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/

The framework collects various system stats while blogbench runs, then
presents the blogbench results together with slabinfo, meminfo, proc-vmstat,
turbostat, softirqs and other stats.

The basic steps to reproduce this report are:

        $ split-job jobs/blogbench.yaml
        jobs/blogbench.yaml => ./blogbench-1HDD-ext4.yaml
        jobs/blogbench.yaml => ./blogbench-1HDD-xfs.yaml
        jobs/blogbench.yaml => ./blogbench-1HDD-btrfs.yaml

        # requires debian/ubuntu for now
        $ bin/setup-local --hdd /dev/sdaX ./blogbench-1HDD-btrfs.yaml

        $ bin/run-local ./blogbench-1HDD-btrfs.yaml

The report is generated by the "sbin/compare" script.

Thanks,
Fengguang

> On Sat, Aug 16, 2014 at 1:22 PM, Fengguang Wu <fengguang.wu@intel.com> wrote:
> > Hi Chris,
> >
> > FYI, we noticed increased performance and reduced power consumption on
> >
> > commit 4c468fd74859d901c0b78b42bef189295e00d74f ("btrfs: disable strict file flushes for renames and truncates")
> >
> > test case: lkp-sb02/blogbench/1HDD-btrfs
> >
> > [full report snipped]
> 
> 
> 
> -- 
> Abhay


* [btrfs] 8d875f95: xfstests.generic.226.fail
  2014-08-16  7:52 ` Fengguang Wu
@ 2014-08-19 11:58   ` Fengguang Wu
From: Fengguang Wu @ 2014-08-19 11:58 UTC (permalink / raw)
  To: Chris Mason; +Cc: Dave Hansen, LKML, lkp, linux-btrfs, Abhay Sachan

Hi Chris,

We noticed an xfstests failure on commit

8d875f95da43c6a8f18f77869f2ef26e9594fecc ("btrfs: disable strict file flushes for renames and truncates")

It's 100% reproducible in the 5 test runs.

test case: snb-drag/xfstests/4HDD-btrfs-generic-mid

27b9a8122ff71a8  8d875f95da43c6a8f18f77869
---------------  -------------------------
                    %change               %stddev
                       |                 /
         0           +Inf%          1 ± 0%  TOTAL xfstests.generic.226.fail

Thanks,
Fengguang


* Re: [btrfs] 8d875f95: xfstests.generic.226.fail
  2014-08-19 11:58   ` Fengguang Wu
@ 2014-08-19 14:23     ` David Sterba
From: David Sterba @ 2014-08-19 14:23 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Chris Mason, Dave Hansen, LKML, lkp, linux-btrfs, Abhay Sachan

On Tue, Aug 19, 2014 at 07:58:20PM +0800, Fengguang Wu wrote:
> We noticed an xfstests failure on commit
> 
> 8d875f95da43c6a8f18f77869f2ef26e9594fecc ("btrfs: disable strict file flushes for renames and truncates")
> 
> It's 100% reproducible in the 5 test runs.

Same here, different mkfs configurations.

generic/226 28s ...    [16:11:52] [16:12:55] - output mismatch (see /root/xfstests/results//generic/226.out.bad)
    --- tests/generic/226.out   2013-05-29 17:16:03.000000000 +0200
    +++ /root/xfstests/results//generic/226.out.bad     2014-08-19 16:12:55.000000000 +0200
    @@ -1,6 +1,8 @@
     QA output created by 226
     --> mkfs 256m filesystem
     --> 16 buffered 64m writes in a loop
    -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
    +1 2 3 4 pwrite64: No space left on device
    +5 6 7 8 9 10 11 12 pwrite64: No space left on device
    +13 14 15 16

ENOSPC on a small filesystem (256M):

# btrfs fi df /mnt/a2
System, single: total=4.00MiB, used=4.00KiB
Data+Metadata, single: total=252.00MiB, used=31.09MiB
GlobalReserve, single: total=4.00MiB, used=0.00B

$ df -h /mnt/a2
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda9             256M   16M  241M   6% /mnt/a2


* Re: [btrfs] 8d875f95: xfstests.generic.226.fail
  2014-08-19 14:23     ` David Sterba
@ 2014-08-19 14:58       ` Chris Mason
From: Chris Mason @ 2014-08-19 14:58 UTC (permalink / raw)
  To: dsterba, Fengguang Wu, Dave Hansen, LKML, lkp, linux-btrfs, Abhay Sachan

On 08/19/2014 10:23 AM, David Sterba wrote:
> On Tue, Aug 19, 2014 at 07:58:20PM +0800, Fengguang Wu wrote:
>> We noticed an xfstests failure on commit
>>
>> 8d875f95da43c6a8f18f77869f2ef26e9594fecc ("btrfs: disable strict file flushes for renames and truncates")
>>
>> It's 100% reproducible in the 5 test runs.
> 
> Same here, different mkfs configurations.
> 
> generic/226 28s ...    [16:11:52] [16:12:55] - output mismatch (see /root/xfstests/results//generic/226.out.bad)
>     --- tests/generic/226.out   2013-05-29 17:16:03.000000000 +0200
>     +++ /root/xfstests/results//generic/226.out.bad     2014-08-19 16:12:55.000000000 +0200
>     @@ -1,6 +1,8 @@
>      QA output created by 226
>      --> mkfs 256m filesystem
>      --> 16 buffered 64m writes in a loop
>     -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>     +1 2 3 4 pwrite64: No space left on device
>     +5 6 7 8 9 10 11 12 pwrite64: No space left on device
>     +13 14 15 16
> 
> enospc on a small filesystem (256M)

I'm calling filemap flush more often, but otherwise everything else is
the same.  I'll take a look.

-chris


* Re: [btrfs] 8d875f95: xfstests.generic.226.fail
  2014-08-19 14:58       ` Chris Mason
@ 2014-08-20 10:52         ` Miao Xie
From: Miao Xie @ 2014-08-20 10:52 UTC (permalink / raw)
  To: Chris Mason, dsterba, Fengguang Wu, Dave Hansen, LKML, lkp,
	linux-btrfs, Abhay Sachan

On Tue, 19 Aug 2014 10:58:09 -0400, Chris Mason wrote:
> On 08/19/2014 10:23 AM, David Sterba wrote:
>> On Tue, Aug 19, 2014 at 07:58:20PM +0800, Fengguang Wu wrote:
>>> We noticed an xfstests failure on commit
>>>
>>> 8d875f95da43c6a8f18f77869f2ef26e9594fecc ("btrfs: disable strict file flushes for renames and truncates")
>>>
>>> It's 100% reproducible in the 5 test runs.
>>
>> Same here, different mkfs configurations.
>>
>> generic/226 28s ...    [16:11:52] [16:12:55] - output mismatch (see /root/xfstests/results//generic/226.out.bad)
>>     --- tests/generic/226.out   2013-05-29 17:16:03.000000000 +0200
>>     +++ /root/xfstests/results//generic/226.out.bad     2014-08-19 16:12:55.000000000 +0200
>>     @@ -1,6 +1,8 @@
>>      QA output created by 226
>>      --> mkfs 256m filesystem
>>      --> 16 buffered 64m writes in a loop
>>     -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>>     +1 2 3 4 pwrite64: No space left on device
>>     +5 6 7 8 9 10 11 12 pwrite64: No space left on device
>>     +13 14 15 16
>>
>> enospc on a small filesystem (256M)
> 
> I'm calling filemap flush more often, but otherwise everything else is
> the same.  I'll take a look.

The above patch also introduced a performance regression (~70% down).
We can reproduce this regression with fio; here is the config:

[global]
ioengine=falloc
iodepth=1
direct=0
buffered=0
directory=<mnt>
nrfiles=1
filesize=100m
group_reporting

[sequential aio-dio write]
stonewall
ioengine=posixaio
numjobs=1
iodepth=128
buffered=0
direct=0
rw=write
bs=64k
filename=fragmented_file

I found that the problem is caused by the following function:

int btrfs_release_file(struct inode *inode, struct file *filp)
{
	...
	filemap_flush(inode->i_mapping);
	return 0;
}

I don't think we need to flush the file in most situations. Ext4 flushes the
file only after someone truncates it to zero length. I don't know the real
reason why ext4 flushes only after truncation; someone said it is to reduce
the risk that users find a zero-length file after a crash, which can happen
in a truncate-write-close sequence.

If we change btrfs_release_file to follow ext4's implementation, both the
xfstests generic/226 failure and the performance regression can be fixed.
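
For reference, here is a minimal sketch of the ext4 pattern I mean, based on
ext4_release_file() in fs/ext4/file.c (quoted from memory, so treat the names
and details as approximate): setattr marks the inode when truncating it to
zero length, and release only flushes when that mark is present.

static int ext4_release_file(struct inode *inode, struct file *filp)
{
	/* Only do the flush work if a truncate-to-zero flagged this
	 * inode; an ordinary close stays cheap. */
	if (ext4_test_inode_state(inode, EXT4_STATE_DA_ALLOC_CLOSE)) {
		ext4_alloc_da_blocks(inode);
		ext4_clear_inode_state(inode, EXT4_STATE_DA_ALLOC_CLOSE);
	}
	...
	return 0;
}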

Thanks
Miao

> 
> -chris



* Re: [btrfs] 8d875f95: xfstests.generic.226.fail
  2014-08-20 10:52         ` Miao Xie
@ 2014-08-20 14:07           ` Chris Mason
From: Chris Mason @ 2014-08-20 14:07 UTC (permalink / raw)
  To: miaox, dsterba, Fengguang Wu, Dave Hansen, LKML, lkp,
	linux-btrfs, Abhay Sachan



On 08/20/2014 06:52 AM, Miao Xie wrote:
> On Tue, 19 Aug 2014 10:58:09 -0400, Chris Mason wrote:
>> On 08/19/2014 10:23 AM, David Sterba wrote:
>>> On Tue, Aug 19, 2014 at 07:58:20PM +0800, Fengguang Wu wrote:
>>>> We noticed an xfstests failure on commit
>>>>
>>>> 8d875f95da43c6a8f18f77869f2ef26e9594fecc ("btrfs: disable strict file flushes for renames and truncates")
>>>>
>>>> It's 100% reproducible in the 5 test runs.
>>>
>>> Same here, different mkfs configurations.
>>>
>>> generic/226 28s ...    [16:11:52] [16:12:55] - output mismatch (see /root/xfstests/results//generic/226.out.bad)
>>>     --- tests/generic/226.out   2013-05-29 17:16:03.000000000 +0200
>>>     +++ /root/xfstests/results//generic/226.out.bad     2014-08-19 16:12:55.000000000 +0200
>>>     @@ -1,6 +1,8 @@
>>>      QA output created by 226
>>>      --> mkfs 256m filesystem
>>>      --> 16 buffered 64m writes in a loop
>>>     -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
>>>     +1 2 3 4 pwrite64: No space left on device
>>>     +5 6 7 8 9 10 11 12 pwrite64: No space left on device
>>>     +13 14 15 16
>>>
>>> enospc on a small filesystem (256M)
>>
>> I'm calling filemap flush more often, but otherwise everything else is
>> the same.  I'll take a look.
> 
> I found that the problem is caused by the following function:
> 
> int btrfs_release_file(struct inode *inode, struct file *filp)
> {
> 	...
> 	filemap_flush(inode->i_mapping);
> 	return 0;
> }
> 
> I don't think we need to flush the file in most situations. Ext4 flushes the
> file only after someone truncates it to zero length. I don't know the real
> reason why ext4 flushes only after truncation; someone said it is to reduce
> the risk that users find a zero-length file after a crash, which can happen
> in a truncate-write-close sequence.
> 
> If we change btrfs_release_file to follow ext4's implementation, both the
> xfstests generic/226 failure and the performance regression can be fixed.
> 

You're completely right, my original had more checks here and I stripped
them out by accident.  Fixing, thanks!

-chris



* [PATCH] Btrfs: fix filemap_flush call in btrfs_file_release
  2014-08-20 10:52         ` Miao Xie
@ 2014-08-20 14:48           ` Chris Mason
From: Chris Mason @ 2014-08-20 14:48 UTC (permalink / raw)
  To: miaox, dsterba, Fengguang Wu, Dave Hansen, LKML, lkp,
	linux-btrfs, Abhay Sachan


We should only be flushing on close if the file was flagged as needing
it during truncate.  I broke this with my ordered data vs transaction
commit deadlock fix.

Thanks to Miao Xie for catching this.

Signed-off-by: Chris Mason <clm@fb.com>
Reported-by: Miao Xie <miaox@cn.fujitsu.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index f15c13f..36861b7 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1840,7 +1840,15 @@ int btrfs_release_file(struct inode *inode, struct file *filp)
 {
 	if (filp->private_data)
 		btrfs_ioctl_trans_end(filp);
-	filemap_flush(inode->i_mapping);
+	/*
+	 * ordered_data_close is set by setattr when we are about to truncate
+	 * a file from a non-zero size to a zero size.  This tries to
+	 * flush down new bytes that may have been written if the
+	 * application were using truncate to replace a file in place.
+	 */
+	if (test_and_clear_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
+			       &BTRFS_I(inode)->runtime_flags))
+			filemap_flush(inode->i_mapping);
 	return 0;
 }
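
For context, BTRFS_INODE_ORDERED_DATA_CLOSE is set on the truncate-to-zero
path in btrfs_setsize() in fs/btrfs/inode.c; roughly the following (quoted
from memory, so treat it as a sketch rather than the exact hunk):

	/* We're truncating a file that used to have good data down to
	 * zero; mark the inode so the next release flushes any new
	 * writes, covering the truncate-write-close replace pattern.
	 */
	if (newsize == 0)
		set_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
			&BTRFS_I(inode)->runtime_flags);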
 

