ocfs2-devel.lists.linux.dev archive mirror
From: Wengang Wang <wen.gang.wang@oracle.com>
To: Gang He <ghe@suse.com>
Cc: "ocfs2-devel@oss.oracle.com" <ocfs2-devel@oss.oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Ocfs2-devel] [PATCH] ocfs2: avoid getting dlm lock of the target directory multiple times during reflink process
Date: Tue, 14 Sep 2021 17:50:04 +0000	[thread overview]
Message-ID: <8BD8816D-FF28-423B-99D1-642EE034D2E3@oracle.com> (raw)
In-Reply-To: <60b3e52b-fdab-5255-3339-de3d95555dca@suse.com>


On Sep 14, 2021, at 12:34 AM, Gang He <ghe@suse.com> wrote:

Hi Wengang,

Sorry for the delayed reply.

On 2021/9/9 0:00, Wengang Wang wrote:
On Sep 7, 2021, at 11:06 PM, Gang He <ghe@suse.com> wrote:

On 2021/9/7 23:57, Wengang Wang wrote:
Hi Gang,
Sure, I will look into the problem you are trying to address. Any bug fix or performance improvement is welcome!
Well, could you please provide an analysis of tcpdump captures between the (two) nodes, covering the reflink operation with and without your patch, to show how the dlm locking ping-pong is reduced?
With this change, the reflink destination directory's inode dlm lock is acquired at the start and not released until the whole reflink process is completed.
The current code acquires/releases this dlm lock three times during the reflink process. If there are concurrent reflink operations to that directory from other nodes, the ping-pong effect is that directory-inode metadata is synchronized to disk (multiple times) whenever the inode dlm lock is downconverted during a single reflink operation.

I think the above is a good summary, but it lacks the details of the locking ping-pong procedure. I need the details to understand the situation: which locks are involved, what the locking levels are, and which nodes are involved.

The problem arises when cloning files to the same directory simultaneously; some users add this task to each node as a crontab job.
The main contention point in this case is the destination directory's inode dlm lock: each reflink operation acquires/releases this dlm lock three times. In a parallel environment each node contends for this lock, which incurs downconvert costs (the inode metadata is synced to disk before each downconvert).
From the code logic I think we can identify this problem clearly; I do not think we need a performance tool to confirm it.
Here I can share my test results for cloning files to the same directory simultaneously.
1) test script on each node: https://pastebin.com/y8EcCjWK

I am pasting your test script here:

loop=1
while ((loop++)); do
        for i in `seq 1 100`; do
                reflink "/mnt/shared/testnode1.qcow2" "/mnt/shared/.snapshots/testnode1.qcow2.${loop}.${i}.`date +%m%d%H%M%S`.`hostname`"
        done
        rm -f /mnt/shared/.snapshots/testnode1.qcow2.*.`hostname`
        echo "`hostname` `date +%m%d%H%M%S`: loop - $loop"
done

I think your test heavily depends on the size of the file testnode1.qcow2 and how fragmented it is; that determines the number of extent records to be copied.
Let’s simplify the reflink procedure with your patch applied:

1) lock destination dir
2) extent record copy
3) unlock destination dir

Apply this to multinode case:

node1                  node2                  node3
-----                  -----                  -----
extent copy
                       extent copy
                                              extent copy

As you said, you saved downconvert time by serializing the reflink operations targeting the directory. But you seem to have skipped the impact of the extent copy.
If the extent copy takes much more time than the lock downconverts, your patch may slow down the overall reflink operations across the three nodes.
The time for the extent copy depends on:
a) the number of extents to copy: more extents, more time
b) CPU load on the nodes: higher load, more time
c) load on the OCFS2 device: higher load, more time

I don't know whether you considered the above factors in your test. I once experimented with reflink on a well-fragmented 50 GB file (though not on OCFS2); with load added, a single reflink run took 20+ minutes.
So I'd like to see your test cover at least this case:
I) make the reflink source file 50 GiB long (with an FS cluster size of 1 MiB), and
II) make the reflink source file well fragmented (I'd suggest 40% fragmentation), and
III) add some CPU and block device load, and
IV) add some file creation/removal operations under the same destination directory while the reflinks run, and collect the time for each operation. Do this on all three nodes.

For II), at a 40% fragmentation ratio, here are some example outputs and times for reflinks:
debugfs: frag /file1
Inode: 592130 % fragmented: 40.00 clusters: 51200 extents: 20481 score: 40

[root@wengwan-ocfs2-3 opc]# time reflink /ocfs2/file1 /ocfs2/dir1/reflink1

real 0m6.837s
user 0m0.000s
sys 0m6.828s
[root@wengwan-ocfs2-3 opc]# time reflink /ocfs2/file1 /ocfs2/dir1/reflink2

real 0m3.799s
user 0m0.000s
sys 0m3.799s
[root@wengwan-ocfs2-3 opc]# time reflink /ocfs2/file1 /ocfs2/dir1/reflink3

real 0m3.802s
user 0m0.001s
sys 0m3.801s

I'd guess you may get a negative result compared with the original kernel, for both reflink and file creation/removal.


2) Performance log of current parallel reflink:
3) Performance log of parallel reflink after patching:

According to the performance logs, the test time is cut in half for every hundred reflink operations.

In summary, I think the logic of this change is simple and clear, and it improves performance in the above case. Of course, I also want more people to review whether the patch introduces any risk, e.g. a deadlock risk.


Say like this (just an example, not related to the problem here):

Node 1                     Node 2                     Node 3
------                     ------                     ------
dir1 meta lock EX
orphandir1 lock EX
                           dir1 meta lock EX
                           orphandir2 lock EX
dir1 meta lock EX
                                                      orphandir3 lock EX
dir1 meta lock EX (again)
According to my test, when running the reflink command to clone a file to the same directory repeatedly from three nodes, the code change cuts the elapsed time roughly in half.

I am not sure that the above is a typical use case.
What other cases did you test? Does it help when only one node does the reflink?
What happens when there are concurrent file creation/removal operations under the target directory while the reflink is in progress?
When you are trying to make a performance improvement, I think you should provide performance data for different test cases, like this:
Test case desc        | orig performance | performance after patch
test case 1 (details) | perf data        | perf data
test case 2 (details) | perf data        | perf data

And what cases did you test to get better performance?
On Aug 30, 2021, at 11:25 PM, Gang He <ghe@suse.com> wrote:

Hello Joseph and Wengang,

When you have time, please help review this patch.
Regarding the deadlock problem caused by ocfs2_downconvert_lock
failure, we have the fix patch; it is crucial.
But I feel this patch is still useful as an optimization. The use
case is reflinking files to the same directory concurrently: our
users usually back up files (via reflink) from the cluster nodes
concurrently (via crontab) every day/hour.
With the current design, a node acquires/releases the dlm lock of
the target directory multiple times during the reflink process,
which is very inefficient for concurrent reflinks.


On 2021/8/26 15:59, Gang He wrote:
During the reflink process, we should acquire the target directory
inode dlm lock at the beginning, and hold this dlm lock until the
end of the function.
With this patch, we avoid the dlm lock ping-pong effect when files
are cloned to the same directory simultaneously from multiple nodes.
A typical user scenario: users regularly back up files to a
specified directory via the reflink feature from multiple nodes.

Signed-off-by: Gang He <ghe@suse.com>
 fs/ocfs2/namei.c        | 32 +++++++++++++-------------------
 fs/ocfs2/namei.h        |  2 ++
 fs/ocfs2/refcounttree.c | 15 +++++++++++----
 fs/ocfs2/xattr.c        | 12 +-----------
 fs/ocfs2/xattr.h        |  1 +
 5 files changed, 28 insertions(+), 34 deletions(-)

diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
index 2c46ff6ba4ea..f8bbb22cc60b 100644
--- a/fs/ocfs2/namei.c
+++ b/fs/ocfs2/namei.c
@@ -2489,6 +2489,7 @@ static int ocfs2_prep_new_orphaned_file(struct inode *dir,

 int ocfs2_create_inode_in_orphan(struct inode *dir,
+ struct buffer_head **dir_bh,
  int mode,
  struct inode **new_inode)
@@ -2597,13 +2598,16 @@ int ocfs2_create_inode_in_orphan(struct inode *dir,


- if (!status)
- *new_inode = inode;

- ocfs2_inode_unlock(dir, 1);
- brelse(parent_di_bh);
+ if (!status) {
+ *new_inode = inode;
+ *dir_bh = parent_di_bh;
+ } else {
+ ocfs2_inode_unlock(dir, 1);
+ brelse(parent_di_bh);
+ }
  return status;

@@ -2760,11 +2764,11 @@ int ocfs2_del_inode_from_orphan(struct ocfs2_super *osb,

 int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,
+   struct buffer_head *dir_bh,
    struct inode *inode,
    struct dentry *dentry)
  int status = 0;
- struct buffer_head *parent_di_bh = NULL;
  handle_t *handle = NULL;
  struct ocfs2_super *osb = OCFS2_SB(dir->i_sb);
  struct ocfs2_dinode *dir_di, *di;
@@ -2778,14 +2782,7 @@ int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,
  (unsigned long long)OCFS2_I(dir)->ip_blkno,
  (unsigned long long)OCFS2_I(inode)->ip_blkno);

- status = ocfs2_inode_lock(dir, &parent_di_bh, 1);
- if (status < 0) {
- if (status != -ENOENT)
- mlog_errno(status);
- return status;
- }
- dir_di = (struct ocfs2_dinode *) parent_di_bh->b_data;
+ dir_di = (struct ocfs2_dinode *) dir_bh->b_data;
  if (!dir_di->i_links_count) {
  /* can't make a file in a deleted directory. */
  status = -ENOENT;
@@ -2798,7 +2795,7 @@ int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,
  goto leave;

  /* get a spot inside the dir. */
- status = ocfs2_prepare_dir_for_insert(osb, dir, parent_di_bh,
+ status = ocfs2_prepare_dir_for_insert(osb, dir, dir_bh,
       dentry->d_name.len, &lookup);
  if (status < 0) {
@@ -2862,7 +2859,7 @@ int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,
  ocfs2_journal_dirty(handle, di_bh);

  status = ocfs2_add_entry(handle, dentry, inode,
- OCFS2_I(inode)->ip_blkno, parent_di_bh,
+ OCFS2_I(inode)->ip_blkno, dir_bh,
  if (status < 0) {
@@ -2886,10 +2883,7 @@ int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,

- ocfs2_inode_unlock(dir, 1);
- brelse(parent_di_bh);

diff --git a/fs/ocfs2/namei.h b/fs/ocfs2/namei.h
index 9cc891eb874e..03a2c526e2c1 100644
--- a/fs/ocfs2/namei.h
+++ b/fs/ocfs2/namei.h
@@ -24,6 +24,7 @@ int ocfs2_orphan_del(struct ocfs2_super *osb,
      struct buffer_head *orphan_dir_bh,
      bool dio);
 int ocfs2_create_inode_in_orphan(struct inode *dir,
+ struct buffer_head **dir_bh,
  int mode,
  struct inode **new_inode);
 int ocfs2_add_inode_to_orphan(struct ocfs2_super *osb,
@@ -32,6 +33,7 @@ int ocfs2_del_inode_from_orphan(struct ocfs2_super *osb,
  struct inode *inode, struct buffer_head *di_bh,
  int update_isize, loff_t end);
 int ocfs2_mv_orphaned_inode_to_new(struct inode *dir,
+   struct buffer_head *dir_bh,
    struct inode *new_inode,
    struct dentry *new_dentry);

diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
index 7f6355cbb587..a9a0c7c37e8e 100644
--- a/fs/ocfs2/refcounttree.c
+++ b/fs/ocfs2/refcounttree.c
@@ -4250,7 +4250,7 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,
  int error, had_lock;
  struct inode *inode = d_inode(old_dentry);
- struct buffer_head *old_bh = NULL;
+ struct buffer_head *old_bh = NULL, *dir_bh = NULL;
  struct inode *new_orphan_inode = NULL;
  struct ocfs2_lock_holder oh;

@@ -4258,7 +4258,7 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,
  return -EOPNOTSUPP;

- error = ocfs2_create_inode_in_orphan(dir, inode->i_mode,
+ error = ocfs2_create_inode_in_orphan(dir, &dir_bh, inode->i_mode,
  if (error) {
@@ -4304,13 +4304,15 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,

  /* If the security isn't preserved, we need to re-initialize them. */
  if (!preserve) {
- error = ocfs2_init_security_and_acl(dir, new_orphan_inode,
+ error = ocfs2_init_security_and_acl(dir, dir_bh,
+    new_orphan_inode,
  if (error)
  if (!error) {
- error = ocfs2_mv_orphaned_inode_to_new(dir, new_orphan_inode,
+ error = ocfs2_mv_orphaned_inode_to_new(dir, dir_bh,
+       new_orphan_inode,
  if (error)
@@ -4328,6 +4330,11 @@ static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,

+ if (dir_bh) {
+ ocfs2_inode_unlock(dir, 1);
+ brelse(dir_bh);
+ }
  return error;

diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
index dd784eb0cd7c..3f23e3a5018c 100644
--- a/fs/ocfs2/xattr.c
+++ b/fs/ocfs2/xattr.c
@@ -7203,16 +7203,13 @@ int ocfs2_reflink_xattrs(struct inode *old_inode,
  * Initialize security and acl for a already created inode.
  * Used for reflink a non-preserve-security file.
- *
- * It uses common api like ocfs2_xattr_set, so the caller
- * must not hold any lock expect i_mutex.
 int ocfs2_init_security_and_acl(struct inode *dir,
+ struct buffer_head *dir_bh,
  struct inode *inode,
  const struct qstr *qstr)
  int ret = 0;
- struct buffer_head *dir_bh = NULL;

  ret = ocfs2_init_security_get(inode, dir, qstr, NULL);
  if (ret) {
@@ -7220,17 +7217,10 @@ int ocfs2_init_security_and_acl(struct inode *dir,
  goto leave;

- ret = ocfs2_inode_lock(dir, &dir_bh, 0);
- if (ret) {
- mlog_errno(ret);
- goto leave;
- }
  ret = ocfs2_init_acl(NULL, inode, dir, NULL, dir_bh, NULL, NULL);
  if (ret)

- ocfs2_inode_unlock(dir, 0);
- brelse(dir_bh);
  return ret;
diff --git a/fs/ocfs2/xattr.h b/fs/ocfs2/xattr.h
index 00308b57f64f..b27fd8ba0019 100644
--- a/fs/ocfs2/xattr.h
+++ b/fs/ocfs2/xattr.h
@@ -83,6 +83,7 @@ int ocfs2_reflink_xattrs(struct inode *old_inode,
  struct buffer_head *new_bh,
  bool preserve_security);
 int ocfs2_init_security_and_acl(struct inode *dir,
+ struct buffer_head *dir_bh,
  struct inode *inode,
  const struct qstr *qstr);
 #endif /* OCFS2_XATTR_H */

Ocfs2-devel mailing list
