linux-cifs.vger.kernel.org archive mirror
From: Rohith Surabattula <rohiths.msft@gmail.com>
To: Tom Talpey <tom@talpey.com>
Cc: "Shyam Prasad N" <nspmangalore@gmail.com>,
	linux-cifs <linux-cifs@vger.kernel.org>,
	"Steve French" <smfrench@gmail.com>,
	"Pavel Shilovsky" <piastryyy@gmail.com>,
	sribhat.msa@outlook.com,
	"ronnie sahlberg" <ronniesahlberg@gmail.com>,
	"Aurélien Aptel" <aaptel@suse.com>
Subject: Re: cifs: Deferred close for files
Date: Sun, 11 Apr 2021 17:49:34 +0530	[thread overview]
Message-ID: <CACdtm0YhpOW+7-ELLOk97utOikU-06Zccwq-2HK+h3aFumnedg@mail.gmail.com> (raw)
In-Reply-To: <CACdtm0ZYHjHXt-n8xnRQgZ+ZdLw4XVdb_yCK3Q_QSG4SrK6_Lw@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 10689 bytes --]

Hi All,

I have updated the patch with some optimizations.

Patch v3:
Moved the deferred-close file list and its lock from the tcon to the inode structure.

Regards,
Rohith

On Wed, Apr 7, 2021 at 8:27 PM Rohith Surabattula
<rohiths.msft@gmail.com> wrote:
>
> Hi All,
>
> I have updated the patch.
>
> Patch v2:
> 1) Close the file immediately during unlink of the file.
> 2) Update the reference count properly when deferred work is already scheduled.
>
> Regards,
> Rohith
>
> On Thu, Mar 25, 2021 at 8:12 AM Rohith Surabattula
> <rohiths.msft@gmail.com> wrote:
> >
> > >>> Hi Rohith,
> > >>>
> > >>> The changes look good at a high level.
> > >>>
> > >>> Just a few points worth checking:
> > >>> 1. In cifs_open(), should we perform deferred close for certain cases
> > >>> like O_DIRECT? AFAIK, O_DIRECT is just a hint to the filesystem to
> > >>> perform less data caching. By deferring close, aren't we delaying
> > >>> flushing dirty pages? @Steve French ?
> > >>
> > >> With the O_DIRECT flag, data is not cached locally and is sent to
> > >> the server immediately.
> > >>
> > >>> 2. I see that you're maintaining a new list of files for deferred
> > >>> closing. Since there could be a large number of such files for a big
> > >>> share with sufficient I/O, maybe we should think of a structure with
> > >>> faster lookups (rb trees?).
> > >>> I know we already have a bunch of linked lists in cifs.ko, and we need
> > >>> to review perf impact for all those lists. But this one sounds like a
> > >>> candidate for faster lookups.
> > >>
> > >> Entries are added to this list only once a file is closed, and
> > >> they remain only for the acregmax interval.
> >
> > >I think you mean once the "file descriptor" is closed, right? But now
> > >that you mention it, caching the attributes sounds a lot like the
> > >NFS close-to-open semantic, which is itself optional with the "nocto"
> > >mount option.
> >
> > >Because some applications use close() to perform flush, there may be
> > >some silent side effects as well. I don't see anything special in the
> > >patch regarding this. Have you considered it?
> > >The change to promptly close the handle on oplock or lease break looks
> > >reasonable. The rewording in the patch description is better too.
> >
> > Yes, as the handle is closed immediately when the oplock/lease breaks,
> > will there still be any silent side effects?
> >
> > >>> What happens if the handle is durable or persistent, and the connection
> > >>> to the server times out? Is the handle recovered, then closed?
> > >>
> > >> Do you mean when the session gets reconnected, the deferred handle
> > >> will be recovered and closed?
> >
> > >Yes, because I expect the client to attempt to reclaim its handles upon
> > >reconnection. I don't see any particular code change regarding this.
> >
> > >My question is, if there's a deferred close pending, should it do that,
> > >or should it simply cut to the chase and close any such handle(s)?
> >
> > As the handles are persistent, applications can reclaim the handle
> > once the session gets reconnected. So, I believe there is no need to
> > close handles immediately before the timeout (i.e. the acregmax interval).
> >
> > Regards,
> > Rohith
> >
> > On Wed, Mar 24, 2021 at 7:50 PM Tom Talpey <tom@talpey.com> wrote:
> > >
> > > On 3/22/2021 1:07 PM, Rohith Surabattula wrote:
> > > > On 3/11/2021 8:47 AM, Shyam Prasad N wrote:
> > > >> Hi Rohith,
> > > >>
> > > >> The changes look good at a high level.
> > > >>
> > > >> Just a few points worth checking:
> > > >> 1. In cifs_open(), should we perform deferred close for certain cases
> > > >> like O_DIRECT? AFAIK, O_DIRECT is just a hint to the filesystem to
> > > >> perform less data caching. By deferring close, aren't we delaying
> > > >> flushing dirty pages? @Steve French ?
> > > >
> > > > With the O_DIRECT flag, data is not cached locally and is sent to
> > > > the server immediately.
> > > >
> > > >> 2. I see that you're maintaining a new list of files for deferred
> > > >> closing. Since there could be a large number of such files for a big
> > > >> share with sufficient I/O, maybe we should think of a structure with
> > > >> faster lookups (rb trees?).
> > > >> I know we already have a bunch of linked lists in cifs.ko, and we need
> > > >> to review perf impact for all those lists. But this one sounds like a
> > > >> candidate for faster lookups.
> > > >
> > > > Entries are added to this list only once a file is closed, and
> > > > they remain only for the acregmax interval.
> > >
> > > I think you mean once the "file descriptor" is closed, right? But now
> > > that you mention it, caching the attributes sounds a lot like the
> > > NFS close-to-open semantic, which is itself optional with the "nocto"
> > > mount option.
> > >
> > > Because some applications use close() to perform flush, there may be
> > > some silent side effects as well. I don't see anything special in the
> > > patch regarding this. Have you considered it?
> > >
> > > > So, will this affect performance, i.e. during lookups?
> > > >
> > > >> 3. Minor comment. Maybe change the field name oplock_deferred_close to
> > > >> oplock_break_received?
> > > >
> > > > Addressed. Attached the patch.
> > > >>
> > > >> Regards,
> > > >> Shyam
> > > >
> > > >> I wonder why the choice of 5 seconds? It seems to me a more natural
> > > >> interval on the order of one or more of
> > > >> - attribute cache timeout
> > > >> - lease time
> > > >> - echo_interval
> > > >
> > > > Yes, this sounds good. I modified the patch to defer by the
> > > > attribute cache timeout interval.
> > > >
> > > >> Also, this wording is rather confusing:
> > > >
> > > >>> When file is closed, SMB2 close request is not sent to server
> > > >>> immediately and is deferred for 5 seconds interval. When file is
> > > >>> reopened by same process for read or write, previous file handle
> > > >>> can be used until oplock is held.
> > > >
> > > >> It's not a "previous" filehandle if it's open, and "can be used" is
> > > >> a rather passive statement. A better wording may be "the filehandle
> > > >> is reused".
> > > >
> > > >> And, "until oplock is held" is similarly awkward. Do you mean "*if*
> > > >> an oplock is held", or "*provided* an oplock is held"?
> > > >
> > > >>> When same file is reopened by another client during 5 second
> > > >>> interval, oplock break is sent by server and file is closed immediately
> > > >>> if reference count is zero or else oplock is downgraded.
> > > >
> > > >> Only the second part of the sentence is relevant to the patch. Also,
> > > >> what about lease break?
> > > >
> > > > Modified the patch.
> > >
> > > The change to promptly close the handle on oplock or lease break looks
> > > reasonable. The rewording in the patch description is better too.
> > >
> > > >> What happens if the handle is durable or persistent, and the connection
> > > >> to the server times out? Is the handle recovered, then closed?
> > > >
> > > > Do you mean when the session gets reconnected, the deferred handle
> > > > will be recovered and closed?
> > >
> > > Yes, because I expect the client to attempt to reclaim its handles upon
> > > reconnection. I don't see any particular code change regarding this.
> > >
> > > My question is, if there's a deferred close pending, should it do that,
> > > or should it simply cut to the chase and close any such handle(s)?
> > >
> > > Tom.
> > >
> > > > Regards,
> > > > Rohith
> > > >
> > > > On Thu, Mar 11, 2021 at 11:25 PM Tom Talpey <tom@talpey.com> wrote:
> > > >>
> > > >> On 3/11/2021 8:47 AM, Shyam Prasad N wrote:
> > > >>> Hi Rohith,
> > > >>>
> > > >>> The changes look good at a high level.
> > > >>>
> > > >>> Just a few points worth checking:
> > > >>> 1. In cifs_open(), should we perform deferred close for certain cases
> > > >>> like O_DIRECT? AFAIK, O_DIRECT is just a hint to the filesystem to
> > > >>> perform less data caching. By deferring close, aren't we delaying
> > > >>> flushing dirty pages? @Steve French ?
> > > >>> 2. I see that you're maintaining a new list of files for deferred
> > > >>> closing. Since there could be a large number of such files for a big
> > > >>> share with sufficient I/O, maybe we should think of a structure with
> > > >>> faster lookups (rb trees?).
> > > >>> I know we already have a bunch of linked lists in cifs.ko, and we need
> > > >>> to review perf impact for all those lists. But this one sounds like a
> > > >>> candidate for faster lookups.
> > > >>> 3. Minor comment. Maybe change the field name oplock_deferred_close to
> > > >>> oplock_break_received?
> > > >>>
> > > >>> Regards,
> > > >>> Shyam
> > > >>
> > > >> I wonder why the choice of 5 seconds? It seems to me a more natural
> > > >> interval on the order of one or more of
> > > >> - attribute cache timeout
> > > >> - lease time
> > > >> - echo_interval
> > > >>
> > > >> Also, this wording is rather confusing:
> > > >>
> > > >>> When file is closed, SMB2 close request is not sent to server
> > > >>> immediately and is deferred for 5 seconds interval. When file is
> > > >>> reopened by same process for read or write, previous file handle
> > > >>> can be used until oplock is held.
> > > >>
> > > >> It's not a "previous" filehandle if it's open, and "can be used" is
> > > >> a rather passive statement. A better wording may be "the filehandle
> > > >> is reused".
> > > >>
> > > >> And, "until oplock is held" is similarly awkward. Do you mean "*if*
> > > >> an oplock is held", or "*provided* an oplock is held"?
> > > >>
> > > >>> When same file is reopened by another client during 5 second
> > > >>> interval, oplock break is sent by server and file is closed immediately
> > > >>> if reference count is zero or else oplock is downgraded.
> > > >>
> > > >> Only the second part of the sentence is relevant to the patch. Also,
> > > >> what about lease break?
> > > >>
> > > >> What happens if the handle is durable or persistent, and the connection
> > > >> to the server times out? Is the handle recovered, then closed?
> > > >>
> > > >> Tom.
> > > >>
> > > >>
> > > >>> On Tue, Mar 9, 2021 at 2:41 PM Rohith Surabattula
> > > >>> <rohiths.msft@gmail.com> wrote:
> > > >>>>
> > > >>>> Hi All,
> > > >>>>
> > > >>>> Please find the attached patch, which defers the close to the
> > > >>>> server so that performance can be improved.
> > > >>>>
> > > >>>> i.e. when the I/O pattern is open, write, close, open, read, close, ...
> > > >>>> As the close is deferred and the oplock is held, the cache is not
> > > >>>> invalidated and the same handle can be used for the second open.
> > > >>>>
> > > >>>> Please review the changes and let me know your thoughts.
> > > >>>>
> > > >>>> Regards,
> > > >>>> Rohith
> > > >>>
> > > >>>
> > > >>>

[-- Attachment #2: 0001-cifs-Deferred-close-for-files.patch --]
[-- Type: application/octet-stream, Size: 13616 bytes --]

From fedf687b27f1c80156d5983eb6a618d1aa441f34 Mon Sep 17 00:00:00 2001
From: Rohith Surabattula <rohiths@microsoft.com>
Date: Mon, 8 Mar 2021 16:28:09 +0000
Subject: [PATCH] cifs: Deferred close for files

When a file is closed, the SMB2 close request is not sent to the server
immediately; it is deferred for the interval defined by acregmax. When
the file is reopened by the same process for read or write, the file
handle is reused if an oplock is held.

When the client receives an oplock/lease break, the file is closed
immediately if the reference count is zero; otherwise the oplock is
downgraded.

Signed-off-by: Rohith Surabattula <rohiths@microsoft.com>
---
 fs/cifs/cifsfs.c    | 15 +++++++-
 fs/cifs/cifsglob.h  | 16 +++++++++
 fs/cifs/cifsproto.h | 11 ++++++
 fs/cifs/file.c      | 84 +++++++++++++++++++++++++++++++++++++++++++--
 fs/cifs/inode.c     |  1 +
 fs/cifs/misc.c      | 62 +++++++++++++++++++++++++++++++++
 6 files changed, 186 insertions(+), 3 deletions(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 099ad9f3660b..35cf51810cbf 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -133,6 +133,7 @@ struct workqueue_struct	*cifsiod_wq;
 struct workqueue_struct	*decrypt_wq;
 struct workqueue_struct	*fileinfo_put_wq;
 struct workqueue_struct	*cifsoplockd_wq;
+struct workqueue_struct *deferredclose_wq;
 __u32 cifs_lock_secret;
 
 /*
@@ -364,6 +365,8 @@ cifs_alloc_inode(struct super_block *sb)
 	/* cifs_inode->vfs_inode.i_flags = S_NOATIME | S_NOCMTIME; */
 	INIT_LIST_HEAD(&cifs_inode->openFileList);
 	INIT_LIST_HEAD(&cifs_inode->llist);
+	INIT_LIST_HEAD(&cifs_inode->deferred_closes);
+	spin_lock_init(&cifs_inode->deferred_lock);
 	return &cifs_inode->vfs_inode;
 }
 
@@ -1605,9 +1608,16 @@ init_cifs(void)
 		goto out_destroy_fileinfo_put_wq;
 	}
 
+	deferredclose_wq = alloc_workqueue("deferredclose",
+					   WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+	if (!deferredclose_wq) {
+		rc = -ENOMEM;
+		goto out_destroy_cifsoplockd_wq;
+	}
+
 	rc = cifs_fscache_register();
 	if (rc)
-		goto out_destroy_cifsoplockd_wq;
+		goto out_destroy_deferredclose_wq;
 
 	rc = cifs_init_inodecache();
 	if (rc)
@@ -1675,6 +1685,8 @@ init_cifs(void)
 	cifs_destroy_inodecache();
 out_unreg_fscache:
 	cifs_fscache_unregister();
+out_destroy_deferredclose_wq:
+	destroy_workqueue(deferredclose_wq);
 out_destroy_cifsoplockd_wq:
 	destroy_workqueue(cifsoplockd_wq);
 out_destroy_fileinfo_put_wq:
@@ -1709,6 +1721,7 @@ exit_cifs(void)
 	cifs_destroy_mids();
 	cifs_destroy_inodecache();
 	cifs_fscache_unregister();
+	destroy_workqueue(deferredclose_wq);
 	destroy_workqueue(cifsoplockd_wq);
 	destroy_workqueue(decrypt_wq);
 	destroy_workqueue(fileinfo_put_wq);
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 31fc8695abd6..a6c6c7d257eb 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -1154,6 +1154,14 @@ struct cifs_pending_open {
 	__u32 oplock;
 };
 
+struct cifs_deferred_close {
+	struct list_head dlist;
+	struct tcon_link *tlink;
+	__u16  netfid;
+	__u64  persistent_fid;
+	__u64  volatile_fid;
+};
+
 /*
  * This info hangs off the cifsFileInfo structure, pointed to by llist.
  * This is used to track byte stream locks on the file
@@ -1248,6 +1256,9 @@ struct cifsFileInfo {
 	struct cifs_search_info srch_inf;
 	struct work_struct oplock_break; /* work for oplock breaks */
 	struct work_struct put; /* work for the final part of _put */
+	struct delayed_work deferred;
+	bool oplock_break_received; /* Flag to indicate oplock break */
+	bool deferred_scheduled;
 };
 
 struct cifs_io_parms {
@@ -1396,6 +1407,7 @@ struct cifsInodeInfo {
 #define CIFS_INO_DELETE_PENDING		  (3) /* delete pending on server */
 #define CIFS_INO_INVALID_MAPPING	  (4) /* pagecache is invalid */
 #define CIFS_INO_LOCK			  (5) /* lock bit for synchronization */
+#define CIFS_INO_MODIFIED_ATTR            (6) /* Indicate change in mtime/ctime */
 	unsigned long flags;
 	spinlock_t writers_lock;
 	unsigned int writers;		/* Number of writers on this inode */
@@ -1408,6 +1420,8 @@ struct cifsInodeInfo {
 	struct fscache_cookie *fscache;
 #endif
 	struct inode vfs_inode;
+	struct list_head deferred_closes; /* list of deferred closes */
+	spinlock_t deferred_lock; /* protection on deferred list */
 };
 
 static inline struct cifsInodeInfo *
@@ -1892,12 +1906,14 @@ GLOBAL_EXTERN spinlock_t gidsidlock;
 
 void cifs_oplock_break(struct work_struct *work);
 void cifs_queue_oplock_break(struct cifsFileInfo *cfile);
+void smb2_deferred_work_close(struct work_struct *work);
 
 extern const struct slow_work_ops cifs_oplock_break_ops;
 extern struct workqueue_struct *cifsiod_wq;
 extern struct workqueue_struct *decrypt_wq;
 extern struct workqueue_struct *fileinfo_put_wq;
 extern struct workqueue_struct *cifsoplockd_wq;
+extern struct workqueue_struct *deferredclose_wq;
 extern __u32 cifs_lock_secret;
 
 extern mempool_t *cifs_mid_poolp;
diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
index 75ce6f742b8d..e2e56e1f66f5 100644
--- a/fs/cifs/cifsproto.h
+++ b/fs/cifs/cifsproto.h
@@ -256,6 +256,17 @@ extern void cifs_add_pending_open_locked(struct cifs_fid *fid,
 					 struct tcon_link *tlink,
 					 struct cifs_pending_open *open);
 extern void cifs_del_pending_open(struct cifs_pending_open *open);
+
+extern bool cifs_is_deferred_close(struct cifsFileInfo *cfile,
+				struct cifs_deferred_close **dclose);
+
+extern void cifs_add_deferred_close(struct cifsFileInfo *cfile,
+				struct cifs_deferred_close *dclose);
+
+extern void cifs_del_deferred_close(struct cifsFileInfo *cfile);
+
+extern void cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode);
+
 extern struct TCP_Server_Info *cifs_get_tcp_session(struct smb3_fs_context *ctx);
 extern void cifs_put_tcp_session(struct TCP_Server_Info *server,
 				 int from_reconnect);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 26de4329d161..744cce9357ba 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -321,9 +321,12 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
 	cfile->dentry = dget(dentry);
 	cfile->f_flags = file->f_flags;
 	cfile->invalidHandle = false;
+	cfile->oplock_break_received = false;
+	cfile->deferred_scheduled = false;
 	cfile->tlink = cifs_get_tlink(tlink);
 	INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
 	INIT_WORK(&cfile->put, cifsFileInfo_put_work);
+	INIT_DELAYED_WORK(&cfile->deferred, smb2_deferred_work_close);
 	mutex_init(&cfile->fh_mutex);
 	spin_lock_init(&cfile->file_info_lock);
 
@@ -562,6 +565,23 @@ int cifs_open(struct inode *inode, struct file *file)
 			file->f_op = &cifs_file_direct_ops;
 	}
 
+	spin_lock(&CIFS_I(inode)->deferred_lock);
+	/* Get the cached handle as SMB2 close is deferred */
+	rc = cifs_get_readable_path(tcon, full_path, &cfile);
+	if (rc == 0) {
+		if (file->f_flags == cfile->f_flags) {
+			file->private_data = cfile;
+			cifs_del_deferred_close(cfile);
+			spin_unlock(&CIFS_I(inode)->deferred_lock);
+			goto out;
+		} else {
+			spin_unlock(&CIFS_I(inode)->deferred_lock);
+			_cifsFileInfo_put(cfile, true, false);
+		}
+	} else {
+		spin_unlock(&CIFS_I(inode)->deferred_lock);
+	}
+
 	if (server->oplocks)
 		oplock = REQ_OPLOCK;
 	else
@@ -842,11 +862,52 @@ cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush)
 	return rc;
 }
 
+void smb2_deferred_work_close(struct work_struct *work)
+{
+	struct cifsFileInfo *cfile = container_of(work,
+			struct cifsFileInfo, deferred.work);
+
+	spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+	cifs_del_deferred_close(cfile);
+	cfile->deferred_scheduled = false;
+	spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+	_cifsFileInfo_put(cfile, true, false);
+}
+
 int cifs_close(struct inode *inode, struct file *file)
 {
+	struct cifsFileInfo *cfile;
+	struct cifsInodeInfo *cinode = CIFS_I(inode);
+	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+	struct cifs_deferred_close *dclose;
+
 	if (file->private_data != NULL) {
-		_cifsFileInfo_put(file->private_data, true, false);
+		cfile = file->private_data;
 		file->private_data = NULL;
+		dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL);
+		if ((cinode->oplock == CIFS_CACHE_RHW_FLG) &&
+		    dclose) {
+			if (test_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags))
+				inode->i_ctime = inode->i_mtime = current_time(inode);
+			spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+			cifs_add_deferred_close(cfile, dclose);
+			if (cfile->deferred_scheduled) {
+				mod_delayed_work(deferredclose_wq,
+						&cfile->deferred, cifs_sb->ctx->acregmax);
+			} else {
+				/* Deferred close for files */
+				queue_delayed_work(deferredclose_wq,
+						&cfile->deferred, cifs_sb->ctx->acregmax);
+				cfile->deferred_scheduled = true;
+				spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+				return 0;
+			}
+			spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+			_cifsFileInfo_put(cfile, true, false);
+		} else {
+			_cifsFileInfo_put(cfile, true, false);
+			kfree(dclose);
+		}
 	}
 
 	/* return code from the ->release op is always ignored */
@@ -1943,7 +2004,8 @@ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
 		if (fsuid_only && !uid_eq(open_file->uid, current_fsuid()))
 			continue;
 		if (OPEN_FMODE(open_file->f_flags) & FMODE_READ) {
-			if (!open_file->invalidHandle) {
+			if ((!open_file->invalidHandle) &&
+				(!open_file->oplock_break_received)) {
 				/* found a good file */
 				/* lock it so it will not be closed on us */
 				cifsFileInfo_get(open_file);
@@ -2478,6 +2540,8 @@ static int cifs_writepages(struct address_space *mapping,
 	if (cfile)
 		cifsFileInfo_put(cfile);
 	free_xid(xid);
+	/* Indication to update ctime and mtime as close is deferred */
+	set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
 	return rc;
 }
 
@@ -2586,6 +2650,8 @@ static int cifs_write_end(struct file *file, struct address_space *mapping,
 
 	unlock_page(page);
 	put_page(page);
+	/* Indication to update ctime and mtime as close is deferred */
+	set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
 
 	return rc;
 }
@@ -4746,6 +4812,8 @@ void cifs_oplock_break(struct work_struct *work)
 	struct TCP_Server_Info *server = tcon->ses->server;
 	int rc = 0;
 	bool purge_cache = false;
+	bool is_deferred = false;
+	struct cifs_deferred_close *dclose;
 
 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
 			TASK_UNINTERRUPTIBLE);
@@ -4793,6 +4861,18 @@ void cifs_oplock_break(struct work_struct *work)
 		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
 	}
 	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+	/*
+	 * When an oplock break is received and the only remaining file
+	 * handles are cached (deferred close), set oplock_break_received
+	 * so that a new open will not reuse the cached handle.
+	 */
+	spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+	is_deferred = cifs_is_deferred_close(cfile, &dclose);
+	if (is_deferred) {
+		cfile->oplock_break_received = true;
+		mod_delayed_work(deferredclose_wq, &cfile->deferred, 0);
+	}
+	spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
 	cifs_done_oplock_break(cinode);
 }
 
diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
index f2df4422e54a..ce7d7173550c 100644
--- a/fs/cifs/inode.c
+++ b/fs/cifs/inode.c
@@ -1643,6 +1643,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
 		goto unlink_out;
 	}
 
+	cifs_close_deferred_file(CIFS_I(inode));
 	if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
 				le64_to_cpu(tcon->fsUnixInfo.Capability))) {
 		rc = CIFSPOSIXDelFile(xid, tcon, full_path,
diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
index 82e176720ca6..5892748b34e6 100644
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -672,6 +672,68 @@ cifs_add_pending_open(struct cifs_fid *fid, struct tcon_link *tlink,
 	spin_unlock(&tlink_tcon(open->tlink)->open_file_lock);
 }
 
+bool
+cifs_is_deferred_close(struct cifsFileInfo *cfile, struct cifs_deferred_close **pdclose)
+{
+	struct cifs_deferred_close *dclose;
+
+	list_for_each_entry(dclose, &CIFS_I(d_inode(cfile->dentry))->deferred_closes, dlist) {
+		if ((dclose->netfid == cfile->fid.netfid) &&
+			(dclose->persistent_fid == cfile->fid.persistent_fid) &&
+			(dclose->volatile_fid == cfile->fid.volatile_fid)) {
+			*pdclose = dclose;
+			return true;
+		}
+	}
+	return false;
+}
+
+void
+cifs_add_deferred_close(struct cifsFileInfo *cfile, struct cifs_deferred_close *dclose)
+{
+	bool is_deferred = false;
+	struct cifs_deferred_close *pdclose;
+
+	is_deferred = cifs_is_deferred_close(cfile, &pdclose);
+	if (is_deferred) {
+		kfree(dclose);
+		return;
+	}
+
+	dclose->tlink = cfile->tlink;
+	dclose->netfid = cfile->fid.netfid;
+	dclose->persistent_fid = cfile->fid.persistent_fid;
+	dclose->volatile_fid = cfile->fid.volatile_fid;
+	list_add_tail(&dclose->dlist, &CIFS_I(d_inode(cfile->dentry))->deferred_closes);
+}
+
+void
+cifs_del_deferred_close(struct cifsFileInfo *cfile)
+{
+	bool is_deferred = false;
+	struct cifs_deferred_close *dclose;
+
+	is_deferred = cifs_is_deferred_close(cfile, &dclose);
+	if (!is_deferred)
+		return;
+	list_del(&dclose->dlist);
+	kfree(dclose);
+}
+
+void
+cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+{
+	struct cifsFileInfo *cfile = NULL;
+	struct cifs_deferred_close *dclose;
+
+	list_for_each_entry(cfile, &cifs_inode->openFileList, flist) {
+		spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+		if (cifs_is_deferred_close(cfile, &dclose))
+			mod_delayed_work(deferredclose_wq, &cfile->deferred, 0);
+		spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+	}
+}
+
 /* parses DFS refferal V3 structure
  * caller is responsible for freeing target_nodes
  * returns:
-- 
2.17.1



Thread overview: 16+ messages
2021-03-09  9:11 cifs: Deferred close for files Rohith Surabattula
2021-03-11 13:47 ` Shyam Prasad N
2021-03-11 17:55   ` Tom Talpey
2021-03-22 17:07     ` Rohith Surabattula
2021-03-24 14:20       ` Tom Talpey
2021-03-25  2:42         ` Rohith Surabattula
2021-04-07 14:57           ` Rohith Surabattula
2021-04-11 12:19             ` Rohith Surabattula [this message]
2021-04-11 18:49               ` Steve French
2021-04-12  3:43                 ` Rohith Surabattula
2021-04-12 16:57                   ` Aurélien Aptel
2021-04-12 17:23 ` Steve French
2021-04-12 17:24   ` Steve French
2021-04-12 19:35     ` Rohith Surabattula
2021-04-19 23:03       ` Steve French
2021-04-28  3:30         ` Rohith Surabattula
