From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Teigland
Date: Mon, 16 Aug 2021 09:50:24 -0500
Subject: [Cluster-devel] Why does dlm_lock function fails when downconvert a dlm lock?
In-Reply-To: <20210816144118.GB23630@redhat.com>
References: <74009531-f6ef-4ef9-b969-353684006ddc@suse.com>
 <20210812174523.GC1757@redhat.com>
 <20210816144118.GB23630@redhat.com>
Message-ID: <20210816145024.GA23980@redhat.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Mon, Aug 16, 2021 at 09:41:18AM -0500, David Teigland wrote:
> On Fri, Aug 13, 2021 at 02:49:04PM +0800, Gang He wrote:
> > Hi David,
> >
> > On 2021/8/13 1:45, David Teigland wrote:
> > > On Thu, Aug 12, 2021 at 01:44:53PM +0800, Gang He wrote:
> > > > In fact, I can reproduce this problem reliably.
> > > > I want to know whether this error is expected, since there is no
> > > > extreme pressure test involved.
> > > > Second, how should we handle these error cases? Call dlm_lock
> > > > again? The function may fail again, which could lead to a kernel
> > > > soft lockup after multiple retries.
> > >
> > > What's probably happening is that ocfs2 calls dlm_unlock(CANCEL) to cancel
> > > an in-progress dlm_lock() request. Before the cancel completes (or the
> > > original request completes), ocfs2 calls dlm_lock() again on the same
> > > resource. This dlm_lock() returns -EBUSY because the previous request has
> > > not completed, either normally or by cancellation. This is expected.
> >
> > Are these dlm_lock and dlm_unlock calls invoked on the same node, or on
> > different nodes?
>
> different

Sorry, same node.