* [PATCH v2] ceph: switch to VM_BUG_ON_FOLIO and continue the loop for non-write op
@ 2022-05-05 12:47 Xiubo Li
  2022-05-05 13:25 ` Jeff Layton
  0 siblings, 1 reply; 3+ messages in thread
From: Xiubo Li @ 2022-05-05 12:47 UTC (permalink / raw)
  To: jlayton; +Cc: idryomov, vshankar, ceph-devel, Xiubo Li

Use the VM_BUG_ON_FOLIO macro to get more information when we hit
the BUG_ON, and continue the loop instead of breaking out when seeing
an unexpected non-write opcode in writepages_finish().

URL: https://tracker.ceph.com/issues/55421
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/addr.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e52b62407b10..d4bcef1d9549 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 	 * Reference snap context in folio->private.  Also set
 	 * PagePrivate so that we get invalidate_folio callback.
 	 */
-	BUG_ON(folio_get_private(folio));
+	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
 	folio_attach_private(folio, snapc);
 
 	return ceph_fscache_dirty_folio(mapping, folio);
@@ -733,8 +733,11 @@ static void writepages_finish(struct ceph_osd_request *req)
 
 	/* clean all pages */
 	for (i = 0; i < req->r_num_ops; i++) {
-		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE)
-			break;
+		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE) {
+			pr_warn("%s incorrect op %d req %p index %d tid %llu\n",
+				__func__, req->r_ops[i].op, req, i, req->r_tid);
+			continue;
+		}
 
 		osd_data = osd_req_op_extent_osd_data(req, i);
 		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
-- 
2.36.0.rc1


^ permalink raw reply related	[flat|nested] 3+ messages in thread
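
A note on the first hunk: BUG_ON() fires unconditionally, whereas
VM_BUG_ON_FOLIO() is only compiled in when CONFIG_DEBUG_VM is set, and when
it trips it dumps the folio state before panicking. A rough sketch of the
macro's shape, approximated from include/linux/mmdebug.h (not the verbatim
definition; details vary by kernel version):

	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON_FOLIO(cond, folio)				\
		do {							\
			if (unlikely(cond)) {				\
				/* dump folio state before panicking */	\
				dump_page(&folio->page,			\
					  "VM_BUG_ON_FOLIO(" __stringify(cond) ")"); \
				BUG();					\
			}						\
		} while (0)
	#else
	/* with CONFIG_DEBUG_VM disabled, the check compiles away entirely */
	#define VM_BUG_ON_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
	#endif

So the switch both adds folio diagnostics on debug builds and drops the
check from non-debug builds.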

* Re: [PATCH v2] ceph: switch to VM_BUG_ON_FOLIO and continue the loop for non-write op
  2022-05-05 12:47 [PATCH v2] ceph: switch to VM_BUG_ON_FOLIO and continue the loop for non-write op Xiubo Li
@ 2022-05-05 13:25 ` Jeff Layton
  2022-05-06  0:03   ` Xiubo Li
  0 siblings, 1 reply; 3+ messages in thread
From: Jeff Layton @ 2022-05-05 13:25 UTC (permalink / raw)
  To: Xiubo Li; +Cc: idryomov, vshankar, ceph-devel

On Thu, 2022-05-05 at 20:47 +0800, Xiubo Li wrote:
> Use the VM_BUG_ON_FOLIO macro to get more information when we hit
> the BUG_ON, and continue the loop instead of breaking out when seeing
> an unexpected non-write opcode in writepages_finish().
> 
> URL: https://tracker.ceph.com/issues/55421
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/addr.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index e52b62407b10..d4bcef1d9549 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>  	 * Reference snap context in folio->private.  Also set
>  	 * PagePrivate so that we get invalidate_folio callback.
>  	 */
> -	BUG_ON(folio_get_private(folio));
> +	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
>  	folio_attach_private(folio, snapc);
>  
>  	return ceph_fscache_dirty_folio(mapping, folio);
> @@ -733,8 +733,11 @@ static void writepages_finish(struct ceph_osd_request *req)
>  
>  	/* clean all pages */
>  	for (i = 0; i < req->r_num_ops; i++) {
> -		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE)
> -			break;
> +		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE) {
> +			pr_warn("%s incorrect op %d req %p index %d tid %llu\n",
> +				__func__, req->r_ops[i].op, req, i, req->r_tid);
> +			continue;

A break here is probably fine. We don't expect to see any non-OP_WRITE
ops in here, so if we do, I don't see the point in continuing.

> +		}
>  
>  		osd_data = osd_req_op_extent_osd_data(req, i);
>  		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);

Otherwise looks fine.
-- 
Jeff Layton <jlayton@kernel.org>

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH v2] ceph: switch to VM_BUG_ON_FOLIO and continue the loop for non-write op
  2022-05-05 13:25 ` Jeff Layton
@ 2022-05-06  0:03   ` Xiubo Li
  0 siblings, 0 replies; 3+ messages in thread
From: Xiubo Li @ 2022-05-06  0:03 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, vshankar, ceph-devel


On 5/5/22 9:25 PM, Jeff Layton wrote:
> On Thu, 2022-05-05 at 20:47 +0800, Xiubo Li wrote:
>> Use the VM_BUG_ON_FOLIO macro to get more information when we hit
>> the BUG_ON, and continue the loop instead of breaking out when seeing
>> an unexpected non-write opcode in writepages_finish().
>>
>> URL: https://tracker.ceph.com/issues/55421
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/addr.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>> index e52b62407b10..d4bcef1d9549 100644
>> --- a/fs/ceph/addr.c
>> +++ b/fs/ceph/addr.c
>> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>>   	 * Reference snap context in folio->private.  Also set
>>   	 * PagePrivate so that we get invalidate_folio callback.
>>   	 */
>> -	BUG_ON(folio_get_private(folio));
>> +	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
>>   	folio_attach_private(folio, snapc);
>>   
>>   	return ceph_fscache_dirty_folio(mapping, folio);
>> @@ -733,8 +733,11 @@ static void writepages_finish(struct ceph_osd_request *req)
>>   
>>   	/* clean all pages */
>>   	for (i = 0; i < req->r_num_ops; i++) {
>> -		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE)
>> -			break;
>> +		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE) {
>> +			pr_warn("%s incorrect op %d req %p index %d tid %llu\n",
>> +				__func__, req->r_ops[i].op, req, i, req->r_tid);
>> +			continue;
> A break here is probably fine. We don't expect to see any non-OP_WRITE
> ops in here, so if we do, I don't see the point in continuing.
>
I am afraid that if the non-OP_WRITE op sits in the middle of r_ops[],
then with a break we would skip the cleanup for the following OP_WRITE
ops, and leave some pages with the dirty bit cleared but with their
private data still attached?

As for this check, I couldn't see in which case a non-OP_WRITE op would
show up here at all, unless the request is corrupted.

-- Xiubo

>> +		}
>>   
>>   		osd_data = osd_req_op_extent_osd_data(req, i);
>>   		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
> Otherwise looks fine.


^ permalink raw reply	[flat|nested] 3+ messages in thread
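
To illustrate the concern above: each iteration of the cleanup loop in
writepages_finish() releases the pages that were attached to one write op,
so breaking out on a bogus op in the middle of r_ops[] would leave the pages
of every later write op untouched. A simplified sketch (not the exact
fs/ceph/addr.c code; the per-page cleanup is abbreviated):

	for (i = 0; i < req->r_num_ops; i++) {
		if (req->r_ops[i].op != CEPH_OSD_OP_WRITE)
			continue;	/* a break here would skip cleanup for ops i+1..n-1 */

		osd_data = osd_req_op_extent_osd_data(req, i);
		num_pages = calc_pages_for(osd_data->alignment, osd_data->length);
		for (j = 0; j < num_pages; j++) {
			struct page *page = osd_data->pages[j];

			/* drop the snap context attached in ceph_dirty_folio() */
			ceph_put_snap_context(detach_page_private(page));
			end_page_writeback(page);
			unlock_page(page);
		}
	}

With continue, only the unexpected op itself is skipped, so the remaining
write ops still get their snap context detached and their writeback ended.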
