ceph-devel.vger.kernel.org archive mirror
* [PATCH] ceph: fix incorrectly counting the export targets
@ 2021-08-30 12:33 xiubli
  2021-08-30 12:45 ` Jeff Layton
  0 siblings, 1 reply; 4+ messages in thread
From: xiubli @ 2021-08-30 12:33 UTC (permalink / raw)
  To: jlayton; +Cc: idryomov, ukernel, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/mds_client.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 7ddc36c14b92..aa0ab069db40 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -4434,7 +4434,7 @@ static void check_new_map(struct ceph_mds_client *mdsc,
 			  struct ceph_mdsmap *newmap,
 			  struct ceph_mdsmap *oldmap)
 {
-	int i, err;
+	int i, j, err;
 	int oldstate, newstate;
 	struct ceph_mds_session *s;
 	unsigned long targets[DIV_ROUND_UP(CEPH_MAX_MDS, sizeof(unsigned long))] = {0};
@@ -4443,8 +4443,10 @@ static void check_new_map(struct ceph_mds_client *mdsc,
 	     newmap->m_epoch, oldmap->m_epoch);
 
 	if (newmap->m_info) {
-		for (i = 0; i < newmap->m_info->num_export_targets; i++)
-			set_bit(newmap->m_info->export_targets[i], targets);
+		for (i = 0; i < newmap->m_num_active_mds; i++) {
+			for (j = 0; j < newmap->m_info[i].num_export_targets; j++)
+				set_bit(newmap->m_info[i].export_targets[j], targets);
+		}
 	}
 
 	for (i = 0; i < oldmap->possible_max_rank && i < mdsc->max_sessions; i++) {
-- 
2.27.0
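
To make the change above easier to follow, here is a minimal userspace
sketch (the structs are simplified stand-ins for the real ceph_mdsmap /
mds_info definitions, not the kernel code itself): the old loop only ever
reads 'newmap->m_info->...', i.e. rank 0's entry, while the patched nested
loop collects the export targets of every active mds.

/*
 * Minimal userspace sketch, not kernel code: the structs below are
 * simplified stand-ins that only model the fields the diff touches.
 */
#include <stdio.h>

#define MAX_MDS 16

struct mds_info {
	int num_export_targets;
	int export_targets[MAX_MDS];
};

struct mdsmap {
	int m_num_active_mds;
	struct mds_info m_info[MAX_MDS];
};

int main(void)
{
	/* two active MDSes, each exporting to a different target rank */
	struct mdsmap map = {
		.m_num_active_mds = 2,
		.m_info = {
			{ .num_export_targets = 1, .export_targets = { 1 } },
			{ .num_export_targets = 1, .export_targets = { 2 } },
		},
	};
	unsigned long old_targets = 0, new_targets = 0;
	int i, j;

	/* old loop: m_info->... only ever dereferences m_info[0] */
	for (i = 0; i < map.m_info->num_export_targets; i++)
		old_targets |= 1UL << map.m_info->export_targets[i];

	/* patched loop: walk every active mds' info */
	for (i = 0; i < map.m_num_active_mds; i++)
		for (j = 0; j < map.m_info[i].num_export_targets; j++)
			new_targets |= 1UL << map.m_info[i].export_targets[j];

	/* prints "old 0x2 new 0x6": rank 2's target was missed before */
	printf("old 0x%lx new 0x%lx\n", old_targets, new_targets);
	return 0;
}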



* Re: [PATCH] ceph: fix incorrectly counting the export targets
  2021-08-30 12:33 [PATCH] ceph: fix incorrectly counting the export targets xiubli
@ 2021-08-30 12:45 ` Jeff Layton
  2021-08-30 12:48   ` Xiubo Li
  2021-08-30 12:53   ` Xiubo Li
  0 siblings, 2 replies; 4+ messages in thread
From: Jeff Layton @ 2021-08-30 12:45 UTC (permalink / raw)
  To: xiubli; +Cc: idryomov, ukernel, pdonnell, ceph-devel

On Mon, 2021-08-30 at 20:33 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/mds_client.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 7ddc36c14b92..aa0ab069db40 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -4434,7 +4434,7 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>  			  struct ceph_mdsmap *newmap,
>  			  struct ceph_mdsmap *oldmap)
>  {
> -	int i, err;
> +	int i, j, err;
>  	int oldstate, newstate;
>  	struct ceph_mds_session *s;
>  	unsigned long targets[DIV_ROUND_UP(CEPH_MAX_MDS, sizeof(unsigned long))] = {0};
> @@ -4443,8 +4443,10 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>  	     newmap->m_epoch, oldmap->m_epoch);
>  
>  	if (newmap->m_info) {
> -		for (i = 0; i < newmap->m_info->num_export_targets; i++)
> -			set_bit(newmap->m_info->export_targets[i], targets);
> +		for (i = 0; i < newmap->m_num_active_mds; i++) {
> +			for (j = 0; j < newmap->m_info[i].num_export_targets; j++)
> +				set_bit(newmap->m_info[i].export_targets[j], targets);
> +		}
>  	}
>  
>  	for (i = 0; i < oldmap->possible_max_rank && i < mdsc->max_sessions; i++) {

Looks sane. I'll plan to fold this into "ceph: reconnect to the export
targets on new mdsmaps".

Thanks,
-- 
Jeff Layton <jlayton@kernel.org>



* Re: [PATCH] ceph: fix incorrectly counting the export targets
  2021-08-30 12:45 ` Jeff Layton
@ 2021-08-30 12:48   ` Xiubo Li
  2021-08-30 12:53   ` Xiubo Li
  1 sibling, 0 replies; 4+ messages in thread
From: Xiubo Li @ 2021-08-30 12:48 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, ukernel, pdonnell, ceph-devel


On 8/30/21 8:45 PM, Jeff Layton wrote:
> On Mon, 2021-08-30 at 20:33 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/mds_client.c | 8 +++++---
>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index 7ddc36c14b92..aa0ab069db40 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -4434,7 +4434,7 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>>   			  struct ceph_mdsmap *newmap,
>>   			  struct ceph_mdsmap *oldmap)
>>   {
>> -	int i, err;
>> +	int i, j, err;
>>   	int oldstate, newstate;
>>   	struct ceph_mds_session *s;
>>   	unsigned long targets[DIV_ROUND_UP(CEPH_MAX_MDS, sizeof(unsigned long))] = {0};
>> @@ -4443,8 +4443,10 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>>   	     newmap->m_epoch, oldmap->m_epoch);
>>   
>>   	if (newmap->m_info) {
>> -		for (i = 0; i < newmap->m_info->num_export_targets; i++)
>> -			set_bit(newmap->m_info->export_targets[i], targets);
>> +		for (i = 0; i < newmap->m_num_active_mds; i++) {
>> +			for (j = 0; j < newmap->m_info[i].num_export_targets; j++)
>> +				set_bit(newmap->m_info[i].export_targets[j], targets);
>> +		}
>>   	}
>>   
>>   	for (i = 0; i < oldmap->possible_max_rank && i < mdsc->max_sessions; i++) {
> Looks sane. I'll plan to fold this into "ceph: reconnect to the export
> targets on new mdsmaps".

Wait, I have already sent a v2 with the following fix:

- s/m_num_active_mds/possible_max_rank/

Thanks.
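
For reference, the hunk presumably ends up looking like this after that
substitution (a sketch only, not the actual v2 posting; everything other
than the loop bound is assumed unchanged):

	if (newmap->m_info) {
		for (i = 0; i < newmap->possible_max_rank; i++) {
			for (j = 0; j < newmap->m_info[i].num_export_targets; j++)
				set_bit(newmap->m_info[i].export_targets[j], targets);
		}
	}

presumably so the walk is bounded by the number of possible ranks rather
than by the count of active MDSes.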


>
> Thanks,



* Re: [PATCH] ceph: fix incorrectly counting the export targets
  2021-08-30 12:45 ` Jeff Layton
  2021-08-30 12:48   ` Xiubo Li
@ 2021-08-30 12:53   ` Xiubo Li
  1 sibling, 0 replies; 4+ messages in thread
From: Xiubo Li @ 2021-08-30 12:53 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, ukernel, pdonnell, ceph-devel


On 8/30/21 8:45 PM, Jeff Layton wrote:
> On Mon, 2021-08-30 at 20:33 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/mds_client.c | 8 +++++---
>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index 7ddc36c14b92..aa0ab069db40 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -4434,7 +4434,7 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>>   			  struct ceph_mdsmap *newmap,
>>   			  struct ceph_mdsmap *oldmap)
>>   {
>> -	int i, err;
>> +	int i, j, err;
>>   	int oldstate, newstate;
>>   	struct ceph_mds_session *s;
>>   	unsigned long targets[DIV_ROUND_UP(CEPH_MAX_MDS, sizeof(unsigned long))] = {0};
>> @@ -4443,8 +4443,10 @@ static void check_new_map(struct ceph_mds_client *mdsc,
>>   	     newmap->m_epoch, oldmap->m_epoch);
>>   
>>   	if (newmap->m_info) {
>> -		for (i = 0; i < newmap->m_info->num_export_targets; i++)
>> -			set_bit(newmap->m_info->export_targets[i], targets);
>> +		for (i = 0; i < newmap->m_num_active_mds; i++) {
>> +			for (j = 0; j < newmap->m_info[i].num_export_targets; j++)
>> +				set_bit(newmap->m_info[i].export_targets[j], targets);
>> +		}
>>   	}
>>   
>>   	for (i = 0; i < oldmap->possible_max_rank && i < mdsc->max_sessions; i++) {
> Looks sane. I'll plan to fold this into "ceph: reconnect to the export
> targets on new mdsmaps".

The reason my previous test worked well is probably that there was only one
active mds. In that case 'newmap->m_info->export_targets' is always equal to
'newmap->m_info[0].export_targets', so I am still not very sure why the old
code could cause a crash.
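
A trivial standalone sketch of that equivalence (simplified stand-in
struct, not the real ceph one):

#include <assert.h>

struct mds_info { int num_export_targets; };

int main(void)
{
	struct mds_info m_info[3] = { { 2 }, { 5 }, { 7 } };
	struct mds_info *p = m_info;	/* like newmap->m_info */

	/* the pointer form only ever reads element 0 */
	assert(p->num_export_targets == m_info[0].num_export_targets);
	return 0;
}

With a single active mds, both forms therefore read the same entry.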


> Thanks,



