From: Felix Kuehling <felix.kuehling@amd.com>
To: Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: alexander.deucher@amd.com, airlied@redhat.com,
	christian.koenig@amd.com, daniel.vetter@ffwll.ch
Subject: Re: [Patch v4 24/24] drm/amdkfd: CRIU resume shared virtual memory ranges
Date: Mon, 10 Jan 2022 19:03:16 -0500
Message-ID: <655d7468-bf18-498b-fc74-0a12a48ef079@amd.com>
In-Reply-To: <20211223003711.13064-25-rajneesh.bhardwaj@amd.com>

On 2021-12-22 7:37 p.m., Rajneesh Bhardwaj wrote:
> In the CRIU resume stage, resume all the shared virtual memory ranges from
> the data stored inside the resuming KFD process during the CRIU restore
> phase. Also set up the XNACK mode and free up the resources.
>
> Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
> ---
>   drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 10 +++++
>   drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 55 ++++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdkfd/kfd_svm.h     |  6 +++
>   3 files changed, 71 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> index f7aa15b18f95..6191e37656dd 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> @@ -2759,7 +2759,17 @@ static int criu_resume(struct file *filep,
>   	}
>   
>   	mutex_lock(&target->mutex);
> +	ret = kfd_criu_resume_svm(target);
> +	if (ret) {
> +		pr_err("kfd_criu_resume_svm failed for %i\n", args->pid);
> +		goto exit;
> +	}
> +
>   	ret =  amdgpu_amdkfd_criu_resume(target->kgd_process_info);
> +	if (ret)
> +		pr_err("amdgpu_amdkfd_criu_resume failed for %i\n", args->pid);
> +
> +exit:
>   	mutex_unlock(&target->mutex);
>   
>   	kfd_unref_process(target);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index e9f6c63c2a26..bd2dce37f345 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -3427,6 +3427,61 @@ svm_range_get_attr(struct kfd_process *p, struct mm_struct *mm,
>   	return 0;
>   }
>   
> +int kfd_criu_resume_svm(struct kfd_process *p)
> +{
> +	int nattr_common = 4, nattr_accessibility = 1;
> +	struct criu_svm_metadata *criu_svm_md = NULL;
> +	struct criu_svm_metadata *next = NULL;
> +	struct svm_range_list *svms = &p->svms;
> +	int i, j, num_attrs, ret = 0;
> +	struct mm_struct *mm;
> +
> +	if (list_empty(&svms->criu_svm_metadata_list)) {
> +		pr_debug("No SVM data from CRIU restore stage 2\n");
> +		return ret;
> +	}
> +
> +	mm = get_task_mm(p->lead_thread);
> +	if (!mm) {
> +		pr_err("failed to get mm for the target process\n");
> +		return -ESRCH;
> +	}
> +
> +	num_attrs = nattr_common + (nattr_accessibility * p->n_pdds);
> +
> +	i = j = 0;
> +	list_for_each_entry(criu_svm_md, &svms->criu_svm_metadata_list, list) {
> +		pr_debug("criu_svm_md[%d]\n\tstart: 0x%llx size: 0x%llx (npages)\n",
> +			 i, criu_svm_md->start_addr, criu_svm_md->size);
> +		for (j = 0; j < num_attrs; j++) {
> +			pr_debug("\ncriu_svm_md[%d]->attrs[%d].type : 0x%x \ncriu_svm_md[%d]->attrs[%d].value : 0x%x\n",
> +				 i,j, criu_svm_md->attrs[j].type,
> +				 i,j, criu_svm_md->attrs[j].value);
> +		}

Is this super-detailed debug output really needed?
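
If a per-range summary is still useful for bring-up, one option (just a
rough, untested sketch against the structures introduced earlier in this
series) would be a small helper that prints one line per range instead of
one line per attribute, e.g.:

/*
 * Untested sketch: one summary line per restored range instead of one
 * pr_debug per attribute. Assumes the criu_svm_metadata layout from
 * earlier in this series (start_addr, size, attrs[]).
 */
static void kfd_criu_dump_svm_md(const struct criu_svm_metadata *md,
				 int idx, int num_attrs)
{
	pr_debug("criu_svm_md[%d]: start 0x%llx size 0x%llx (npages), %d attrs\n",
		 idx, md->start_addr, md->size, num_attrs);
}

That keeps the range boundaries visible without flooding the log when a
process restores many ranges with many attributes.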

Regards,
   Felix


> +
> +		ret = svm_range_set_attr(p, mm, criu_svm_md->start_addr,
> +					 criu_svm_md->size, num_attrs,
> +					 criu_svm_md->attrs);
> +		if (ret) {
> +			pr_err("CRIU: failed to set range attributes\n");
> +			goto exit;
> +		}
> +
> +		i++;
> +	}
> +
> +exit:
> +	list_for_each_entry_safe(criu_svm_md, next, &svms->criu_svm_metadata_list, list) {
> +		pr_debug("freeing criu_svm_md[]\n\tstart: 0x%llx\n",
> +						criu_svm_md->start_addr);
> +		kfree(criu_svm_md);
> +	}
> +
> +	mmput(mm);
> +	return ret;
> +
> +}
> +
>   int svm_criu_prepare_for_resume(struct kfd_process *p,
>   				struct kfd_criu_svm_range_priv_data *svm_priv)
>   {
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> index e0c0853f085c..3b5bcb52723c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> @@ -195,6 +195,7 @@ int kfd_criu_restore_svm(struct kfd_process *p,
>   			 uint8_t __user *user_priv_ptr,
>   			 uint64_t *priv_data_offset,
>   			 uint64_t max_priv_data_size);
> +int kfd_criu_resume_svm(struct kfd_process *p);
>   struct kfd_process_device *
>   svm_range_get_pdd_by_adev(struct svm_range *prange, struct amdgpu_device *adev);
>   void svm_range_list_lock_and_flush_work(struct svm_range_list *svms, struct mm_struct *mm);
> @@ -256,6 +257,11 @@ static inline int kfd_criu_restore_svm(struct kfd_process *p,
>   	return -EINVAL;
>   }
>   
> +static inline int kfd_criu_resume_svm(struct kfd_process *p)
> +{
> +	return 0;
> +}
> +
>   #define KFD_IS_SVM_API_SUPPORTED(dev) false
>   
>   #endif /* IS_ENABLED(CONFIG_HSA_AMD_SVM) */
