From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-nvdimm <linux-nvdimm@lists.01.org>
Subject: Re: [PATCH v3] libnvdimm/nsio: differentiate between probe mapping and runtime mapping
Date: Thu, 24 Oct 2019 14:37:01 +0530	[thread overview]
Message-ID: <49c98556-5c57-b8fc-ef50-fa0ed679e094@linux.ibm.com> (raw)
In-Reply-To: <CAPcyv4ix9Ld6NgN=6u3yBJc1A81U-nkY3EFHjBUTMNoxAxth1g@mail.gmail.com>

On 10/24/19 7:36 AM, Dan Williams wrote:
> On Thu, Oct 17, 2019 at 12:33 AM Aneesh Kumar K.V
> <aneesh.kumar@linux.ibm.com> wrote:
>>
>> nvdimm core currently maps the full namespace to an ioremap range
>> while probing the namespace mode. This can result in probe failures
>> on architectures that have limited ioremap space.
>>
>> For example, with a large btt namespace that consumes most of I/O remap
>> range, depending on the sequence of namespace initialization, the user can find
>> a pfn namespace initialization failure due to unavailable I/O remap space
>> which nvdimm core uses for temporary mapping.
>>
>> nvdimm core can avoid this failure by only mapping the reserver block area to
> 
> s/reserver/reserved/

Will fix


> 
>> check for pfn superblock type and map the full namespace resource only before
>> using the namespace.
> 
> Are you going to follow up with the BTT patch that uses this new facility?
> 

I am not yet sure about that. The ioremap/vmap area is the way we currently
get a kernel mapping without struct page backing. What we are suggesting
here is the ability to have a direct-mapped mapping without struct page. I
need to look more closely at what that implies.
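
Roughly, the reason the probe-time mapping can end up in the limited
ioremap/vmap area is the MEMREMAP_WB fallback in memremap(). This is only
a simplified sketch from my reading of kernel/iomem.c, nothing this patch
touches:

	/*
	 * MEMREMAP_WB: if the range is ordinary RAM (it has struct pages
	 * and is covered by the linear map) we can reuse the direct
	 * mapping via __va(). Otherwise we fall back to an ioremap-style
	 * cacheable mapping, which consumes the limited vmalloc/ioremap
	 * space.
	 */
	if (flags & MEMREMAP_WB) {
		addr = try_ram_remap(offset, size, flags);
		if (!addr)
			addr = arch_memremap_wb(offset, size);
	}

The probe-only mapping in this patch just shrinks how much of that space
we consume while parsing the info block.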

>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>> Changes from v2:
>> * update changelog
>>
>> Changes from V1:
>> * update changelog
>> * update patch based on review feedback.
>>
>>   drivers/dax/pmem/core.c   |  2 +-
>>   drivers/nvdimm/claim.c    |  7 +++----
>>   drivers/nvdimm/nd.h       |  4 ++--
>>   drivers/nvdimm/pfn.h      |  6 ++++++
>>   drivers/nvdimm/pfn_devs.c |  5 -----
>>   drivers/nvdimm/pmem.c     | 15 ++++++++++++---
>>   6 files changed, 24 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/dax/pmem/core.c b/drivers/dax/pmem/core.c
>> index 6eb6dfdf19bf..f174dbfbe1c4 100644
>> --- a/drivers/dax/pmem/core.c
>> +++ b/drivers/dax/pmem/core.c
>> @@ -28,7 +28,7 @@ struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys)
>>          nsio = to_nd_namespace_io(&ndns->dev);
>>
>>          /* parse the 'pfn' info block via ->rw_bytes */
>> -       rc = devm_nsio_enable(dev, nsio);
>> +       rc = devm_nsio_enable(dev, nsio, info_block_reserve());
>>          if (rc)
>>                  return ERR_PTR(rc);
>>          rc = nvdimm_setup_pfn(nd_pfn, &pgmap);
>> diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
>> index 2985ca949912..d89d2c039e25 100644
>> --- a/drivers/nvdimm/claim.c
>> +++ b/drivers/nvdimm/claim.c
>> @@ -300,12 +300,12 @@ static int nsio_rw_bytes(struct nd_namespace_common *ndns,
>>          return rc;
>>   }
>>
>> -int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio)
>> +int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio, unsigned long size)
>>   {
>>          struct resource *res = &nsio->res;
>>          struct nd_namespace_common *ndns = &nsio->common;
>>
>> -       nsio->size = resource_size(res);
>> +       nsio->size = size;
> 
> This needs a:
> 
> if (nsio->size)
>     devm_memunmap(dev, nsio->addr);


Why? We should not be calling devm_nsio_enable() twice now.

> 
> 
>>          if (!devm_request_mem_region(dev, res->start, resource_size(res),
>>                                  dev_name(&ndns->dev))) {
> 
> Also don't repeat the devm_request_mem_region() if one was already done.
> 
> Another thing to check is if nsio->size gets cleared when the
> namespace is disabled, if not that well need to be added in the
> shutdown path.
> 


Not sure I understand that. With this patch, when we probe we always use
the info_block_reserve() size irrespective of the device type. That probe
results in us registering a btt/pfn/dax device and returning -ENXIO. The
error return causes the I/O remap mapping we did with the
info_block_reserve() size to be torn down, and the nsio data structure is
also freed.
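
The unwinding I am relying on is just the normal devm behaviour on probe
failure. A rough sketch of the driver-core side, from my reading of
drivers/base/dd.c, nothing this patch changes:

	/*
	 * really_probe(): when ->probe() returns an error, every devm_*
	 * resource taken during that probe attempt (the devm_memremap(),
	 * the requested mem region, the nsio allocation) is released.
	 */
	ret = drv->probe(dev);
	if (ret)
		devres_release_all(dev);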

nd_pmem_probe() is then called again for the btt device type, and in that
case we do the devm_memremap() again, roughly:

	if (is_nd_btt(dev))
		/* btt: map the full namespace size */
		devm_nsio_enable(dev, nsio, resource_size(&nsio->res));
	else
		/* pfn/dax/raw: map only the info block reserve area */
		devm_nsio_enable(dev, nsio, info_block_reserve());


This info block reserve mapping is then unmapped in pmem_attach_disk().
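That is, roughly this part of the existing pmem_attach_disk() (an excerpt
from my reading of the current code, not changed by this patch):

	nsio = to_nd_namespace_io(&ndns->dev);

	if (is_nd_pfn(dev)) {
		/* the pfn/dax info block is parsed through the small
		 * probe-time mapping set up by devm_nsio_enable() */
		rc = nvdimm_setup_pfn(nd_pfn, &pmem->pgmap);
		if (rc)
			return rc;
	}

	/* we're attaching a block device, so drop the raw probe-time
	 * mapping again */
	devm_nsio_disable(dev, nsio);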


Anything I am missing in the above flow?



> 
>>                  dev_warn(dev, "could not reserve region %pR\n", res);
>> @@ -318,8 +318,7 @@ int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio)
>>          nvdimm_badblocks_populate(to_nd_region(ndns->dev.parent), &nsio->bb,
>>                          &nsio->res);
>>
>> -       nsio->addr = devm_memremap(dev, res->start, resource_size(res),
>> -                       ARCH_MEMREMAP_PMEM);
>> +       nsio->addr = devm_memremap(dev, res->start, size, ARCH_MEMREMAP_PMEM);
>>
>>          return PTR_ERR_OR_ZERO(nsio->addr);
>>   }
>> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
>> index ee5c04070ef9..93d3c760c0f3 100644
>> --- a/drivers/nvdimm/nd.h
>> +++ b/drivers/nvdimm/nd.h
>> @@ -376,7 +376,7 @@ void nvdimm_badblocks_populate(struct nd_region *nd_region,
>>   #define MAX_STRUCT_PAGE_SIZE 64
>>
>>   int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap);
>> -int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio);
>> +int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio, unsigned long size);
>>   void devm_nsio_disable(struct device *dev, struct nd_namespace_io *nsio);
>>   #else
>>   static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
>> @@ -385,7 +385,7 @@ static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
>>          return -ENXIO;
>>   }
>>   static inline int devm_nsio_enable(struct device *dev,
>> -               struct nd_namespace_io *nsio)
>> +               struct nd_namespace_io *nsio, unsigned long size)
>>   {
>>          return -ENXIO;
>>   }
>> diff --git a/drivers/nvdimm/pfn.h b/drivers/nvdimm/pfn.h
>> index acb19517f678..f4856c87d01c 100644
>> --- a/drivers/nvdimm/pfn.h
>> +++ b/drivers/nvdimm/pfn.h
>> @@ -36,4 +36,10 @@ struct nd_pfn_sb {
>>          __le64 checksum;
>>   };
>>
>> +static inline u32 info_block_reserve(void)
>> +{
>> +       return ALIGN(SZ_8K, PAGE_SIZE);
>> +}
>> +
>> +
>>   #endif /* __NVDIMM_PFN_H */
>> diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
>> index 60d81fae06ee..e49aa9a0fd04 100644
>> --- a/drivers/nvdimm/pfn_devs.c
>> +++ b/drivers/nvdimm/pfn_devs.c
>> @@ -635,11 +635,6 @@ int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns)
>>   }
>>   EXPORT_SYMBOL(nd_pfn_probe);
>>
>> -static u32 info_block_reserve(void)
>> -{
>> -       return ALIGN(SZ_8K, PAGE_SIZE);
>> -}
>> -
>>   /*
>>    * We hotplug memory at sub-section granularity, pad the reserved area
>>    * from the previous section base to the namespace base address.
>> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
>> index f9f76f6ba07b..3c188ffeff11 100644
>> --- a/drivers/nvdimm/pmem.c
>> +++ b/drivers/nvdimm/pmem.c
>> @@ -491,17 +491,26 @@ static int pmem_attach_disk(struct device *dev,
>>   static int nd_pmem_probe(struct device *dev)
>>   {
>>          int ret;
>> +       struct nd_namespace_io *nsio;
>>          struct nd_namespace_common *ndns;
>>
>>          ndns = nvdimm_namespace_common_probe(dev);
>>          if (IS_ERR(ndns))
>>                  return PTR_ERR(ndns);
>>
>> -       if (devm_nsio_enable(dev, to_nd_namespace_io(&ndns->dev)))
>> -               return -ENXIO;
>> +       nsio = to_nd_namespace_io(&ndns->dev);
>>
>> -       if (is_nd_btt(dev))
>> +       if (is_nd_btt(dev)) {
>> +               /*
>> +                * Map with resource size
>> +                */
>> +               if (devm_nsio_enable(dev, nsio, resource_size(&nsio->res)))
>> +                       return -ENXIO;
>>                  return nvdimm_namespace_attach_btt(ndns);
>> +       }
>> +
>> +       if (devm_nsio_enable(dev, nsio, info_block_reserve()))
>> +               return -ENXIO;
>>
>>          if (is_nd_pfn(dev))
>>                  return pmem_attach_disk(dev, ndns);
>> --
>> 2.21.0
>>

