From: Shivaprasad G Bhat <sbhat@linux.ibm.com>
To: david@gibson.dropbear.id.au, groug@kaod.org, qemu-ppc@nongnu.org,
	ehabkost@redhat.com, marcel.apfelbaum@gmail.com, mst@redhat.com,
	imammedo@redhat.com, xiaoguangrong.eric@gmail.com,
	peter.maydell@linaro.org, eblake@redhat.com, qemu-arm@nongnu.org,
	richard.henderson@linaro.org, pbonzini@redhat.com,
	marcel.apfelbaum@gmail.com, stefanha@redhat.com,
	haozhong.zhang@intel.com, shameerali.kolothum.thodi@huawei.com,
	kwangwoo.lee@sk.com, armbru@redhat.com
Cc: qemu-devel@nongnu.org, aneesh.kumar@linux.ibm.com,
	linux-nvdimm@lists.01.org, kvm-ppc@vger.kernel.org,
	shivaprasadbhat@gmail.com, bharata@linux.vnet.ibm.com
Subject: [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm
Date: Wed, 28 Apr 2021 23:48:21 -0400
Message-ID: <161966810162.652.13723419108625443430.stgit@17be908f7c1c>

The nvdimm devices are expected to ensure write persistence in scenarios
such as power failure.

libpmem uses architecture-specific instructions, such as dcbf on POWER, to
flush cached data to the backend nvdimm device during normal writes,
followed by explicit flushes if the backend device is not synchronous-DAX
capable.
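
For reference, a minimal sketch of such a flush path on POWER is shown
below; the helper name, the cache-line granularity and the trailing sync
are illustrative assumptions, not libpmem's actual code:

    /* Illustrative sketch: flush a CPU-cached range towards the pmem device
     * on POWER. The function name and loop granularity are assumptions. */
    #include <stdint.h>
    #include <stddef.h>

    #define CACHELINE_SIZE 128                      /* POWER cache line size */

    static void flush_range_ppc(const void *addr, size_t len)
    {
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE_SIZE - 1);
        uintptr_t end = (uintptr_t)addr + len;

        for (; p < end; p += CACHELINE_SIZE) {
            asm volatile("dcbf 0,%0" : : "r"(p) : "memory"); /* flush the line */
        }
        asm volatile("sync" : : : "memory");       /* order the cache flushes */
    }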

QEMU virtual nvdimm devices are memory mapped. With file-backed virtual
nvdimms, the dcbf in the guest and the subsequent flush do not translate
into an actual flush of the backend file on the host. On x86_64 this is
addressed by virtio-pmem, where the guest's explicit flushes translate to
an fsync in QEMU.

On sPAPR, the issue is addressed by adding a new hcall through which the
guest ndctl driver requests an explicit flush when the backend nvdimm
cannot ensure write persistence with dcbf alone. The approach here is to
convey, via a device tree property, when the hcall flush is required; the
guest makes the hcall when the property is present, instead of relying on
dcbf alone.
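
Conceptually, the guest-side flow looks like the sketch below. The function
name, the retry policy and the exact constants are illustrative assumptions;
the real driver change is in the kernel patch linked at the end of this
letter:

    /* Hypothetical guest-side sketch (Linux/powerpc papr_scm context): issue
     * H_SCM_FLUSH for a DRC index and retry with the returned continue-token
     * while the hypervisor reports H_BUSY. */
    #include <linux/sched.h>
    #include <linux/errno.h>
    #include <asm/hvcall.h>

    static long papr_scm_flush_sketch(unsigned long drc_index)
    {
        unsigned long ret_buf[PLPAR_HCALL_BUFSIZE];
        unsigned long token = 0;
        long rc;

        do {
            rc = plpar_hcall(H_SCM_FLUSH, ret_buf, drc_index, token);
            token = ret_buf[0];      /* continue-token for the next attempt */
            cond_resched();
        } while (rc == H_BUSY);

        return rc == H_SUCCESS ? 0 : -EIO;
    }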

A new device property, sync-dax, is added to the nvdimm device. When
sync-dax is 'writeback' (the default for PPC), the device tree property
"hcall-flush-required" is set, and the guest makes the H_SCM_FLUSH hcall
to request an explicit flush.

sync-dax is "unsafe" on all other platforms(x86, ARM) and old pseries machines
prior to 5.2 on PPC. sync-dax="writeback" on ARM and x86_64 is prevented
now as the flush semantics are unimplemented.

When the backend file is actually synchronous-DAX capable and no explicit
flushes are required, the sync-dax mode 'direct' should be used.
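
For illustration, a sketch of how the property might be set on the command
line is below; the machine type, backend path and sizes are placeholders
(memory slots/maxmem options are elided), only the sync-dax values come
from this series:

    qemu-system-ppc64 -machine pseries-6.0,nvdimm=on ... \
        -object memory-backend-file,id=mem1,share=on,mem-path=/path/to/backing.img,size=1G \
        -device nvdimm,id=nvdimm1,memdev=mem1,label-size=128K,sync-dax=writeback

The other modes are selected the same way, e.g. sync-dax=direct or
sync-dax=unsafe.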

The demonstration below shows the map_sync behavior with sync-dax=writeback
and sync-dax=direct.
(https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/memory/ndctl.py.data/map_sync.c)

pmem0 is from an nvdimm with sync-dax=direct, and pmem1 is from an nvdimm
with sync-dax=writeback, mounted as
/dev/pmem0 on /mnt1 type xfs (rw,relatime,attr2,dax=always,inode64,logbufs=8,logbsize=32k,noquota)
/dev/pmem1 on /mnt2 type xfs (rw,relatime,attr2,dax=always,inode64,logbufs=8,logbsize=32k,noquota)

[root@atest-guest ~]# ./mapsync /mnt1/newfile ----> When sync-dax=unsafe/direct
[root@atest-guest ~]# ./mapsync /mnt2/newfile ----> when sync-dax=writeback
Failed to mmap  with Operation not supported
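
The heart of that test is an mmap() with MAP_SYNC, which the kernel only
accepts when the mapping can be made synchronously persistent. A minimal,
self-contained sketch (not the exact linked test program) is:

    /* Minimal sketch of the MAP_SYNC probe; the actual test program is the
     * map_sync.c linked above. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #ifndef MAP_SHARED_VALIDATE
    #define MAP_SHARED_VALIDATE 0x03
    #endif
    #ifndef MAP_SYNC
    #define MAP_SYNC 0x80000
    #endif

    int main(int argc, char **argv)
    {
        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);

        if (fd < 0 || ftruncate(fd, 4096) < 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* Succeeds only if the kernel can guarantee synchronous persistence
         * for the mapping; fails with EOPNOTSUPP when sync-dax=writeback. */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) {
            perror("Failed to mmap ");
            return 1;
        }

        munmap(p, 4096);
        close(fd);
        return 0;
    }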

The first patch does the header file cleanup necessary for the subsequent
ones. The second patch implements the hcall and adds the necessary vmstate
properties to the spapr machine structure for carrying the hcall status
across save-restore. Since the hcall is asynchronous in nature, the patch
uses aio utilities to offload the flush. The third patch adds the
'sync-dax' device property and enables the device tree property for the
guest to utilise the hcall.
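
As a rough sketch of that offload (the structure and helper names here are
assumptions, not necessarily those used in patch 2), the hcall handler can
hand the actual flush to QEMU's thread pool and complete asynchronously:

    /* Hypothetical QEMU-side sketch of the offload. */
    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"
    #include "block/aio.h"
    #include "block/thread-pool.h"
    #include "hw/ppc/spapr.h"

    typedef struct FlushState {
        int backend_fd;      /* fd backing the virtual nvdimm */
        uint64_t token;      /* continue-token handed back to the guest */
        int hcall_ret;       /* H_SUCCESS/H_HARDWARE once the flush completes */
    } FlushState;

    /* Runs in a worker thread: push the backend file data to stable storage. */
    static int flush_worker_cb(void *opaque)
    {
        FlushState *state = opaque;

        return qemu_fdatasync(state->backend_fd) < 0 ? H_HARDWARE : H_SUCCESS;
    }

    /* Runs back in the main loop (under the BQL) when the worker finishes. */
    static void spapr_nvdimm_flush_completion_cb(void *opaque, int ret)
    {
        FlushState *state = opaque;

        state->hcall_ret = ret;  /* returned on the guest's next H_SCM_FLUSH */
    }

    /* Called from the H_SCM_FLUSH handler: queue the flush and return the
     * continue-token so the guest can poll for completion. */
    static uint64_t spapr_nvdimm_start_flush(FlushState *state)
    {
        ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());

        thread_pool_submit_aio(pool, flush_worker_cb, state,
                               spapr_nvdimm_flush_completion_cb, state);
        return state->token;
    }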

The kernel changes to exploit this hcall are at
https://github.com/linuxppc/linux/commit/75b7c05ebf9026.patch

---
v3 - https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg07916.html
Changes from v3:
      - Fixed the forward declaration coding guideline violations in 1st patch.
      - Removed the code that waited for flushes to complete during migration;
        instead, the flush worker is restarted on the destination QEMU in post-load.
      - Got rid of the randomization of the flush tokens, using a simple
        counter instead.
      - Got rid of the redundant flush state lock, relying on the BQL now.
      - Handled the memory-backend-ram usage.
      - Changed the sync-dax semantics from on/off to 'unsafe', 'writeback' and
        'direct'. Added code preventing the use of 'writeback' on ARM and x86_64.
      - Addressed all the miscellaneous review comments.

v2 - https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg07031.html
Changes from v2:
      - Used the thread-pool-based approach, as suggested
      - Moved the async hcall handling code to spapr_nvdimm.c along
        with some simplifications
      - Added vmstate to preserve the hcall status during save-restore
        along with pre_save handler code to complete all ongoing flushes.
      - Added hw_compat magic for sync-dax 'on' on previous machines.
      - Miscellaneous minor fixes.

v1 - https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg06330.html
Changes from v1:
      - Fixed a missed-out unlock
      - Used QLIST_FOREACH instead of QLIST_FOREACH_SAFE while generating the token

Shivaprasad G Bhat (3):
      spapr: nvdimm: Forward declare and move the definitions
      spapr: nvdimm: Implement H_SCM_FLUSH hcall
      nvdimm: Enable sync-dax device property for nvdimm


 hw/arm/virt.c                 |   28 ++++
 hw/i386/pc.c                  |   28 ++++
 hw/mem/nvdimm.c               |   52 +++++++
 hw/ppc/spapr.c                |   16 ++
 hw/ppc/spapr_nvdimm.c         |  285 +++++++++++++++++++++++++++++++++++++++++
 include/hw/mem/nvdimm.h       |   11 ++
 include/hw/ppc/spapr.h        |   11 +-
 include/hw/ppc/spapr_nvdimm.h |   27 ++--
 qapi/common.json              |   20 +++
 9 files changed, 455 insertions(+), 23 deletions(-)

--
Signature
