From: Vivek Goyal <vgoyal@redhat.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>,
	Jeff Moyer <jmoyer@redhat.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Christoph Hellwig <hch@infradead.org>,
	device-mapper development <dm-devel@redhat.com>
Subject: Re: [PATCH v5 2/8] drivers/pmem: Allow pmem_clear_poison() to accept arbitrary offset and len
Date: Mon, 24 Feb 2020 16:15:53 -0500	[thread overview]
Message-ID: <20200224211553.GD14651@redhat.com> (raw)
In-Reply-To: <CAPcyv4gGrimesjZ=OKRaYTDd5dUVz+U9aPeBMh_H3_YCz4FOEQ@mail.gmail.com>

On Mon, Feb 24, 2020 at 12:52:13PM -0800, Dan Williams wrote:
> On Mon, Feb 24, 2020 at 12:13 PM Vivek Goyal <vgoyal@redhat.com> wrote:
> >
> > On Mon, Feb 24, 2020 at 10:03:30AM +1100, Dave Chinner wrote:
> > > On Fri, Feb 21, 2020 at 03:17:59PM -0500, Vivek Goyal wrote:
> > > > On Fri, Feb 21, 2020 at 01:32:48PM -0500, Jeff Moyer wrote:
> > > > > Vivek Goyal <vgoyal@redhat.com> writes:
> > > > >
> > > > > > On Thu, Feb 20, 2020 at 04:35:17PM -0500, Jeff Moyer wrote:
> > > > > >> Vivek Goyal <vgoyal@redhat.com> writes:
> > > > > >>
> > > > > >> > Currently pmem_clear_poison() expects offset and len to be sector aligned.
> > > > > >> > At least, that seems to be the assumption the code was written with.
> > > > > >> > It is called only from pmem_do_bvec(), which is called only from pmem_rw_page()
> > > > > >> > and pmem_make_request(), which only pass sector-aligned offset and len.
> > > > > >> >
> > > > > >> > Soon we want to use this function from the dax_zero_page_range() code path,
> > > > > >> > which can try to zero an arbitrary range of memory within a page. So update
> > > > > >> > this function to assume that offset and length can be arbitrary and do the
> > > > > >> > necessary alignment as needed.
> > > > > >>
> > > > > >> What caller will try to zero a range that is smaller than a sector?
> > > > > >
> > > > > > Hi Jeff,
> > > > > >
> > > > > > The new dax zeroing interface (dax_zero_page_range()) can technically be
> > > > > > passed a range which is smaller than a sector, or which is bigger than a
> > > > > > sector but whose start and end are not aligned on sector boundaries.
> > > > >
> > > > > Sure, but who will call it with misaligned ranges?
> > > >
> > > > Create a file foo.txt of size 4K and then truncate it:
> > > >
> > > > "truncate -s 23 foo.txt". Filesystems then try to zero the bytes from 23 to
> > > > 4095.
> > >
> > > This should fail with EIO. Only full page writes should clear the
> > > bad page state, and partial writes should therefore fail because
> > > they do not guarantee the data in the filesystem block is all good.
> > >
> > > If this zeroing was a buffered write to an address with a bad
> > > sector, then the writeback will fail and the user will (eventually)
> > > get an EIO on the file.
> > >
> > > DAX should do the same thing, except because the zeroing is
> > > synchronous (i.e. done directly by the truncate syscall) we can -
> > > and should - return EIO immediately.
> > >
> > > Indeed, with your code, if we then extend the file by truncating it back
> > > up to 4k, then the range between bytes 23 and 512 is still bad, even
> > > though the zeroing reported success and the user believes it succeeded. An
> > > attempt to read anywhere in this range (e.g. 10 bytes at offset 100)
> > > will fail with EIO, but reading 10 bytes at offset 2000 will
> > > succeed.
> > >
> > > That's *awful* behaviour to expose to userspace, especially when
> > > they look at the fs config and see that it's using both 4kB block
> > > and sector sizes...
> > >
> > > The only thing that makes sense from a filesystem perspective is
> > > clearing bad page state when entire filesystem blocks are
> > > overwritten. The data in a filesystem block is either good or bad,
> > > and it doesn't matter how many internal (kernel or device) sectors
> > > it has.
> > >
> > > > > And what happens to the rest?  The caller is left to trip over the
> > > > > errors?  That sounds pretty terrible.  I really think there needs to be
> > > > > an explicit contract here.
> > > >
> > > > Ok, I think this is the contentious bit. The current interface
> > > > (__dax_zero_page_range()) either clears the poison (if the I/O is sector
> > > > aligned) or expects the page to be free of poison.
> > > >
> > > > So in the above example of "truncate -s 23 foo.txt", I currently get an
> > > > error because the range being zeroed is not sector aligned, so
> > > > __dax_zero_page_range() falls back to calling direct_access(), which
> > > > fails because there are poisoned sectors in the page.
> > > >
> > > > With my patches, dax_zero_page_range() clears the poison from sectors 1 to
> > > > 7 but leaves sector 0 untouched, just writes zeroes over bytes 23 to 511,
> > > > and returns success.
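
To make the sector math concrete: with 512-byte sectors, the zero range of
bytes 23-4095 covers sector 0 only partially and sectors 1-7 fully. A rough
sketch of the alignment computation involved (names illustrative, not the
actual patch):

	/* Illustrative only: shrink a byte range to the whole 512-byte
	 * sectors it fully covers; only those can have poison cleared.
	 */
	unsigned int start = 23;                        /* first byte to zero */
	unsigned int end = 4096;                        /* one past last byte */
	sector_t first_full = DIV_ROUND_UP(start, 512); /* -> sector 1 */
	sector_t end_full = end / 512;                  /* -> sector 8, exclusive */
	/* Sectors [1, 8) get pmem_clear_poison(); bytes 23-511 of sector 0
	 * are only memset to zero, so sector 0 keeps its bad-block state.
	 */
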
> > >
> > > Ok, kernel sectors are not the unit of granularity that bad page state
> > > should be managed at. They don't match page state granularity, and
> > > they don't match filesystem block granularity, and the wacky
> > > "partial writes silently succeed, reads fail unpredictably"
> > > asymmetry it leads to will just cause problems for users.
> > >
> > > > So the question is: is this better or worse behavior? If sector 0
> > > > was poisoned, it will continue to remain poisoned, and the caller will
> > > > find out about it on the next read; it should then truncate the file
> > > > to length 0, unlink the file, or restore it to get rid of the poison.
> > >
> > > Worse, because the filesystem can't track what sub-parts of the
> > > block are bad and that leads to inconsistent data integrity status
> > > being exposed to userspace.
> > >
> > >
> > > > IOW, if a partial block is being zeroed and it is poisoned, the caller
> > > > will not get an error, the poison will not be cleared, and the memory
> > > > will be zeroed. What do we expect in such cases?
> > > >
> > > > Do we expect an interface where, if there are any bad blocks in the range
> > > > being zeroed, they all must be cleared (and hence all I/O must be
> > > > aligned), otherwise an error is returned? If yes, I could make that
> > > > change.
> > > >
> > > > The downside of the current interface is that it will clear as many
> > > > blocks as possible in the given range and leave the starting and ending
> > > > blocks poisoned (if the range is unaligned) without returning an error.
> > > > That means a reader will get errors on those blocks again and will have
> > > > to try to clear them again.
> > >
> > > Which is solved by having partial page writes always return EIO on
> > > poisoned memory.
> >
> > Ok, how about if I add one more patch to the series which checks
> > whether the unwritten portion of the page has known poison. If it does,
> > -EIO is returned.
> >
> >
> > Subject: pmem: zero page range: return error if unwritten area is poisoned
> >
> > Filesystems call into pmem_dax_zero_page_range() to zero a partial page
> > upon truncate. If a partial page is being zeroed, then at the end of the
> > operation filesystems expect that there is no poison in the whole page (at
> > least no known poison).
> >
> > So make sure the part of the page which is not being written does not
> > have poison. If it does, return an error. If there is poison in the area
> > of the page being written, it will be cleared.
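
For concreteness, the check would be along these lines; a rough sketch built
on the existing is_bad_pmem() badblocks helper in drivers/nvdimm/pmem.c,
with illustrative arithmetic rather than the actual patch:

	/* Sketch: fail a partial-page zero if the part of the page that is
	 * NOT being written has known poison. page_sector is the first
	 * sector of the page; offset/len describe the range being zeroed.
	 */
	static int pmem_check_unwritten_poison(struct pmem_device *pmem,
					       sector_t page_sector,
					       unsigned int offset,
					       unsigned int len)
	{
		/* known poison before the zeroed range? */
		if (offset && is_bad_pmem(&pmem->bb, page_sector, offset))
			return -EIO;

		/* known poison after the zeroed range? */
		if (offset + len < PAGE_SIZE &&
		    is_bad_pmem(&pmem->bb, page_sector + ((offset + len) >> 9),
				PAGE_SIZE - (offset + len)))
			return -EIO;

		return 0;
	}
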
> 
> No, I don't like that the zero operation is special cased compared to
> the write case. I'd say let's make them identical for now. I.e. fail
> the I/O at dax_direct_access() time.

So basically __dax_zero_page_range() will only write zeros (and not
try to clear any poison). Right?
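
If so, that path reduces to something like the sketch below, assuming
dax_direct_access() is taught to fail on known poison as you suggest
(illustrative, not actual code):

	/* Sketch: zero without clearing poison. If dax_direct_access()
	 * fails on a page with known poison, the caller gets the error
	 * (e.g. -EIO) and no repair is attempted.
	 */
	static int dax_zero_no_clear(struct dax_device *dax_dev, pgoff_t pgoff,
				     unsigned int offset, size_t len)
	{
		void *kaddr;
		long rc;

		rc = dax_direct_access(dax_dev, pgoff, 1, &kaddr, NULL);
		if (rc < 0)
			return rc;
		memset(kaddr + offset, 0, len);
		dax_flush(dax_dev, kaddr + offset, len);
		return 0;
	}
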

> I think the error clearing
> interface should be an explicit / separate op rather than a
> side-effect. What about an explicit interface for initializing newly
> allocated blocks, and the only reliable way to destroy poison through
> the filesystem is to free the block?

Effectively, pmem_make_request() is already the interface filesystems
use to initialize blocks and clear poison, so do we really have to
introduce a new interface?

Or are you suggesting a separate dax_zero_page_range() interface which will
always call into firmware to clear poison? That would make sure latent
poison is cleared as well, and filesystems should use that for block
initialization instead. I do like the idea of not having to differentiate
between known poison and latent poison: once a block has been initialized,
all poison (known or latent) should be cleared. I am worried, though, that
on large devices this might slow down filesystem initialization a lot
if they are zeroing a large range of blocks.
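
Roughly, a new dax_operations entry like the sketch below (the exact
signature is a guess at this point):

	/* Sketch of a prospective member of struct dax_operations; the
	 * signature is a guess. It would zero a page-aligned range and
	 * clear all poison in it (known or latent), calling into firmware
	 * as needed.
	 */
	int (*zero_page_range)(struct dax_device *dax_dev, pgoff_t pgoff,
			       size_t nr_pages);
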

If yes, this sounds like two different patch series. The first patch series
takes care of removing blkdev_issue_zeroout() from
__dax_zero_page_range(), plus a couple of iomap-related cleanups Christoph
wanted.

And a second patch series adding a new dax operation to zero a range,
always calling into firmware to clear poison, and modifying filesystems
accordingly.

Thanks
Vivek
