LKML Archive on lore.kernel.org
* [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
@ 2019-01-11 15:11 Souptick Joarder
  2019-01-28  6:31 ` Souptick Joarder
  0 siblings, 1 reply; 8+ messages in thread
From: Souptick Joarder @ 2019-01-11 15:11 UTC (permalink / raw)
  To: akpm, willy, mhocko, joro, linux, robin.murphy
  Cc: iommu, linux-kernel, linux-mm

Convert to use vm_insert_range() to map a range of kernel
memory to a user vma.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
---
 drivers/iommu/dma-iommu.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..802de67 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -622,17 +622,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 
 int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
 {
-	unsigned long uaddr = vma->vm_start;
-	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	int ret = -ENXIO;
-
-	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[i]);
-		if (ret)
-			break;
-		uaddr += PAGE_SIZE;
-	}
-	return ret;
+	return vm_insert_range(vma, pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-- 
1.9.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2019-01-11 15:11 [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range Souptick Joarder
@ 2019-01-28  6:31 ` Souptick Joarder
  0 siblings, 0 replies; 8+ messages in thread
From: Souptick Joarder @ 2019-01-28  6:31 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Michal Hocko, joro,
	Russell King - ARM Linux, robin.murphy
  Cc: iommu, linux-kernel, Linux-MM

On Fri, Jan 11, 2019 at 8:37 PM Souptick Joarder <jrdr.linux@gmail.com> wrote:
>
> Convert to use vm_insert_range() to map range of kernel
> memory to user vma.
>
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>

Any comments on this patch?
> ---
>  drivers/iommu/dma-iommu.c | 12 +-----------
>  1 file changed, 1 insertion(+), 11 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index d1b0475..802de67 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -622,17 +622,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>
>  int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
>  {
> -       unsigned long uaddr = vma->vm_start;
> -       unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> -       int ret = -ENXIO;
> -
> -       for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> -               ret = vm_insert_page(vma, uaddr, pages[i]);
> -               if (ret)
> -                       break;
> -               uaddr += PAGE_SIZE;
> -       }
> -       return ret;
> +       return vm_insert_range(vma, pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
>  }
>
>  static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
> --
> 1.9.1
>


* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2018-11-26  6:44     ` Souptick Joarder
@ 2018-11-26 13:18       ` Robin Murphy
  0 siblings, 0 replies; 8+ messages in thread
From: Robin Murphy @ 2018-11-26 13:18 UTC (permalink / raw)
  To: Souptick Joarder, Matthew Wilcox
  Cc: Andrew Morton, Michal Hocko, joro, Linux-MM, iommu, linux-kernel

On 26/11/2018 06:44, Souptick Joarder wrote:
> On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>>
>> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
>>> On 15/11/2018 15:49, Souptick Joarder wrote:
>>>> Convert to use vm_insert_range() to map range of kernel
>>>> memory to user vma.
>>>>
>>>> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
>>>> Reviewed-by: Matthew Wilcox <willy@infradead.org>
>>>> ---
>>>>    drivers/iommu/dma-iommu.c | 12 ++----------
>>>>    1 file changed, 2 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>> index d1b0475..69c66b1 100644
>>>> --- a/drivers/iommu/dma-iommu.c
>>>> +++ b/drivers/iommu/dma-iommu.c
>>>> @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>>>>    int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
>>>>    {
>>>> -   unsigned long uaddr = vma->vm_start;
>>>> -   unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>>>> -   int ret = -ENXIO;
>>>> +   unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>>>> -   for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
>>>> -           ret = vm_insert_page(vma, uaddr, pages[i]);
>>>> -           if (ret)
>>>> -                   break;
>>>> -           uaddr += PAGE_SIZE;
>>>> -   }
>>>> -   return ret;
>>>> +   return vm_insert_range(vma, vma->vm_start, pages, count);
>>>
>>> AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
>>> break partial mmap()s of a large buffer? (which I believe can be a thing)
>>
>> Whoops.  That should have been:
>>
>> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);
> 
> I am unable to trace back where vma->vm_pgoff is set for this driver, if anywhere.

This isn't a driver - it's a DMA API backend, so any caller of 
dma_mmap_*() could potentially end up here (similarly for patch 2/9).

Robin.

> If the default value is 0 then I think the existing code is correct.
> 
>>
>> I suppose.
>>
> 
>> Although arguably we should respect vm_pgoff inside vm_insert_region()
>> and then callers automatically get support for vm_pgoff without having
>> to think about it ...
> 
> I assume vm_insert_region() means vm_insert_range(). If we respect vm_pgoff
> inside vm_insert_range(), then any uninitialized or erroneous vm_pgoff value
> set by a driver will introduce a bug inside core mm which might be difficult
> to trace back.
> But when vm_pgoff is set and passed in by the caller (the driver), it is
> easier to figure out.
> 
>> although we should then also pass in the length
>> of the pages array to avoid pages being mapped in which aren't part of
>> the allocated array.
> 
> Mostly, partial mapping is done by starting from an index and mapping to the
> end of the pages array. Calculating the length of the pages array would add a
> small overhead for each driver.
> 
> Please correct me if I am wrong.
> 


* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2018-11-23 21:34   ` Matthew Wilcox
  2018-11-24  5:01     ` Souptick Joarder
@ 2018-11-26  6:44     ` Souptick Joarder
  2018-11-26 13:18       ` Robin Murphy
  1 sibling, 1 reply; 8+ messages in thread
From: Souptick Joarder @ 2018-11-26  6:44 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: robin.murphy, Andrew Morton, Michal Hocko, joro, Linux-MM, iommu,
	linux-kernel

On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> > On 15/11/2018 15:49, Souptick Joarder wrote:
> > > Convert to use vm_insert_range() to map range of kernel
> > > memory to user vma.
> > >
> > > Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> > > Reviewed-by: Matthew Wilcox <willy@infradead.org>
> > > ---
> > >   drivers/iommu/dma-iommu.c | 12 ++----------
> > >   1 file changed, 2 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index d1b0475..69c66b1 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > >   int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> > >   {
> > > -   unsigned long uaddr = vma->vm_start;
> > > -   unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > -   int ret = -ENXIO;
> > > +   unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > -   for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > > -           ret = vm_insert_page(vma, uaddr, pages[i]);
> > > -           if (ret)
> > > -                   break;
> > > -           uaddr += PAGE_SIZE;
> > > -   }
> > > -   return ret;
> > > +   return vm_insert_range(vma, vma->vm_start, pages, count);
> >
> > AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> > break partial mmap()s of a large buffer? (which I believe can be a thing)
>
> Whoops.  That should have been:
>
> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);

I am unable to trace back where vma->vm_pgoff is set for this driver, if anywhere.
If the default value is 0 then I think the existing code is correct.

>
> I suppose.
>

> Although arguably we should respect vm_pgoff inside vm_insert_region()
> and then callers automatically get support for vm_pgoff without having
> to think about it ...

I assume vm_insert_region() means vm_insert_range(). If we respect vm_pgoff
inside vm_insert_range(), then any uninitialized or erroneous vm_pgoff value set
by a driver will introduce a bug inside core mm which might be difficult to
trace back.
But when vm_pgoff is set and passed in by the caller (the driver), it is easier
to figure out.

> although we should then also pass in the length
> of the pages array to avoid pages being mapped in which aren't part of
> the allocated array.

Mostly, partial mapping is done by starting from an index and mapping to the
end of the pages array. Calculating the length of the pages array would add a
small overhead for each driver.

Please correct me if I am wrong.


* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2018-11-23 21:34   ` Matthew Wilcox
@ 2018-11-24  5:01     ` Souptick Joarder
  2018-11-26  6:44     ` Souptick Joarder
  1 sibling, 0 replies; 8+ messages in thread
From: Souptick Joarder @ 2018-11-24  5:01 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: robin.murphy, Andrew Morton, Michal Hocko, joro, Linux-MM, iommu,
	linux-kernel

On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> > On 15/11/2018 15:49, Souptick Joarder wrote:
> > > Convert to use vm_insert_range() to map range of kernel
> > > memory to user vma.
> > >
> > > Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> > > Reviewed-by: Matthew Wilcox <willy@infradead.org>
> > > ---
> > >   drivers/iommu/dma-iommu.c | 12 ++----------
> > >   1 file changed, 2 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index d1b0475..69c66b1 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > >   int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> > >   {
> > > -   unsigned long uaddr = vma->vm_start;
> > > -   unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > -   int ret = -ENXIO;
> > > +   unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > -   for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > > -           ret = vm_insert_page(vma, uaddr, pages[i]);
> > > -           if (ret)
> > > -                   break;
> > > -           uaddr += PAGE_SIZE;
> > > -   }
> > > -   return ret;
> > > +   return vm_insert_range(vma, vma->vm_start, pages, count);
> >
> > AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> > break partial mmap()s of a large buffer? (which I believe can be a thing)
>
> Whoops.  That should have been:
>
> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);
>
> I suppose.

Matthew, does the patch "drm/rockchip/rockchip_drm_gem.c: Convert to use
vm_insert_range" also need to address the same issue?

>
> Although arguably we should respect vm_pgoff inside vm_insert_region()
> and then callers automatically get support for vm_pgoff without having
> to think about it ... although we should then also pass in the length
> of the pages array to avoid pages being mapped in which aren't part of
> the allocated array.
>
> Hm.  More thought required.


* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2018-11-23 17:23 ` Robin Murphy
@ 2018-11-23 21:34   ` Matthew Wilcox
  2018-11-24  5:01     ` Souptick Joarder
  2018-11-26  6:44     ` Souptick Joarder
  0 siblings, 2 replies; 8+ messages in thread
From: Matthew Wilcox @ 2018-11-23 21:34 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Souptick Joarder, akpm, mhocko, joro, linux-mm, iommu, linux-kernel

On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> On 15/11/2018 15:49, Souptick Joarder wrote:
> > Convert to use vm_insert_range() to map range of kernel
> > memory to user vma.
> > 
> > Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> > Reviewed-by: Matthew Wilcox <willy@infradead.org>
> > ---
> >   drivers/iommu/dma-iommu.c | 12 ++----------
> >   1 file changed, 2 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d1b0475..69c66b1 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> >   int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> >   {
> > -	unsigned long uaddr = vma->vm_start;
> > -	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > -	int ret = -ENXIO;
> > +	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > -	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > -		ret = vm_insert_page(vma, uaddr, pages[i]);
> > -		if (ret)
> > -			break;
> > -		uaddr += PAGE_SIZE;
> > -	}
> > -	return ret;
> > +	return vm_insert_range(vma, vma->vm_start, pages, count);
> 
> > AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> break partial mmap()s of a large buffer? (which I believe can be a thing)

Whoops.  That should have been:

return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);

I suppose.

Although arguably we should respect vm_pgoff inside vm_insert_region()
and then callers automatically get support for vm_pgoff without having
to think about it ... although we should then also pass in the length
of the pages array to avoid pages being mapped in which aren't part of
the allocated array.

Hm.  More thought required.


* Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
  2018-11-15 15:49 Souptick Joarder
@ 2018-11-23 17:23 ` Robin Murphy
  2018-11-23 21:34   ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Robin Murphy @ 2018-11-23 17:23 UTC (permalink / raw)
  To: Souptick Joarder, akpm, willy, mhocko, joro; +Cc: linux-mm, iommu, linux-kernel

On 15/11/2018 15:49, Souptick Joarder wrote:
> Convert to use vm_insert_range() to map range of kernel
> memory to user vma.
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Reviewed-by: Matthew Wilcox <willy@infradead.org>
> ---
>   drivers/iommu/dma-iommu.c | 12 ++----------
>   1 file changed, 2 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index d1b0475..69c66b1 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>   
>   int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
>   {
> -	unsigned long uaddr = vma->vm_start;
> -	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> -	int ret = -ENXIO;
> +	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>   
> -	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> -		ret = vm_insert_page(vma, uaddr, pages[i]);
> -		if (ret)
> -			break;
> -		uaddr += PAGE_SIZE;
> -	}
> -	return ret;
> +	return vm_insert_range(vma, vma->vm_start, pages, count);

AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this 
break partial mmap()s of a large buffer? (which I believe can be a thing)

Robin.

>   }
>   
>   static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
> 


* [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range
@ 2018-11-15 15:49 Souptick Joarder
  2018-11-23 17:23 ` Robin Murphy
  0 siblings, 1 reply; 8+ messages in thread
From: Souptick Joarder @ 2018-11-15 15:49 UTC (permalink / raw)
  To: akpm, willy, mhocko, joro; +Cc: iommu, linux-kernel, linux-mm

Convert to use vm_insert_range() to map a range of kernel
memory to a user vma.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Matthew Wilcox <willy@infradead.org>
---
 drivers/iommu/dma-iommu.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..69c66b1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 
 int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
 {
-	unsigned long uaddr = vma->vm_start;
-	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	int ret = -ENXIO;
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 
-	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[i]);
-		if (ret)
-			break;
-		uaddr += PAGE_SIZE;
-	}
-	return ret;
+	return vm_insert_range(vma, vma->vm_start, pages, count);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-- 
1.9.1



Thread overview: 8+ messages
2019-01-11 15:11 [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range Souptick Joarder
2019-01-28  6:31 ` Souptick Joarder
  -- strict thread matches above, loose matches on Subject: below --
2018-11-15 15:49 Souptick Joarder
2018-11-23 17:23 ` Robin Murphy
2018-11-23 21:34   ` Matthew Wilcox
2018-11-24  5:01     ` Souptick Joarder
2018-11-26  6:44     ` Souptick Joarder
2018-11-26 13:18       ` Robin Murphy
