* highmem question
@ 2001-12-07 13:06 Roy Sigurd Karlsbakk
  2001-12-08  0:01 ` H. Peter Anvin
  0 siblings, 1 reply; 14+ messages in thread
From: Roy Sigurd Karlsbakk @ 2001-12-07 13:06 UTC (permalink / raw)
  To: linux-kernel

hi all

I heard that highmem slows down systems.

- How much memory can Linux use without highmem enabled? (I've heard it's
  1GB, but Linux found 1.2GB without ...)
- How much is a system slowed down?
- How can this be fixed? I've heard it's a PCI issue (stuff being memory
  mapped above the 2GB limit?)

thanks

roy

--
Roy Sigurd Karlsbakk, MCSE, MCNE, CLS, LCA

Computers are like air conditioners.
They stop working when you open Windows.



* Re: highmem question
  2001-12-07 13:06 highmem question Roy Sigurd Karlsbakk
@ 2001-12-08  0:01 ` H. Peter Anvin
  2001-12-08  1:30   ` Marvin Justice
  0 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  0:01 UTC (permalink / raw)
  To: linux-kernel

Followup to:  <Pine.LNX.4.30.0112071404280.29154-100000@mustard.heime.net>
By author:    Roy Sigurd Karlsbakk <roy@karlsbakk.net>
In newsgroup: linux.dev.kernel
> 
> I heard that highmem slows down systems.

It does, because it's a hack to extend 32-bit machines beyond their
architectural lifetime.

> - How much memory can Linux use without highmem enabled? (I've heard it's
>   1GB, but Linux found 1.2GB without ...)

On i386, it supports 896 MB without HIGHMEM.
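
For reference, the 896 MB figure falls straight out of the default i386
layout; a simplified sketch of the arithmetic, not the literal kernel
headers:

    /* Simplified sketch -- names mirror the 2.4 i386 headers, but the
     * arithmetic is the point, not the exact definitions. */
    #define PAGE_OFFSET      0xC0000000UL    /* user/kernel split at 3 GB   */
    #define VMALLOC_RESERVE  (128UL << 20)   /* vmalloc/ioremap/kmap window */

    /* Kernel virtual space:  4 GB - 3 GB              = 1024 MB
     * Directly mapped RAM:   1024 MB - 128 MB reserve =  896 MB
     * Physical pages above 896 MB are "highmem" and must be kmap()ed. */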

> - How much is a system slowed down?

Depends completely on your application mix and amount of RAM -- and
whether or not you're using 4G or 64G HIGHMEM, the latter being more
severe across a whole bunch of axes.

> - How can this be fixed? I've heard it's a PCI issue (stuff being memory
>   mapped above the 2GB limit?)

Go to a 64-bit CPU architecture.

	-hpa

-- 
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt	<amsp@zytor.com>


* Re: highmem question
  2001-12-08  1:30   ` Marvin Justice
@ 2001-12-08  1:28     ` H. Peter Anvin
  2001-12-08  1:53       ` Marvin Justice
  0 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  1:28 UTC (permalink / raw)
  To: mjustice; +Cc: linux-kernel

Marvin Justice wrote:

> 
> While it certainly makes sense to expect a performance hit for memory above
> 4 GB on 32-bit systems, I don't see any a priori reason to either move to
> 64-bit or take a performance hit if you need, say, 2 GB of RAM. The problem
> is that 2.4 Linux considers HIGHMEM to be anything above 896 MB.
> 


The problem is that in the x86 architecture you don't have any reasonable
way of addressing the physical address space, so you need to map it into
the virtual address space.  You end up with a shortage of virtual address
space.
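
Concretely, every access to a page above the lowmem limit first needs a
temporary kernel mapping. A minimal illustration of the cost (a sketch, not
a quote from the sources):

    #include <linux/mm.h>
    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Zero a page that may live above 896 MB.  For a lowmem page kmap()
     * just returns its permanent mapping; for a highmem page it has to
     * install a PTE in a small shared window, which is recycled with a
     * global TLB flush once it fills up. */
    static void zero_any_page(struct page *page)
    {
            void *vaddr = kmap(page);       /* map into kernel virtual space */
            memset(vaddr, 0, PAGE_SIZE);
            kunmap(page);                   /* release the temporary mapping */
    }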

> 
> From what I've read it looks like there will be changes in 2.5 to fix all
> this.
> 


There is no way of fixing it.




* Re: highmem question
  2001-12-08  0:01 ` H. Peter Anvin
@ 2001-12-08  1:30   ` Marvin Justice
  2001-12-08  1:28     ` H. Peter Anvin
  0 siblings, 1 reply; 14+ messages in thread
From: Marvin Justice @ 2001-12-08  1:30 UTC (permalink / raw)
  To: H. Peter Anvin, linux-kernel

>
> > I heard that highmem slows down systems.
>
> It does, because it's a hack to extend 32-bit machines beyond their
> architectural lifetime.
>

While it certainly makes sense to expect a performance hit for memory above
4 GB on 32-bit systems, I don't see any a priori reason to either move to
64-bit or take a performance hit if you need, say, 2 GB of RAM. The problem
is that 2.4 Linux considers HIGHMEM to be anything above 896 MB.

From what I've read it looks like there will be changes in 2.5 to fix all
this.

Marvin Justice


* Re: highmem question
  2001-12-08  1:53       ` Marvin Justice
@ 2001-12-08  1:52         ` H. Peter Anvin
  2001-12-08  1:54         ` Jens Axboe
  1 sibling, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  1:52 UTC (permalink / raw)
  To: mjustice; +Cc: linux-kernel

Marvin Justice wrote:

>>The problem is that in the x86 architecture you don't have any reasonable
>>way of addressing the physical address space, so you need to map it into
>>the virtual address space.  You end up with a shortage of virtual address
>>space.
>>
> 
> Isn't this still just an artifact of the default 1:3 kernel/user virtual 
> address space split? I've never tried it myself but isn't there a 2:2 patch 
> available that has the effect of moving the highmem boundary up?
> 


You can tweak the split... both 2:2 and 0.5:3.5 splits have been used...
but it's not without side effects.  Cutting your user space breaks
applications which want large mmap() areas, for example.
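
For the record, the core of such a split patch is just the boundary constant
plus the few places that hard-code it (TASK_SIZE, the linker script);
roughly, and purely as an illustration of the idea rather than any
particular patch:

    #define __PAGE_OFFSET   0x80000000UL    /* stock i386 uses 0xC0000000UL */

    /* Kernel virtual space grows from 1 GB to 2 GB, so with the usual
     * 128 MB vmalloc reserve roughly 1.9 GB of RAM can be mapped directly
     * and a 2 GB box no longer needs HIGHMEM.  The price: user space (and
     * with it the room for mmap(), heap, stack, libraries) shrinks to 2 GB. */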


> 
>>There is no way of fixing it.
>>
> 
> All I know is that a streaming io app I was playing with showed a drastic 
> performance hit when the kernel was compiled with CONFIG_HIGHMEM. On W2K we 
> saw no slowdown with 2 or even 4GB of RAM so I think solutions must exist.
> 


Of course you didn't.  Win2K runs with the equivalent of HIGHMEM all the
time.

	-hpa




* Re: highmem question
  2001-12-08  1:28     ` H. Peter Anvin
@ 2001-12-08  1:53       ` Marvin Justice
  2001-12-08  1:52         ` H. Peter Anvin
  2001-12-08  1:54         ` Jens Axboe
  0 siblings, 2 replies; 14+ messages in thread
From: Marvin Justice @ 2001-12-08  1:53 UTC (permalink / raw)
  To: H. Peter Anvin, mjustice; +Cc: linux-kernel


>
> The problem is that in the x86 architecture you don't have any reasonable
> way of addressing the physical address space, so you need to map it into
> the virtual address space.  You end up with a shortage of virtual address
> space.

Isn't this still just an artifact of the default 1:3 kernel/user virtual 
address space split? I've never tried it myself but isn't there a 2:2 patch 
available that has the effect of moving the highmem boundary up?

>
> There is no way of fixing it.

All I know is that a streaming io app I was playing with showed a drastic 
performance hit when the kernel was compiled with CONFIG_HIGHMEM. On W2K we 
saw no slowdown with 2 or even 4GB of RAM so I think solutions must exist.

Marvin


* Re: highmem question
  2001-12-08  1:53       ` Marvin Justice
  2001-12-08  1:52         ` H. Peter Anvin
@ 2001-12-08  1:54         ` Jens Axboe
  2001-12-08  1:58           ` H. Peter Anvin
  2001-12-08  2:10           ` Marvin Justice
  1 sibling, 2 replies; 14+ messages in thread
From: Jens Axboe @ 2001-12-08  1:54 UTC (permalink / raw)
  To: Marvin Justice; +Cc: H. Peter Anvin, linux-kernel

On Fri, Dec 07 2001, Marvin Justice wrote:
> > There is no way of fixing it.
> 
> All I know is that a streaming io app I was playing with showed a drastic 
> performance hit when the kernel was compiled with CONFIG_HIGHMEM. On W2K we 
> saw no slowdown with 2 or even 4GB of RAM so I think solutions must exist.

That's because of highmem page bouncing when doing I/O. There is indeed
a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
happily do I/O directly to any page in your system as long as your
hardware supports it. I'm sure we're beating w2k with that enabled :-)
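
(For the curious, the driver side of that is essentially a one-liner -- the
function and constant names below are from memory of the block-highmem work,
so treat them as an assumption and check the patch; mydrv_init_queue() is a
made-up hook, purely for illustration:)

    #include <linux/blkdev.h>

    static void mydrv_init_queue(request_queue_t *q)
    {
            /* Controller can DMA to any physical address: never bounce. */
            blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);

            /* A controller limited to 32-bit addressing would instead pass
             * the highest address it can reach, and only pages above that
             * limit would still be bounced. */
    }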

-- 
Jens Axboe



* Re: highmem question
  2001-12-08  1:54         ` Jens Axboe
@ 2001-12-08  1:58           ` H. Peter Anvin
  2001-12-08  2:02             ` Jens Axboe
  2001-12-08  2:10           ` Marvin Justice
  1 sibling, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  1:58 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Marvin Justice, linux-kernel

Jens Axboe wrote:

> On Fri, Dec 07 2001, Marvin Justice wrote:
> 
>>>There is no way of fixing it.
>>>
>>All I know is that a streaming io app I was playing with showed a drastic 
>>performance hit when the kernel was compiled with CONFIG_HIGHMEM. On W2K we 
>>saw no slowdown with 2 or even 4GB of RAM so I think solutions must exist.
>>
> 
> That's because of highmem page bouncing when doing I/O. There is indeed
> a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
> happily do I/O directly to any page in your system as long as your
> hardware supports it. I'm sure we're beating w2k with that enabled :-)
> 


I didn't realize we were doing page bouncing for I/O in the 1-4 GB range.
 Yes, this would be an issue.

	-hpa






* Re: highmem question
  2001-12-08  1:58           ` H. Peter Anvin
@ 2001-12-08  2:02             ` Jens Axboe
  0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2001-12-08  2:02 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: Marvin Justice, linux-kernel

On Fri, Dec 07 2001, H. Peter Anvin wrote:
> Jens Axboe wrote:
> 
> > On Fri, Dec 07 2001, Marvin Justice wrote:
> > 
> >>>There is no way of fixing it.
> >>>
> >>All I know is that a streaming io app I was playing with showed a drastic 
> >>performance hit when the kernel was compiled with CONFIG_HIGHMEM. On W2K we 
> >>saw no slowdown with 2 or even 4GB of RAM so I think solutions must exist.
> >>
> > 
> > That's because of highmem page bouncing when doing I/O. There is indeed
> > a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
> > happily do I/O directly to any page in your system as long as your
> > hardware supports it. I'm sure we're beating w2k with that enabled :-)
> > 
> 
> 
> I didn't realize we were doing page bouncing for I/O in the 1-4 GB range.
>  Yes, this would be an issue.

All due to the "old" block layer traditionally requiring a virtual mapping
for doing I/O. Ugh. So yes, we are bouncing _any_ highmem page.
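
Which means every highmem page in a write goes through something morally
like the sketch below -- grossly simplified; the real 2.4 path is the bounce
code around create_bounce(), if memory serves:

    #include <linux/mm.h>
    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Bounce one highmem page for a write: grab a lowmem page, copy a full
     * page of data into it, and let the device do I/O against the copy. */
    static struct page *bounce_for_write(struct page *hi_page)
    {
            struct page *lo_page = alloc_page(GFP_NOIO);    /* lowmem only */
            void *src;

            if (!lo_page)
                    return NULL;

            src = kmap(hi_page);
            memcpy(page_address(lo_page), src, PAGE_SIZE);  /* the extra copy */
            kunmap(hi_page);

            return lo_page;     /* the driver sees this page, not the original */
    }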

-- 
Jens Axboe



* Re: highmem question
  2001-12-08  2:10           ` Marvin Justice
@ 2001-12-08  2:08             ` H. Peter Anvin
  2001-12-08  2:10             ` Jens Axboe
  1 sibling, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  2:08 UTC (permalink / raw)
  To: mjustice; +Cc: Jens Axboe, linux-kernel

Marvin Justice wrote:

>>That's because of highmem page bouncing when doing I/O. There is indeed
>>a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
>>happily do I/O directly to any page in your system as long as your
>>hardware supports it. I'm sure we're beating w2k with that enabled :-)
>>
> 
> Will your patch lead to better performance than the CONFIG_HIGHMEM=n case?
> Unfortunately, W2K with any amount of memory beat Linux with no highmem (see 
> http://www.uwsg.indiana.edu/hypermail/linux/kernel/0110.3/0375.html ) so my 
> PHB decided to hold off on Linux for now.
> 


Depends if you need the extra memory or not.

	-hpa




* Re: highmem question
  2001-12-08  1:54         ` Jens Axboe
  2001-12-08  1:58           ` H. Peter Anvin
@ 2001-12-08  2:10           ` Marvin Justice
  2001-12-08  2:08             ` H. Peter Anvin
  2001-12-08  2:10             ` Jens Axboe
  1 sibling, 2 replies; 14+ messages in thread
From: Marvin Justice @ 2001-12-08  2:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: H. Peter Anvin, linux-kernel


> That's because of highmem page bouncing when doing I/O. There is indeed
> a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
> happily do I/O directly to any page in your system as long as your
> hardware supports it. I'm sure we're beating w2k with that enabled :-)

Will your patch lead to better performance than the CONFIG_HIGHMEM=n case?
Unfortunately, W2K with any amount of memory beat Linux with no highmem (see 
http://www.uwsg.indiana.edu/hypermail/linux/kernel/0110.3/0375.html ) so my 
PHB decided to hold off on Linux for now.

Marvin


* Re: highmem question
  2001-12-08  2:10           ` Marvin Justice
  2001-12-08  2:08             ` H. Peter Anvin
@ 2001-12-08  2:10             ` Jens Axboe
  2001-12-08  3:43               ` war
  1 sibling, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2001-12-08  2:10 UTC (permalink / raw)
  To: Marvin Justice; +Cc: H. Peter Anvin, linux-kernel

On Fri, Dec 07 2001, Marvin Justice wrote:
> 
> > That's because of highmem page bouncing when doing I/O. There is indeed
> > a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
> > happily do I/O directly to any page in your system as long as your
> > hardware supports it. I'm sure we're beating w2k with that enabled :-)
> 
> Will your patch lead to better performance than the CONFIG_HIGHMEM=n case?

No, it only makes sure that we do not take a hit with HIGHMEM enabled
for I/O.

> Unfortunately, W2K with any amount of memory beat Linux with no highmem (see 
> http://www.uwsg.indiana.edu/hypermail/linux/kernel/0110.3/0375.html ) so my 
> PHB decided to hold off on Linux for now.

Hmm, I see; we can do better. With the patch you should do decently even on
2.4 with 2 GB of RAM.

-- 
Jens Axboe



* Re: highmem question
  2001-12-08  2:10             ` Jens Axboe
@ 2001-12-08  3:43               ` war
  2001-12-08  3:46                 ` H. Peter Anvin
  0 siblings, 1 reply; 14+ messages in thread
From: war @ 2001-12-08  3:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Marvin Justice, H. Peter Anvin, linux-kernel

I have 1GB of ram + HIGHMEM support on.

How much of a performance impact are we talking about?

896MB of ram would be ok if HIGHMEM impacted the machine severely.

Has anyone done any benchmarks with HIGHMEM vs NO HIGHMEM?


Jens Axboe wrote:

> On Fri, Dec 07 2001, Marvin Justice wrote:
> >
> > > That's because of highmem page bouncing when doing I/O. There is indeed
> > > a solution for this -- 2.5 or 2.4 + block-highmem-all patches will
> > > happily do I/O directly to any page in your system as long as your
> > > hardware supports it. I'm sure we're beating w2k with that enabled :-)
> >
> > Will your patch lead to better performance than the CONFIG_HIGHMEM=n case?
>
> No, it only makes sure that we do not take a hit with HIGHMEM enabled
> for I/O.
>
> > Unfortunately, W2K with any amount of memory beat Linux with no highmem (see
> > http://www.uwsg.indiana.edu/hypermail/linux/kernel/0110.3/0375.html ) so my
> > PHB decided to hold off on Linux for now.
>
> Hmm, I see; we can do better. With the patch you should do decently even on
> 2.4 with 2 GB of RAM.
>
> --
> Jens Axboe
>



* Re: highmem question
  2001-12-08  3:43               ` war
@ 2001-12-08  3:46                 ` H. Peter Anvin
  0 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2001-12-08  3:46 UTC (permalink / raw)
  To: war; +Cc: Jens Axboe, Marvin Justice, linux-kernel

war wrote:

> I have 1GB of ram + HIGHMEM support on.
> 
> How much of a performance impact are we talking about?
> 
> 896MB of ram would be ok if HIGHMEM impacted the machine severely.
> 
> Has anyone done any benchmarks with HIGHMEM vs NO HIGHMEM?
> 


1 GB is really the worst case.  You don't gain much memory this way, yet
still suffer the full HIGHMEM slowdown.

Personally I would support dropping the kernel boundary to 0xb8000000 and
using 0xb8000000-0xbfffffff for iomem; that way 1 GB wouldn't need
HIGHMEM.
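
The arithmetic behind that (a back-of-the-envelope reading, not an actual
patch):

    0x00000000 - 0xb7ffffff   user space          2944 MB  (128 MB less than now)
    0xb8000000 - 0xbfffffff   iomem / vmalloc       128 MB
    0xc0000000 - 0xffffffff   direct-mapped RAM    1024 MB

so a full 1 GB of RAM fits in the direct map, and CONFIG_HIGHMEM (with its
kmap and bounce overhead) would only be needed above that.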

	-hpa



