* Raid 0 setup doubt.
@ 2016-03-27 10:35 Jose Otero
  2016-03-28  0:56 ` Duncan
  2016-03-28  2:42 ` Chris Murphy
  0 siblings, 2 replies; 10+ messages in thread
From: Jose Otero @ 2016-03-27 10:35 UTC
  To: linux-btrfs

Hello,

--------------------------------------
I apologize beforehand if I'm asking too basic a question for the
mailing list, or if it has already been answered ad nauseam.
--------------------------------------

I have two HDDs (Western Digital, 750 GB / approx. 700 GiB each), and I'm
planning to set up RAID 0 through btrfs. UEFI firmware/boot, no dual
boot, only Linux.

My question is, given the UEFI partition plus the Linux swap partition, I
won't have two equally sized partitions for setting up the RAID 0 array.
So, I'm not quite sure how to do it. I'll have:

/dev/sda:

       16 KiB (GPT partition table)
sda1:  512 MiB (EFI, fat32)
sda2:  16 GiB (linux-swap)
sda3:  rest of the disk /  (btrfs)

/dev/sdb:

sdb1:  (btrfs)

The btrfs partitions on the two HDDs are not the same size (admittedly
the difference is small, but still), even if a backup copy of the EFI
partition is created on the second HDD (i.e. sdb), which it may be (I'm
not sure), because the linux-swap partition is still left out.

Should I stripe both btrfs partitions together no matter the size?

mkfs.btrfs -m raid0 -d raid0 /dev/sda3 /dev/sdb1

How will btrfs manage the difference in size?

Or should I partition off the extra space on /dev/sdb to try to match
equally sized partitions? In other words:

/dev/sdb:

sdb1:  17 GiB approx. free or for whatever I want.
sdb2:  (btrfs)

and then:

mkfs.btrfs -m raid0 -d raid0 /dev/sda3 /dev/sdb2

Again, I'm sorry if it's an idiotic question, but I'm not quite clear on
it and I would like to do it properly. So, any hint from more
knowledgeable users would be MUCH appreciated.

Thanks in advance.

JM.


* Re: Raid 0 setup doubt.
  2016-03-27 10:35 Raid 0 setup doubt Jose Otero
@ 2016-03-28  0:56 ` Duncan
  2016-03-28  5:26   ` James Johnston
                     ` (2 more replies)
  2016-03-28  2:42 ` Chris Murphy
  1 sibling, 3 replies; 10+ messages in thread
From: Duncan @ 2016-03-28  0:56 UTC
  To: linux-btrfs

Jose Otero posted on Sun, 27 Mar 2016 12:35:43 +0200 as excerpted:

> Hello,
> 
> --------------------------------------
> I apologize beforehand if I'm asking too basic a question for the
> mailing list, or if it has already been answered ad nauseam.
> --------------------------------------

Actually, looks like pretty reasonable questions, to me. =:^)

> I have two HDDs (Western Digital, 750 GB / approx. 700 GiB each), and I'm
> planning to set up RAID 0 through btrfs. UEFI firmware/boot, no dual
> boot, only Linux.
> 
> My question is, given the UEFI partition plus the Linux swap partition, I
> won't have two equally sized partitions for setting up the RAID 0 array.
> So, I'm not quite sure how to do it. I'll have:
> 
> /dev/sda:
> 
>        16 KiB (GPT partition table)
> sda1:  512 MiB (EFI, fat32)
> sda2:  16 GiB (linux-swap)
> sda3:  rest of the disk /  (btrfs)
> 
> /dev/sdb:
> 
> sdb1:  (btrfs)
> 
> The btrfs partitions on the two HDDs are not the same size (admittedly
> the difference is small, but still), even if a backup copy of the EFI
> partition is created on the second HDD (i.e. sdb), which it may be (I'm
> not sure), because the linux-swap partition is still left out.
> 
> Should I stripe both btrfs partitions together no matter the size?

That should work without issue.

> mkfs.btrfs -m raid0 -d raid0 /dev/sda3 /dev/sdb1
> 
> How will btrfs manage the difference in size?

Btrfs raid0 requires two devices, minimum, striping each chunk across the 
two.  Therefore, with two devices, to the extent that one device is 
larger, the larger (as partitioned) device will leave the difference in 
space unusable, as there's no second device to stripe with.
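
If you want to see how much ends up stranded, something like this (with
reasonably current btrfs-progs) will show it once the filesystem is made
and mounted:

btrfs filesystem show /mnt    # per-device size and usage
btrfs filesystem usage /mnt   # the stranded tail stays "unallocated"
                              # on the bigger device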

> Or should I partition off the extra space on /dev/sdb to try to match
> equally sized partitions? In other words:
> 
> /dev/sdb:
> 
> sdb1:  17 GiB approx. free or for whatever I want.
> sdb2:  (btrfs)
> 
> and then:
> 
> mkfs.btrfs -m raid0 -d raid0 /dev/sda3 /dev/sdb2

This should work as well.


But there's another option you didn't mention, that may be useful, 
depending on your exact need and usage of that swap:

Split your swap space in half, say (roughly, you can make one slightly 
larger than the other to allow for the EFI on one device) 8 GiB on each 
of the hdds.  Then, in your fstab or whatever you use to list the swap 
options, put the option pri=100 (or whatever number you find 
appropriate) on /both/ swap partitions.
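
For instance (device names illustrative), the two fstab entries might
look something like:

/dev/sda2  none  swap  sw,pri=100  0 0
/dev/sdb2  none  swap  sw,pri=100  0 0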

With an equal priority on both swaps and with both active, the kernel 
will effectively raid0 your swap as well (until one runs out, of course), 
which, given that on spinning rust the device speed is the definite 
performance bottleneck for swap, should roughly double your swap 
performance. =:^)  Given that swap on spinning rust is slower than real 
RAM by several orders of magnitude, it'll still be far slower than real 
RAM, but twice as fast as it would be is better than otherwise, so...


Tho how much RAM /do/ you have, and are you sure you really need swap at 
all?  Many systems today have enough RAM that they don't really need swap 
(at least as swap, see below), unless they're going to be used for 
something extremely memory intensive, where the much lower speed of swap 
isn't a problem.

If you have 8 GiB of RAM or more, this may well be your situation.  With 
4 GiB, you probably have more than enough RAM for normal operation, but 
it may still be useful to have at least some swap, so Linux can keep more 
recently used files cached while swapping out some seldom used 
application RAM, but by 8 GiB you likely have enough RAM for reasonable 
cache AND all your apps and won't actually use swap much at all.

Tho if you frequently edit GiB+ video files and/or work with many virtual 
machines, 8 GiB RAM will likely be actually used, and 16 GiB may be the 
point at which you don't use swap much at all.  And of course if you are 
using LOTS of VMs or doing heavy 4K video editing, 16 GiB or more may 
well still be in heavy use, but with that kind of memory-intensive usage, 
32 GiB of RAM or more would likely be a good investment.

Anyway, for systems with enough memory to not need swap in /normal/ 
circumstances, in the event that something's actually leaking memory 
badly enough that swap is needed, there's a very good chance that you'll 
never outrun the leak with swap anyway, as if it's really leaking gigs of 
memory, it'll just eat up whatever gigs of swap you throw at it as well 
and /still/ run out of memory.

Meanwhile, swap to spinning rust really is /slow/.  You're talking 16 GiB 
of swap, and spinning rust speeds of 50 MiB/sec for swap isn't unusual.  
That's ~20 seconds worth of swap-thrashing waiting per GiB, ~320 seconds 
or over five minutes worth of swap thrashing to use the full 16 GiB.  OK, 
so you take that priority= idea and raid0 over two devices, it'll still 
be ~2:40 worth of waiting, to fully use that swap.  Is 16 GiB of swap 
/really/ both needed and worth that sort of wait if you do actually use 
it?

Tho again, if you're running a half dozen VMs and only actually use a 
couple of them once or twice a day, having enough swap to let them swap 
out the rest of the day, so the memory they took can be used for more 
frequently accessed applications and cached files, can be useful.  But 
that's a somewhat limited use-case.


So swap, for its original use as slow memory at least, really isn't that 
much used any longer, tho it can still be quite useful in specific use-
cases.

But there's another more modern use-case that can be useful for many.  
Linux's suspend-to-disk, aka hibernate (as opposed to suspend-to-RAM, aka 
sleep or standby), functionality.  Suspend-to-disk uses swap space to 
store the suspend image.  And that's commonly enough used that swap still 
has a modern usage after all, just not the one it was originally designed 
for.

The caveat with suspend-to-disk, however, is that normally, the entire 
suspend image must be placed on a single swap device.[1]  If you intend 
to use your swap to store a hibernate image, then, and if you have 16 GiB 
or more of RAM and want to save as much of it as possible in that 
hibernate image, then you'll want to keep that 16 GiB swap on a single 
device in order to let you use the full size as a hibernate image.
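
(In practice that just means pointing the kernel at that one device on
its command line, something like:

resume=/dev/sda2

with the device naming per your actual layout, of course.)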

Tho of course, if the corresponding space on the other hdd is going to be 
wasted anyway, as it will if you're doing btrfs raid0 on the big 
partition of each device and you don't have anything else to do with the 
remaining ~16 GiB on the other device, then you might still consider 
doing a 16 GiB swap on each and using the priority= trick to raid0 them 
during normal operation.  You're unlikely to actually use the full 32 GiB 
of swap, but since it'll be double-speed due to the raid0, if you do, 
it'll still be basically the same as using a single 16 GiB swap device, 
and at the more typical usage (if even above 0 at all) of a few MiB to a 
GiB or so, you'll still get the benefit of the raid0 swap.

> Again, I'm sorry if it's an idiotic question, but I'm not quite clear on
> it and I would like to do it properly. So, any hint from more
> knowledgeable users would be MUCH appreciated.

Perhaps this was more information than you expected, but hopefully it's 
helpful, nonetheless.  And it's definitely better than finding out 
critical information /after/ you did it wrong, so while the answer here 
wasn't /that/ critical either way, I sure wish more people would ask 
before they actually deploy, and avoid problems they run into when it 
/was/ critical information they missed!  So you definitely have my 
respect as a wise and cautious administrator, taking the time to get 
things correct /before/ you make potential mistakes!  =:^)


Meanwhile, you didn't mention whether you've discovered the btrfs wiki as 
a resource and had read up there already or not.  So let me mention it, 
and recommend that if you haven't, you set aside a few hours to read up 
on btrfs and how it works, as well as problems you may encounter and 
possible solutions.  You may still have important questions like the 
above after reading thru the wiki, and indeed, may find reading it brings 
even more questions to your mind, but it's a very useful resource to read 
up a bit on, before starting in with btrfs.  I know it helped me quite a 
bit, tho I had questions after I read it, too.  But at least I knew a bit 
more about what questions I still needed to ask after that. =:^)

https://btrfs.wiki.kernel.org

Read up on most of the user documentation pages, anyway.  As a user not a 
dev, you can skip the developer pages unless like me you're simply 
curious and read some developer-targeted stuff anyway, even if you don't 
claim to actually be one.

---
[1] Single-device suspend-image:  There are ways around this that involve 
complex hoop-jumping in the initr* before the image is reloaded, but at 
least here, I prefer to avoid that sort of complexity as it increases 
maintenance complexity as well, and as an admin I prefer a simpler system 
that I understand well enough to troubleshoot and recover from disaster, 
to a complex one that I don't really understand and thus can't 
effectively troubleshoot nor be confident I can effectively recover in a 
disaster situation.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Raid 0 setup doubt.
  2016-03-27 10:35 Raid 0 setup doubt Jose Otero
  2016-03-28  0:56 ` Duncan
@ 2016-03-28  2:42 ` Chris Murphy
  1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2016-03-28  2:42 UTC
  To: Jose Otero; +Cc: Btrfs BTRFS

On Sun, Mar 27, 2016 at 4:35 AM, Jose Otero <jose.manuel.otero@gmail.com> wrote:
> Hello,
>
> --------------------------------------
> I apologize beforehand if I'm asking too basic a question for the
> mailing list, or if it has already been answered ad nauseam.
> --------------------------------------
>
> I have two HDDs (Western Digital, 750 GB / approx. 700 GiB each), and I'm
> planning to set up RAID 0 through btrfs. UEFI firmware/boot, no dual
> boot, only Linux.
>
> My question is, given the UEFI partition plus the Linux swap partition, I
> won't have two equally sized partitions for setting up the RAID 0 array.

If you have odd-sized partitions for Btrfs raid0, it won't get mad at
you. It just won't use the extra space on the drive that doesn't have
the swap partition.


http://www.tldp.org/HOWTO/Partition/setting_up_swap.html
If that's current, you can have a swap on each drive, same size, with
the same priority number, and the kernel will use both kinda like
raid0. And in this case, you can have pretty much identically sized
swap partitions on the two drives.
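
Something like this (device names illustrative):

swapon -p 5 /dev/sda2
swapon -p 5 /dev/sdb2
cat /proc/swaps    # both should show priority 5

Or put pri=5 in the options field of each fstab entry so it happens at
boot.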



-- 
Chris Murphy


* RE: Raid 0 setup doubt.
  2016-03-28  0:56 ` Duncan
@ 2016-03-28  5:26   ` James Johnston
  2016-03-28  8:51     ` Duncan
  2016-03-28 12:35   ` Austin S. Hemmelgarn
  2016-03-28 20:30   ` Jose Otero
  2 siblings, 1 reply; 10+ messages in thread
From: James Johnston @ 2016-03-28  5:26 UTC
  To: 'Duncan', linux-btrfs

Sorry to interject here, but I think it's a bit overreaching to suggest
that swap isn't generally useful any more as a general-purpose member of
the memory hierarchy....

> Given that swap on spinning rust is slower than real
> RAM by several orders of magnitude, it'll still be far slower than real
> RAM, but twice as fast as it would be is better than otherwise, so...
> 
> 
> Tho how much RAM /do/ you have, and are you sure you really need swap at
> all?  Many systems today have enough RAM that they don't really need swap
> (at least as swap, see below), unless they're going to be used for
> something extremely memory intensive, where the much lower speed of swap
> isn't a problem.

For me, I use swap on an SSD, which is orders of magnitude faster than HDD.
Swap can still be useful on an SSD and can really close the gap between RAM
speeds and swap speeds.  (The original poster would do well to use one.)

So yeah, it's still "slow memory" but fast enough to be useful IMHO.

I find it's a useful safety net.  I'd rather have slower performance than
have outright failures.  Gives me time to react and free memory in a sane
fashion.  Why *not* use swap, if you have the drive capacity available?
The space is way cheaper than RAM, even for SSD.

> 
> Anyway, for systems with enough memory to not need swap in /normal/
> circumstances, in the event that something's actually leaking memory
> badly enough that swap is needed, there's a very good chance that you'll
> never outrun the leak with swap anyway, as if it's really leaking gigs of
> memory, it'll just eat up whatever gigs of swap you throw at it as well
> and /still/ run out of memory.

Often it's a slow leak and if you monitor the system, you have time to
identify the offending process and kill it before it impacts the rest of
the system.  Swap gives more time to do that.
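
(Something quick and dirty like

grep VmSwap /proc/[0-9]*/status | sort -n -k 2 | tail

is usually enough to spot the hog.)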

> 
> Meanwhile, swap to spinning rust really is /slow/.  You're talking 16 GiB
> of swap, and spinning rust speeds of 50 MiB/sec for swap isn't unusual.
> That's ~20 seconds worth of swap-thrashing waiting per GiB, ~320 seconds
> or over five minutes worth of swap thrashing to use the full 16 GiB.  OK,
> so you take that priority= idea and raid0 over two devices, it'll still
> be ~2:40 worth of waiting, to fully use that swap.  Is 16 GiB of swap
> /really/ both needed and worth that sort of wait if you do actually use
> it?
>
> So swap, for its original use as slow memory at least, really isn't that
> much used any longer, tho it can still be quite useful in specific use-
> cases.

On my Windows boxes, I've exhausted system memory, and sometimes *not even
immediately noticed that I was out of physical RAM*.  The SSD-based swap
was that fast.  Eventually I realized the system was somewhat sluggish,
and closing a few programs easily recovered the system, and nothing ever
crashed.

Last week I ran chkdsk /R on a drive, and apparently on Windows 7 chkdsk
will allocate all the free RAM it can get... that is apparently a
"feature" - anyway, in my case chkdsk allocated 26 GB on a 32 GB RAM
system that I heavily multitask on.  I never noticed the system was
swapping until an hour or two later, when I had to start a VM and got an
error message (VirtualBox needed physical RAM to allocate)...

But before I had SSDs, HDD-based swap was.... very painful...  IMHO, HDDs
are obsolete for storing operating systems and swap.  I strongly
suggest the original poster add an SSD to the mix. :)

And now with things like bcache, we can put SSDs into their proper place
in the memory hierarchy: Registers > L1 cache > L2 > L3 > DRAM > SSD > HDD.
So conceivably the swap could be on both SSD and HDD, but you probably
don't need _that_ much swap...
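
(For the curious, a rough bcache sketch, assuming sdb is the HDD and
sdc the SSD:

make-bcache -B /dev/sdb -C /dev/sdc
mkfs.btrfs /dev/bcache0

and the filesystem - or swap - then goes on the bcache device.)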

Best regards,

James Johnston




* Re: Raid 0 setup doubt.
  2016-03-28  5:26   ` James Johnston
@ 2016-03-28  8:51     ` Duncan
  0 siblings, 0 replies; 10+ messages in thread
From: Duncan @ 2016-03-28  8:51 UTC
  To: linux-btrfs

James Johnston posted on Mon, 28 Mar 2016 05:26:56 +0000 as excerpted:

> For me, I use swap on an SSD, which is orders of magnitude faster than
> HDD.
> Swap can still be useful on an SSD and can really close the gap between
> RAM speeds and swap speeds.  (The original poster would do well to use
> one.)

FWIW, swap on ssd is an entirely different beast, and /can/ still make 
quite a lot of sense.  I'll absolutely agree with you there.

However, this wasn't about swap on ssd, it was about swap on hdd, and the 
post was already long enough, without adding in the quite different 
discussion of swap on ssd.  My posts already tend to be longer than most, 
and I have to pick /somewhere/ to draw the line.  This was simply the 
"somewhere" that I drew it in this case.

So thanks for raising the issue and filling in the missing pieces.  I 
think we agree, in general, about swap on ssd.

That said, here for example is a bit of why I ask the question, ssd or no 
ssd (spacing on free shrunk a bit for posting):

$ uptime
 00:07:50 up 11:28,  2 users,  load average: 0.04, 0.43, 1.01

$ free -m
       total   used     free   shared buff/cache available
Mem:   16073    725    12632     1231       2715     13961
Swap:      0      0        0

16 GiB RAM, 12.5 GiB entirely free even with cache and buffers taking a 
bit under 3 GiB of RAM.  That's in kde/plasma5, after nearly 12 hours 
uptime.  (Tho I am running gentoo with more stuff turned off at build-
time than will be the case on most general-purpose binary distros, where 
lots of libraries that most people won't use are linked in for the sake 
of the few that will use them.  Significantly, I also have baloo turned 
off at build time, which still unfortunately requires some trivial 
patching on gentoo/kde, and stay /well/ clear of anything kdepim/akonadi 
related as both too bloated and far too unstable to handle my mail, 
etc.)  Triple full-hd 1080 monitors.

OK, start up firefox playing a full-screen 1080p video and let it run a 
bit... about half a GiB initial difference, 1.2 GiB used, only about 12 
GiB free, then up another 200 MiB used in a few minutes.

Now this is gentoo and it's my build machine.  It's only a six-core so I 
don't go hog-wild with the parallel builds, but portage is pointed at a 
tmpfs for its temporary build environment and my normal build settings 
allow 12 builds at a time, upto a load-average of 6, and each of those 
builds is set for upto 10 parallel jobs to a load average of 8 (thus 
encouraging parallelism at the individual package level first, and only 
where that doesn't utilize all cores does it load more packages to build 
in parallel).  I sometimes see upto 9 packages building at once and 
sometimes a 1-minute load of 10 or higher when build processes that are 
already set up push it above the configured load-average of 8.
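
(In make.conf terms that's roughly:

EMERGE_DEFAULT_OPTS="--jobs=12 --load-average=6"
MAKEOPTS="-j10 -l8"

give or take the exact numbers.)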

I don't run any VMs (but for an old DOS game in DOSBox, which qualifies 
as a VM, but from an age when machines with memory in the double-digit 
MiB were high-dollar, so it hardly counts), I keep / mounted ro except 
when I'm updating it, and the partition with all the build trees, 
sources, ccache and binpkgs is kept unmounted as well when I'm not using 
it.  Further, my media partition is unmounted by default as well.

But even during a build, I seldom use up all memory and start actually 
dumping cache, which is when stuff would start getting pushed to swap 
as well if I had it, so I don't bother.

Back on my old machine I had 8 GiB RAM and swap.  With swappiness[1] set 
to 100, I'd occasionally see a few hundred MB in swap, but seldom over a 
gig.  That was with a four-device spinning-rust mdraid1 setup, with swap 
similarly set to 4-way-striped via equal swap priority, but that machine 
was an old original dual-socket 3-digit opteron maxed out with dual-core 
Opteron 290s, so 2x2=4-core, and I had it accordingly a bit more limited 
in terms of parallel build jobs.

These days the main system is on dual ssds partitioned up in parallel, 
running multiple separate btrfs-raid1s on the pairs of partitions, one on 
each of the ssds.  Only media and backups are still on spinning rust, but 
given those numbers and the fact that suspend-to-ram works well on this 
machine and I never even tried suspend-to-disk, I just didn't see the 
point of setting up swap.

When I upgraded to the new machine, given the 6-core instead of 4-core, I 
decided I wanted more memory as well.  But altho 16 GiB is the next power-
of-two above the 8 GiB I was running (actually only 6 GiB by the time I 
upgraded, as a stick had died that I hadn't replaced) and I got 16 GiB 
for that reason, 12 GiB would have actually been plenty, and would have 
served my "generally don't dump cache" rule pretty well.  

That became even more the case when I upgraded to SSDs shortly 
thereafter, as recaching on ssd isn't the big deal it was with spinning 
rust, where I really did hate to reboot and lose all that cache that I'd 
have to read off of slow spinning rust again.

Which I guess goes to support the argument I had thought about making in 
the original post and then left out, intending to follow up on it if the 
OP posted memory size and usage, etc, details.  If he's running 16 GiB as 
I am, and is seeing GiB worth of memory sit entirely unused, even for 
cache, most of the time as I am, then really, there's little need for 
swap.  That may actually be the case even with 8 GiB RAM, if his files 
working set is small enough.

OTOH, if he's only running a 4 GiB RAM or less system or his top-line 
free value (before cache and buffers are subtracted) is often under say 
half a GiB to a GiB, then chances are he's dumping cache at times and can 
use either more RAM or swap (possibly with a tweaked swappiness), as on 
spinning rust dumped cache can really hurt performance, and thus really 
/hurts/ to see.

OK, here's my free -m now (minus the all-zeros swap line), after running 
an hour or so of youtube 1080p videos in firefox (45):

      total   used   free  shared buff/cache available
Mem:  16073   1555  11769    1245       2748     13116

Tho it does get up there to around 12 GiB used (incl buffer/cache), only 
about 4 GiB free if I do a big update, sometimes even a bit above that, 
but it so seldom actually starts dumping cache that, as I said, 12 
GiB would have actually been a better use of my money than the 16 I got, 
tho it wouldn't have been that nice round power-of-two.

OTOH, that /will/ let me upgrade to say an 8-core CPU and similarly 
upgrade parallel build-job settings, if I decide to, without bottlenecking 
on memory, always a nice thing. =:^)

---
[1] Swappiness:  The /proc/sys/vm/swappiness knob, configurable via 
sysctl on most distros.  Set to 100 it says always swap out instead of 
dumping cache; set to 0 it says always dump cache to keep apps from 
swapping; the default is normally 60, IIRC.
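
For example:

sysctl vm.swappiness            # read the current value
sysctl -w vm.swappiness=100     # prefer swapping apps over dumping cache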

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Raid 0 setup doubt.
  2016-03-28  0:56 ` Duncan
  2016-03-28  5:26   ` James Johnston
@ 2016-03-28 12:35   ` Austin S. Hemmelgarn
  2016-03-29  1:46     ` Duncan
  2016-03-29  2:10     ` Chris Murphy
  2016-03-28 20:30   ` Jose Otero
  2 siblings, 2 replies; 10+ messages in thread
From: Austin S. Hemmelgarn @ 2016-03-28 12:35 UTC
  To: linux-btrfs

On 2016-03-27 20:56, Duncan wrote:
>
> But there's another option you didn't mention, that may be useful,
> depending on your exact need and usage of that swap:
>
> Split your swap space in half, say (roughly, you can make one slightly
> larger than the other to allow for the EFI on one device) 8 GiB on each
> of the hdds.  Then, in your fstab or whatever you use to list the swap
> options, put the option pri=100 (or whatever number you find
> appropriate) on /both/ swap partitions.
>
> With an equal priority on both swaps and with both active, the kernel
> will effectively raid0 your swap as well (until one runs out, of course),
> which, given that on spinning rust the device speed is the definite
> performance bottleneck for swap, should roughly double your swap
> performance. =:^)  Given that swap on spinning rust is slower than real
> RAM by several orders of magnitude, it'll still be far slower than real
> RAM, but twice as fast as it would be is better than otherwise, so...
I'm not 100% certain that it will double swap bandwidth unless you're 
constantly swapping, and even then it would only on average double the 
write bandwidth.  The kernel swaps pages in groups (8 pages by default, 
which is 32k, I usually up this to 16 pages on my systems because when 
I'm hitting swap, it usually means I'm hitting it hard), and I'm pretty 
certain that each group of pages only goes to one swap device.  This 
means that by default, with two devices, you would get 32k written at a 
time to alternating devices.  However, there is no guarantee that when 
you swap things in they will be from alternating devices, so you could 
be reading multiple MB of data from one device without even touching the 
other one.  Thus, for writes, this works like a raid0 setup with a large 
stripe size, but for reads it ends up somewhere between raid0 and single 
disk performance, depending on how lucky you are and what type of 
workload you are dealing with.
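
(The knob for the group size is vm.page-cluster, a log2 value, so for
example:

sysctl -w vm.page-cluster=4   # 2^4 = 16 pages (64k); the default is 3
                              # (8 pages, 32k)

is the 16-page setting I mean.)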
>
> Tho how much RAM /do/ you have, and are you sure you really need swap at
> all?  Many systems today have enough RAM that they don't really need swap
> (at least as swap, see below), unless they're going to be used for
> something extremely memory intensive, where the much lower speed of swap
> isn't a problem.
>
> If you have 8 GiB of RAM or more, this may well be your situation.  With
> 4 GiB, you probably have more than enough RAM for normal operation, but
> it may still be useful to have at least some swap, so Linux can keep more
> recently used files cached while swapping out some seldom used
> application RAM, but by 8 GiB you likely have enough RAM for reasonable
> cache AND all your apps and won't actually use swap much at all.
>
> Tho if you frequently edit GiB+ video files and/or work with many virtual
> machines, 8 GiB RAM will likely be actually used, and 16 GiB may be the
> point at which you don't use swap much at all.  And of course if you are
> using LOTS of VMs or doing heavy 4K video editing, 16 GiB or more may
> well still be in heavy use, but with that kind of memory-intensive usage,
> 32 GiB of RAM or more would likely be a good investment.
>
> Anyway, for systems with enough memory to not need swap in /normal/
> circumstances, in the event that something's actually leaking memory
> badly enough that swap is needed, there's a very good chance that you'll
> never outrun the leak with swap anyway, as if it's really leaking gigs of
> memory, it'll just eat up whatever gigs of swap you throw at it as well
> and /still/ run out of memory.
>
> Meanwhile, swap to spinning rust really is /slow/.  You're talking 16 GiB
> of swap, and spinning rust speeds of 50 MiB/sec for swap isn't unusual.
> That's ~20 seconds worth of swap-thrashing waiting per GiB, ~320 seconds
> or over five minutes worth of swap thrashing to use the full 16 GiB.  OK,
> so you take that priority= idea and raid0 over two devices, it'll still
> be ~2:40 worth of waiting, to fully use that swap.  Is 16 GiB of swap
> /really/ both needed and worth that sort of wait if you do actually use
> it?
>
> Tho again, if you're running a half dozen VMs and only actually use a
> couple of them once or twice a day, having enough swap to let them swap
> out the rest of the day, so the memory they took can be used for more
> frequently accessed applications and cached files, can be useful.  But
> that's a somewhat limited use-case.
>
>
> So swap, for its original use as slow memory at least, really isn't that
> much used any longer, tho it can still be quite useful in specific use-
> cases.
I would tend to disagree here.  Using the default settings under Linux, 
it isn't used much, but there are many people (myself included) who 
turn off memory over-commit, and thus need reasonable amounts of swap 
space.  Many programs will allocate huge chunks of memory that they 
never need or even touch, either 'just in case', or because they want to 
manage their own memory usage.

To account for this, Linux has a knob for the virtual memory subsystem 
that controls how it handles allocations beyond the system's effective 
memory limit (userspace-accessible RAM + swap space).  For specifics, 
you can check Documentation/sysctl/vm.txt and 
Documentation/vm/overcommit-accounting in the kernel source tree.  The 
general idea is that by default, the kernel tries to estimate how much 
can be allocated safely.  This usually works well until you start to get 
close to an OOM condition, but it slows down memory allocations 
significantly.  There are two other options for this though: just 
pretend there's enough memory until there isn't (this is the fastest, 
and probably should be the default if you don't have swap space), and 
never over-commit.  Telling the kernel to never over-commit is faster 
than the default, and provides more deterministic behavior (you can 
prove exactly how much needs to be allocated to hit OOM), but requires 
swap space (because it calculates the limit as swap space + some 
percentage of user-space-accessible memory).

I know a lot of people who run server systems who configure the system 
to never over-commit memory, then just run with lots of swap space.  As 
an example, my home server system has 16G of RAM with 64G of swap space 
configured (16G on SSDs at a higher priority, 48G on traditional HDDs).  
This lets me set it to never over-commit memory, while still allowing me 
to work with big (astronomical scale, so >10k pixels on a side) images, 
do complex audio/video editing, and work with big VCS repositories 
without any issues.
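
(Concretely, that configuration is something like:

# /etc/sysctl.d/overcommit.conf
vm.overcommit_memory = 2   # 2 = never over-commit; 0 = heuristic
                           # default; 1 = always pretend there's enough
vm.overcommit_ratio = 50   # commit limit = swap + 50% of RAM

with the ratio tuned to your particular RAM/swap mix.)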
>
> But there's another more modern use-case that can be useful for many.
> Linux's suspend-to-disk, aka hibernate (as opposed to suspend-to-RAM, aka
> sleep or standby), functionality.  Suspend-to-disk uses swap space to
> store the suspend image.  And that's commonly enough used that swap still
> has a modern usage after all, just not the one it was originally designed
> for.
>
> The caveat with suspend-to-disk, however, is that normally, the entire
> suspend image must be placed on a single swap device.[1]  If you intend
> to use your swap to store a hibernate image, then, and if you have 16 GiB
> or more of RAM and want to save as much of it as possible in that
> hibernate image, then you'll want to keep that 16 GiB swap on a single
> device in ordered to let you use the full size as a hibernate image.
The other caveat that nobody seems to mention outside of specific cases 
is that using suspend-to-disk exposes you to direct attack by anyone 
with the ability to either physically access the system, or boot an 
alternative OS on it.  This is however not a Linux-specific issue 
(although Windows and OS X do a much better job of validating the 
hibernation image than Linux does before resuming from it, so it's not 
as easy to trick them into loading arbitrary data).



* Re: Raid 0 setup doubt.
  2016-03-28  0:56 ` Duncan
  2016-03-28  5:26   ` James Johnston
  2016-03-28 12:35   ` Austin S. Hemmelgarn
@ 2016-03-28 20:30   ` Jose Otero
  2016-03-29  4:14     ` Duncan
  2 siblings, 1 reply; 10+ messages in thread
From: Jose Otero @ 2016-03-28 20:30 UTC
  To: linux-btrfs

Thanks a lot Duncan, Chris Murphy, James Johnston, and Austin.

Thanks for the clear answer and the extra information to chew on.

Duncan, you are right. I have 8 GB of RAM, and the most memory-intensive
thing I'll be doing is a VM for Windows. Now I dual boot, but rarely
go into Win, only to play a game occasionally. So, I think I'll be
better off with Linux flat out and Win in a VM.

I'm probably overshooting too much with the 16 GiB swap, so I may
end up with 8 GiB of swap. And I'll read up on the swap-splitting thing
with the priority trick, because it sounds nice. Thanks for the tip.

Take care everybody,

JM.




* Re: Raid 0 setup doubt.
  2016-03-28 12:35   ` Austin S. Hemmelgarn
@ 2016-03-29  1:46     ` Duncan
  2016-03-29  2:10     ` Chris Murphy
  1 sibling, 0 replies; 10+ messages in thread
From: Duncan @ 2016-03-29  1:46 UTC
  To: linux-btrfs

Austin S. Hemmelgarn posted on Mon, 28 Mar 2016 08:35:59 -0400 as
excerpted:

> The other caveat that nobody seems to mention outside of specific cases
> is that using suspend-to-disk exposes you to direct attack by anyone
> with the ability to either physically access the system, or boot an
> alternative OS on it.  This is however not a Linux-specific issue
> (although Windows and OS X do a much better job of validating the
> hibernation image than Linux does before resuming from it, so it's not
> as easy to trick them into loading arbitrary data).

I believe that within the kernel community, it's generally accepted that 
physical access is to be considered effectively full root access,  
because there are simply too many routes to get root if you have physical 
access to practically control them all.  I've certainly read that.

Which is what encryption is all about, including encrypted / (via initr*) 
if you're paranoid enough, as that's considered the only effective way to 
thwart physical-access == root-access.

And even that has some pretty big assumptions if physical access is 
available, including that no hardware keyloggers or the like are planted, 
as that would let an attacker simply log the password or other access key 
used.  One would have to for instance use a wired keyboard that they kept 
on their person (or inspect the keyboard, including taking it apart to 
check for loggers), and at minimum visually inspect its connection to the 
computer, including having a look inside the case, to be sure, before 
entering their password.  Or store the access key on a thumbdrive kept on 
the person, etc, and still inspect the computer left behind for listening/
logging devices...

In practice it's generally simpler to just control physical access 
entirely, to whatever degree (onsite video security systems with tamper-
evident timestamping... kept in a vault, missile silo, etc) matches the 
extant paranoia level.

Tho hosting the swap, and therefore hibernation data, on an encrypted 
device that's set up by the initr* is certainly possible, if it's 
considered worth the trouble.  Obviously that's going to require jumping 
thru many of the same hoops that (as mentioned upthread) splitting the 
hibernate image between devices will require, as it generally uses the 
same underlying initr*-based mechanisms.  I'd certainly imagine the 
Snowdens of the world will be doing that sort of thing, among the 
multitude of security options they must take.
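
(The usual shape of that, in crypttab terms, is something like:

cryptswap  UUID=...  none  luks

plus resume=/dev/mapper/cryptswap on the kernel command line, with the
initr* prompting for the passphrase before the image is read back.  Note
the common random-key encrypted-swap setup won't do here, as the key
changes every boot.)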

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Raid 0 setup doubt.
  2016-03-28 12:35   ` Austin S. Hemmelgarn
  2016-03-29  1:46     ` Duncan
@ 2016-03-29  2:10     ` Chris Murphy
  1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2016-03-29  2:10 UTC
  To: Austin S. Hemmelgarn; +Cc: Btrfs BTRFS

On Mon, Mar 28, 2016 at 6:35 AM, Austin S. Hemmelgarn
<ahferroin7@gmail.com> wrote:

> The other caveat that nobody seems to mention outside of specific cases is
> that using suspend-to-disk exposes you to direct attack by anyone with the
> ability to either physically access the system, or boot an alternative OS on
> it.  This is however not a Linux-specific issue (although Windows and OS X
> do a much better job of validating the hibernation image than Linux does
> before resuming from it, so it's not as easy to trick them into loading
> arbitrary data).

OS X uses dynamically created swapfiles, and the hibernation file is a
separate file that's pre-allocated. Both are on the root file system,
so if you encrypt, then those files are also encrypted. Hibernate
involves a hint in NVRAM that hibernate resume is necessary, and the
firmware uses a hibernate recovery mechanism in the bootloader which
also has a way to unlock encrypted volumes (which are kinda like an
encrypted logical volume, as Apple now defaults to using a logical
volume manager of their own creation).
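
You can see the relevant settings with pmset, e.g.:

pmset -g | grep -i hibernate

which shows the hibernatemode and the hibernatefile path.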



-- 
Chris Murphy


* Re: Raid 0 setup doubt.
  2016-03-28 20:30   ` Jose Otero
@ 2016-03-29  4:14     ` Duncan
  0 siblings, 0 replies; 10+ messages in thread
From: Duncan @ 2016-03-29  4:14 UTC
  To: linux-btrfs

Jose Otero posted on Mon, 28 Mar 2016 22:30:56 +0200 as excerpted:

> Duncan, you are right. I have 8 GB of RAM, and the most memory intensive
> thing I'll be doing is a VM for Windows. Now I double boot, but rarely
> go into Win, only to play some game occasionally. So, I think I'll be
> better off with Linux flat out and Win in a VM.

LOL.  That sounds /very/ much like me, tho obviously with different 
details given the timeframe, about 20 years ago... and thru to this day, 
as the following story explains.

This was in the first few years after I got my first computer of my own, 
back in the early 90s, so well before I switched to Linux when the 
alternative was upgrading to MS eXPrivacy, starting the weekend eXPrivacy 
was actually released, in 2001.  So it was on MS.

When MS Windows95 came out, I upgraded to it, and normally stayed booted 
into it for all my usual tasks.  But I had one very favorite (to this 
day, actually) game, Master of Orion, original DOS edition, that wouldn't 
work in the original W95 -- I had to reboot to DOS to play it.

I remember what a relief it was to upgrade to 95-OSR2 and finally get the 
ability to run it from the DOS within W95, as that allowed me to 
play it without the hassle of rebooting all the time.

That's the first time I realized just what a hassle rebooting to do 
something specific was, as despite that being -- to this day -- my 
favorite computer game ever, I really didn't reboot very often to play 
it, when I had to actually reboot /to/ play it.

Of course W95OSR2 was upgraded to W98 -- at the time I was actually 
running the public betas for IE4/OE4 and was really looking forward to 
the advances that came with the desktop integration that had, to that 
point, been an IE4 addon, and I remember standing in line at midnight 
to get my copy of W98 as soon as possible.
OE newsgroups, programming in VB and the MS Windows API, and on my way 
toward MSMVP.

But that was the height of my MS involvement.  By the time W98 came out I 
was already hearing about this Linux thing, and by sometime in 1999 I was 
convinced of the soundness of the Free/Libre and Open Source approach, 
and shortly thereafter read Eric S. Raymond's "The Cathedral and the 
Bazaar" and related essays (in dead-tree book form), the experience of 
which, for me, was one of repeated YES!!, EXACTLY!!, I didn't know others 
thought that way!!, because I had come to an immature form of many of the 
same conclusions on my own, due to my own VB programming experience, 
which sealed the deal.

But while I was convinced of the moral and logical correctness of the 
FLOSS way, I was loath to simply dump all the technical and developer API 
knowledge and experience I had on MS Windows by that point, and truth be 
told, may never have actually jumped off MS if MS themselves hadn't 
pushed me.

While I had played with Linux a bit, I quickly found that it simply 
wasn't practical on that level, for much the same reason booting to DOS 
to play Master of Orion wasn't practical, despite it being my favorite 
game.  Rebooting was simply too much of a hassle, and I simply didn't do 
it often enough in practice to get much of anywhere.

But it's at that point that MS first introduced its own malware, first 
with Office eXPrivacy, then publishing the fact that MS Windows eXPrivacy 
was going to be just that as well, that they were shipping activation 
malware that would, upon upgrade of too much of the machine, demand 
reactivation.

To me, this abuse of the user via activation malware was both a bridge 
too far, and 100% proof positive that MS considered itself a de-facto 
monopoly, regardless of what it might say in court.   After all, back in 
the day, MS Office got where it got in part because unlike the 
competition, it didn't require clumsy hardware dongles to allow one to 
use the software.  Their policy when trying to actually compete was that 
they'd rather their software be pirated, if it made them the de-facto 
standard, which it ultimately did.  That MS was now actually shipping 
deactivation malware as part of its product was thus 100% proof positive 
that they no longer considered anything else a serious competitive 
threat, and thus, that they could get away with inconveniencing their 
users via deactivation malware, since in their viewpoint they were now a 
monopoly and the users no longer had any other practical alternative 
/but/ MS.

And that was all the push it took.  By the time I started what I knew by 
then was THE switch, because MS really wasn't giving me any choice, I had 
actually been verifying my hardware upgrades against Linux compatibility 
for two full years, so I knew my hardware would handle Linux.  But I just 
couldn't spend the time booted to Linux to learn how to actually /use/ 
it, until MS gave me that push, leaving me no other viable option.  But 
once I knew I was switching, I was dead serious about it, and asked on my 
then ISP's newsgroup (luckily, that ISP had some SERIOUSLY knowledgeable 
Linux and BSD folks as both ISP employees and users; one guy was one of 
about a dozen with commit access to one of the BSDs, IDR which one, but 
this is the level of expertise I had available to me) for book 
recommendations to teach me Linux.  I bought the two books that were 
recommended by multiple people on that newsgroup, O'Reilly's Running 
Linux, which I read all 600 pages or so very nearly cover to cover, and 
Linux in a Nutshell, a reference book that I kept at my side for years, 
even buying a newer edition some years later.

Because I knew if I didn't learn Linux well enough and fast enough for it 
to become my default boot, and remove enough reasons for MS Windows to be 
that default boot, it simply wasn't going to happen for me.  And with the 
MS push off their ship highly motivating me, come hell or high water, I 
was determined that *THIS* time, it *WAS* going to happen for me.

Which it did.  I installed Linux for the last time with MS as a dual 
boot, the same week MS Windows eXPrivacy was released.  It took some 
intensive effort, but by three months later, I was configuring and 
building my own kernels, configuring LILO to work the way I wanted 
because I wasn't going to be satisfied until it either did so or I had a 
hard reason why it couldn't do so, and configuring XF86Config to handle 
dual graphics cards and triple monitors, because I'd been using the same 
dual card, triple monitor setup on MS Windows98, and again, I was either 
going to get it working on Linux as well, or be able to explain precisely 
why it couldn't (yet) be done and what efforts were under way to fix the 
problem.  That took about six weeks.  The second six weeks of that three 
months was spent figuring out which desktop I wanted to use (kde), 
figuring out which of the many alternative apps fit my needs best, and 
configuring both the desktop and those apps to behave as I wanted/needed 
them to behave, again, with the alternatives being that they'd either do 
it, or I'd have a very good reason why neither they, nor any of the 
alternatives available, could do it.

At first, particularly before I figured out how to get the three monitors 
working, I was still spending most of my actual productive time on the MS 
side, using MSIE and MSOE to search for and at times ask my ISP's 
newsgroup for answers to why the Linux side wasn't working the way I 
wanted and needed it to work.  Once I got the kernel configured and 
rebuilding (necessary for the graphics cards I was running), and XFree86 
running with the two graphics cards and three monitors, things started to 
move faster, tho I was still rebooting to MS Windows98 to use IE/OE to 
get answers, until I had a browser and news client config I could feel 
comfortable with on the Linux side.

By the three month mark, I had everything configured more or less to my 
liking on the Linux side, tho of course I continued to tweak and still 
continue to tweak, and had in fact reversed the previous situation, to 
the point that when I'd boot to the MS side to take care of something 
there that I couldn't do on the Linux side yet, I'd sit there afterward, 
wondering just what else there was to do on MS, just as I had previously 
done on Linux, when I had been simply playing with it, before MS gave me 
that push off their ship with the eXPrivacy malware that was simply 
beyond what I was willing to take.

Within about six more months, so nine total, I had gone from booting to 
MS every couple weeks to take care of some loose end, to every month, to 
every couple months.  By nine months in, I had migrated the files I 
needed over, and had uninstalled and deleted most of the once rather large 
set of power tools addons and toys I had used on the MS side, shrinking 
the MS partition accordingly.

I basically didn't boot to the MS side at all after 9 months or so, tho I 
kept it around, unused, another 9 months or so, until about the year and 
a half mark, when I decided I could better use that space for something 
else.  Still, I kept the MS Windows 98 install files, which I had copied 
off the CD back when I was still on MS to make reinstalls faster, around 
for awhile, and finally deleted them at about the two year mark, keeping 
the install CD itself, as my final link to MS, around for another year or 
so after that.

But that favorite game, Master of Orion, original DOS edition?  I still 
play it to this day, in DOSBox these days.  In fact, it's the only (non-
firmware level) slaveryware (in the context of the quote in my sig) I 
still have and run, tho of course DOSBox, the VM I run it in, is 
freedomware.  While I no longer accept any EULAs or agree to waive my 
rights regarding other slaveryware and thus couldn't legally run pretty 
much any other slaveryware (including flash and nVidia graphics drivers, 
for instance) even if I wanted to, I long ago accepted the EULA on Master 
of Orion, and pretty much simply didn't ever unaccept it.  Yes, that 
/does/ make it, and the four software freedoms and thus to my mind human 
rights disrespecting authors that created it, my master, and me its 
slave; it's a slavery I've not yet freed myself from... in that one 
instance, anyway.

Tho were I to have to reboot to run it, I expect I'd find myself freed of 
that slavery rather fast, because as I said, I Just. Don't. Find. 
Repeated. Rebooting. Practical. For any reason.

Ironically, tho DOSBox may have initially helped free me from the slavery 
of the MS platform as it gave me a way to continue to play that game on 
Linux, these days it's helping keep me a slave to that last bit of 
favorite game slaveryware, even after I've long since slipped the bonds 
of all the other slaveryware.

So, umm... Yeah, an MS platform VM (tho DOSBox is freedomware and I don't 
actually run MS DOS or whatever in it, it emulates that) on which to run 
a game or two... thus avoiding having to dual-boot to an MS platform to 
do so... sounds rather familiar to me!

Fortunately, other than the dosbox executable and *.conf file, and the 
associated libraries, etc, plus the associated game files, this 
particular VM and the platform emulated within, are DOS-era small, and 
thus 100% virtualized in memory, no big VM image file to worry about and 
get fragmented due to modification-writes on a COW-based btrfs. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



