Wednesday, August 5, 2015

Building a Badass Tower

Disks & Partitioning

During my quest to build an impenetrable, powerful fortress, I have discovered that one of the most effective ways to prevent malicious software from taking hold of a Linux machine is simply to place certain directories on separate partitions. This gives you the ability to mount them with options that aren't available when everything is lumped together inside one partition. The example that comes to mind first is mounting /tmp with 'noexec', 'nosuid', and 'nodev'. Because /tmp always has world read, write, and execute permissions, it's the perfect place to drop, compile, or otherwise prepare malicious software. This is not only important on servers, but also on development machines, where compilers are often present.
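A quick way to see which options a mount point currently has is findmnt (just a sanity check, nothing fancy):

findmnt -no OPTIONS /tmp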

If I were ever to create my own distribution, the installer would place /tmp on a separate partition by default, with at least the 'nodev' and 'nosuid' options. Although I recommend mounting /tmp with 'noexec' as well, it can complicate things a little when you need to install or update software. However, it's pretty easy to quickly remount a partition with different options:

mount -o remount,exec /tmp
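Just remember to put the restriction back when you're done, otherwise the whole exercise is pointless:

mount -o remount,noexec /tmp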

Other directories that ought to be mounted separately include /var/log, /usr/local, and /home. This partitioning scheme is reasonable for a multi-purpose desktop system that may also run some type of server:

/                       ext4    errors=remount-ro                       0   1
/boot                   ext4    defaults,noatime,nodev,nosuid,noexec    0   2
/home                   ext4    defaults,nodev,nosuid                   0   2
/opt                    ext4    defaults                                0   2
/srv                    ext4    noatime,noexec,nodev                    0   2
/tmp                    ext4    noatime,nodev,nosuid,noexec             0   2
/usr                    ext4    defaults,nodev                          0   2
/usr/local              ext4    defaults,nodev                          0   2
/var                    ext4    defaults,nodev,nosuid                   0   2
/var/log                ext4    defaults,nodev,nosuid,noexec            0   2
/var/log/audit          ext4    defaults,nodev,nosuid,noexec            0   2
/dev/mapper/cryptswap1  none    swap    sw                              0   0


This may be overkill, but I really like the flexibility this scheme gives me. I like to at least put /var/log, /srv, /opt, and /usr/local on their own partitions. You can also mount /usr and /usr/local read-only, but you'd have to remount them read-write whenever you perform updates or install software. However, doing so does harden a system quite a bit, and thus may be worth the extra hassle.
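If you do go the read-only route, a typical update cycle looks something like this (a rough sketch, assuming /usr is on its own partition like in the table above):

mount -o remount,rw /usr
apt-get update && apt-get upgrade
mount -o remount,ro /usr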

One thing to keep in mind is that it's a pain to fix something from recovery mode with a setup like this. It may be a good idea to keep a copy of busybox somewhere else on the filesystem in case something stupid happens. Usually you can just mount all of your partitions and then do what you need to do from recovery, though, so I don't think it's much to worry about. However, /etc should not have its own partition, because if anything breaks and you need to change settings from recovery but can't mount /etc, then good luck to you.
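On Debian-based systems a statically linked busybox is one package away, and copying the binary somewhere handy is trivial (the destination path here is just an example):

apt-get install busybox-static
cp /bin/busybox /root/busybox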

If you have multiple hard disks, you can spread the file system across them for a noticeable performance increase. Even with SSDs, the disk is often the performance bottleneck. Many factors seem to affect SATA speeds, from which cable you use to what type of disk you have. I currently have 4 hard drives in my tower, and it's awesome. Naturally, the main system runs off an SSD, because it's fastest. Then I have a 2 TB external disk hooked up to one of my USB 3.0 ports, which I use for storage. I run VMs off another separate, dedicated disk, which nearly eliminates the VMs' performance impact on the rest of the system.
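If you're curious what each disk can actually do, hdparm's read-timing test is a quick and dirty benchmark (swap in your own device names):

hdparm -t /dev/sda
hdparm -t /dev/sdb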

CPUs, GPUs, and BIOS: Fear Not the Microcode

I am an Intel type of guy, and I've always hated AMD chipsets. My experience with AMD (which may have been more NVIDIA's fault, as the two often come together) and Linux has always been one of severe frustration. When you install Debian, for example, which does not come with any proprietary drivers out of the box, everything runs beautifully on an Intel system without any tweaking needed. That's thanks to Intel working with the open source community over the last ten or twenty years.

It seemed that whenever I did an install on an AMD system, the experience was poor, and many third-party drivers (cough, NVIDIA, I am looking at you right now!) were needed. For whatever reason, AMD systems often come with NVIDIA graphics cards. Intel often has graphics support integrated into their processors, so everything just works out of the box. The tradeoff is that you lose some performance because some of your RAM is reserved for video memory.

On desktop systems, you're probably going to want a video card, mainly because a) you have the room for it, and b) a GPU can seriously improve your user experience. Honestly, I don't have much experience with ATI or Radeon GPUs, but it seems people have less of a hard time getting those working than they do with NVIDIA GPUs on Linux systems. I do, however, have enough experience messing around with NVIDIA cards to offer some advice. First of all, try (hard) to avoid using the proprietary NVIDIA drivers! They seriously suck, like a lot. Sometimes it seems as though you don't have a choice in the matter, but before you resign yourself to such awfulness, try these things:

First, update your BIOS. Seriously, do it, especially on AMD systems; you will notice a big increase in stability. My theory is that Intel rigorously tests their shit before marketing it, which is why it's so stable from day one till death. AMD, either because they're always in a race with Intel, or maybe because they don't work as closely with the open source community as Intel does, seems to release processors before they're ready and then fix the problems with microcode updates. That's my theory, anyway. Because Linux distros don't always install the microcode for you, you're often left with damaged goods after installation. So, install your distro's AMD microcode package, reboot, and then reevaluate your opinion of AMD's stability. I guarantee you will be pleasantly surprised.
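On Debian the package is called amd64-microcode and lives in the non-free repository (Intel users want intel-microcode instead); assuming non-free is enabled in your sources, it's a one-liner:

apt-get install amd64-microcode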

After installing the microcode and updating your BIOS, if you have an NVIDIA chip that is running the NVIDIA drivers because it was acting up (my system would just randomly freeze when using the open source Nouveau driver), try to get rid of all the NVIDIA crap and see how your system performs. Just do ...

apt-get purge 'nvidia*'

And then reboot. Hopefully at this point your system will be running stably and you'll be pleasantly surprised. If so, now install the Mesa packages, which complement the Nouveau driver quite nicely. After I did all of this, I had no more problems with system instability or video cards! My GPU is running at about 58 degrees Celsius now, as opposed to over 70 Celsius on the proprietary driver!
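Exact package names vary by release, but on a Debian-based system the pieces you're after look roughly like this (mesa-utils is optional; it just gives you glxinfo so you can check that direct rendering works):

apt-get install xserver-xorg-video-nouveau libgl1-mesa-dri mesa-utils
glxinfo | grep 'direct rendering'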

Now you can think about overclocking your CPU. Mine is factory clocked at 3.6 GHz (quad core), and I was able to overclock it to 4.7 GHz without any stability issues or an after-market heat sink! That is one hell of an improvement, in my opinion. However, once my thermal paste arrives in the mail, I am going to install a badass Zalman CPU fan and see if I can hit 6 GHz. Currently, my CPU never gets hotter than 45 Celsius, and idles around 10 or 15 (according to lm-sensors, anyway). That is probably because my Cooler Master case is very well designed and has excellent air flow.
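If you're going to push the clocks, keep an eye on those readings while the machine is under load. lm-sensors plus watch is all you need (assuming the lm-sensors package is installed and sensors-detect has been run):

watch -n 2 sensors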

More to come on building a badass tower next time. Peace. 
