Sunday, August 30, 2015
Recently, I've been working on a project to help my community by improving its technical infrastructure. My main priority with this project is to give the source to the people: to liberate them from the chains of Microsoft, and of closed-source, proprietary software in general. The time is right, considering that Android, the most popular OS in the world right now, runs on the Linux kernel. I would say that Android has inadvertently nudged the open source subculture onto the center stage of tech.
But as with any other kind of freedom, you have to go a little out of your way to take advantage of it, and free software is no exception. In other words, there will be a bit of a learning curve, but it will be well worth it. This is why I chose Ubuntu as the ambassador: it's probably the most user-friendly distro for people who are used to running Windows. It has a large, very friendly support community on the internet, found in forums such as askubuntu.com. It stays near the bleeding edge, receives very frequent updates, and supports more hardware right out of the box than any other Linux distribution I've used.
I met with several different clients to give them a rundown of the operating system and show them what is included by default. For the most part, my clients were pleasantly surprised with Ubuntu, and it seemed to exceed their expectations regarding ease of use, cross-platform compatibility, and its security model, which means never having to buy antivirus software again.
But installing Ubuntu on 100 computers is definitely a repetitive, time-consuming task. Automating the post-installation configuration is very easy with bash, but automating the actual installation itself is not quite so easy. However, there are a few measures one can take to make it go as quickly and smoothly as possible.
First of all, you're going to want a few copies of the installation media. While DVDs are cheap, you may want to use USB drives instead. That way you can take advantage of a persistence volume, which will save things like your Wi-Fi password and username. These are things you would otherwise have to enter many, many times, so it's worth it. Installations also go much faster from USB, simply because USB transfers data much faster than an optical drive. However, you should make sure you have a couple of mini/netboot install ISOs on hand (these will fit on CD-Rs). Sometimes a computer will have a graphics card that does not agree with the standard installation media and will not work; in those cases you have to use the net installer, which works 99% of the time.
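On Ubuntu's live media, persistence is handled by a casper-rw volume on the USB stick. Here's a minimal sketch of setting one up by hand, assuming the stick's FAT partition is already mounted at /mnt/usb (adjust paths for your system):

dd if=/dev/zero of=/mnt/usb/casper-rw bs=1M count=2048   # 2 GB persistence file
mkfs.ext4 -F -L casper-rw /mnt/usb/casper-rw             # format the file as ext4

Then boot the stick with the 'persistent' kernel parameter, and your settings will survive reboots.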
The other thing you will need is a password generator script and some sticker labels. It would be pretty irresponsible (and lazy) to configure 100 computers with the same password, wouldn't you say? Here's a good one:
#!/bin/bash
# Random password generator. Originally by jbsnake, modified by crouse to
# use upper-case letters as well. Does error checking and fails if the
# input isn't a numerical integer or if no input is given at all.
# Modified by Darkerego to save output with a description so you can
# remember what the password is for.
# For obvious security reasons, this is better managed as root.
#if [ "$(id -u)" != "0" ]; then
#    echo "Run it as root." 1>&2
#    exit 1
#fi
if [[ -z "$1" || "$1" = *[^0-9]* ]]; then
    echo " "
    echo " ######### COMMAND FAILED ########## "
    echo " USAGE: $0 passwordlength"
    echo " EXAMPLE: $0 10"
    echo " Creates a random password 10 chars long."
    echo " ######### COMMAND FAILED ########## "
    echo " "
    exit 1
fi
if [[ "$1" -lt 6 ]]; then
    echo "Your password is less than 6 characters in length."
    echo "This is a security risk. Suggested length is 6 characters or longer!"
fi
RIGHTNOW=$(date +"%R %x")
pwdlen=$1
char=(0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z
      A B C D E F G H I J K L M N O P Q R S T U V W X Y Z)
max=${#char[*]}                       # 62 possible characters
str=""
for i in $(seq 1 "$pwdlen"); do
    rand=$(( RANDOM % max ))          # random index into the character array
    str="${str}${char[$rand]}"
done
echo "$str"                           # (optionally: | tee -a passwords)
echo 'Enter a password description'
read -r pwinfo
echo "$RIGHTNOW : $pwinfo : $str" >> /root/passwords
echo 'Warning: output saved to /root/passwords'
exit 0
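Save it as, say, genpass.sh (the name is just an example), make it executable, and pass the desired length:

chmod +x genpass.sh
sudo ./genpass.sh 12
# prints something like f3Jk9Qp2LmX7, prompts for a description,
# and appends a timestamped record to /root/passwords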
Finally, you should choose a desktop environment. I started with GNOME, but halfway through decided to do the other half with Unity. I am really not a Unity fan, but it may be easier for new users to understand, and I don't want to throw people off and give them a bad first impression of Linux. First impressions are everything, even in the technological world.
It also helps to have a post-installation script. The one I have been using I originally wrote to configure my VPS servers and later adapted for desktop systems. There are a lot of security-related kernel tweaks and whatnot that really should be turned on by default; enabling those and installing or removing certain software is what the script is for. If I ever have to do this again, I am going to take the time to figure out how to completely automate the installation. All I should have to do is stick a disc in a drive, walk away, and come back to a fully installed, fully configured system.
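A minimal sketch of the kind of post-install script I mean (the package names and sysctl values here are only examples, not my full script):

#!/bin/bash
set -e
apt-get update && apt-get -y upgrade
apt-get -y install ufw unattended-upgrades   # firewall + automatic security updates
ufw --force enable
# A few common security-related kernel tweaks:
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
kernel.randomize_va_space = 2
EOF
sysctl -p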
Monday, August 17, 2015
Building a Badass Tower, Part III : Choosing a Case
Maybe this should be titled Part 2.5, but I don't know how to do a .5 in Roman numerals (IIi?). I'd like to highlight a few things I've learned during this project, to counter some bad advice I've found lurking around the internet. There's all sorts of silly arguing on those forums, and then you have people writing articles who don't seem to have a clue what they're talking about. Sometimes I am one of those people, admittedly.
First of all, when planning to build a computer, do not cheap out on the case! I've seen a couple of articles where the author says "Buy the cheapest case possible." That's terrible advice, and I'll tell you why. Investing in a good case will allow you to upgrade, add parts, keep your hardware running cool, keep static electricity out, and even completely rebuild the computer one day, reusing that case. I'd even go so far as to recommend that you buy a cool-looking case, because that will make you love your computer that much more and motivate you to keep it functioning well. A little love goes a long way. However, one should find a good balance between aesthetics and functionality when choosing a case.
For example, my tower was originally custom built by a friend and came with a black steel gaming case. This case was not quite as cool looking as a Cooler Master case I had lying around. The Cooler Master had trippy, quiet 80 mm blue LED fans and a transparent blue acrylic side that let you see all the components in action. It looked badass, but unfortunately it was not even close to a master of coolness, due to a lack of fans and a poorly designed air intake. I ended up putting everything back into the original case, because that case has two 120 mm front intake fans, a 120 mm side exhaust fan, a 136 mm top exhaust fan, and a 120 mm exhaust fan on the back. In contrast, the Cooler Master only has two 80 mm front intake fans (behind punched-steel intake holes that don't draw easily and clog easily, too) and another 80 mm LED fan on the back. It'd be fine for a system that doesn't produce a lot of heat, but I like to overclock my machines, so I wanted as much air circulation as possible. This is especially important when you have an Nvidia GPU, because those things freaking cook themselves to death, idling between 60 and 70 degrees Celsius and hitting upwards of 90 doing things as mundane as watching an HD movie with two monitors plugged in! Like, wtf Nvidia? Can't you maybe use higher quality heatsinks or something?
Keeping things cool isn't the only characteristic to look for in a case. One should also look for a heavy-duty, big metal box. The metal enclosure acts as a barrier to everything around it, keeping static electricity away from your fragile hardware. There's probably some conspiracy among the computer manufacturers to use plastic housings in order to wear the machine out faster so you buy a new one. Looking back, I've had a few laptops with mostly plastic exteriors, and those are the ones that always have something die after 3 or 4 years for no reason, most notably the LCD display and the hard disk. Also, it's great to have a hole to put a padlock through so that you can lock your computer. Remember, anyone with physical access to your machine can reset the BIOS and bypass many of the security restrictions you may have in place, but that's a lot harder to get away with when the case is locked. Yes, this is something you should be worried about in this day and age; it also prevents theft and tampering. To summarize, you want a big metal box, not a half-metal, half-plastic, static-attracting piece of junk.
Finally, it's a good idea to buy a case with lots of room to work with on the inside. It's no good having things cramped: it impedes airflow, makes maintenance more of a pain, and puts limitations on the types of upgrades you can perform. For instance, you may want to put an aftermarket heatsink on your CPU, which may require more room, because good heatsinks can be big. By the way, if you do buy a nice air-cooling heatsink, I'd recommend looking for one that lets you blow the air in any direction you want. Sometimes you might want to shoot it out the top fan, sometimes the side fan, and if you don't have a top or side fan, you will want to point it so it all blows out the back fan.
In my case (eh ha), I will probably end up painting some trippy shit on my black steel case to liven it up a little. I'll also probably get some LED fans, because... why not? As long as they're still good quality fans, it doesn't matter. That way I'd have colored light dancing through the fan grills, which would look pretty cool. Which brings me to my last point: you can always modify your case if you'd like. For example, if I cut two big holes in the top of my aluminum Cooler Master, then I could add two big, powerful exhaust fans, which would dramatically help the airflow situation. Unfortunately my camera broke recently, and I need to find a USB to CompactFlash card adapter (if they even make those anymore; it's an old Canon), so I don't have any pictures for you right now.
Buy a good case, just in case. One day you may find yourself in a place, where you wish you'd bought a better case...
Monday, August 10, 2015
Building a Badass Tower Part II : On Second Thought...
It turns out that the open source Nvidia drivers are still not stable enough to rely on, at least for certain GPUs. Although my card runs much cooler with the Nouveau driver, I started getting those random freezes again. With disappointment, I had to revert to the proprietary Nvidia drivers. However, this time I used the xorg-edgers repository, which seems at least a little more stable.
On another disappointing note, I've been reminded of why I hate AMD processors again... Lately, I've been having a strange problem with my USB 3 ports. Sometimes, for no logical reason I can find, the USB 3 controller just stops working. When that happens, sometimes it crashes the entire system with it, and other times not. The only fix I know is to shut down the system, turn it off, unplug everything, press the power button long enough to discharge the capacitors, and then reboot. After all of that, my USB 3 ports magically work again.
I am sure there is a way to fix these problems, but honestly I am tired of trying. This is why I prefer Intel chipsets: they just work, no matter what. That is the definition of stability in my book. So whenever I burn out or outgrow this motherboard, I will most certainly purchase an Intel chipset next time.
In the meantime, if anyone else running an AM3+ board has experienced these issues and could shed some light here and tell me how to fix this USB 3 port problem, that'd be swell. Other than that, I am really enjoying this system. The computing power is beyond awesome; there are just a couple of annoying glitches I need to work out.
Wednesday, August 5, 2015
Building a Badass Tower
Discs & Partitioning
During my quest to build an impenetrable, powerful fortress, I have discovered that one of the most effective ways to thwart malicious software from taking hold of a Linux machine is simply to place certain directories on separate partitions. This gives you the ability to mount them with options that are not available when everything is lumped together in one partition. The example that comes to mind first is mounting /tmp with 'noexec', 'nosuid', and 'nodev'. Because /tmp always has world read, write, and execute permissions, it's the perfect place to drop, compile, or otherwise prepare malicious software. This is important not only on servers, but also on development machines, where compilers are often present.
If I were ever to create my own distribution, the installer would, by default, place /tmp on a separate partition with at least the 'nodev' and 'nosuid' options. Although I recommend mounting /tmp with 'noexec' as well, it can complicate things a little when you need to install or update software. However, it's easy to quickly remount a partition with different options:
mount -o remount,exec /tmp
Other directories that ought to be mounted separately include /var/log, /usr/local, and /home. This partitioning scheme is reasonable for a multi-purpose desktop system that may also run some type of server:
/ ext4 errors=remount-ro 0 1
/boot ext4 defaults,noatime,nodev,nosuid,noexec 0 2
/home ext4 defaults,nodev,nosuid 0 2
/opt ext4 defaults 0 2
/srv ext4 noatime,noexec,nodev 0 2
/tmp ext4 noatime,nodev,nosuid,noexec 0 2
/usr ext4 defaults,nodev 0 2
/usr/local ext4 defaults,nodev 0 2
/var ext4 defaults,nodev,nosuid 0 2
/var/log ext4 defaults,nodev,nosuid,noexec 0 2
/var/log/audit ext4 defaults,nodev,nosuid,noexec 0 2
/dev/mapper/cryptswap1 none swap sw 0 0
This may be overkill, but I really like the flexibility this scheme gives me. I like to at least put /var/log, /srv, /opt, and /usr/local on their own partitions. You can also mount /usr and /usr/local read-only, but you'd have to remount them read-write whenever you perform updates or install software. Doing so does harden a system quite a bit, though, and may be worth the extra hassle.
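For example, wrapping an update looks like this:

mount -o remount,rw /usr
apt-get update && apt-get upgrade
mount -o remount,ro /usr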
One thing to keep in mind is that it's a pain to fix something from recovery mode with a setup like this. It may be a good idea to keep a copy of busybox somewhere else on the filesystem in case something stupid happens. Usually you can just mount all of your partitions and then do what you need to do from recovery, though, so I don't think it's much to worry about. However, /etc should not have its own partition, because if anything breaks and you need to change settings from recovery but can't mount /etc, then good luck to you.
If you have multiple hard disks, you can spread the filesystem across them for a noticeable performance increase. Even with SSDs, the disk is often the performance bottleneck, and many factors affect SATA speeds, from the cable you use to the type of disk you have. I currently have 4 hard drives in my tower, and it's awesome. Naturally, the main system runs off an SSD, because it's fastest. Then I have a 2 TB external disk hooked up to one of my USB 3 ports, which I use for storage. I run VMs off another separate, dedicated disk, which nearly eliminates the performance impact of the VMs on the rest of the system.
CPUs, GPUs, and BIOS: Fear Not the Microcode
I am an Intel type of guy, and I've always hated AMD chipsets. My experience with AMD and Linux (which may have been more NVIDIA's fault, as the two often come together) has always been one of severe frustration. When you install Debian, for example, which does not ship any proprietary drivers out of the box, everything runs beautifully on an Intel system without any tweaking needed. That's thanks to Intel working with the open source community over the last ten or twenty years.
It seemed that whenever I did an install on an AMD system, the experience was poor, and many third-party drivers (cough, NVIDIA, I am looking at you right now!) were needed. For whatever reason, AMD systems often come with NVIDIA graphics cards. Intel, on the other hand, often has graphics support integrated into the processor, so everything just works as-is. The tradeoff is that you lose some performance, because some of your RAM is reserved as video memory.
On desktop systems, you're probably going to want a video card, mainly because a) you have the room for it, and b) a GPU can seriously improve your user experience. Honestly, I don't have much experience with ATI or Radeon GPUs, but it seems people have less of a hard time getting those working than they do with NVIDIA GPUs on Linux systems. I do, however, have enough experience messing around with NVIDIA cards to offer some advice. First of all, try (hard) to avoid using the proprietary NVIDIA drivers! They seriously suck, like a lot. Sometimes it seems as though you don't have a choice in the matter, but before you resign yourself to such awfulness, try these things:
First, update your BIOS. Seriously, do it; especially on AMD systems, you will notice a big increase in stability. My theory is that Intel rigorously tests their shit before marketing it, which is why it's so stable from day one till death. AMD, either because they're always in a race with Intel, or maybe because they don't work as closely with the open source community as Intel does, seems to release processors before they're ready and then fix the problems with microcode updates. That's my theory, anyway. Because Linux distros don't always install the microcode for you, you're often left with damaged goods after installation. So, install the amd64-microcode package, reboot, and then reevaluate your opinion of AMD's stability. I guarantee you will be pleasantly surprised.
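On Debian or Ubuntu, that looks something like this (package names may differ on other distros):

sudo apt-get update
sudo apt-get install amd64-microcode   # intel-microcode is the Intel equivalent
sudo reboot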
After installing the microcode and updating your BIOS, if you have an NVIDIA chip that is running the proprietary NVIDIA drivers because it was acting up (my system would randomly freeze when using the open source Nouveau driver), try getting rid of all the NVIDIA crap and see how your system performs. Just do:
apt-get purge 'nvidia*'
And then reboot. Hopefully at this point your system will be running stable and you'll be pleasantly surprised. If so, now install the Mesa packages, which complement the Nouveau driver quite nicely. After I did all of this, I had no more problems with system instability or video cards! My GPU now runs at about 58 degrees Celsius, as opposed to over 70 on the proprietary driver!
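On Ubuntu, the relevant packages look something like this (exact names vary a bit by release):

sudo apt-get install xserver-xorg-video-nouveau libgl1-mesa-dri mesa-utils
glxinfo | grep -i 'opengl renderer'   # from mesa-utils; should report Nouveau/Gallium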
Now you can think about overclocking your CPU. Mine is factory clocked at 3.6 GHz (quad core), and I was able to overclock it to 4.7 GHz without any stability issues, and without an aftermarket heatsink! That is one hell of an improvement, in my opinion. However, once my thermal paste arrives in the mail, I am going to install a badass Zalman CPU fan and see if I can hit 6 GHz. Currently, my CPU never gets hotter than 45 degrees Celsius, and idles around 10 or 15 (according to lm-sensors, anyway). That is probably because my Cooler Master case is very well designed and has excellent airflow.
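If you want to watch temperatures while testing an overclock, lm-sensors makes that easy:

sudo apt-get install lm-sensors
sudo sensors-detect   # answer the prompts; the defaults are usually fine
sensors               # prints core temperatures, fan speeds, and voltages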
More to come on building a badass tower next time. Peace.
Sunday, August 2, 2015
Troubleshooting Transmission Speeds
Who doesn't use BitTorrent these days? I asked someone (who shall remain anonymous) who just graduated high school last June, "So, does everyone in your generation torrent everything these days?" To which he replied, "Yeah, literally, everyone." That's not too surprising. What was somewhat surprising is that these kids don't seem to understand (or maybe just don't care about) the legal issues around digital piracy, but I'll save that for another post. I should add that BitTorrent has more legitimate uses than illegitimate ones, and the protocol gets a lot of unfair criticism. Without admitting anything, I will say that if I were to use BitTorrent for piracy, I would at least use a proxy of some sort.
VPNs are useful for a variety of purposes, such as creating a secure tunnel from point A to point B. In this case, point A is the oppressive geopolitical region you live in, and point B is a more enlightened country on the other side of the world. VPNs are also a convenient way of obtaining a secure connection over an insecure access point; everyone should be using a VPN over public Wi-Fi, for example. If you're going to bother to set this up, use OpenVPN.
Typically, BitTorrent clients have no problem finding their way around firewalls, and they are generally a very effective means of quickly transferring data from one place to another. However, I've found that when there is notable latency between your client and your public-facing interface (like when connected to a VPN across the Atlantic), the client will struggle to keep the peer-to-peer connections open. What seems to happen is that a connection is initiated, established, and then dropped seconds later. Then the client (in this case, Transmission) tries to download from the next peer in the torrent swarm. The process repeats itself, and the torrent takes forever to download.
So I began troubleshooting to see what I could do about the latency causing dropped connections, and I think I've figured it out. First, you ought to forward a port from your VPN server to your box. Create a client-connect script, or manually edit your firewall script and add something like this:
## Port Forwarding From Server Public IP to a VPN Client ##
IPT="/sbin/iptables"   # path to the iptables binary
fwd_EN="false"         # Change to 'true' to enable
ext_if="eth0"          # public interface
int_if="tun0"          # vpn interface
int_ip="10.9.0.6"      # vpn client to forward to
int_PRT="51413"        # port to forward

if [[ $fwd_EN == "true" ]]; then
    echo "Warning: Port Forwarding enabled"
    $IPT -t nat -A PREROUTING -p tcp -i $ext_if --dport $int_PRT -j DNAT --to-dest $int_ip:$int_PRT
    $IPT -A FORWARD -p tcp -i $ext_if -o $int_if -d $int_ip --dport $int_PRT -m state --state NEW -j ACCEPT
    $IPT -t nat -A PREROUTING -p udp -i $ext_if --dport $int_PRT -j DNAT --to-dest $int_ip:$int_PRT
    $IPT -A FORWARD -p udp -i $ext_if -o $int_if -d $int_ip --dport $int_PRT -m state --state NEW -j ACCEPT
    $IPT -A FORWARD -i $ext_if -o $int_if -d $int_ip -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPT -A FORWARD -i $int_if -s $int_ip -o $ext_if -m state --state ESTABLISHED,RELATED -j ACCEPT
else
    echo "Info: Port Forwarding Disabled"
fi
Next, you need to open the port (51413 in this example) on your local box, for both protocols. Something like this:
iptables -A INPUT -i tun0 -p tcp --dport 51413 -j ACCEPT
iptables -A INPUT -i tun0 -p udp --dport 51413 -j ACCEPT
That alone will greatly improve speeds, and is usually enough. To make sure it worked, test the connection with netcat: see if you can send yourself a message from another host on the internet to your VPN client by using the VPN server's public IP. If the connection establishes and you can read the message, then port forwarding is working.
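A quick sketch of that test, using 203.0.113.10 as a stand-in for your server's public IP (some netcat builds want nc -l 51413 instead of -l -p):

# On the VPN client, listen on the forwarded port:
nc -l -p 51413
# From any other host on the internet:
echo "hello through the tunnel" | nc 203.0.113.10 51413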
The last thing I had to do to get my torrent speeds back up to par was tweak some of the Transmission client's settings:
- Disable µTP. For whatever reason, it was making my download speeds crawl.
- Disable PEX and DHT. Trackers don't like clients that use these features because they can mess up ratio tracking, or so I've heard, anyway.
- Uncheck the 'Use port forwarding from my router' box. Since we have a port held open by the forward rules on the server, this is not necessary. Of course, if you are using port triggering, then keep this box checked. (If you run the headless daemon, see the settings sketch below.)
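If you use transmission-daemon instead of the desktop client, the equivalent switches live in settings.json (edit it while the daemon is stopped, or it will overwrite your changes). A partial sketch, not a complete config:

{
  "utp-enabled": false,
  "pex-enabled": false,
  "dht-enabled": false,
  "port-forwarding-enabled": false,
  "peer-port": 51413
}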
After I did all of that, I was getting around 3 MB/s download speeds through the VPN server again. Not bad for an encrypted tunnel 2,000 miles long!