Published 2005-07-10 11:11:01

I'm not sure Paul has done himself many favours with his reactions this week. Security breaches happen all the time, either from relatively friendly testing or, more frequently, from actual break-ins. He should be breathing a sigh of relief that the worst damage is having to run a SQL statement like DELETE FROM xxx WHERE body LIKE '%test_xss%'.

Last year I blogged about one of my clients getting rootkitted, which is probably the worst of all Unix-based disasters. The lessons from that have, to some degree, been applied and acted on:
  • Keep up to date with security announcements (and schedule updates reasonably regularly).
  • Run chkrootkit from a cron job, as Debian's package now does (a sample crontab entry follows this list).
  • Keep a good set of backups
  • Try keeping up a backup server (which replicates the main server's operations).
  • Practice taking the backup server live!
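
For reference, a cron entry along these lines does the nightly check. The schedule and path below are assumptions, and on Debian the chkrootkit package already sets up the equivalent, but it shows the idea:

    # /etc/cron.d/chkrootkit -- nightly scan; cron mails any output to root
    # (schedule and path are assumptions; Debian's package does the equivalent)
    MAILTO=root
    30 3 * * * root /usr/sbin/chkrootkit -q
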
This year has seen yet more attacks; the most common have been SSH port scanning and username/password guessing. Looking at /var/log/auth.log a few months ago, I started noticing a huge number of failures on the main web server, and not just from one IP address. As a defensive measure I did one simple thing:
  • Change the port that ssh runs on!
While this works well on my server, one of my main clients has also implemented IP-based restrictions for SSH at the firewall level, so unless you are connecting from a registered IP address, you do not even get a prompt.
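
For the record, the port change is a one-line edit to sshd_config, and the firewall restriction amounts to a couple of rules like the ones below. The port number and address range here are placeholders, not the real ones:

    # /etc/ssh/sshd_config -- move sshd off the default port
    # (2022 is a placeholder; pick something unobvious)
    Port 2022

    # Firewall: accept SSH only from a known address range, drop the rest
    # (192.0.2.0/24 is a placeholder range)
    iptables -A INPUT -p tcp --dport 2022 -s 192.0.2.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2022 -j DROP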

I had made this modification to a couple of boxes that I use directly; however, it was not feasible for a number of others that I rarely touch.

Last month, I saw the result of what the SSH attackers had started doing. From the logs, a Romanian IP had managed to log in a few times via SSH under the ftp account, an account that was probably briefly enabled when I set up FTP (I later ended up using .htaccess-style passwords for it) and then forgotten about. Obviously the password I had given that account had been less than secure, and they had managed to get a shell.

Thanks to the wonderful logging ability of Linux/Unix etc., I had a fascinating history of what they had attempted (via the .bash_history) and copies of quite a few of the files they used. (Funnily enough, they had to use the ftp home directory, which was a highly visible location to be putting .tgz files on a Windows network!)

I guess partly by luck, they had not attempted to rootkit the box, and the system was otherwise quite up to date (tested via chkrootkit and by looking at the logs). They had, however, tried installing an IRC server, and had used the box as a scanner to find other vulnerable SSH hosts. (The IRC server didn't work, due to the user's limited privileges combined with our firewall configuration.) I didn't examine the logs closely enough to see whether they found any other vulnerable hosts.

So what lessons came out of this one? Another round of checks on the servers I look after:
  • Start using the AllowUsers directive in sshd_config (see the sketch after this list).
  • Where feasible, lock down SSH access to known IP addresses at the firewall level.
  • Regularly check /etc/passwd to see who has accounts... and shells.
  • Regularly check /var/log/auth.log with grep to see who has been accessing (or trying to access) the server.
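
Roughly, those checks look like this (the usernames below are placeholders):

    # /etc/ssh/sshd_config -- only these accounts may log in over SSH
    AllowUsers alan backup

    # Which accounts in /etc/passwd still have a real login shell?
    grep -v -E '(/bin/false|/usr/sbin/nologin)$' /etc/passwd

    # Who has been trying (and failing, or succeeding) to get in via SSH?
    grep sshd /var/log/auth.log | grep -E 'Failed|Accepted' | tail -n 50
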
In the same vein, while the wiki at pear was being tested, I was on IRC with the people doing the testing, babbling on about how half the problem with pear and the wiki was that it used PHP as templates (while not totally true, it is a big enough factor to be a concern).

The real problem, which I had spotted months ago when fixing a report of another XSS attack on the pear site, was the disconnect between input and output. Variables were being set in one place and used in another file, with no basic indication of whether they had been escaped or not. This is something I had been ranting about, with only a few takers, when describing why template engines are valuable: the basic premise that all data should be treated as untrusted, and hence escaped, unless you expressly know it's OK (rather than the other way round!).
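
As a rough illustration of that premise (this is a generic sketch, not the actual pear or Flexy code, and the helper and variable names are made up):

    <?php
    // Escape-by-default: every value is treated as untrusted and is
    // HTML-escaped at the point of output.
    function e($value)
    {
        return htmlspecialchars($value, ENT_QUOTES);
    }

    // The dangerous pattern: $comment was set in some other file, and
    // nobody remembers whether it was ever escaped.
    echo '<p>' . $comment . '</p>';

    // The premise above: escape on output unless you expressly know
    // the value is safe HTML.
    echo '<p>' . e($comment) . '</p>';
    ?>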

To say I'm a guru on this, however, would be twisting words. During the tests that were being run on pear, the testers also fired them off at my site and at one of my clients' sites (at my request). The hole they found on my site was due to an error message not escaping input correctly (something I would not normally do, but I had hacked up my own site a bit too hastily). However, it appears that most of the other attacks failed thanks to Flexy's escaping, and maybe in part to the JavaScript defence on the comment system.

My client fared a little worse, as we had been using a very old version of the Pager library (which has since been updated), along with some rather silly mistakes where URL writing was being done in PHP rather than in the template engine.

But this is definitely not an invitation to try and hack my systems; if you do anyway, please don't rootkit them, as it's a royal pain in the ass to fix!!!




Comments

pecl/filter
You may be surprised, but I fully agree with you on the input filtering point. Imagine if you had to explicitly tell your firewall which ports to block. That's not how security works. You open up holes in your firewall for only the things you explicitly want to allow through. The same should be true of web applications. The whole concept of letting everything through and then having to manually go and filter things that you think might contain something nasty is flawed. It also makes auditing the code impossible. If instead you filtered everything by default and you had to explicitly unfilter the particular fields you knew needed to contain weird characters, then you would be way ahead, and it would be much easier to audit the code since you just had to look for the explicit filter holes and follow that data through the code.

This is what pecl/filter is all about and I really need to find some more time to finish that code. It's super-controversial because people associate it with magic_quotes_gpc and figure it will be a nightmare to write portable applications against something like this. And they have a point, but that doesn't make them right when looking at this problem at its most basic architectural level.
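
Roughly, the idea looks like this (a made-up sketch, not the pecl/filter API, and the field name is hypothetical):

    <?php
    // Filter by default: every request field is stripped of tags unless
    // it is explicitly listed as one that may contain markup.
    $allowMarkup = array('body_html');   // the explicit "holes" in the filter

    $clean = array();
    foreach ($_GET as $key => $value) {
        $clean[$key] = in_array($key, $allowMarkup)
            ? $value
            : strip_tags($value);
    }
    // Application code reads only from $clean, so auditing means checking
    // what is in $allowMarkup and following that data through the code.
    ?>
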
#0 - Rasmus on 2005-07-10 12:49:54
Not About Pear
Hi, Alan -- as far as "He should be breathing a sigh of relief that the worst damage is having to run a SQL statement like DELETE FROM xxx WHERE body LIKE '%test_xss%'" goes ... you can bet I am. :-)

As far as how to react, I think I did the proper thing; that is, I fixed it and rolled a new release (in under 24 hours from the demonstrated exploit). Everything else is secondary.

You confuse my ranting about the Solar and Cerebral Cortex attacks with the *.php.net testing. I am not talking about the latter, but about the former.

There are no "friendly" tests without approval in advance. And even if the tester was in fact "friendly", he did not (to my knowledge) notify the target after the fact of the vulnerability or the solution.

I argue that if you're going to do "testing" or "research" you need to tell people in advance, and then tell them afterwards what you found. Is that such a bad thing? If you don't tell, then you are indistinguishable from a bad guy who is beginning a series of attempts to open up your system.

I don't mind security testing; in fact, I love it. But for goodness' sake don't keep it to yourself; let the owner know. It's not as if that's so hard.
#1 - Paul M. Jones on 2005-07-10 21:06:06
Log, Speck, Eye
You close with this: "But this is definitely not an invitation to try and hack my systems; if you do anyway, please don't rootkit them, as it's a royal pain in the ass to fix!!!"

And if someone *does* try to test your systems for vulnerabilities, you sure would appreciate advance notice, wouldn't you? And you would want them to tell you after-the-fact what they found, right?

That's all I'm asking for from "friendlies."
#2 - Paul M. Jones on 2005-07-10 21:09:55
Input filtering
Input filtering is an admin tool from my POV, not a developer tool. If I am an admin and I want to secure my sites (or tighten the cracks), this is the kind of tool I want.

As a developer I might simply require this kind of input filtering to secure my application, but overall I should probably not rely on this stuff anyway.

So an admin could simply disallow any tags in GET parameters. Or I might disallow tags inside any parameter that doesn't contain some string in its name flagging it as allowed to contain tags (now this can seriously kill your ability to install a third-party app). This will likely break a few apps, so it pays for developers to take these possibilities into account, but as an admin, in some situations I would rather break parts of an application than accept security issues.
#3 - Lukas Smith on 2005-07-11 09:41:56
