Viewing posts from : August 2011

Any time a password is obtained during a security review, whether it is simply a default setting or cracked after eight days of crunching (or anything in between), there are some important steps you, as the tester, need to take.

First, notify the client and recommend they force-change the password. Don't just have them set it to force a change at next login; have them change it outright and then, if necessary, notify the user. The account may already be compromised, and merely forcing a change at next login could permit a bad guy to keep accessing the account in the meantime.

Second, if you must disclose the passwords to anyone, do so using partial masks. Show only the first two characters or something similar, to disclose as little as possible while still proving you have the password. The password in question might be used for other things, like personal banking, and disclosing it in full could permit third parties to access accounts other than the one being examined. Judgment is still important, since even two characters can be enough to identify a full password.

Above all, never release the passwords and accounts in any transferable format, such as a PDF or as part of the report. Make sure your contracts require the client to change the passwords within a set time frame, and be able to show that your own practices do not put the account credentials at risk, such as by sharing them with co-workers on a file-sharing site. Should anything go wrong, you want to be able to demonstrate that your people are not at fault. Never retain credentials, and don't add any of them to your dictionary unless you've filtered them closely for any that might indicate origin. Something like "ih8COMPANYX" can indicate origin and may be used by another user at some future date, and if your dictionary ever gets questioned, you might be liable. As always, check with a lawyer for real legal advice on contracts and other legal subjects; Digital Trust is not a legal authority.
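To make the masking and dictionary-filtering advice concrete, here is a minimal sketch in Python. The function names and the two-character reveal are illustrative choices, not a prescribed tool; adjust the mask length and the client-term list to your own engagement.

```python
def mask_password(password: str, visible: int = 2) -> str:
    """Reveal only the first `visible` characters; mask the rest with '*'.

    Enough to prove possession of the password without disclosing it in full.
    """
    if len(password) <= visible:
        # Too short to safely reveal anything; mask it entirely.
        return "*" * len(password)
    return password[:visible] + "*" * (len(password) - visible)


def filter_dictionary(words: list[str], client_terms: list[str]) -> list[str]:
    """Drop candidate dictionary words containing any client-identifying term.

    Prevents strings like 'ih8COMPANYX' from revealing where a dictionary
    entry originated. Matching is case-insensitive.
    """
    lowered = [term.lower() for term in client_terms]
    return [w for w in words if not any(t in w.lower() for t in lowered)]


print(mask_password("hunter2"))  # -> hu*****
print(filter_dictionary(["ih8COMPANYX", "correcthorse"], ["CompanyX"]))
# -> ['correcthorse']
```

A simple substring check like this is only a starting point; abbreviations, project code names, and employee names can identify a client just as readily, so review the filtered list by hand before anything goes into a shared dictionary.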

This post was largely inspired by a conversation from the SANS pentesting forum.

Recently we received a notice from one of our ISPs saying that one of our machines might be infected, and would we please clean it up or we'd be shut down. Well, we explained the situation to them, and it's good to know someone's watching our traffic (is it 1984?), but we've been doing this for nearly six years from this location. And they only just now noticed? We've run huge attacks against large customers (it's our business, after all) for six years, and they only just now noticed we "might" have an infected computer? Sort of makes you wonder. What about all the other domains we traverse, like Sprint and AT&T? Are they going to start sending us hate mail? What happens if they start dumping the packets? We'll have to find another ISP, I suppose, but eventually, if things went that way, the core would be filtering as well, and nothing would work. We'd practically be out of business. Ironically, the bad guys wouldn't be, because the bad guys would just invent new ways to circumvent the security. Which would let us stay in business as well; we'd just need a new toolset. So if nothing's really going to change, can we establish right now that filtering anything is a really bad idea, except during attacks? All it's going to do is raise the price tags on security: you have to pay for the filters, and you have to pay for the new security to counter the new threats. While standing still doesn't prevent new threats from becoming a reality, it does allow us lots of ways of tracking people. They may have a new attack, but they probed on high ports first, which might let us locate them, or at least shut them off from here. But don't restrict traffic in the middle. It's like putting a stop sign in the middle of the Atlantic: all you do is make shipping more expensive and annoy some little fish. So keep it open. Please.