to the big man himself for passing the HCISSP exam on the first try! Liticode considers the HCISSP a necessary standard for working on HIPAA and hospital security and litigation consulting. No cert is too much of a reach for our valued clients.
Recently we received a notice from our ISP that one of our machines might be infected, and would we please clean it up or we'd be shut down. Well, we explained the situation to them, and it's good to know someone's watching our traffic (is it 1984?), but we've been doing this for nearly six years from this location. And they only just now noticed? We've run huge attacks against large customers (it's our business, after all) for six years, and they only just now noticed we "might" have an infected computer? Sort of makes you wonder. What about all the other domains we traverse, like Sprint and AT&T? Are they going to start sending us hate mail? What happens if they start dumping our packets?

We'd have to find another ISP, I suppose, but eventually, if things went that way, the core would be filtering as well, and nothing would work. We'd practically be out of business. Ironically, the bad guys wouldn't be, because the bad guys would just invent new ways to circumvent the security. Which would let us stay in business as well; we'd just need a new toolset.

So if nothing's really going to change, can we establish right now that filtering anything in the middle of the network is a really bad idea, except during active attacks? All it's going to do is raise the price tag on security: you have to pay for the filters, and then you have to pay for the new security to counter the new threats. Standing still doesn't prevent new threats from becoming reality, but it does leave us lots of ways of tracking people. Attackers may have a new attack, but they probed on high ports first, which might let us locate them, or at least shut them off from here. Just don't restrict traffic in the middle. It's like putting a stop sign in the middle of the Atlantic: all you do is make shipping more expensive and annoy some little fish. So keep it open. Please.
A wireless vendor (who shall remain nameless) is selling its 3G cards to corporate networks as a secure means of remote communication. There's only one problem: anyone who picks up one of these cards, pre-configured by the vendor for accessing the corporate network, can just plug it in and reach out and touch that corporate network. Not to mention being able to browse the Internet, use DNS, and other things; more on that later.
We do need to download the software first, and it does ask for a phone number to do so. Fortunately, any phone number will work, and even if they fixed it so only their own numbers worked (impossible, given number portability), you would just need a phone number on their network. "Hey, Fred, do you use vendor for your cell service?" Boom!
Once the software is downloaded and installed, it asks for the device phone number. Which it auto-populates for you by pulling the number off the device. Brilliant. We don’t even have to query the card to obtain the phone number.
Linking to the net is accomplished with the press of a button. Here the vendor has limited which protocols and destinations are acceptable, so when we fire up a browser, it fails everywhere you look. Or does it? What actually happens is that it hangs. It doesn't time out, and a look at netstat reveals that we are getting DNS information and we are initiating connections. Checking DNS directly clearly indicates not everything is locked down tight, if at all. The vendor has put us in a tunnel of some type, so some things work and some don't. HTTPS also fails. A quick trip to Google turned up an HTTP site on non-standard port 81, which worked fine, so we know we can pass HTTP, just not on port 80. The only thing we know for sure is that ports 80 and 443 are not getting us where we want to go, but it seems everything else is.
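The hang-versus-refusal distinction above is exactly what gives the filtering away: a blocked-but-reachable port refuses the connection immediately, while a silently dropped packet just times out. Here is a minimal sketch of that classification in Python (a hypothetical helper for illustration, not a vendor tool or anything from the engagement):

```python
import socket

def probe(host, port, timeout=3.0):
    """Classify a TCP port: 'open' if the connect succeeds, 'closed'
    if the peer actively refuses, 'filtered' if packets are silently
    dropped and the attempt times out."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered"
    except ConnectionRefusedError:
        return "closed"
    finally:
        s.close()
```

A run of `probe()` against ports 80, 81, and 443 on a known-good host would have shown the pattern described above: 80 and 443 "filtered" (hanging), 81 "open".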
A quick peek with Nessus (if they'd been using CounterACT, that scan would have failed) reveals Microsoft destinations. From there, it's a matter of using Hydra to get onto an M$ resource, at which point the network is an open book.
All from a single lost vendor 3G card.
Several layered security mechanisms would have prevented this, not the least of which is some form of authentication at the vendor border. From there, we could have been stopped by a certificate check, by Forescout detection and prevention, and, above all, by refusing to pass any protocol at all without authentication to a valid VPN. Boom!
To be perfectly frank, if the vendor or the corporation is alerted quickly to the loss of a card, this is a very low-probability attack. But if a corporation is targeted, it's much more likely to succeed. Your risk may vary. If alerting is a non-priority, as it is in many places, this is a serious problem. Once inside, hostile forces will plant the seeds that give them continuous access without the 3G card.
Play it safe and make sure your 3G cards are secure. Use layers to compensate for any single security failure. And most importantly, validate your assumptions when told something is "secure".
This blog and its contents copyright 2009 Digital Trust, LLC. Republication of this post is permitted provided it is strictly on internal corporate messaging systems. Any republication or reuse is forbidden if the Digital Trust name is removed.
Recently, an article in Evidence Magazine discussed how hot and cold pixels in a camera can be used to fingerprint images from that camera, and thereby convict a suspect based on pictures and camera equipment, one or more of which is found in the suspect's possession.
This is a clear indicator of why the defense needs quality representation and expert witnesses; state-appointed attorneys may miss crucial arguments without proper expert support.
The article makes it seem quite easy to match camera to image, but it omits a couple of possibilities that drastically complicate the process and likelihood of conviction. Here are two additional complications that can ruin a case, and there are likely more; each situation is unique.
First, were the criminal images captured compressed or uncompressed? While uncompressed images are used to validate the hot- and cold-pixel fingerprint of the camera, they can only be matched to illegal images that are likewise uncompressed, or mathematically validated if compressed. Video cameras are likely to use compression to store images, and photographs are frequently taken using image compression or scene magnification, any of which must be accounted for when eliminating possible errors in verifying the images against the camera.
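To make the matching step concrete: a hot-pixel fingerprint reduces to a set of fixed sensor coordinates that read bright even in a dark frame. The sketch below is purely illustrative (the threshold, sensor size, and coordinates are invented, and this is not the article's procedure); note that lossy compression perturbs exactly the pixel values this comparison relies on, which is why the compressed-versus-uncompressed question matters.

```python
import numpy as np

def hot_pixels(dark_frame, threshold=200):
    """Coordinates that read bright in a dark frame: a crude
    hot-pixel fingerprint of the sensor."""
    rows, cols = np.where(dark_frame > threshold)
    return set(zip(rows.tolist(), cols.tolist()))

def match_fraction(camera_print, image_print):
    """Fraction of the camera's hot pixels also present in the image."""
    if not camera_print:
        return 0.0
    return len(camera_print & image_print) / len(camera_print)

# Synthetic 100x100 "sensor" with four stuck-bright pixels.
frame = np.zeros((100, 100), dtype=np.uint8)
for rc in [(3, 7), (42, 9), (55, 60), (90, 12)]:
    frame[rc] = 255

print(sorted(hot_pixels(frame)))
```

Against an uncompressed image the set intersection is exact; after JPEG compression, the stuck values smear into neighboring pixels and the match must be argued statistically instead.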
Worse, the author expresses the probability of error in terms that appear astronomical, using math to paint a rosy picture that fails to account for additional possibilities that must be considered before convicting on pixel-fingerprint evidence.
An example of such an assumption is that those pixels are unique to that camera, when in fact hot or cold pixels can be endemic to an entire product line of cameras or image sensors. Two ways manufacturing can spoil the odds are by introducing defects when the image sensors are mounted in the camera, or by damaging the sensors during fabrication of the silicon wafers.
If a production line introduced hot or cold pixels, then before going to trial we need to know what the manufacturer's criteria are for acceptable bad pixels, and we need to know the production statistics. In the example given, any or all of the four hot pixels could have been present since the camera was made, which skews the probability projection. If all four were manufacturing defects, or cannot be shown not to be manufacturing defects, the fingerprinting process proves nothing, not even the camera family, since the same imaging chips could be used in multiple camera lines.
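The skew is easy to quantify with invented numbers. If each pixel goes bad independently with some tiny probability, the chance that an unrelated camera shows the same four hot pixels is that probability to the fourth power, which is where the "astronomical" figures come from. But if those four pixels are a batch defect shared across part of the product line, the match probability collapses to the affected fraction:

```python
def p_independent_match(p_defect, k_shared):
    """Chance an unrelated camera independently develops the same
    k hot pixels, if each pixel fails independently with p_defect."""
    return p_defect ** k_shared

# Hypothetical per-pixel defect rate of one in a million.
print(p_independent_match(1e-6, 4))   # on the order of 1e-24: looks airtight

# If instead those four pixels came from a wafer-level defect
# present in, say, 5% of the product line, the "fingerprint"
# matches one camera in twenty.
batch_fraction = 0.05
print(batch_fraction)
```

The independence assumption is doing all the work in the prosecution's number; a single shared manufacturing defect replaces an exponent with a plain fraction.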
Other factors, such as the number of cameras in the geographic area, complicate or simplify pixel fingerprinting for matching. For instance, if the pictures and camera are found on a person in the Outback, miles from anyone else, the probability of responsibility rises to a near certainty. But a common camera model seized in New York may mean an uphill battle to reach a probability acceptable to a jury.
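That geographic factor is just a prior probability at work. A toy Bayes' rule calculation (all numbers invented for illustration) shows how the very same pixel match supports very different conclusions depending on how many candidate cameras could plausibly have taken the picture:

```python
def posterior_same_camera(prior, p_match_if_same, p_match_if_other):
    """P(suspect's camera took the image | pixels match), via Bayes' rule."""
    num = prior * p_match_if_same
    return num / (num + (1 - prior) * p_match_if_other)

# Outback: the suspect's camera is essentially the only candidate.
print(posterior_same_camera(0.99, 1.0, 1e-6))   # very close to 1

# New York, common model, possible batch defect: weak prior and a
# non-trivial chance some other camera matches too.
print(posterior_same_camera(1e-5, 1.0, 0.05))   # roughly 2e-4
```

Same evidence, same match, wildly different posteriors; the jury only hears the match.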
Improper representation of evidence leads to false convictions of innocent people. Make sure you do it 100% right: any argument using statistics and probability needs to be examined closely to locate additional factors not taken into account.
This blog and its contents copyright 2009 Digital Trust, LLC. Republication of this post is permitted provided the Digital Trust name, URL, and this paragraph are included. Counsel, do not list without contract.
Digital Trust, LLC is an information security consulting and services resource. We can assist with any facet of your security program, from corporate guidance and compliance efforts to system implementation and penetration testing. We can help make your security better.
Contact us at email@example.com