A Geek With Guns

Discount security adviser to the proles.

Archive for the ‘Security’ tag

You Ought to Trust the Government with the Master Key

with one comment

The Federal Bureau of Investigation (FBI) director, James Comey, has been waging a war against effective cryptography. Although he can’t beat math he’s hellbent on trying. To that end, he and his ilk have proposed schemes that would allow the government to break consumer cryptography. One of those schemes is called key escrow, which requires that anything encrypted by a consumer device be decipherable with a master key held by the government. It’s a terrible scheme because any actor that obtains the government’s master key will also be able to decrypt anything encrypted on a consumer device. The government promises that such a key wouldn’t be compromised but history shows that there are leaks in every organization:

A FBI electronics technician pleaded guilty on Monday to having illegally acted as an agent of China, admitting that he on several occasions passed sensitive information to a Chinese official.

Kun Shan Chun, also known as Joey Chun, was employed by the Federal Bureau of Investigation since 1997. He pleaded guilty in federal court in Manhattan to one count of having illegally acted as an agent of a foreign government.

Chun, who was arrested in March on a set of charges made public only on Monday, admitted in court that from 2011 to 2016 he acted at the direction of a Chinese official, to whom he passed the sensitive information.

If the FBI can’t even keep moles out of its organization how are we supposed to trust it to guard a master key that would likely be worth billions of dollars? Hell, the government couldn’t even keep information about the most destructive weapons on Earth from leaking to its opponents. Considering its history, especially stories like this one involving government agents acting as paid informants for other governments, there is no way to reasonably believe that a master key to all consumer encryption wouldn’t be leaked to unauthorized parties.

Written by Christopher Burg

August 3rd, 2016 at 10:00 am

All Full-Disk Encryption isn’t Created Equal

without comments

For a while I’ve been guarded when recommending Android devices to friends. The only devices I’ve been willing to recommend are those like the Google Nexus line that receive regular security updates in a timely manner. However, after this little fiasco I don’t know if I’m willing to recommend any Android device anymore:

Privacy advocates take note: Android’s full-disk encryption just got dramatically easier to defeat on devices that use chips from semiconductor maker Qualcomm, thanks to new research that reveals several methods to extract crypto keys off of a locked handset. Those methods include publicly available attack code that works against an estimated 37 percent of enterprise users.

A blog post published Thursday revealed that in stark contrast to the iPhone’s iOS, Qualcomm-powered Android devices store the disk encryption keys in software. That leaves the keys vulnerable to a variety of attacks that can pull a key off a device. From there, the key can be loaded onto a server cluster, field-programmable gate array, or supercomputer that has been optimized for super-fast password cracking.

[…]

Beniamini’s research highlights several other previously overlooked disk-encryption weaknesses in Qualcomm-based Android devices. Since the key resides in software, it likely can be extracted using other vulnerabilities that have yet to be made public. Beyond hacks, Beniamini said the design makes it possible for phone manufacturers to assist law enforcement agencies in unlocking an encrypted device. Since the key is available to TrustZone, the hardware makers can simply create and sign a TrustZone image that extracts what are known as the keymaster keys. Those keys can then be flashed to the target device. (Beniamini’s post originally speculated QualComm also had the ability to create and sign such an image, but the Qualcomm spokeswoman disputed this claim and said only manufacturers have this capability.)

Apple designed its full-disk encryption on iOS very well. Each iOS device has a unique key referred to as the device’s UID that is mixed with whatever password you enter. In order to brute force the encryption key you need both the password and the device’s UID, which is difficult to extract. Qualcomm-based devices rely on a less secure scheme.
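
To make the contrast concrete, here is a minimal Python sketch of hardware-bound key derivation. The names and parameters are illustrative assumptions, not Apple’s actual implementation:

    import hashlib

    DEVICE_UID = b"\xa7" * 32  # stand-in for the per-device key fused into the chip

    def derive_disk_key(password: str, iterations: int = 100_000) -> bytes:
        # Mixing the password with the device-unique key means the result
        # can only be computed on the device that physically holds the UID.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), DEVICE_UID, iterations)

An attacker who copies the encrypted storage but can’t extract the UID has nothing useful to feed a password-cracking cluster. A key that lives in software, as on the affected Qualcomm devices, can be pulled off the handset and attacked anywhere.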

But this problem has two parts. The first part is the vulnerability itself. Full-disk encryption isn’t a novel idea. Schemes for properly implementing full-disk encryption have been around for a while now. Qualcomm not following those schemes calls into question the security of all of its devices. Now recommending a device involves both ensuring the handset manufacturer releases updates in a timely manner and ensuring the device isn’t using a Qualcomm chipset. The second part is the usual Android problem of security patch availability being hit or miss:

But researchers from two-factor authentication service Duo Security told Ars that an estimated 37 percent of all the Android phones that use the Duo app remain susceptible to the attack because they have yet to receive the patches. The lack of updates is the result of restrictions imposed by manufacturers or carriers that prevent end users from installing updates released by Google.

Apple was smart when it refused to allow the carriers to be involved in the firmware of iOS devices. Since Apple controls iOS with an iron fist it also prevents hardware manufacturers from interfering with the availability of iOS updates. Google wanted a more open platform, which is commendable. However, Google failed to maintain any real control over Android, which has left users at the mercy of the handset manufacturers. Google would have been smart to restrict the availability of its proprietary applications to manufacturers whose handsets pull Android updates directly from Google.

Written by Christopher Burg

July 5th, 2016 at 10:30 am

How Not to Design Security

with one comment

As is common after a violent tragedy, a great deal of electrons are being annoyed by people who are calling for prohibitions. Some want to prohibit firearms, ammunition, and body armor while others want to prohibit members of an entire religion from crossing the imaginary line that separates the United States from the rest of the world. All of this finger pointing is being done under the guise of security but the truth is that any security system that depends on an attacker acting in a certain way is doomed to fail.

Prohibitions don’t eliminate or even curtail the threat they’re aimed at. In fact the opposite is true. The iron law of prohibition, a term coined in regards to prohibitions on drugs, states that the potency of drugs increases as law enforcement efforts against drugs increase. It applies to every form of prohibition though. Prohibitions against firearms just encourage the development of more easily manufactured and concealable firearms, just as prohibitions against religious beliefs encourage those beliefs to be practiced in secrecy.

When you rely on a prohibition for security you’re really relying on your potential attackers to act in a specific way. In the case of firearm prohibitions you’re relying on your potential attackers to abide by the prohibition and not use firearms. In the case of prohibiting members of a specific religion from entering a country you’re relying on potential attackers to truthfully reveal what religion they are a member of.

But attackers have a goal and like any other human being they will utilize means to achieve their ends. If their ends can be best achieved with a firearm they will acquire or manufacture one. If their ends require body armor they will acquire or manufacture body armor. If their ends require gaining entry into a country they will either lie to get through customs legitimately or bypass customs entirely. Your attackers will not act in the manner you desire. If they did, they wouldn’t be attacking you.

What prohibitions offer is a false sense of security. People often assume that prohibited items no longer have to be addressed in their security models. This leaves large gaping holes for attackers to exploit. Worse yet, prohibitions usually make addressing the prohibited items more difficult due to the iron law of prohibition.

Prohibitions not only provide no actual security, they also come at a high cost. One of those costs is the harassment of innocent people. Firearm prohibitions, for example, give law enforcers an excuse to harass anybody who owns or is interested in acquiring a firearm. Prohibitions against members of a religion give law enforcers an excuse to harass anybody who is or could potentially be a member of that religion.

Another cost is a decrease in overall security. Firearm prohibitions make it more difficult for non-government agents to defend themselves. A people who suffer under a firearm prohibition find themselves returned to the state of nature where the strong are able to prey on the weak with impunity. When religious prohibitions are in place an adversarial relationship is created between members of that religion and the entity putting the prohibition in place. An adversarial relationship means you lose access to community enforcement. Members of a prohibited religion are less likely to come forth with information on a potentially dangerous member of their community. That can be a massive loss of critical information that your security system can utilize.

If you want to improve security you need to banish the idea of prohibitions from your mind. They will actually work against you and make your security model less effective.

Written by Christopher Burg

June 14th, 2016 at 10:30 am

Nothing to See Here

without comments

I’m helping run the CryptoParty at B-Sides MSP this year. Because of that you’re getting nothing today. Sorry.

Written by Christopher Burg

June 10th, 2016 at 10:00 am

Be Careful When Taking Your Computer In For Servicing

with 2 comments

How many of you have taken your computer in to be repaired? How many of you erased all of your data before taking it in? I’m often amazed by the number of people who take their computer in for servicing without either replacing or wiping the hard drive. Whenever I take any electronic device in for servicing I wipe all of the data off of it and install only a fresh operating system with a default user account the repair technician can use to log in. When I get the device back I wipe it again and then restore my data from a backup.
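
For the wipe itself, overwriting the whole disk is sufficient for this threat model. Below is a minimal Python sketch; the device path is a hypothetical placeholder and the script must be run from a system booted off separate media:

    import os

    CHUNK = 4 * 1024 * 1024  # write 4 MiB of random data at a time

    def wipe_device(path: str) -> None:
        # DESTRUCTIVE: overwrites every byte of the given block device.
        fd = os.open(path, os.O_WRONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)  # total device size in bytes
            os.lseek(fd, 0, os.SEEK_SET)
            written = 0
            while written < size:
                n = min(CHUNK, size - written)
                written += os.write(fd, os.urandom(n))
            os.fsync(fd)  # make sure everything actually hits the disk
        finally:
            os.close(fd)

    # wipe_device("/dev/sdX")  # hypothetical path; point it at the disk to erase

On a modern self-encrypting SSD the drive’s built-in secure-erase command is faster and more thorough, but a full overwrite is enough to keep a snooping technician out of your files.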

Why am I so paranoid? Because you never know who might be a paid Federal Bureau of Investigation (FBI) snitch:

The doctor’s attorney says the FBI essentially used the employee to perform warrantless searches on electronics that passed through the massive maintenance facility outside Louisville, Ky., where technicians known as Geek Squad agents work on devices from across the country.

Since 2009, “the FBI was dealing with a paid agent inside the Geek Squad who was used for the specific purpose of searching clients’ computers for child pornography and other contraband or evidence of crimes,” defense attorney James Riddet claimed in a court filing last month.

Riddet represents Dr. Mark Albert Rettenmaier, a gynecological oncologist who practiced at Hoag Hospital until his indictment in November 2014 on two felony counts of possession of child pornography. Rettenmaier, who is free on bond, has taken a leave from seeing patients, Riddet said.

Because the case in this story involved child pornography I’m sure somebody will accuse me of trying to protect people who possess child pornography. But data is data when it comes to security. The methods you can use to protect your confidential communications, adult pornography, medical information, financial records, and any other data can also be used to protect illicit, dangerous, and downright distasteful data. Never let somebody make you feel guilty for helping good people protect themselves because the information you’re providing them can also be used by bad people.

Due to the number of laws on the books, the average working professional commits three felonies a day. In all likelihood some data on your device could be used to charge you with a crime. Since the FBI is using computer technicians as paid informants you should practice some healthy paranoia when handing your devices over to them. The technician who works on your computer could also have a side job of feeding the FBI evidence of crimes.

But those aren’t the only threats you have to worry about when taking your electronic devices in for servicing. I mentioned that I also wipe the device when I get it back from the service center. This is because the technician who worked on my device may have also installed malware on the system:

Harwell had been a Macintosh specialist with a Los Angeles-area home computer repair company called Rezitech. That’s how he allegedly had the opportunity to install the spy software, called Camcapture, on computers.

While working on repair assignments, the 20-year-old technician secretly set up a complex system that could notify him whenever it was ready to snap a shot using the computer’s webcam, according to Sergeant Andrew Goodrich, a spokesman with the Fullerton Police Department in California. “It would let his server know that the victim’s machine was on. The server would then notify his smartphone… and then the images were recorded on his home computer,” he said.

When your device is in the hands of an unknown third party there is no telling what they may do with it. But if the data isn’t there then they can’t snoop through it and if you wipe the device when you get it back any installed malware will be wiped as well.

Be careful when you’re handing your device over to a service center. Make sure the device has been wiped before it goes in and gets wiped when it comes back.

Written by Christopher Burg

May 27th, 2016 at 11:00 am

The FBI Cares More About Maintaining Browser Exploits Than Fighting Child Pornography

without comments

Creating and distributing child pornography are two things that most people seem to agree should be ruthlessly pursued by law enforcers. Law enforcers, on the other hand, don’t agree. The Federal Bureau of Investigation (FBI) would rather toss out a child pornography case than reveal one stupid browser exploit:

A judge has thrown out evidence obtained by the FBI via hacking, after the agency refused to provide the full code it used in the hack.

The decision is a symptom of the FBI using investigative techniques that are usually reserved for intelligence agencies, such as the NSA. When those same techniques are used in criminal cases, they have to stack up against the rights of defendants and are subject to court processes.

The evidence that was thrown out includes child pornography allegedly found on devices belonging to Jay Michaud, a Vancouver public schools worker.

Why did the FBI even bring the case against Michaud if it wasn’t willing to reveal the exploit that the defense was guaranteed to demand technical information about?

This isn’t the first case the FBI has allowed to be thrown out due to the agency’s desperate desire to keep an exploit secret. In allowing these cases to be thrown out the FBI has told the country that it isn’t serious about pursuing these crimes and that it would rather all of us remain at the mercy of malicious hackers than reveal the exploits it, and almost certainly they, rely on.

I guess the only crimes the FBI actually cares to fight are the ones it creates.

Written by Christopher Burg

May 26th, 2016 at 10:00 am

Free Apps Aren’t Free But Dumb Phones Won’t Protect Your Privacy

with 2 comments

I have a sort of love/hate relationship with John McAfee. The man has a crazy history and isn’t so far up his own ass that he can’t recognize it and poke fun at it. He’s also a very nonjudgmental person, which I appreciate. With the exception of Vermin Supreme, I think McAfee is currently the best person running for president. However, his views on security seem to be stuck in the previous decade at times. This wouldn’t be so bad but he seems to take any opportunity to speak on the subject and his statements are often taken as fact by many. Take the recent video of him posted by Business Insider.

It opens strong. McAfee refutes something that’s been a pet peeve of mine for a while, the mistaken belief that there’s such a thing as free. TANSTAAFL, there ain’t no such thing as a free lunch, is a principle I wish everybody learned in school. If an app or service is free then you’re the product and the app only exists to extract salable information from you.

McAfee also discusses the surveillance threat that smartphones pose, which should receive more airtime. But then he follows up with a ridiculous statement. He says that he uses dumb phones when he wants to communicate privately. I hear a lot of people spout this nonsense and it’s quickly becoming another pet peeve of mine.

Because smartphones have the built-in ability to easily install applications, the threat of malware exists. In fact there have been several cases of malware making its way into both Google’s and Apple’s app stores. That doesn’t make smartphones less secure than dumb phones though.

The biggest weakness in dumb phones as far as privacy is concerned is their complete inability to encrypt communications. Dumb phones rely on standard cellular protocols for both making phone calls and sending text messages. In both cases the only encryption that exists is between the device and the cell towers. And the encryption there is weak enough that any jackass with an IMSI-catcher can render it meaningless. Furthermore, because the data is available in plaintext to the phone companies, it is likely collected by the National Security Agency (NSA) and is always available to law enforcers via a court order.

The second biggest weakness in dumb phones is the general lack of software updates. Dumb phones still run software, which means they can still have security vulnerabilities and are therefore also vulnerable to malware. How often do dumb phone manufacturers update software? Rarely, which means security vulnerabilities remain unpatched for extensive periods of time and oftentimes indefinitely.

Smartphones can address both of these weaknesses. Encrypted communications are available on every major smartphone platform. Apple includes iMessage, which utilizes end-to-end encryption. Signal and WhatsApp, two applications that also utilize end-to-end encryption, are available for both iOS and Android (WhatsApp is available for Windows Phone as well). Unless your communications are end-to-end encrypted they are not private. With smartphones you can have private communications; with dumb phones you cannot.
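
The property that makes those applications private is easy to demonstrate. Here is a minimal sketch using the PyNaCl library; it is not Signal’s or iMessage’s actual protocol, just an illustration that only the endpoints ever hold the keys:

    # pip install pynacl
    from nacl.public import Box, PrivateKey

    # Each party generates a key pair; only the public halves are exchanged.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    alice_box = Box(alice_key, bob_key.public_key)
    bob_box = Box(bob_key, alice_key.public_key)

    ciphertext = alice_box.encrypt(b"meet at noon")
    assert bob_box.decrypt(ciphertext) == b"meet at noon"

    # Anything relaying the ciphertext (carrier, server, IMSI-catcher)
    # learns nothing about the plaintext. An SMS, by contrast, transits
    # the carrier's network in the clear.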

Smartphone manufacturers also address the problem of security vulnerabilities by releasing periodic software updates (although access to timely updates can vary from manufacturer to manufacturer for Android users). When a vulnerability is discovered it usually doesn’t remain unpatched forever.

When you communicate using a smartphone there is the risk of being surveilled. When you communicate with a dumb phone there is a guarantee of being surveilled.

As I said, I like a lot of things about McAfee. But much of the security advice he gives is flawed. Don’t make the mistake of assuming he’s correct on security issues just because he was involved in the antivirus industry ages ago.

Written by Christopher Burg

April 13th, 2016 at 10:30 am

How The Government Protects Your Data

without comments

Although I oppose both public and private surveillance I especially loathe public surveillance. Any form of surveillance results in data about you being stored and oftentimes that data ends up leaking to unauthorized parties. When the data is leaked from a private entity’s database I at least have some recourse. If, for example, Google leaks my personal information to unauthorized parties I can choose not to use the service again. The State is another beast entirely.

When the State leaks your personal information your only recourse is to vote harder, which is the same as saying your only recourse is to shut up and take it. This complete lack of consequences for failing to implement proper security is why the State continues to ignore security:

FRANKFORT, Ky. (AP) — Federal investigators found significant cybersecurity weaknesses in the health insurance websites of California, Kentucky and Vermont that could enable hackers to get their hands on sensitive personal information about hundreds of thousands of people, The Associated Press has learned. And some of those flaws have yet to be fixed.

[…]

The GAO report examined the three states’ systems from October 2013 to March 2015 and released an abbreviated, public version of its findings last month without identifying the states. On Thursday, the GAO revealed the states’ names in response to a Freedom of Information request from the AP.

According to the GAO, one state did not encrypt passwords, potentially making it easy for hackers to gain access to individual accounts. One state did not properly use a filter to block hostile attempts to visit the website. And one state did not use the proper encryption on its servers, making it easier for hackers to get in. The report did not say which state had what problem.

Today encrypting passwords is something even beginning web developers understand is necessary (even if they often fail to properly encrypt passwords). Most content management systems do this by default and most web development frameworks do this if you use their built-in user management features. The fact that a state paid developers to implement its health insurance exchange and didn’t require encrypted passwords is ridiculous.
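
For what it’s worth, “encrypting” stored passwords in practice means storing a salted, slow hash rather than the password itself, which Python’s standard library can do in a dozen lines:

    import hashlib
    import hmac
    import os

    def hash_password(password: str):
        salt = os.urandom(16)  # unique per user, stored alongside the hash
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

The per-user salt prevents cracking every account with one precomputed table and the deliberately slow hash makes each guess expensive. Frameworks do this automatically, which is what makes skipping it so inexcusable.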

Filtering hostile attempts to visit websites is a very subjective statement. What constitutes a hostile attempt to visit a website? Some websites try to block all Tor users under the assumption that Tor has no legitimate uses, a viewpoint I strongly disagree with. Other websites utilize blacklists that contain IP addresses of supposedly hostile devices. These blacklists can be very hit or miss and often block legitimate devices. Without knowing what the Government Accountability Office (GAO) considered effective filtering I’ll refrain from commenting.

I’m also not entirely sure what the GAO means by using proper encryption on servers. Usually I’d assume it means a lack of HTTP connections secured by TLS. But that doesn’t necessarily impact a malicious hacker’s ability to get into a web server. In any case, it’s not uncommon for government websites to either not implement TLS or implement it improperly, which puts user data at risk.

But what happens next? If we were talking about websites operated by private entities I’d believe the next step would be fixing the security holes. Since the websites are operated by government entities though it’s anybody’s guess what will happen next. There will certainly be hearings where politicians will try to point the finger at somebody for these security failures but finger pointing doesn’t fix the problem and governments have a long history of never actually fixing problems.

Written by Christopher Burg

April 13th, 2016 at 10:00 am

FBI Claims Its Method Of Accessing Farook’s Phone Doesn’t Work On Newer iPhones

with one comment

So far the Federal Bureau of Investigation (FBI) hasn’t given any specific details on how it was able to access the data on Farook’s phone. But the agency’s director did divulge a bit of information regarding the scope of the method:

The FBI’s new method for unlocking iPhones won’t work on most models, FBI Director Comey said in a speech last night at Kenyon University. “It’s a bit of a technological corner case, because the world has moved on to sixes,” Comey said, describing the bug in response to a question. “This doesn’t work on sixes, doesn’t work on a 5s. So we have a tool that works on a narrow slice of phones.” He continued, “I can never be completely confident, but I’m pretty confident about that.” The exchange can be found at 52:30 in the video above.

Since he specifically mentioned the iPhone 5S, 6, and 6S it’s possible the Secure Enclave feature present in those phones thwarts the exploit. This does make sense assuming the FBI used a method to brute force the password. On the iPhone 5C the user password is combined with a hardware key to decrypt the phone’s storage. Farook used a four digit numerical password, which means there were only 10,000 possible passwords. With such a small pool of possible passwords it would have been trivial to brute force the correct one. What stood in the way were two iOS security features. The first is a delay between entering passwords that increases with each incorrect password. The second is a feature that erases the decryption keys — which effectively renders all data stored on the phone useless — after 10 incorrect passwords have been entered.

On the 5C these features are implemented entirely in software. If an attacker can bypass the software and combine passwords with the hardware key they can try as many passwords as they want without any artificial delay and without the decryption keys being erased. On the iPhone 5S, 6, and 6S the Secure Enclave coprocessor handles all cryptographic operations, including enforcing a delay between incorrect passwords. Although this is entirely speculation, I’m guessing the FBI found a way to bypass the software security features on Farook’s phone and that the method wouldn’t work on any device utilizing Secure Enclave.
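
A quick sketch shows how little a four digit password is worth once those software protections are bypassed. The key derivation below is an illustrative assumption, not Apple’s actual implementation:

    import hashlib

    DEVICE_UID = b"\x42" * 32  # stand-in for the phone's fused hardware key

    def derive_key(pin: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), DEVICE_UID, 10_000)

    def brute_force(target_key: bytes):
        # With no escalating delays and no wipe-after-ten-failures, all
        # 10,000 candidates can be tried in minutes on ordinary hardware.
        for n in range(10_000):
            pin = f"{n:04d}"  # "0000" through "9999"
            if derive_key(pin) == target_key:
                return pin
        return None

    print(brute_force(derive_key("4821")))  # -> 4821

A longer alphanumeric password blows the search space up by orders of magnitude, which is why the advice in the next paragraph matters.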

Even though Secure Enclave makes four digit numerical passwords safer, they’re still dependent on outside security measures to protect against brute force attacks. I encourage everybody to set a complex password on their phone. On iPhones equipped with Touch ID this is a simple matter since you only have to enter your password after rebooting the phone or after not unlocking it for 48 hours. Otherwise you can use your fingerprint to unlock the phone (just make sure you reboot the phone, which you can do at any time by holding the power and home buttons down for a few seconds, if you interact with law enforcement so they can’t force you to unlock the phone with your fingerprint). With a strong password brute force attacks become infeasible even if the software or hardware security enhancements are bypassed.

Written by Christopher Burg

April 8th, 2016 at 10:30 am

Don’t Stick Just Anything In Your Port

without comments

Universal Serial Bus (USB) flash drives are ubiquitous and it’s easy to see why. For a few dollars you can get a surprising amount of storage in a tiny package that can be connected to almost any computer. Their ubiquity is also the reason they annoy me. A lot of people wanting to give me a file to work on will hand me a USB drive, to which I respond, “E-mail it to me.” USB drives are convenient for moving files between local computers but they’re also hardware components, which means you can do even more malicious things with them than with malicious software alone.

The possibility of using malicious USB drives to exploit computers isn’t theoretical. And it’s a good vector for targeted malware since the devices are cheap and a lot of fools will plug any old USB drive into their computer:

Using booby-trapped USB flash drives is a classic hacker technique. But how effective is it really? A group of researchers at the University of Illinois decided to find out, dropping 297 USB sticks on the school’s Urbana-Champaign campus last year.

As it turns out, it really works. In a new study, the researchers estimate that at least 48 percent of people will pick up a random USB stick, plug it into their computers, and open files contained in them. Moreover, practically all of the drives (98 percent) were picked up or moved from their original drop location.

Very few people said they were concerned about their security. Sixty-eight percent of people said they took no precautions, according to the study, which will appear in the 37th IEEE Symposium on Security and Privacy in May of this year.

Leaving USB drives lying around for an unsuspecting sucker to plug into their computer is an evolution of the old trick of leaving a floppy disk labeled “Payroll” lying around. Eventually somebody’s curiosity will get the better of them and they’ll plug it into their computer and helpfully load your malware onto their network. The weakest link in any security system is the user.

A lot of energy has been invested in warning users against opening unexpected e-mail attachments, visiting questionable websites, and updating their operating systems. While it seems this advice has mostly fallen on deaf ears it has at least been followed by some. I think it’s important to spend time warning about other threats such as malicious hardware peripherals as well. Since it’s something that seldom gets mentioned almost nobody thinks about it and that helps ensure experiments like this will show disappointing results.

Written by Christopher Burg

April 7th, 2016 at 10:00 am