A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Security’ tag

You Don’t Have Any Rights

with one comment

If you read the Bill of Rights (which really is a bill of temporary privileges, all of which appear to have expired) you might get the impression that you have some kind of right against self-incrimination. At least that’s what a plain reading of the Fifth Amendment would lead one to believe. But self-incrimination means whatever the man in the muumuu says it means. In Minnesota one of those muumuu-clad men decided that being compelled to provide the cryptographic key that unlocks your phone isn’t protected under the Fifth Amendment:

The Minnesota Court of Appeals ruled Tuesday that a judge’s order requiring a man to provide a fingerprint to unlock his cellphone was constitutional, a finding that is in line with similar rulings across the U.S.

What does this mean for us Minnesotans? It means that the first thing you should do in a police encounter is deauthorize your fingerprint reader. How do you do that? I’m not familiar enough with the various Android devices to know how they handle fingerprint readers. On the iPhone rebooting the phone will deauthorize the fingerprint reader until the password is entered. So iPhone users should hold down their home and lock buttons (or volume down and lock buttons if you’re using an iPhone 7) for a few seconds. That will cause the phone to reboot. If the phone is confiscated the fingerprint reader won’t unlock the phone so even if you’re compelled to press your finger against the sensor it won’t be an act of self-incrimination.

Why do I say deauthorize your fingerprint reader during a police encounter instead of disabling it entirely? Because disabling the fingerprint reader encourages most people to reduce their security by using a simple password or PIN to unlock their phone. And I understand that mentality. Phones are devices that get unlocked numerous times per day. Having to enter a complex password on a crappy touchscreen keyboard dozens of times per day isn’t appealing. Fingerprint readers offer a compromise. You can have a complex password but you only have to enter it after rebooting the phone or after not unlocking the phone for 48 hours. Otherwise you just press your finger to the reader to unlock your phone. So enabling the fingerprint reader is a feasible way to encourage people to use a strong password, which offers far better overall security (PINs can be brute forced with relative ease and Android’s unlock patterns aren’t all that much better).

Written by Christopher Burg

January 19th, 2017 at 11:00 am

The Public Private Data Cycle

without comments

Just as the Austrian school of economics has a business cycle I have a data cycle. The Public Private Data Cycle (catchier web 3.0 buzzword compliant name coming later) states that all privately held data becomes government data with a subpoena and all government data becomes privately held data with a leak.

The Public Private Data Cycle is important to note whenever somebody discusses keeping data on individuals. For example, many libertarians don’t worry much about the data Facebook collects because Facebook is a private company. The very same people will flip out whenever the government wants to collect more data though. Likewise, many statists don’t worry much about the data the government collects because the government is a public entity. The very same people will flip out whenever Facebook wants to collect more data though. Both of these groups have a major misunderstanding about how data access works.

I’ve presented several cases on this blog illustrating how privately held data became government data with a subpoena. But what about government data becoming privately held data? The State of California recently provided us with such an example:

Our reader Tom emailed me after he had been notified by the state of California that his personal information had been compromised as a result of a California Public Records Act request. Based on the limited information that we have at this time, it appears that names, the instructor’s date of birth, and the instructor’s California driver’s license number and/or California ID card number were released.

When Tom reached out to the CA DOJ he was informed that the entire list of firearms trainers in California had been released in the public records act request. The state of California is sending letters to those affected with the promise of 12 months of identity protection, but if you are a CA firearms instructor and haven’t seen a letter, it might be a good idea to call the DOJ to see if you were affected.

This wasn’t a case of a malicious hacker gaining access to California’s database. The state accidentally handed out this data in response to a public records request. Now that government-held data about firearm instructors is privately held by an unknown party. Sure, the State of California said it ordered the recipient to destroy the data but as we all know once data has been accessed by an unauthorized party there’s no way to control it.

If data exists then the chance of it being accessed by an unauthorized party is greater than zero. That’s why everybody should be wary of any attempt by anybody to collect more data on individuals.
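To put a little arithmetic behind that point: assume, purely for illustration, that a given store of data has some flat chance per year of being accessed by an unauthorized party. The cumulative odds over n years are 1 - (1 - p)^n, and they only ever climb:

```python
# Illustrative only: assume a flat 1% chance per year that a given database
# is exposed to an unauthorized party. The cumulative probability that it is
# EVER exposed is 1 - (1 - p)^n, which rises toward certainty over time.
def ever_exposed(p_per_year: float, years: int) -> float:
    return 1 - (1 - p_per_year) ** years

for years in (1, 10, 50):
    print(years, round(ever_exposed(0.01, years), 3))

# The only record with exactly zero risk is the one that was never collected.
```

The specific 1 percent figure is made up; the shape of the curve is the point.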

Written by Christopher Burg

January 17th, 2017 at 11:00 am

It’s Checkpoints All the Way Down

without comments

The shooting at the Fort Lauderdale airport last week has the media once again asking the wrong questions. Take this moron for example. His little article is asking whether or not air travelers should still be allowed to have declared firearms in their checked luggage. What would a prohibition against firearms in checked luggage accomplish? It would serve to punish people like myself who often have firearms in their checked luggage but it would do absolutely nothing to enhance security (since, if you want to attack an airport, you can still drive to it with your personal vehicle).

This is the trend amongst the media. Since most reporters are clueless about the topics they’re reporting on they ask idiotic questions and make equally idiotic suggestions. I’ve heard a lot of people suggest establishing security checkpoints to get into the airport so you can go through the Transportation Security Administration (TSA) checkpoint. Of course, when somebody shoots up the checkpoint to get into the airport there will be demands for a checkpoint to get near the airport so you can go through the checkpoint to get into the airport so you can go through the TSA checkpoint. If we listened to these yokels it would be checkpoints all the way down.

If you haven’t already, the next time you go through a TSA checkpoint pay attention to how many people are in line with you and how tightly packed together you all are. You’ll probably notice that there are quite a few people packed into a small space. Concentrations of people are a byproduct of security checkpoints and concentrations of people are tempting targets. There’s always going to be a first checkpoint where the line of people remains in an insecure area, and that line will be vulnerable.

Adding a checkpoint to guard a checkpoint just moves the vulnerability to a different location. What’s needed to guard against threats like the Fort Lauderdale airport shooting is a decentralized force in the insecure area of the airport. Yes, I’m talking about armed personnel. An important part of any security model is the ability to respond to a failure. Insecure areas are always a problem in a security model but even a secure area needs personnel able to respond to a checkpoint failure. So long as the nearest force able to respond to an attack is minutes away an attacker will have a period of free rein. If people really want to harden airports they need to look at allowing staff members to carry concealed weapons and/or hiring armed private security personnel.

Written by Christopher Burg

January 9th, 2017 at 11:00 am

The Walls Have Ears

with one comment

Voice-activated assistants such as the Amazon Echo and Google Home are becoming popular household devices. With a simple voice command these devices can do anything from turning on your smart lightbulbs to playing music. However, any voice-activated device must necessarily be listening at all times and law enforcers know that:

Amazon’s Echo devices and its virtual assistant are meant to help find answers by listening for your voice commands. However, police in Arkansas want to know if one of the gadgets overheard something that can help with a murder case. According to The Information, authorities in Bentonville issued a warrant for Amazon to hand over any audio or records from an Echo belonging to James Andrew Bates. Bates is set to go to trial for first-degree murder for the death of Victor Collins next year.

Amazon declined to give police any of the information that the Echo logged on its servers, but it did hand over Bates’ account details and purchases. Police say they were able to pull data off of the speaker, but it’s unclear what info they were able to access.

While Amazon declined to provide any server side information logged by the Echo there’s no reason a court order couldn’t compel Amazon to provide such information. In addition to that, law enforcers also managed to pull some unknown data locally from the Echo. Those two points raise questions about what kind of information devices like the Echo and Home collect as they’re passively sitting on your counter awaiting your command.

As with much of the Internet of Things, I haven’t purchased one of these voice-activated assistants and have no plans to buy one anytime in the near future. They’re too big of a privacy risk for my tastes since I don’t even know what kind of information they’re collecting as they sit there listening.

Written by Christopher Burg

December 28th, 2016 at 11:00 am

Denial of Service Attacks are Cheap to Perform

without comments

How expensive is it to perform a denial of service attack in the real world? More often than not the cost is nearly free. The trick is to exploit the target’s own security concerns:

A flight in America was delayed and almost diverted on Tuesday after a passenger changed the name of their wi-fi device to ‘Samsung Galaxy Note 7’.

An entire flight was screwed up by simply changing the SSID of a device.

Why did this simple trick cause any trouble whatsoever? Because the flight crew was more concerned about enforcing the rules than actual security. There was no evidence of a Galaxy Note 7 being onboard. Since anybody can change their device’s SSID to anything they want, the presence of the SSID “Samsung Galaxy Note 7” shouldn’t have been enough to cause any issues. But the flight crew allowed that, at best, flimsy evidence to spur them into a hunt for the device.
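The “evidence” in question is nothing more than a broadcast string, and the crew’s “detection” amounts to a name check. A toy sketch of why that proves nothing (the names and the check are invented for illustration):

```python
# An SSID is an arbitrary, unauthenticated string chosen by whoever runs the
# hotspot. Matching on it identifies a name, not a device.
def looks_like_banned_device(ssid: str) -> bool:
    # The flight crew's "detection" amounts to a substring check like this.
    return "galaxy note 7" in ssid.lower()

# A harmless phone tethering under a scary name trips the check...
print(looks_like_banned_device("Samsung Galaxy Note 7"))   # True
# ...while an actual Note 7 with a renamed hotspot sails right through.
print(looks_like_banned_device("Totally Safe Hotspot"))    # False
```

Any rule that can be tripped by typing a new name into a settings screen is a rule an attacker can trip for free, which is the whole point of the post.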

This is why performing denial of service attacks in the real world is often very cheap. Staffers, such as flight crew, seldom have any real security training so they tend to overreact. They’re trying to cover their asses (and I don’t mean that as an insult, if they don’t cover their asses they very well could lose their job), which means you have an easy exploit sitting there for you.

Written by Christopher Burg

December 23rd, 2016 at 10:30 am


with one comment

Russia hasn’t occupied this much airtime on American news channels since the Cold War. But everywhere you look it’s Russia this and Russia that. Russia is propping up the Assad regime in Syria! Russia rigged the election! Russia stole my lunch money!

Wait, let’s step back to the second one. A lot of charges are being made that Russia “hacked” the election, which allowed Trump to win. And there’s some evidence that shenanigans were taking place regarding the election:

Georgia’s secretary of state says the state was hit with an attempted hack of its voter registration database from an IP address linked to the federal Department of Homeland Security.

Well that’s embarrassing. Apparently the Department of Motherland Fatherland Homeland Security (DHS) is a Russian agency. Who would have guessed?

Could Russia have influenced the election? Of course. We live in an age of accessible real-time global communications. Anybody could influence anybody else’s voting decision. A person in South Africa could influence a voter in South Korea to opt for one choice over another. This global communication system also means that malicious hackers in one nation could compromise any connected election equipment in another country.

However, the biggest check against Russian attempts to rig the election is all of the other forces that would be trying to do the exact same thing. People have accused both the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) (admittedly, rigging elections is what the CIA does) of trying to rig the election. Likewise, there are some questions about what exactly the DHS was doing in regards to Georgia. Major media companies were working overtime to influence people’s voting decisions. Countries in Europe had a vested interest in the election going one way or another, as did pretty much every other country on Earth.

I have no evidence one way or another but that’s never stopped me from guessing. My guess as to why these accusations against Russia are being made so vehemently is that a lot of voters are looking for answers as to why Trump won but are unwilling to consider that their preferred candidate was terrible. When you convince yourself that the candidate you oppose is Satan incarnate then you lose the ability to objectively judge your own candidate because in your head it’s now a battle between evil and good, not a battle between two flawed human beings.

Written by Christopher Burg

December 14th, 2016 at 10:00 am

Security Implications of Destructive Updates

with one comment

It should be becoming increasingly apparent that you don’t own your smartphone. Sure, you paid for it and you physically control it but if the device itself can be disabled by a third-party without your authorization can you really say that you own it? This is a question Samsung Galaxy Note 7 owners should be asking themselves right now:

Samsung’s Galaxy Note 7 recall in the US is still ongoing, but the company will release an update in a couple of weeks that will basically force customers to return any devices that may still be in use. The company announced today that a December 19th update to the handsets in the States will prevent them from charging at all and “will eliminate their ability to work as mobile devices.” In other words, if you still have a Note 7, it will soon be completely useless.

One could argue that this ability to push an update to a device to disable it is a good thing in the case of the Note 7 since the device has a reputation for lighting on fire. But it has rather frightening ownership and security implications.

The ownership implications should be obvious. If the device manufacturer can disable your device at its whim then you can’t really claim to own it. You can only claim that you’re borrowing it for as long as the manufacturer deems you worthy of doing so. However, in regards to ownership, nothing has really changed. Since copyright and patent laws were applied to software your ability to own your devices has been basically nonexistent.

The security implications may not be as obvious. Sure, the ability for a device manufacturer to push implicitly trusted software to their devices carries risks but the tradeoff, relying on users to apply security updates, also carries risks. But this particular update being pushed out by Samsung has the ability to destroy users’ trust in manufacturer updates. Many users are currently happy to allow their devices to update themselves automatically because those updates tend to improve the device. It only takes a single bad update to make those users unhappy with automatic updates. If they become unhappy with automatic updates they will seek ways of disabling updates.

The biggest weakness in any security system tends to be the human component. Part of this is due to the difficulty of training humans to be secure. It takes a great deal of effort to train somebody to follow even basic security principles but it takes very little to undo all of that training. A single bad experience is all that generally stands between that effort and having all of it undone. If Samsung’s strategy becomes more commonplace I fear that years of getting users comfortable with automatic updates may be undone and we’ll be looking at a world where users jump through hoops to disable updates.

Written by Christopher Burg

December 13th, 2016 at 11:00 am


Degrees of Anonymity

without comments

When a service describes itself as anonymous, how anonymous is it? Users of Yik Yak may soon have a chance to find out:

Yik Yak has laid off 70 percent of employees amid a downturn in the app’s growth prospects, The Verge has learned. The three-year-old anonymous social network has raised $73.5 million from top-tier investors on the promise that its young, college-age network of users could one day build a company to rival Facebook. But the challenge of growing its community while moving gradually away from anonymity has so far proven to be more than the company could muster.


But growth stalled almost immediately after Sequoia’s investment. As with Secret before it, the app’s anonymous nature created a series of increasingly difficult problems for the business. Almost from the start, Yik Yak users reported incidents of bullying and harassment. Multiple schools were placed on lockdown after the app was used to make threats. Some schools even banned it. Yik Yak put tools in place designed to reduce harassment, but growth began to slow soon afterward.

Yik Yak claimed it was an anonymous social network and on the front end the data did appear anonymous. However, the backend may be an entirely different matter. How much information did Yik Yak regularly keep about its users? Internet Protocol (IP) addresses, Global Positioning System (GPS) coordinates, unique device identifiers, phone numbers, and much more can be easily collected and transmitted by an application running on your phone.
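To make the worry concrete, here’s a purely hypothetical sketch of the kind of payload an “anonymous” posting app could transmit alongside each post. Every field name here is invented for illustration (and the values use documentation-reserved placeholders), but each corresponds to data a phone application can plausibly collect:

```python
import json
import uuid

# Hypothetical sketch: what an "anonymous" posting app could plausibly send
# with each post. None of these fields appear in the public front end, but
# all of them would sit in the operator's back-end logs.
payload = {
    "post_text": "totally anonymous opinion",  # the only part other users see
    "device_id": str(uuid.uuid4()),            # stable per-install identifier
    "ip_hint": "203.0.113.7",                  # logged server-side regardless
    "gps": {"lat": 44.97, "lon": -93.26},      # via the location permission
    "phone_number": "+1-555-0100",             # via the contacts/SMS permission
}

print(json.dumps(payload, indent=2))
```

Strip off the first field and what remains is a pretty good dossier, which is exactly what ends up on the auction block in a bankruptcy.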

Bankruptcy is looking like a very real possibility for Yik Yak. If the company ends up filing then its assets will be liquidated. In this day and age user data is considered a valuable asset. Somebody will almost certainly end up buying Yik Yak’s user data and when they do they may discover that it wasn’t as anonymous as users may have thought.

Not all forms of anonymity are created equal. If you access a web service without using some kind of anonymity service, such as Tor or I2P, then the service has some identifiable information already, such as your IP address and a browser fingerprint. If you access the service through a phone application then that application may have collected and transmitted your phone number, contacts list, and other identifiable information (assuming, of course, the application has permission to access all of that data, which it may not depending on your platform and privacy settings). While on the front end of the service you may appear to be anonymous, the same may not hold true for the back end.
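The front-end versus back-end distinction is easy to demonstrate: even the most minimal web service is handed the client’s address and headers on every request, before any application-level notion of “anonymity” enters the picture. A small self-contained sketch using Python’s standard library (the endpoint and response text are made up):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

seen = []  # what the operator's logs would contain

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client's address and headers arrive with every single request;
        # no login, cookie, or account is needed for the server to see them.
        seen.append({
            "ip": self.client_address[0],
            "user_agent": self.headers.get("User-Agent", ""),
        })
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"anonymous enough?")

    def log_message(self, *args):
        pass  # silence the default console logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
server.shutdown()

print(seen[0])  # the service learned this much without asking anything
```

That `seen` list is the back end. A Tor or I2P exit changes what lands in it; the front end’s pseudonyms don’t.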

This issue becomes much larger when you consider that even if your data is currently being held by a benevolent company that does care about your privacy that may not always be the case. Your data is just a bankruptcy filing away from falling into the hands of somebody else.

Written by Christopher Burg

December 9th, 2016 at 10:00 am

Secure E-Mail is an Impossibility

with 2 comments

A while back I wrote a handful of introductory guides on using Pretty Good Privacy (PGP) to encrypt the content of your e-mails. They were well intentioned guides. After all, everybody uses e-mail so we might as well try to secure it as much as possible, right? What I didn’t stop to consider was the fact that PGP is a dead end technology for securing e-mails not because the initial learning curve is steep but because the very implementation itself is flawed.

I recently came across a blog post by Filippo Valsorda that sums up the biggest issue with PGP:

But the real issues I realized are more subtle. I never felt confident in the security of my long term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in.

A long term key is as secure as the minimum common denominator of your security practices over its lifetime. It’s the weak link.

Worse, long term key patterns like collecting signatures and printing fingerprints on business cards discourage practices that would otherwise be obvious hygiene: rotating keys often, having different keys for different devices, compartmentalization. It actually encourages expanding the attack surface by making backups of the key.

PGP, in fact the entire web of trust model, assumes that your private key will be more or less permanent. This assumption leads to a lot of implementation issues. What happens if you lose your private key? If you have an effective backup system you may laugh at this concern but lost private keys are the most common issue I’ve seen PGP users run into. When you lose your key you have to generate a new one and distribute it to everybody you communicate with. In addition to that, you also have to re-sign people’s existing keys. But worst of all, without your private key you can’t even revoke the corresponding published public key.

Another issue is that you cannot control the security practices of other PGP users. What happens when somebody who signed your key has their private key compromised? Their signature, which is used by others to decide whether or not to trust you, becomes meaningless because their private key is no longer confidential. Do you trust the security practices of your friends enough to make your own security practices reliant on them? I sure don’t.

PGP was a jury rigged solution to provide some security for e-mail. Because of that it has many limitations. For starters, while PGP can be used to encrypt the contents of a message it cannot encrypt the e-mail headers or the subject line. That means anybody snooping on the e-mail knows who the parties communicating are, what the subject is, and any other information stored in the headers. As we’ve learned from Edward Snowden’s leaks, metadata is very valuable. E-mail was never designed to be a secure means of communicating and can never be made secure. The only viable solution for secure communications is to find an alternative to e-mail.
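The header problem is structural, not a bug anyone can patch. A quick sketch: the PGP-looking body below is a stand-in placeholder, not real ciphertext, but even with a genuinely encrypted body the headers still travel in the clear because the mail system needs them to route and thread the message:

```python
from email.message import EmailMessage

# Build a message whose body stands in for PGP ciphertext. The headers are
# part of the wire format itself and cannot be encrypted away.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Meet at the safehouse"  # visible to every relay in the path
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext placeholder...\n"
    "-----END PGP MESSAGE-----\n"
)

wire_format = msg.as_string()
# The "encrypted" mail still leaks who, to whom, and what it is about.
print("Subject: Meet at the safehouse" in wire_format)  # True
```

Who talked to whom, when, and about what is exactly the metadata the Snowden leaks showed to be so valuable, and it is the part PGP cannot touch.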

With that said, PGP itself isn’t a bad technology. It’s still useful for signing binary packages, encrypting files for transferring between parties, and other similar tasks. But for e-mail it’s at best a bandage to a bigger problem and at worst a false sense of security.

Written by Christopher Burg

December 8th, 2016 at 11:00 am

A Beginner’s Guide to Privacy and Security

with one comment

I’m always on the lookout for good guides on privacy and security for beginners. Ars Technica posted an excellent beginner’s guide yesterday. It covers the basics (installing operating system and browser updates, enabling two-factor authentication, and using a password manager so you can have strong, unique passwords for your accounts) in a way that even less computer savvy users can follow to improve their security.
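Of those basics, two-factor authentication is the one with a bit of math behind it. The six-digit codes from an authenticator app are usually TOTP, which is simple enough to sketch with Python’s standard library, assuming the common defaults (HMAC-SHA1, 30-second time steps, six digits):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC the counter, dynamically truncate, keep the low digits.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: TOTP is just HOTP with the counter derived from the clock.
    if timestamp is None:
        timestamp = time.time()
    return hotp(secret, int(timestamp) // step, digits)

# RFC 6238's published SHA-1 test vector: T=59 seconds yields "94287082".
print(totp(b"12345678901234567890", 59, digits=8))
```

The takeaway for beginners is that the code proves possession of a shared secret at a point in time, so even a phished password is useless thirty seconds later.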

If you’re not sure where to begin when it comes to security and privacy take a look at Ars’ guide.

Written by Christopher Burg

December 2nd, 2016 at 10:30 am