A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Technology’ Category

Your Browser is a Snitch

with one comment

The privacy-surveillance arms race will likely be waged eternally. The State wants to spy on people so it can better expropriate their wealth. Private companies want to spy on people so they can collect data to better serve them and better target ads at them. The State wants the private companies to spy on their users because it can get that information via a subpoena. Meanwhile, users are stuck being constantly watched.

Browser fingerprinting is one of the more effective tools in the private companies’ arsenal. Without having to store data on users’ systems, private companies are able to use the data surrendered by browsers to track users with a surprising degree of accuracy. But fingerprinting has been limited to individual browsers. If a user switches browsers, their old fingerprint is no longer valid… until now:

The new technique relies on code that instructs browsers to perform a variety of tasks. Those tasks, in turn, draw on operating-system and hardware resources—including graphics cards, multiple CPU cores, audio cards, and installed fonts—that are slightly different for each computer. For instance, the cross-browser fingerprinting carries out 20 carefully selected tasks that use the WebGL standard for rendering 3D graphics in browsers. In all, 36 new features work independently of a specific browser.

New browser features are commonly used for tracking users. In time those features are usually improved in ways that make tracking more difficult, and I have no doubt that WebGL will follow this path as well. Until it is improved, though, it wouldn’t be dumb to disable it if you’re trying to avoid being tracked.
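To get a sense of what a fingerprinting script actually reads, here is a minimal browser-side sketch in TypeScript of the hardware-specific values WebGL exposes. The function name is mine, but the APIs are standard WebGL; if WebGL is disabled, the call yields nothing, which is exactly the point of disabling it.

```typescript
// Minimal sketch of the kind of machine-specific data a fingerprinting script can read via WebGL.
// Run in a browser context; the function name is illustrative, the APIs are standard.
function webglFingerprintSample(): Record<string, string | null> {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) {
    // WebGL disabled or unsupported: this tracking surface yields nothing.
    return { webgl: "unavailable" };
  }
  // The debug extension exposes GPU vendor and renderer strings, which vary between machines.
  const dbg = gl.getExtension("WEBGL_debug_renderer_info");
  return {
    vendor: dbg ? String(gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL)) : null,
    renderer: dbg ? String(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL)) : null,
    // Implementation limits also differ slightly across hardware and drivers.
    maxTextureSize: String(gl.getParameter(gl.MAX_TEXTURE_SIZE)),
  };
}

console.log(webglFingerprintSample());
```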

Written by Christopher Burg

February 15th, 2017 at 10:30 am

Tips for Getting Past Customs

with 3 comments

Customs in the United States have become nosier every year. It makes one wonder how anybody can enter the country without surrendering their entire digital life by granting customs agents access to their devices. Wired put together a decent guide for dealing with customs. Of the tips there is one that I highly recommend:

Make a Travel Kit

For the most vulnerable travelers, the best way to keep customs away from your data is simply not to carry it. Instead, like Lackey, set up travel devices that store the minimum of sensitive data. Don’t link those “dirty” devices to your personal accounts, and when you do have to create a linked account—as with iTunes for iOS devices—create fresh ones with unique usernames and passwords. “If they ask for access and you can’t refuse, you want to be able to give them access without losing any sensitive information,” says Lackey.

Social media accounts, admittedly, can’t be so easily ditched. Some security experts recommend creating secondary personas that can be offered up to customs officials while keeping a more sensitive account secret. But if CBP agents do link your identity with an account you tried to hide, the result could be longer detention and, for non-citizens, even denial of entry.

I believe that I first came across this advice on Bruce Schneier’s blog. Instead of traveling with a device that contains all of your information, consider traveling with a completely clean device and accessing the information you need via a Virtual Private Network (VPN) once you reach your destination. When you’re ready to return home, wipe all of the data.

The most effective way to defend against the snoops at the border is to not have any data for them to snoop.

The other tips are good to follow as well but aren’t as effective as simply not having any data in the first place. I understand that isn’t always feasible, though. If you’re traveling somewhere with unreliable Internet connectivity, for example, you will need to bring your data with you. In that case I recommend bringing only the data you absolutely need.

Written by Christopher Burg

February 15th, 2017 at 10:00 am

Snitches Get Dents

without comments

Is your vehicle a snitch? If you have a modern vehicle, especially one with Internet connectivity, the answer is almost certainly yes:

One of the more recent examples can be found in a 2014 warrant that allowed New York police to trace a vehicle by demanding the satellite radio and telematics provider SiriusXM provide location information. The warrant, originally filed in 2014 but only recently unsealed (and published below in full), asked SiriusXM “to activate and monitor as a tracking device the SIRIUS XM Satellite Radio installed on the Target Vehicle for a period of 10 days.” The target was a Toyota 4-Runner wrapped up in an alleged illegal gambling enterprise.

[…]

So it was that in December 2009 police asked GM to cough up OnStar data from a Chevrolet Tahoe rented by a suspected crack cocaine dealer Riley Dantzler. The cops who were after Dantzler had no idea what the car looked like or where it was, but with OnStar tracking they could follow him from Houston, Texas, to Ouchita Parish, Louisiana. OnStar’s tracking was accurate too, a court document revealing it was able to “identify that vehicle among the many that were on Interstate 20 that evening.” They stopped Dantzler and found cocaine, ecstasy and a gun inside.

[…]

In at least two cases, individuals unwittingly had their conversations listened in on by law enforcement. In 2001, OnStar competitor ATX Technologies (which later became part of Agero) was ordered to provide “roving interceptions” of a Mercedes Benz S430V. It initially complied with the order in November of that year to spy on audible communications for 30 days, but when the FBI asked for an extension in December, ATX declined, claiming it was overly burdensome. (The filing on the FBI’s attempt to find ATX in contempt of court is also published below).

As a quick aside, it should also be noted that the cell phone you carry around contains the hardware necessary to perform these same forms of surveillance. So don’t start bragging about the old vehicle you drive if you’re carrying around a cell phone.

There are two major problems here. The first is technological and the second is statism. There’s nothing wrong with adding more technological capabilities to a vehicle. However, much like the Internet of Things, automobile manufacturers have a terrible track record when it comes to computer security. For example, having a built-in communication system like OnStar isn’t bad in and of itself, but when it can be remotely activated a lot of security questions come into play.

The second problem is statism. Monitoring technologies that can be remotely activated are dangerous in general but become even more dangerous in the hands of the State. As this story demonstrated, the combination of remotely activated microphones and statism leads to men with guns kidnapping people (or possibly worse).

Everything in this story is just the tip of the iceberg though. As more technology is integrated into automobiles, the State will integrate itself more as well. I have no doubt that at some point a law will be passed requiring all automobiles to have a remotely activated kill switch. It’ll likely be proposed shortly after a high-speed chase that ends in an officer getting killed and will be sold to the public as necessary for protecting the lives of our heroes in blue. As self-driving cars become more popular, there will likely be a law requiring them to have a remotely accessible autopilot mode so police can command a car to pull over for a stop or drive itself to the courthouse if somebody misses their court date.

Everything that could be amazing will end up being shit because the State will decide to meddle. The State is why we can’t have nice things.

The Public Private Data Cycle

without comments

Just as the Austrian school of economics has a business cycle, I have a data cycle. The Public Private Data Cycle (a catchier, web 3.0 buzzword-compliant name is coming later) states that all privately held data becomes government data with a subpoena and all government data becomes privately held data with a leak.

The Public Private Data Cycle is important to note whenever somebody discusses keeping data on individuals. For example, many libertarians don’t worry much about the data Facebook collects because Facebook is a private company. The very same people will flip out whenever the government wants to collect more data though. Likewise, many statists don’t worry much about the data the government collects because the government is a public entity. The very same people will flip out whenever Facebook wants to collect more data though. Both of these groups have a major misunderstanding about how data access works.

I’ve presented several cases on this blog illustrating how privately held data became government data with a subpoena. But what about government data becoming privately held data? The State of California recently provided us with such an example:

Our reader Tom emailed me after he had been notified by the state of California that his personal information had been compromised as a result of a California Public Records Act request. Based on the limited information that we have at this time, it appears that the released data includes names, the instructor’s date of birth, and the instructor’s California driver’s license number and/or California ID card number.

When Tom reached out to the CA DOJ he was informed that the entire list of firearms trainers in California had been released in the public records act request. The state of California is sending letters to those affected with the promise of 12 months of identity protection, but if you are a CA firearms instructor and haven’t seen a letter, it might be a good idea to call the DOJ to see if you were affected.

This wasn’t a case of a malicious hacker gaining access to California’s database. The state accidentally handed out this data in response to a public records request. Now, that government-held data about firearm instructors is privately held by an unknown party. Sure, the State of California said it ordered the recipient to destroy the data, but as we all know, once data has been accessed by an unauthorized party there’s no way to control it.

If data exists, the chance of it being accessed by an unauthorized party is greater than zero. That’s why everybody should be wary of any attempt by anybody to collect more data on individuals.

Written by Christopher Burg

January 17th, 2017 at 11:00 am

Your Private Medical Data isn’t So Private

with one comment

People seem to misunderstand the Health Insurance Portability and Accountability Act (HIPAA). I often hear people citing HIPAA as proof that their medical data is private. However, misunderstandings aren’t reality. Your medical data isn’t private. In fact, it’s for sale:

Your medical data is for sale – all of it. Adam Tanner, a fellow at Harvard’s institute for quantitative social science and author of a new book on the topic, Our Bodies, Our Data, said that patients generally don’t know that their most personal information – what diseases they test positive for, what surgeries they have had – is the stuff of multibillion-dollar business.

The trick is that the data is “anonymized” before it is sold. I used quotation marks in that case because anonymized can mean different things to different people. To me, anonymized means the data has been scrubbed in such a way that it cannot be tied to any individual. This is a very difficult standard to meet though. To others, such as those who are selling your medical data, anonymized simply means replacing the name, address, and phone number of a patient with an identifier. But simply removing a few identifiers doesn’t cut it in the age of big data:

But other forms of data, such as information from fitness devices and search engines, are completely unregulated and have identities and addresses attached. A third kind of data called “predictive analytics” cross-references the other two and makes predictions about behavior with what Tanner calls “a surprising degree of accuracy”.

None of this technically violates the health insurance portability and accountability act, or Hipaa, Tanner writes. But the techniques do render the protections of Hipaa largely toothless. “Data scientists can now circumvent Hipaa’s privacy protections by making very sophisticated guesses, marrying anonymized patient dossiers with named consumer profiles available elsewhere – with a surprising degree of accuracy,” says the study.

With the vast amount of data available about everybody, identifying the person behind an “anonymized” record is not as difficult as most people think.
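To illustrate why stripping names isn’t enough, here is a toy TypeScript sketch of the kind of linkage attack described above: joining an “anonymized” medical record set with a named consumer list on shared quasi-identifiers. All of the records and field names below are invented for the example.

```typescript
// Toy illustration of re-identification: join an "anonymized" medical record set with a
// named consumer list on quasi-identifiers. Every record here is invented for the example.
interface MedicalRecord { patientId: string; zip: string; birthDate: string; sex: string; diagnosis: string; }
interface ConsumerRecord { name: string; zip: string; birthDate: string; sex: string; }

const medical: MedicalRecord[] = [
  { patientId: "A-193", zip: "55401", birthDate: "1980-03-07", sex: "M", diagnosis: "hypertension" },
];

const consumers: ConsumerRecord[] = [
  { name: "John Example", zip: "55401", birthDate: "1980-03-07", sex: "M" },
];

// Match on ZIP + birth date + sex; that triple alone narrows most people down to one
// candidate, which is why removing names and addresses does not make data anonymous.
function reidentify(meds: MedicalRecord[], people: ConsumerRecord[]) {
  return meds.flatMap((m) =>
    people
      .filter((p) => p.zip === m.zip && p.birthDate === m.birthDate && p.sex === m.sex)
      .map((p) => ({ name: p.name, diagnosis: m.diagnosis }))
  );
}

console.log(reidentify(medical, consumers)); // [ { name: "John Example", diagnosis: "hypertension" } ]
```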

HIPAA was written by an organization that hates privacy, so it’s not surprising that the law failed to protect anybody’s privacy. This is also why legislation won’t fix this problem. The only way to fix it is to either incentivize medical professionals to keep patient data confidential or to give exclusive control of a patient’s data to that patient.

Written by Christopher Burg

January 12th, 2017 at 11:00 am

CNN and Hackers

with one comment

The media’s portrayal of hackers is never accurate but almost always amusing. From hooded figures hunched over keyboards staring at green ones and zeros on a black screen to balaclava-clad individuals holding a laptop in one hand while furiously typing with the other, the creative minds behind the scenes at major media outlets always find a way to make hackers appear far more sinister than they really are.

CNN recently aired a segment about Russian hackers. How did the creative minds at CNN portray hackers to the viewing public? By showing a mini-game from a game you may have heard of:

In a recent story about President Obama proposing sanctions against Russia for its role in cyberattacks targeting the United States, CNN grabbed a screenshot of the hacking mini-game from the extremely popular RPG Fallout 4. First spotted by Reddit, the screenshot shows the menacing neon green letters that gamers will instantly recognize as being from the game.

Personally, I would have lifted a screenshot from the hacking mini-game in Deus Ex; it looks far more futuristic.

A lot of electrons have been annoyed by all of the people flipping out about fake news. But almost no attention has been paid to uninformed news. Most major media outlets are woefully uninformed about many (most?) of the subjects they report on. If you know anything about guns or technology you’re familiar with the amount of inaccurate reporting that occurs because of the media’s lack of understanding. When the outlet reporting on a subject doesn’t know anything about the subject the information they provide is worthless. Why aren’t people flipping out about that?

Written by Christopher Burg

January 4th, 2017 at 10:00 am

The Walls Have Ears

with one comment

Voice-activated assistants such as the Amazon Echo and Google Home are becoming popular household devices. With a simple voice command these devices can do anything from turning on your smart lightbulbs to playing music. However, any voice-activated device must necessarily be listening at all times, and law enforcers know that:

Amazon’s Echo devices and its virtual assistant are meant to help find answers by listening for your voice commands. However, police in Arkansas want to know if one of the gadgets overheard something that can help with a murder case. According to The Information, authorities in Bentonville issued a warrant for Amazon to hand over any audio or records from an Echo belonging to James Andrew Bates. Bates is set to go to trial for first-degree murder for the death of Victor Collins next year.

Amazon declined to give police any of the information that the Echo logged on its servers, but it did hand over Bates’ account details and purchases. Police say they were able to pull data off of the speaker, but it’s unclear what info they were able to access.

While Amazon declined to provide any server-side information logged by the Echo, there’s no reason a court order couldn’t compel it to do so. In addition, law enforcers managed to pull some unknown data locally from the Echo. Those two points raise questions about what kind of information devices like the Echo and Home collect as they sit passively on your counter awaiting your command.

As with much of the Internet of Things, I haven’t purchased one of these voice-activated assistants and have no plans to buy one anytime in the near future. They’re too big of a privacy risk for my tastes since I don’t even know what kind of information they’re collecting as they sit there listening.

Written by Christopher Burg

December 28th, 2016 at 11:00 am

Bypassing the Censors

with one comment

What happens when a government attempts to censor people who are using a secure mode of communication? The censorship is bypassed:

Over the weekend, we heard reports that Signal was not functioning reliably in Egypt or the United Arab Emirates. We investigated with the help of Signal users in those areas, and found that several ISPs were blocking communication with the Signal service and our website. It turns out that when some states can’t snoop, they censor.

[…]

Today’s Signal release uses a technique known as domain fronting. Many popular services and CDNs, such as Google, Amazon Cloudfront, Amazon S3, Azure, CloudFlare, Fastly, and Akamai can be used to access Signal in ways that look indistinguishable from other uncensored traffic. The idea is that to block the target traffic, the censor would also have to block those entire services. With enough large scale services acting as domain fronts, disabling Signal starts to look like disabling the internet.

Censorship is an arms race between the censors and the people trying to communicate freely. When one side finds a way to bypass the other, the other side responds. Fortunately, each individual government is up against the entire world. Egypt and the United Arab Emirates only have control over their own territories, but the people in those territories can access knowledge from anywhere in the world. With odds like that, the State is bound to fail every time.

This is also why any plans to compromise secure means of communication are doomed to fail. Let’s say the United States passes a law that requires all encryption software used within its borders to include a government backdoor. That isn’t the end of secure communications in the United States. It merely means that people wanting to communicate securely need to obtain tools developed in nations where such rules don’t exist. Since the Internet is global, access to the goods and services of other nations is at your fingertips.
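To make the domain-fronting technique described in the quote above concrete, here is a minimal Node.js sketch in TypeScript. Both hostnames are hypothetical; the point is that the censor only ever sees the front domain in DNS and in the TLS handshake, while the real destination travels inside the encrypted request.

```typescript
// Minimal sketch of domain fronting using Node's built-in https module.
// Both hostnames are hypothetical; a real deployment fronts through a large CDN.
import * as https from "https";

const FRONT_DOMAIN = "cdn-front.example.com";          // what the censor sees in DNS and TLS SNI
const HIDDEN_SERVICE = "blocked-service.example.com";  // what the CDN routes to internally

const req = https.request(
  {
    host: FRONT_DOMAIN,          // TCP connection and TLS handshake target the front domain
    servername: FRONT_DOMAIN,    // SNI advertises only the innocuous front
    path: "/v1/messages",
    method: "GET",
    headers: { Host: HIDDEN_SERVICE }, // inside the encrypted tunnel, ask for the real service
  },
  (res) => {
    console.log("status:", res.statusCode);
    res.resume();
  }
);

req.on("error", (err) => console.error("request failed:", err.message));
req.end();
```

Blocking this connection means blocking the front domain itself, which is exactly the “disable the internet” trade-off the Signal developers describe.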

Written by Christopher Burg

December 23rd, 2016 at 11:00 am

You Have No Right to Privacy, Slave

with 2 comments

It’s a good thing we have a right to not incriminate ourselves. Without that right a police officer could legally require us to give them our passcodes to unlock our phones:

A Florida man arrested for third-degree voyeurism using his iPhone 5 initially gave police verbal consent to search the smartphone, but later rescinded permission before divulging his 4-digit passcode. Even with a warrant, they couldn’t access the phone without the combination. A trial judge denied the state’s motion to force the man to give up the code, considering it equal to compelling him to testify against himself, which would violate the Fifth Amendment. But the Florida Court of Appeals’ Second District reversed that decision today, deciding that the passcode is not related to criminal photos or videos that may or may not exist on his iPhone.

‘Merica!

George W. Bush was falsely accused of saying that the Constitution was just a “Goddamn piece of paper!” Those who believed the quote were outraged because that sentiment is heresy against the religion of the State. But it’s also true. The Constitution, especially the first ten amendments, can’t restrict the government in any way. It’s literally just a piece of paper, which is why your supposed rights enshrined in the document keep becoming more and more restricted.

Any sane interpretation of the Fifth Amendment would say that nobody is required to surrender a password to unlock their devices. But what you or I think the Constitution says is irrelevant. The only people who get to decide what it says, according to the Constitution itself, are the men who wear magical muumuus.

Written by Christopher Burg

December 20th, 2016 at 10:00 am

Facebook’s Attempt to Combat Scam News Sites

without comments

Fake news is the current boogeyman occupying news headlines. Ironically, this boogeyman is being promoted by many organizations that produce fake news themselves, such as CNN, Fox News, and MSNBC. For the most part fake news isn’t harmful. In fact fake news, which was originally known as tabloid journalism, has probably been around as long as real news. But fake news can be harmful when it’s used to scam individuals, which is a problem Facebook is looking to address:

A new suite of tools will allow independent fact checkers to investigate stories that Facebook users or algorithms have flagged as potentially fake. Stories will be mostly flagged based on user feedback. But Mosseri also noted that the company will investigate stories that become viral in suspicious ways, such as by using a misleading URL. The company is also going to flag stories that are shared less than normal. “We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way,” Mosseri wrote.

Mosseri indicated that the company’s new efforts will only target scammers, not sites that push conspiracies like Pizzagate. “Fake news means a lot of different things to a lot of different people, but we are specifically focused on the worst of the worst—clear intentional hoaxes,” he told BuzzFeed. In other words, if a publisher genuinely believes fake news to be true, it will not be fact checked.

On the surface this doesn’t seem like a bad idea. I’ve seen quite a few people repost what they thought was a legitimate news article because the article was posted on a website that looked like CNBC and had a URL very close to CNBC but wasn’t actually CNBC. If you caught the slightly malformed URL you realized that the site was a scam.

However, I don’t have much faith in the method Facebook is using to judge whether an article is legitimate or not:

Once a story is flagged, it will go into a special queue that can only be accessed by signatories to the International Fact-Checkers Network Code of Principles, a project of nonprofit journalism organization Poynter. IFCN Code of Principles signatories in the U.S. will review the flagged stories for accuracy. If the signatory decides the story is fake news, a “disputed” warning will appear on the story in News Feed. The warning will also pop up when you share the story.

I don’t particularly trust many of the IFCN signatories. Websites such as FactCheck.org and Snopes have a very hit-or-miss record when it comes to fact checking. And I especially don’t trust nonprofit organizations. Any organization that claims it doesn’t want to make a profit is suspect because, let’s face it, everybody wants to make a profit (although not necessarily a monetary one).

Either way, it’ll be interesting to see if Facebook’s tactic works for reducing the spread of outright scam sites.
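As a rough illustration of the “read but rarely shared” signal Mosseri describes, here is a toy TypeScript sketch. The field names, threshold, and URLs are all invented; Facebook’s actual pipeline is not public.

```typescript
// Toy version of the "read but rarely shared" signal described in the quote above.
// Field names, thresholds, and URLs are invented for illustration only.
interface ArticleStats { url: string; reads: number; sharesAfterRead: number; }

function flagSuspicious(stats: ArticleStats[], minReads = 1000, maxShareRate = 0.01): string[] {
  return stats
    .filter((a) => a.reads >= minReads)                        // ignore low-traffic noise
    .filter((a) => a.sharesAfterRead / a.reads < maxShareRate) // readers bail instead of sharing
    .map((a) => a.url);
}

const sample: ArticleStats[] = [
  { url: "https://cnbc.com-example.info/fake-story", reads: 50000, sharesAfterRead: 120 },
  { url: "https://example-news.com/ordinary-story", reads: 40000, sharesAfterRead: 2600 },
];

console.log(flagSuspicious(sample)); // only the first URL would be queued for fact checkers
```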

Written by Christopher Burg

December 16th, 2016 at 10:30 am

Posted in Technology
