Archive for the ‘Technology’ Category
Just as the Austrian school of economics has a business cycle, I have a data cycle. The Public Private Data Cycle (catchier Web 3.0 buzzword-compliant name coming later) states that all privately held data becomes government data with a subpoena and all government data becomes privately held data with a leak.
The Public Private Data Cycle is important to note whenever somebody discusses keeping data on individuals. For example, many libertarians don’t worry much about the data Facebook collects because Facebook is a private company. The very same people will flip out whenever the government wants to collect more data though. Likewise, many statists don’t worry much about the data the government collects because the government is a public entity. The very same people will flip out whenever Facebook wants to collect more data though. Both of these groups have a major misunderstanding about how data access works.
I’ve presented several cases on this blog illustrating how privately held data became government data with a subpoena. But what about government data becoming privately held data? The State of California recently provided us with such an example:
Our reader Tom emailed me after he had been notified by the state of California that his personal information had been compromised as a result of a California Public Records Act request. Based on the limited information that we have at this time, it appears that the released data included instructors’ names, dates of birth, and California driver’s license and/or California ID card numbers.
When Tom reached out to the CA DOJ he was informed that the entire list of firearms trainers in California had been released in the public records act request. The state of California is sending letters to those affected with the promise of 12 months of identity protection, but if you are a CA firearms instructor and haven’t seen a letter, it might be a good idea to call the DOJ to see if you were affected.
This wasn’t a case of a malicious hacker gaining access to California’s database. The state accidentally handed out this data in response to a public records request. Now that government-held data about firearm instructors is privately held by an unknown party. Sure, the State of California said it ordered the recipient to destroy the data, but as we all know, once data has been accessed by an unauthorized party there’s no way to control it.
If data exists, then the chance of it being accessed by an unauthorized party is greater than zero. That’s why everybody should be wary of any attempt by anybody to collect more data on individuals.
People seem to misunderstand the Health Insurance Portability and Accountability Act (HIPAA). I often hear people citing HIPAA as proof that their medical data is private. However, misunderstandings aren’t reality. Your medical data isn’t private. In fact, it’s for sale:
Your medical data is for sale – all of it. Adam Tanner, a fellow at Harvard’s institute for quantitative social science and author of a new book on the topic, Our Bodies, Our Data, said that patients generally don’t know that their most personal information – what diseases they test positive for, what surgeries they have had – is the stuff of multibillion-dollar business.
The trick is that the data is “anonymized” before it is sold. I used quotation marks there because anonymized can mean different things to different people. To me, anonymized means the data has been scrubbed in such a way that it cannot be tied to any individual. That is a very difficult standard to meet, though. To others, such as those who are selling your medical data, anonymized simply means replacing the name, address, and phone number of a patient with an identifier. But simply removing a few identifiers doesn’t cut it in the age of big data:
But other forms of data, such as information from fitness devices and search engines, are completely unregulated and have identities and addresses attached. A third kind of data called “predictive analytics” cross-references the other two and makes predictions about behavior with what Tanner calls “a surprising degree of accuracy”.
None of this technically violates the health insurance portability and accountability act, or Hipaa, Tanner writes. But the techniques do render the protections of Hipaa largely toothless. “Data scientists can now circumvent Hipaa’s privacy protections by making very sophisticated guesses, marrying anonymized patient dossiers with named consumer profiles available elsewhere – with a surprising degree of accuracy,” says the study.
With the vast amount of data available about everybody it’s not as difficult to identify who “anonymized” data applies to as most people think.
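The cross-referencing technique described above can be sketched in a few lines. This is a toy illustration, not anyone's actual pipeline: all of the names, records, and field choices below are invented, and the quasi-identifiers (date of birth, ZIP code, sex) are just a common example of fields that survive naive "anonymization."

```python
# Toy re-identification sketch: join "anonymized" medical records to
# named consumer profiles using quasi-identifiers that were never
# scrubbed. All data below is invented for the example.

ANONYMIZED_MEDICAL = [
    {"patient_id": "A-1", "birth_date": "1975-03-02", "zip": "55401",
     "sex": "M", "diagnosis": "diabetes"},
    {"patient_id": "A-2", "birth_date": "1988-11-19", "zip": "55402",
     "sex": "F", "diagnosis": "asthma"},
]

NAMED_PROFILES = [
    {"name": "John Smith", "birth_date": "1975-03-02", "zip": "55401", "sex": "M"},
    {"name": "Jane Doe", "birth_date": "1988-11-19", "zip": "55402", "sex": "F"},
]

QUASI_IDENTIFIERS = ("birth_date", "zip", "sex")

def reidentify(medical, profiles):
    """Match each 'anonymized' record to any named profile sharing the
    same quasi-identifier values."""
    index = {}
    for profile in profiles:
        key = tuple(profile[field] for field in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(profile["name"])
    matches = {}
    for record in medical:
        key = tuple(record[field] for field in QUASI_IDENTIFIERS)
        for name in index.get(key, []):
            matches[name] = record["diagnosis"]
    return matches

print(reidentify(ANONYMIZED_MEDICAL, NAMED_PROFILES))
# {'John Smith': 'diabetes', 'Jane Doe': 'asthma'}
```

Research has long suggested that date of birth, ZIP code, and sex alone are enough to uniquely identify the large majority of Americans, which is why stripping names and phone numbers buys so little.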
HIPAA was written by an organization that hates privacy, so it’s not surprising to see that the law failed to protect anybody’s privacy. This is also why legislation won’t fix this problem. The only way to fix this problem is to either incentivize medical professionals to keep patient data confidential or to give exclusive control of a patient’s data to that patient.
The media’s portrayal of hackers is never accurate but almost always amusing. From hooded figures stooped over keyboards looking at green ones and zeros on a black screen to balaclava-clad individuals holding a laptop in one hand while they furiously type with the other, the creative minds behind the scenes at major media outlets always have a way to make hackers appear far more sinister than they really are.
CNN recently aired a segment about Russian hackers. How did the creative minds at CNN portray hackers to the viewing public? By showing a mini-game from a game you may have heard of:
In a recent story about President Obama proposing sanctions against Russia for its role in cyberattacks targeting the United States, CNN grabbed a screenshot of the hacking mini-game from the extremely popular RPG Fallout 4. First spotted by Reddit, the screenshot shows the menacing neon green letters that gamers will instantly recognize as being from the game.
Personally, I would have lifted a screenshot from the hacking mini-game in Deus Ex; it looks far more futuristic.
A lot of electrons have been annoyed by all of the people flipping out about fake news. But almost no attention has been paid to uninformed news. Most major media outlets are woefully uninformed about many (most?) of the subjects they report on. If you know anything about guns or technology you’re familiar with the amount of inaccurate reporting that occurs because of the media’s lack of understanding. When the outlet reporting on a subject doesn’t know anything about the subject the information they provide is worthless. Why aren’t people flipping out about that?
Voice-activated assistants such as the Amazon Echo and Google Home are becoming popular household devices. With a simple voice command these devices can allow you to do anything from turning on your smart lightbulbs to playing music. However, any voice-activated device must necessarily be listening at all times, and law enforcers know that:
Amazon’s Echo devices and its virtual assistant are meant to help find answers by listening for your voice commands. However, police in Arkansas want to know if one of the gadgets overheard something that can help with a murder case. According to The Information, authorities in Bentonville issued a warrant for Amazon to hand over any audio or records from an Echo belonging to James Andrew Bates. Bates is set to go to trial for first-degree murder for the death of Victor Collins next year.
Amazon declined to give police any of the information that the Echo logged on its servers, but it did hand over Bates’ account details and purchases. Police say they were able to pull data off of the speaker, but it’s unclear what info they were able to access.
While Amazon declined to provide any server side information logged by the Echo there’s no reason a court order couldn’t compel Amazon to provide such information. In addition to that, law enforcers also managed to pull some unknown data locally from the Echo. Those two points raise questions about what kind of information devices like the Echo and Home collect as they’re passively sitting on your counter awaiting your command.
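The "always listening" point can be made concrete with a toy sketch. A wake-word device has to capture audio continuously into a rolling buffer just to detect the wake word, so whatever was said right before the wake word is already in memory by the time the device "activates." Everything below is invented for illustration: the frame rate, the buffer length, and the use of strings in place of audio frames.

```python
from collections import deque

# Toy sketch of why a wake-word device is always listening: to react
# the instant the wake word is spoken, it must continuously capture
# audio into a rolling buffer. Strings stand in for audio frames.

BUFFER_SECONDS = 3          # how much recent audio is always retained
FRAMES_PER_SECOND = 10      # invented frame rate for the example
WAKE_WORD = "echo"

recent_audio = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)

def on_microphone_frame(frame):
    """Called for every frame, wake word or not: the audio is captured
    *before* the device decides whether it was meant to hear it."""
    recent_audio.append(frame)
    if WAKE_WORD in frame:
        # Speech from just before the wake word is already buffered.
        return list(recent_audio)
    return None

captured = None
for frame in ["something private", "was just said", "echo play music"]:
    result = on_microphone_frame(frame)
    if result is not None:
        captured = result

print(captured)
# ['something private', 'was just said', 'echo play music']
```

Whether any given product uploads or retains that pre-wake-word buffer is exactly the kind of question the warrant in this case raises.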
As with much of the Internet of Things, I haven’t purchased one of these voice-activated assistants and have no plans to buy one anytime in the near future. They’re too big of a privacy risk for my tastes since I don’t even know what kind of information they’re collecting as they sit there listening.
What happens when a government attempts to censor people who are using a secure mode of communication? The censorship is bypassed:
Over the weekend, we heard reports that Signal was not functioning reliably in Egypt or the United Arab Emirates. We investigated with the help of Signal users in those areas, and found that several ISPs were blocking communication with the Signal service and our website. It turns out that when some states can’t snoop, they censor.
Today’s Signal release uses a technique known as domain fronting. Many popular services and CDNs, such as Google, Amazon Cloudfront, Amazon S3, Azure, CloudFlare, Fastly, and Akamai can be used to access Signal in ways that look indistinguishable from other uncensored traffic. The idea is that to block the target traffic, the censor would also have to block those entire services. With enough large scale services acting as domain fronts, disabling Signal starts to look like disabling the internet.
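The mechanics described in the quote above can be sketched simply. In domain fronting, the domain a censor can observe (the TLS Server Name Indication in the handshake) names an innocuous CDN front, while the Host header inside the encrypted channel names the real service the CDN should route to. The sketch below only builds the pieces of such a request; the domain names are placeholders, not Signal's actual fronts.

```python
# Sketch of domain fronting: the censor sees only the TLS SNI value,
# while the Host header that names the real service travels inside
# the encrypted tunnel. Domains below are placeholders.

def build_fronted_request(front_domain, real_host, path="/"):
    """Return (sni, http_request). A censor watching the wire sees
    `sni`; the CDN routes on the Host header it decrypts."""
    sni = front_domain  # what appears in the TLS ClientHello
    http_request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {real_host}\r\n"      # hidden inside the TLS session
        f"Connection: close\r\n"
        f"\r\n"
    )
    return sni, http_request

sni, request = build_fronted_request("www.frontcdn.example",
                                     "service.hidden.example")
print(sni)                                          # all the censor sees
print("Host: service.hidden.example" in request)    # True
```

Blocking the hidden service therefore means blocking the entire front domain, which is the "disabling Signal starts to look like disabling the internet" trade-off the Signal team describes.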
Censorship is an arms race between the censors and the people trying to communicate freely. When one side finds a way to bypass the other then the other side responds. Fortunately, each individual government is up against the entire world. Egypt and the United Arab Emirates only have control over their own territories but the people in those territories can access knowledge from anywhere in the world. With odds like that, the State is bound to fail every time.
This is also why any plans to compromise secure means of communication are doomed to fail. Let’s say the United States passes a law that requires all encryption software used within its borders to include a government backdoor. That isn’t the end of secure communications in the United States. It merely means that people wanting to communicate securely need to obtain tools developed in nations where such rules don’t exist. Since the Internet is global access to the goods and services of other nations is at your fingertips.
It’s a good thing we have a right to not incriminate ourselves. Without that right a police officer could legally require us to give them our passcodes to unlock our phones:
A Florida man arrested for third-degree voyeurism using his iPhone 5 initially gave police verbal consent to search the smartphone, but later rescinded permission before divulging his 4-digit passcode. Even with a warrant, they couldn’t access the phone without the combination. A trial judge denied the state’s motion to force the man to give up the code, considering it equal to compelling him to testify against himself, which would violate the Fifth Amendment. But the Florida Court of Appeals’ Second District reversed that decision today, deciding that the passcode is not related to criminal photos or videos that may or may not exist on his iPhone.
George W. Bush was falsely accused of saying that the Constitution was just a “Goddamn piece of paper!” Those who believed the quote were outraged because that sentiment is heresy against the religion of the State. But it’s also true. The Constitution, especially the first ten amendments, can’t restrict the government in any way. It’s literally just a piece of paper, which is why your supposed rights enshrined by the document keep becoming more and more restricted.
Any sane interpretation of the Fifth Amendment would say that nobody is required to surrender a password to unlock their devices. But what you or I think the Constitution says is irrelevant. The only people who get to decide what it says, according to the Constitution itself, are the men who wear magical muumuus.
Fake news is the current boogeyman occupying news headlines. Ironically, this boogeyman is being promoted by many organizations that produce fake news such as CNN, Fox News, and MSNBC. For the most part fake news isn’t harmful. In fact fake news, which was originally referred to as tabloids, has probably been around as long as real news. But fake news can be harmful when it’s used to scam individuals, which is a problem Facebook is looking to address:
A new suite of tools will allow independent fact checkers to investigate stories that Facebook users or algorithms have flagged as potentially fake. Stories will be mostly flagged based on user feedback. But Mosseri also noted that the company will investigate stories that become viral in suspicious ways, such as by using a misleading URL. The company is also going to flag stories that are shared less than normal. “We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way,” Mosseri wrote.
Mosseri indicated that the company’s new efforts will only target scammers, not sites that push conspiracies like Pizzagate. “Fake news means a lot of different things to a lot of different people, but we are specifically focused on the worst of the worst—clear intentional hoaxes,” he told BuzzFeed. In other words, if a publisher genuinely believes fake news to be true, it will not be fact checked.
On the surface this doesn’t seem like a bad idea. I’ve seen quite a few people repost what they thought was a legitimate news article because the article was posted on a website that looked like CNBC and had a URL very close to CNBC but wasn’t actually CNBC. If you caught the slightly malformed URL you realized that the site was a scam.
However, I don’t have much faith in the method Facebook is using to judge whether an article is legitimate or not:
Once a story is flagged, it will go into a special queue that can only be accessed by signatories to the International Fact-Checkers Network Code of Principles, a project of nonprofit journalism organization Poynter. IFCN Code of Principles signatories in the U.S. will review the flagged stories for accuracy. If the signatory decides the story is fake news, a “disputed” warning will appear on the story in News Feed. The warning will also pop up when you share the story.
I don’t particularly trust many of the IFCN signatories. Websites such as FactCheck.org and Snopes have a very hit or miss record when it comes to fact checking. And I especially don’t trust nonprofit organizations. Any organization that claims that it doesn’t want to make a profit is suspect because, let’s face it, everybody wants to make a profit (although it may not necessarily be a monetary profit).
Either way, it’ll be interesting to see if Facebook’s tactic works for reducing the spread of outright scam sites.
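The share-rate signal Mosseri describes above can be sketched as a toy heuristic: compare how often a story is shared from the headline alone against how often it is shared after being read, and flag it when reading causes a sharp drop. The threshold and numbers below are invented; Facebook has not published its actual formula.

```python
# Toy version of the quoted signal: "if reading an article makes
# people significantly less likely to share it, that may be a sign
# that a story has misled people." Threshold is invented.

def looks_misleading(shares_without_reading, headline_views,
                     shares_after_reading, read_views,
                     drop_threshold=0.5):
    """Flag a story if reading it cuts the share rate by more than
    `drop_threshold` relative to headline-only sharing."""
    headline_rate = shares_without_reading / headline_views
    read_rate = shares_after_reading / read_views
    return read_rate < headline_rate * (1 - drop_threshold)

# Shared eagerly from the headline, rarely after reading:
print(looks_misleading(400, 1000, 30, 1000))   # True
# Shared at a similar rate either way:
print(looks_misleading(100, 1000, 90, 1000))   # False
```

A heuristic like this would of course flag plenty of legitimate clickbait too, which is part of why human fact checkers sit downstream of the automated signal.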
It should be becoming more and more apparent that you don’t own your smartphone. Sure, you paid for it and you physically control it, but if the device itself can be disabled by a third party without your authorization can you really say that you own it? This is a question Samsung Galaxy Note 7 owners should be asking themselves right now:
Samsung’s Galaxy Note 7 recall in the US is still ongoing, but the company will release an update in a couple of weeks that will basically force customers to return any devices that may still be in use. The company announced today that a December 19th update to the handsets in the States will prevent them from charging at all and “will eliminate their ability to work as mobile devices.” In other words, if you still have a Note 7, it will soon be completely useless.
One could argue that this ability to push an update to a device to disable it is a good thing in the case of the Note 7 since the device has a reputation for lighting on fire. But it has rather frightening ownership and security implications.
The ownership implications should be obvious. If the device manufacturer can disable your device at its whim then you can’t really claim to own it. You can only claim that you’re borrowing it for as long as the manufacturer deems you worthy of doing so. However, in regards to ownership, nothing has really changed. Since copyright and patent laws were applied to software your ability to own your devices has been basically nonexistent.
The security implications may not be as obvious. Sure, the ability for a device manufacturer to push implicitly trusted software to their devices carries risks but the tradeoff, relying on users to apply security updates, also carries risks. But this particular update being pushed out by Samsung has the ability to destroy users’ trust in manufacturer updates. Many users are currently happy to allow their devices to update themselves automatically because those updates tend to improve the device. It only takes a single bad update to make those users unhappy with automatic updates. If they become unhappy with automatic updates they will seek ways of disabling updates.
The biggest weakness in any security system tends to be the human component. Part of this is due to the difficulty of training humans to be secure. It takes a great deal of effort to train somebody to follow even basic security principles but it takes very little to undo all of that training. A single bad experience is all that generally stands between that effort and having all of it undone. If Samsung’s strategy becomes more commonplace I fear that years of getting users comfortable with automatic updates may be undone and we’ll be looking at a world where users jump through hoops to disable updates.
Pebble was an interesting company. While the company didn’t invent the smartwatch concept (I have a Fossil smartwatch running Palm OS that came out well before the Pebble), it did popularize the market. But making a product concept popular doesn’t mean you’re going to be successful. Pebble has filed for bankruptcy and, effective immediately, will no longer sell products, honor warranties, or provide any support beyond the material already posted on the Pebble website.
But what really got me was how the announcement was handled. If you read the announcement you might be led to believe that Fitbit has purchased Pebble. The post talks about this being Pebble’s “next step” and the e-mail announcement sent out yesterday even said that Pebble was joining Fitbit:
It’s no surprise that a lot of Pebble users were quite upset with Fitbit since, based on the information released by Pebble, it appeared that Fitbit had made the decision to not honor warranties, release regular software updates for current watches, and discontinue the newly announced watches. But Fitbit didn’t buy Pebble, it only bought some of its assets:
Fitbit Inc., the fitness band maker, has acquired software assets from struggling smartwatch startup Pebble Technology Corp., a move that will help it better compete with Apple Inc.
The purchase excludes Pebble’s hardware, Fitbit said in a statement Wednesday. The deal is mainly about hiring the startup’s software engineers and testers, and getting intellectual property such as the Pebble watch’s operating system, watch apps, and cloud services, people familiar with the matter said earlier.
While Fitbit didn’t disclose terms of the acquisition, the price is less than $40 million, and Pebble’s debt and other obligations exceed that, two of the people said. Fitbit is not taking on the debt, one of the people said. The rest of Pebble’s assets, including product inventory and server equipment, will be sold off separately, some of the people said.
I bring this up partially because I was a fan of Pebble’s initial offering and did enjoy the fact that the company offered a unique product (a smartwatch with an always-on display that only needed to be charged every five to seven days), but mostly because I found the way Pebble handled this announcement rather dishonest. If your company is filing for bankruptcy you should just straight up admit it instead of trying to make it sound like you’ve been bought out by the first company to come by and snap up some of your assets. Since you’re already liquidating the company there’s nothing to be gained by pussyfooting around the subject.
When people think of software glitches they generally think of annoyances such as their application crashing and losing any changes since their last save, their smart thermostat causing the furnace not to kick on, or the graphics in their game displaying abnormally. But as software has become more and more integrated into our lives the real life implications of software glitches have become more severe:
OAKLAND, Calif.—Most pieces of software don’t have the power to get someone arrested—but Tyler Technologies’ Odyssey Case Manager does. This is the case management software that runs on the computers of hundreds and perhaps even thousands of court clerks and judges in county courthouses across the US. (Federal courts use an entirely different system.)
Typically, when a judge makes a ruling—for example, issuing or rescinding a warrant—those words said by a judge in court are entered into Odyssey. That information is then relied upon by law enforcement officers to coordinate arrests and releases and to issue court summons. (Most other courts, even if they don’t use Odyssey, use a similar software system from another vendor.)
But, just across the bay from San Francisco, one of Alameda County’s deputy public defenders, Jeff Chorney, says that since the county switched from a decades-old computer system to Odyssey in August, dozens of defendants have been wrongly arrested or jailed. Others have even been forced to register as sex offenders unnecessarily. “I understand that with every piece of technology, bugs have to be worked out,” he said, practically exasperated. “But we’re not talking about whether people are getting their paychecks on time. We’re talking about people being locked in cages, that’s what jail is. It’s taking a person and locking them in a cage.”
First, let me commend Jeff Chorney for stating that jails are cages. Too many people like to pretend that isn’t the case. Second, he has a point. Case management software, as we’ve seen in this case, can have severe ramifications if bugs are left in the code.
The threat of bugs causing significant real life consequences isn’t a new one. A lot of software manages equipment that can kill people if it malfunctions. In response to that, many industries have gone to great lengths to select tools and come up with procedures to minimize the chances of major bugs making it into released code. The National Aeronautics and Space Administration (NASA), for example, has an extensive history of writing code where malfunctions can cost millions of dollars or even kill people, and its programmers have developed tools and standards to minimize their risks. Most industrial equipment manufacturers also spend a significant amount of time developing tools and standards to minimize code errors because their software mistakes can lead to millions of dollars being lost or people dying.
Software developers working on products that can have severe real life consequences need to focus on developing reliable code. Case management software isn’t Facebook. When a bug exists in Facebook the consequences are annoying to users but nobody is harmed. When a bug exists in case management software innocent people can end up in cages or on a sex offender registry, which can ruin their entire lives.
Likewise, people purchasing and using critical software need to thoroughly test it before putting it in production. Do you think there are many companies that buy multi-million dollar pieces of equipment and don’t test them thoroughly before putting them on the assembly line? That would be foolish, and any company that did that would end up facing millions of dollars of downtime or even bankruptcy if the machine didn’t perform as needed. The governments that are using the Odyssey Case Manager software should have thoroughly tested the product before using it in any court. But since the governments themselves don’t face any risks from bad case management software they likely did, at best, basic testing before rushing the product into production.
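The kind of acceptance test a court should run before trusting such a system can be sketched in a few lines. The `CaseSystem` class below is a stand-in I invented for illustration, not Odyssey's actual API; the point is that the highest-consequence rules (a rescinded warrant must never show as active) are exactly the ones worth testing before go-live.

```python
# Toy acceptance test of the kind a court could run before putting a
# case management system into production. CaseSystem is an invented
# stand-in, not Odyssey's real interface.

class CaseSystem:
    def __init__(self):
        self._warrants = {}   # case_id -> status

    def issue_warrant(self, case_id):
        self._warrants[case_id] = "active"

    def rescind_warrant(self, case_id):
        self._warrants[case_id] = "rescinded"

    def active_warrants(self):
        """The list law enforcers would act on."""
        return [c for c, s in self._warrants.items() if s == "active"]

def test_rescinded_warrant_not_actionable():
    system = CaseSystem()
    system.issue_warrant("case-001")
    system.rescind_warrant("case-001")
    # If this fails, someone could be arrested on a dead warrant.
    assert "case-001" not in system.active_warrants()

test_rescinded_warrant_not_actionable()
print("rescinded-warrant check passed")
```

Tests like this are cheap compared to the cost of wrongly jailing even one person, which is why the asymmetry of who bears the risk matters so much here.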