Archive for the ‘Technology’ Category
The price of Bitcoin was getting a little wonky again, which meant the media must be covering some story about it. This time around the media has learned the real identity of Satoshi Nakamoto!
Australian entrepreneur Craig Wright has publicly identified himself as Bitcoin creator Satoshi Nakamoto.
His admission follows years of speculation about who came up with the original ideas underlying the digital cash system.
Mr Wright has provided technical proof to back up his claim using coins known to be owned by Bitcoin’s creator.
Prominent members of the Bitcoin community and its core development team say they have confirmed his claims.
Mystery solved, everybody go home! What’s that? Wright provided a technical proof? It’s based on a cryptographic signature? In that case I’m sure the experts are looking into his claim:
- Yes, this is a scam. Not maybe. Not possibly.
- Wright is pretending he has Satoshi’s signature on Sartre’s writing. That would mean he has the private key, and is likely to be Satoshi. What he actually has is Satoshi’s signature on parts of the public Blockchain, which of course means he doesn’t need the private key and he doesn’t need to be Satoshi. He just needs to make you think Satoshi signed something else besides the Blockchain — like Sartre. He doesn’t publish Sartre. He publishes 14% of one document. He then shows you a hash that’s supposed to summarize the entire document. This is a lie. It’s a hash extracted from the Blockchain itself. Ryan Castellucci (my engineer at White Ops and master of Bitcoin Fu) put an extractor here. Of course the Blockchain is totally public and of course has signatures from Satoshi, so Wright being able to lift a signature from here isn’t surprising at all.
- He probably would have gotten away with it if the signature itself wasn’t googlable by Redditors.
- I think Gavin et al are victims of another scam, and Wright’s done classic misdirection by generating different scams for different audiences.
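The crux of that deception, that verifying a signature only proves something about the exact bytes whose hash was signed, can be sketched in a few lines of Python (the strings below are toy stand-ins, not Wright’s actual data):

```python
import hashlib

def signature_covers_document(signed_hash_hex: str, document: bytes) -> bool:
    # A valid signature only vouches for the exact hash that was signed.
    # If that hash is not the hash of the document being presented,
    # the signature proves nothing about that document.
    return hashlib.sha256(document).hexdigest() == signed_hash_hex

# A hash lifted from an old blockchain transaction will not match the
# hash of a Sartre excerpt, no matter how valid the signature itself is.
onchain_hash = hashlib.sha256(b"some 2009 blockchain transaction").hexdigest()
print(signature_covers_document(onchain_hash, b"excerpt from Sartre"))            # False
print(signature_covers_document(onchain_hash, b"some 2009 blockchain transaction"))  # True
```

This is all the verification Wright’s audience would have needed to do: check that the hash his signature covers is actually the hash of the document he claims was signed.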
Some congratulations should go to Wright for trolling so many people (he will almost certainly claim this was a clever attempt to troll people so he doesn’t feel like a schmuck for being too stupid to properly pull off a scam). Not only did the media get suckered but even members of the Bitcoin community fell for his scam hook, line, and sinker.
The difficult part about being a technophile and an anarchist is that the State often hijacks new technologies to further its own power. These hijackings are always done under the auspices of safety and the groundwork is already being laid for the State to get its fingers into self-driving vehicles:
It is time to start thinking about the rules of the new road. Otherwise, we may end up with some analog to today’s chaos in cyberspace, which arose from decisions in the 1980s about how personal computers and the Internet would work.
One of the biggest issues will be the rules under which public infrastructures and public safety officers may be empowered to override how autonomous vehicles are controlled.
When should law enforcers and safety officers be empowered to override another person’s self-driving vehicle? Never. Why? Setting aside the obvious abuses such empowerment would lead to, we have the issue of security, which the article alludes to toward the end:
Last, but by no means least, is whether such override systems could possibly be made hack-proof. A system to allow authorized people to control someone else’s car is also a system with a built-in mechanism by which unauthorized people — aka hackers — can do the same.
Even if hackers are kept out, if every police officer is equipped to override AV systems, the number of authorized users is already in the hundreds of thousands — or more if override authority is extended to members of the National Guard, military police, fire/EMS units, and bus drivers.
No system can be “hacker-proof,” especially when that system has hundreds of thousands of authorized users. Each system is only as strong as its weakest user. It only takes one careless authorized user leaking their key for the entire world to have a means of gaining access to everything locked by that key.
In order to implement a system in self-driving cars that would allow law enforcers and safety officers to override them there would need to be a remote access option that allowed anybody employed by a police department, fire department, or hospital to log into the vehicle. Every vehicle would either have to be loaded with every law enforcer’s and safety officer’s credentials or, more likely, rely on a single master key. In the case of the former it would only take one careless law enforcer or safety officer posting their credentials somewhere an unauthorized party could access them, including the compromised network of a hospital, for every self-driving car to be compromised. In the case of the latter the only thing that would be required to compromise every self-driving car is the master key being leaked. Either way, the integrity of the system would be dependent on hundreds of thousands of people maintaining perfect security, which is an impossible goal.
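The master-key failure mode is worth making concrete. Here is a minimal sketch of challenge-response authentication built on a single shared override key; this is a hypothetical illustration, not any real vehicle protocol, and it shows that every party holding a copy of the key can authorize itself to every vehicle:

```python
import hashlib
import hmac
import os

# Hypothetical shared override key: under a master-key design, every
# vehicle and every authorized officer's device holds the same 32 bytes.
MASTER_KEY = os.urandom(32)

def override_response(challenge: bytes, key: bytes) -> bytes:
    # The requester proves possession of the key by MACing the challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def vehicle_accepts(challenge: bytes, response: bytes) -> bool:
    # The vehicle computes the same MAC with its copy of the key.
    expected = hmac.new(MASTER_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Any leaked copy of MASTER_KEY answers any vehicle's challenge:
challenge = os.urandom(16)
leaked_key = MASTER_KEY  # one careless authorized user is enough
print(vehicle_accepts(challenge, override_response(challenge, leaked_key)))  # True
```

Nothing in this scheme distinguishes the legitimate holder of the key from someone who copied it, which is exactly why a system with hundreds of thousands of key holders cannot stay secure.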
If self-driving cars are set up to allow law enforcers and safety officers to override them then they will become useless due to being constantly compromised by malicious actors.
My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop usually meant giving up sleep mode, Wi-Fi, the additional function buttons most manufacturers added to their keyboards, and a slew of power management features that made the already pathetic battery life even worse. Since OS X was (and still is) Unix-based and didn’t involve the headaches of trying to get Linux to run on a laptop, the PowerBook fit my needs perfectly.
Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple, on the other hand, has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that the company doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:
Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.
“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”
Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact some companies, such as Cisco, have taken measures to mitigate such threats.
Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.
This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in house. My e-mail, calendar and contact syncing, and even this website are all being hosted on servers running in my dwelling. Although part of the reason I did this was for the experience another reason was to guard against certain forms of surveillance. National Security Letters (NSL), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers are sitting in my dwelling any NSL would necessarily require me to inform myself of receiving it.
In the ongoing security arms race researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:
Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.
It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.
To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.
Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.
“And we kept doing that,” Green said, “until we had the key.”
A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.
With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.
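The digit-at-a-time guessing described above is a classic oracle attack. A toy model (not the actual iMessage protocol; the oracle here simply reports how long a correct prefix is) shows why per-character feedback collapses the search from exponential to linear:

```python
import secrets
import string

ALPHABET = string.digits + string.ascii_lowercase
# Stand-in for the 64-digit decryption key; shortened to 8 characters here.
SECRET_KEY = "".join(secrets.choice(ALPHABET) for _ in range(8))

def oracle(guess: str) -> int:
    # Toy stand-in for the target phone: it leaks how many leading
    # characters of the guess are correct. The real attack inferred this
    # one position at a time from the phone accepting or rejecting keys.
    n = 0
    for g, s in zip(guess, SECRET_KEY):
        if g != s:
            break
        n += 1
    return n

def recover_key(length: int) -> str:
    known = ""
    for pos in range(length):
        for c in ALPHABET:
            trial = known + c + "0" * (length - pos - 1)
            if oracle(trial) > pos:  # one more character confirmed
                known += c
                break
    return known
```

Recovering an eight-character key this way takes at most 8 × 36 guesses instead of 36⁸. The Johns Hopkins team was doing the same kind of thing against a 64-digit key, which is why it took thousands of probes rather than a handful.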
There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right, so it often turns into an arms race. Third, designing secure software is hard even if you’re a large company with a lot of talented employees.
Christopher Soghoian also made a good point in the article:
Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.
The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and user, remain secret.
Open source isn’t a magic bullet but it does allow independent third-party verification of your code. This advantage often goes unrealized; even very popular open source projects like OpenSSL have contained notable security vulnerabilities for years without anybody noticing. But it’s unlikely something like iMessage would have been ignored so thoroughly.
The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. And since iOS, and by extension iMessage, is so prominent in the public eye, a lot of security researchers would likely have combed through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly entail. So open sourcing iMessage would likely have gained Apple a lot of third-party verification.
In fact this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with Android and iOS but it’s also open source so it’s available for third party verification.
Were I asked, I would summarize the Internet of Things as one step forward and two steps back. While integrating computers into everyday objects offers some potential, the way manufacturers are going about it is all wrong.
Consider the standard light switch. A light switch usually has two states. One state, which closes the circuit, turns the lights on while the other state, which opens the circuit, turns the lights off. It’s simple enough but has some notable limitations. First, it cannot be controlled remotely. Having a remotely controlled light switch would be useful, especially if you’re away from home and want to make it appear as though somebody is there to discourage burglars. It would also be nice to verify if you turned all your lights off when you left to reduce the electric bill. Of course remotely operated switches also introduce the potential for remotely accessible vulnerabilities.
What happens when you take the worst aspects of connected light switches, namely vulnerabilities, and don’t even offer the positives? This:
Garrett, who’s also a member of the Free Software Foundation board of directors, was in London last week attending a conference, and found that his hotel room has Android tablets instead of light switches.
“One was embedded in the wall, but the two next to the bed had convenient looking ethernet cables plugged into the wall,” he noted. So, he got ahold of a couple of ethernet adapters, set up a transparent bridge, and put his laptop between the tablet and the wall.
He discovered that the traffic to and from the tablet is going through the Modbus serial communications protocol over TCP.
“Modbus is a pretty trivial protocol, and notably has no authentication whatsoever,” he noted. “Tcpdump showed that traffic was being sent to 172.16.207.14, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close.”
He then noticed that the last three digits of the IP address he was communicating with were those of his room, and successfully tested his theory:
“It’s basically as bad as it could be – once I’d figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well.”
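Garrett’s observation that Modbus has no authentication whatsoever is easy to see from the wire format. A complete Modbus/TCP “write single coil” request, the kind of frame that would flip a light on or off, is twelve bytes with no credential field anywhere (the addresses below are illustrative, not the hotel’s):

```python
import struct

def modbus_write_coil(transaction_id: int, unit_id: int,
                      coil_addr: int, on: bool) -> bytes:
    # Modbus/TCP request: MBAP header + PDU, all fields big-endian.
    # MBAP: transaction id, protocol id (always 0), remaining length, unit id.
    # PDU: function code 0x05 (write single coil), coil address, value.
    # 0xFF00 means "on", 0x0000 means "off"; nothing in the frame
    # identifies or authenticates the sender.
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_coil(transaction_id=1, unit_id=1, coil_addr=1, on=True)
print(frame.hex())  # 00010000000601050001ff00
```

Sending those twelve bytes to TCP port 502 on the right controller is the entire “attack”; this is exactly the kind of request pymodbus automates.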
As far as I can tell, the only reason the hotel swapped out mechanical light switches for Android tablets was to look impressive. What it ended up with is a setup that may look impressive to the layman but is every troll’s dream come true.
I can’t wait to read a story about a 14 year-old turning off the lights to every room in a hotel.
With the recent kerfuffle between Apple and the Federal Bureau of Investigation (FBI), the debate between secure and insecure devices is in the spotlight. Apple has been marketing itself as a company that defends users’ privacy and this recent court battle gives merit to its claims. Other companies, including Google, have expressed support for Apple’s decision to fight the FBI’s demand. That makes this next twist in the story interesting.
Yesterday Christopher Soghoian posted the following Tweet:
Google announces new Android messaging app designed to "allow compliance with legal interception procedures". https://t.co/YiGz9NtENV
— Christopher Soghoian (@csoghoian) February 22, 2016
His Tweet linked to a comment on a Hacker News thread discussing Google’s new Rich Communication Services (RCS) client, Jibe. What’s especially interesting about RCS is that it appears to include a backdoor as noted in the Hacker News thread:
When using MSRPoTLS, and with the following two objectives allow compliance with legal interception procedures, the TLS authentication shall be based on self-signed certificates and the MSRP encrypted connection shall be terminated in an element of the Service Provider network providing service to that UE. Mutual authentication shall be applied as defined in [RFC4572].
It’s important to note that this doesn’t really change anything compared to the current Short Message Service (SMS) and cellular voice protocols, which offer no real security. By using this standard Google isn’t introducing a new security hole. However, Google also isn’t fixing a known one.
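Why do self-signed certificates amount to an interception hook? A client that cannot anchor the server’s certificate to anything it trusts has to disable verification, at which point any box in the provider’s network can terminate the connection. In Python’s ssl module the equivalent client configuration looks like this (an illustration of the failure mode, not Jibe’s actual code):

```python
import ssl

# A client forced to accept self-signed server certificates has to turn
# off both hostname checking and chain verification. It will then
# complete a handshake with ANY certificate, including one presented by
# an interception box inside the service provider's network.
ctx = ssl.create_default_context()
ctx.check_hostname = False          # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE     # accept any certificate
```

Contrast this with certificate pinning or a trusted CA chain, where a middlebox presenting its own certificate would fail the handshake. The spec language about terminating the encrypted connection “in an element of the Service Provider network” makes that middlebox an explicit design goal.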
When Apple created iMessage and FaceTime it made use of strong end-to-end encryption (although that doesn’t protect your messages if you back them up to iCloud). Apple’s replacement for SMS and standard cellular calls addressed a known security hole.
Were I Google, especially with the security debate going on, I would have avoided embracing RCS since it’s insecure by default. RCS may be an industry standard, since it’s managed by the same association that manages Global System for Mobile Communications (GSM), but it’s a bad standard that shouldn’t see widespread adoption.
I straddle that fine line between an obsessive love of everything technologically advanced and a curmudgeonly attitude that results in me asking why new products ever see the light of day. The Internet of Things (IoT) trend has really put me in a bad place. There are a lot of new “smart” devices that I want to like but they’re so poorly executed that I end up hating their existence. Then there are the products I can’t fathom on any level. This is one of those:
Fisher-Price’s “Smart Toys” are a line of digital stuffed animals, like teddy bears, that are connected to the Internet in order to offer personalized learning activities. Aimed at kids aged 3 to 8, the toys actually adapt to children to figure out their favorite activities. They also use a combination of image and voice recognition to identify the child’s voice and to read “smart cards,” which kick off the various games and adventures.
According to a report released today by security researchers at Rapid7, these Smart Toys could have been compromised by hackers who wanted to take advantage of weaknesses in the underlying software. Specifically, the problem was that the platform’s web service (API) calls were not appropriately verifying the sender of messages, meaning an attacker could have sent requests that should not otherwise have been authorized.
I’m sure somebody can enlighten me on the appeal of Internet connected stuffed animals but I can only imagine these products being the outcome of some high level manager telling a poor underling to “Cloud enable our toys!” In all likelihood no specialists were brought in to properly implement the Internet connectivity features so Fisher-Price ended up releasing a prepackaged network vulnerability. Herein lies the problem with the IoT. Seemingly every company has become entirely obsessed with Internet enabled products but few of them know enough to know that they don’t know what they’re doing. This is creating an Internet of Bad Ideas.
There’s no reason the IoT has to be this way. Companies can bring in people with the knowledge to implement Internet connectivity correctly. But they’re not. Some will inevitably blame each company’s desire to keep overhead as low as possible but I think the biggest part of the problem may be rooted in ignorance. Most of these companies know they want to “cloud enable” their products to capitalize on the new hotness but are so ignorant about network connectivity that they don’t even know they’re ignorant.
Security is a fascinating field that is in a constant state of evolution. When new defenses are created, new attacks follow, and vice versa. One security measure some people take is to create and store their cryptographic keys on a computer that isn’t attached to any network. This is known as an air gap and is a pretty solid security measure if implemented correctly (which is harder than most people realize). But even air gapped machines can be exploited remotely under the right circumstances:
In recent years, air-gapped computers, which are disconnected from the internet so hackers can not remotely access their contents, have become a regular target for security researchers. Now, researchers from Tel Aviv University and Technion have gone a step further than past efforts, and found a way to steal data from air-gapped machines while their equipment is in another room.
“By measuring the target’s electromagnetic emanations, the attack extracts the secret decryption key within seconds, from a target located in an adjacent room across a wall,” Daniel Genkin, Lev Pachmanov, Itamar Pipman, and Eran Tromer write in a recently published paper. The research will be presented at the upcoming RSA Conference on March 3.
It needs to be stated up front that this attack requires a tightly controlled environment, so it isn’t yet practical for common real-world exploitation. But attacks only improve over time, so it’s possible this one will become more practical with further research. Some may decry this as the end of computer security, because that’s what people commonly do when new exploits appear, but it will simply cause countermeasures to be implemented. Air gapped machines may be operated in Faraday cages, or computer manufacturers may improve casings to better control electromagnetic emissions.
This is just another chapter in the never ending saga of security. And it’s a damn impressive chapter no matter how you look at it.
Proving once again that technology overcomes legal restrictions, a new stage in 3D printed firearms has been reached. Instead of a single shot pistol that’s difficult to reload we now have a 3D printed semiautomatic 9mm handgun:
Last weekend a 47-year-old West Virginia carpenter who goes by the pseudonym Derwood released the first video of what he calls the Shuty-MP1, a “mostly” 3-D printed semi-automatic firearm. Like any semi-automatic weapon, Derwood’s creation can fire an actual magazine of ammunition—in this case 9mm rounds—ejecting spent casings one by one and loading a new round into its chamber with every trigger pull. But unlike the typical steel semi-automatic rifle, Derwood says close to “95 percent” of his creation is 3-D printed in cheap PLA plastic, from its bolt to the magazine to the upper and lower receivers that make up the gun’s body.
Here’s a video of it firing:
As the article notes, the gun isn’t perfect. The plastic around the barrel apparently starts to melt after firing 18 rounds if sufficient cooling time isn’t given. But the pace at which 3D printed firearms are evolving is staggering. In a few short years we’ve gone from the single shot Liberator pistol to a fully functional semiautomatic pistol. It won’t be long until practical 3D printed firearms are designed.
What does this mean? It means prohibitions against firearms are less relevant. Prohibiting something that any schmuck can make in their home isn’t possible. Alcohol prohibition and the current war on drugs have proven that.
One of the biggest weaknesses of today’s Internet is its reliance on centralized providers. Getting Internet access at home usually requires signing up with one of the few Internet service providers (ISPs), if you’re lucky enough to have more than one. In my area, for example, the only real options are Comcast or CenturyLink. CenturyLink only offers digital subscriber line (DSL) service, so the only actual option for me, assuming I want access speeds above 1Mbps, is Comcast. My situation isn’t unique. In fact it’s the norm.
The problems with highly centralized systems such as this are numerous, especially when you consider how cozy most ISPs are with the State. Censorship and surveillance are made much easier when a system is centralized. Instead of having to deal with a bunch of individuals in order to censor or surveil Internet users, the State only has to make a few sweetheart deals with a handful of ISPs. Another issue with heavily centralized systems is that users are at a severe disadvantage. The entire debate surrounding net neutrality is really only an issue because so little competition exists in the Internet provision market. If Comcast wants to block access to Netflix unless I pay an additional fee, there really isn’t much I can do about it.
Many consider this nightmare proof that the market has failed. But such accusations are nonsense because the market isn’t at work here. The reason so little competition exists in the Internet provision market is that the State protects current ISPs from competition. It’s too easy for a massive regulatory entity such as the State to put its boot down on the face of a handful of centralized service providers.
Does all this mean an uncensored, secure Internet is impossible to achieve? Not at all. The trick is to move away from easily identified centralized providers. If, for example, every Internet user were also a provider, it would be practically impossible for the State to control the network effectively. That’s what mesh networks can offer, and the idea is becoming more popular every day. Denizens of New York City have jumped on board the mesh network bandwagon and are trying to make local ISPs irrelevant:
The internet may feel free, but it certainly isn’t. The only way for most people to get it is through a giant corporation like Comcast or Time Warner Cable, companies that choke your access and charge exorbitant prices.
In New York City, a group of activists and volunteers called NYC Mesh are trying to take back the internet. They’re building something called a mesh network — a makeshift system that provides internet access. Their goal is to make TWC totally irrelevant.
The hardest part about establishing a mesh network is achieving critical mass. A mesh network needs a decent number of nodes to begin being truly useful. That’s why it makes sense to start building mesh networks in very densely populated areas such as New York City. If the necessary critical mass is achieved in a few major metropolitan areas it will become feasible to bypass centralized ISPs by connecting various regional mesh networks together.
Looking at NYC Mesh’s map of active nodes it seems like they’ve already established pretty decent coverage considering the organization has only been around since January of 2014. If they can keep up this pace they could soon become a viable alternative to local centralized ISPs.