Archive for the ‘Technology’ Category
When it comes to effective technology the federal government has a dismal record. Recently news organizations have been flipping out over a report that noted that the federal government is still utilizing 8″ floppy disks for its nuclear weapons program:
The U.S. Defense Department is still using — after several decades — 8-inch floppy disks in a computer system that coordinates the operational functions of the nation’s nuclear forces, a jaw-dropping new report reveals.
The Defense Department’s 1970s-era IBM Series/1 Computer and long-outdated floppy disks handle functions related to intercontinental ballistic missiles, nuclear bombers and tanker support aircraft, according to the new Government Accountability Office report.
The department’s outdated “Strategic Automated Command and Control System” is one of the 10 oldest information technology investments or systems detailed in the sobering GAO report, which calls for a number of federal agencies “to address aging legacy systems.”
I’m not sure why that report is “jaw-dropping.” There is wisdom in updating systems incrementally as key components become obsolete. There is also wisdom in not fixing something that isn’t broken.
This reminds me of the number of businesses and banks that still rely on software written in COBOL. A lot of people find it odd that these organizations haven’t upgraded their systems to the latest and greatest. But replacing a working system that has been debugged and fine-tuned for decades is an expensive prospect. All of the work that was done over those decades is effectively thrown out. Whatever new system is developed to replace the old system will have to go through a painful period of fine-tuning and debugging. Considering that, and considering the current systems still fulfill their purposes, why would an organization sink a ton of money into replacing them?
The nuclear program strikes me as the same thing. While 8″ floppy disks and IBM Series/1 computers are ancient, they seem to be fulfilling their purpose. More importantly, those systems have gone through decades of fine tuning and debugging, which means they’re probably more reliable than any replacement system would be (and reliability is pretty important when you’re talking about weapons that can wipe out entire cities).
Sometimes old isn’t automatically bad, even when you’re talking about technology.
In addition to creating fake terrorist attacks so it can claim glory by thwarting them, the Federal Bureau of Investigation (FBI) also spends its time chasing brilliant minds out of the country:
FBI agents are currently trying to subpoena one of Tor’s core software developers to testify in a criminal hacking investigation, CNNMoney has learned.
But the developer, who goes by the name Isis Agora Lovecruft, fears that federal agents will coerce her to undermine the Tor system — and expose Tor users around the world to potential spying.
That’s why, when FBI agents approached her and her family over Thanksgiving break last year, she immediately packed her suitcase and left the United States for Germany.
Because of the State’s lust for power, the United Police States of America are becoming more hostile towards individuals knowledgeable in cryptography. The FBI went after Apple earlier this year because the company implemented strong cryptography, so it’s not too surprising to see that the agency has been harassing a developer who works on an application that utilizes strong cryptography. Fortunately, she was smart enough to flee before the FBI got a hold of her so none of its goons were able to slap her with a secret order or any such nonsense.
What’s especially interesting about Isis’ case is that the FBI wouldn’t tell her or her lawyer the reason it wanted to talk to her. It even went so far as to tell her lawyer that if agents found her on the street they would interrogate her without his presence. That’s some shady shit. Isis apparently wasn’t entirely dense though and decided it was time to go while the going was good. As this country continues to expand its police state don’t be afraid to follow her example.
The Federal Communications Commission (FCC), an agency that believes it has a monopoly on the naturally occurring electromagnetic spectrum, decreed that all Wi-Fi router manufacturers are now responsible for enforcing the agency’s restrictions on spectrum use. Any manufacturer that fails to be the enforcement arm of the FCC will face consequences (being a government agency must be nice, you can just force other people to do your work for you).
Most manufacturers have responded to this decree by taking measures that prevent users from loading third-party firmware of any sort. Such a response is unnecessary and goes beyond the demands of the FCC. Linksys, fortunately, is setting the bar higher and will not lock out third-party firmware entirely:
Next month, the FCC will start requiring manufacturers to prevent users from modifying the RF (radio frequency) parameters on Wi-Fi routers. Those rules were written to stop RF-modded devices from interfering with FAA Doppler weather radar systems. Despite the restrictions, the FCC stressed it was not advocating for device-makers to prevent all modifications or block the installation of third-party firmware.
Still, it’s a lot easier to lock down a device’s firmware than it is to prevent modifications to the radio module alone. Open source tech experts predicted that router manufacturers would take the easy way out by slamming the door shut on third-party firmware. And that’s exactly what happened. In March, TP-Link confirmed they were locking down the firmware in all Wi-Fi routers.
Instead of locking down everything, Linksys went the extra mile to ensure owners still had the option to install the firmware of their choice: “Newly sold Linksys WRT routers will store RF parameter data in a separate memory location in order to secure it from the firmware, the company says. That will allow users to keep loading open source firmware the same way they do now,” reports Ars Technica’s Josh Brodkin.
This is excellent news. Not only will it allow users to continue using their preferred firmware, it also sets a precedent for the industry. TP-Link, like many manufacturers, took the easy road. If every other manufacturer followed suit we’d be awash in shitty firmware (at least until bypasses for the firmware blocks were discovered). By saying it would still allow third-party firmware to be loaded on its devices, Linksys has maintained its value for many customers and may have convinced former users of other devices to buy its devices instead. Other manufacturers may find themselves having to follow Linksys’s path to prevent paying customers from going over to Linksys. By being a voice of reason, Linksys may end up saving Wi-Fi consumers from only having terrible firmware options.
The new Doom finally convinced me to buy a new console. I debated between a PlayStation 4 and an Xbox One. In the end I settled on the Xbox One because I still don’t fully trust Sony (I may never get over the fact that they included malicious rootkits on music CDs to enforce their idiotic copy protection and I’m still unhappy about them removing the Linux capabilities from the PlayStation 3) and I was able to buy a refurbished unit for $100.00 off (I’m cheap).
When I hooked up the Xbox One and powered it up for the first time it said it needed to download and apply an update before doing anything else. I let it download the update, since I couldn’t do anything with it until it finished updating, only for it to report that “There was a problem with the update.” That was the entirety of the error message and the only diagnostic option available was to test the network connection, which reported that everything was fine and I was connected to the Internet. I tried power cycling the device, disconnecting it from power for 30 seconds, and every other magical dance that Microsoft recommended on its useless troubleshooting site. Nothing would convince the Xbox to download and install the update it said it absolutely needed.
After a lot of fucking around I finally managed to update it. If you’re running into this problem you can give this strategy a try. Hopefully it saves you the hour and a half of fucking around I went through. What you will need is a USB flash drive formatted in NTFS (the Xbox One will not read the drive if it’s formatted in a variation of FAT because reasons) and some time to wait for the multi-gigabyte files to download.
Go to Microsoft’s site for downloading the Offline System Update Diagnostic Tool. Scroll down to the downloads. You’ll notice that they’re separated by OS versions. Since you cannot do anything on the Xbox One until the update is applied you can’t look up your OS version (nice catch-22). What you will want to do is download both OSUDT3 and OSUDT2.
When you have the files unzip them. Copy the contents of OSUDT3 to the root directory of the flash drive and connect the flash drive to the side USB port on the Xbox One. Hold down the controller sync button on the side and press the power button on the Xbox One (do not turn the Xbox One on with the controller otherwise this won’t work). Still holding down the sync button now press and hold the DVD eject button as well. You should hear the startup sound play twice. After that you can release the two buttons and the Xbox One should start applying the OSUDT3 update. Once that is finished the system will boot normally and you will return to the initial update screen that refuses to apply any updates.
Remove the flash drive, erase the OSUDT3 files from it, and copy the contents of the OSUDT2 zip file to the root directory of the flash drive. Insert the flash drive into the side USB port on the Xbox One and perform the above dance all over again. Once the update has applied your Xbox One should boot up and actually be something other than a useless brick.
As an aside, my initial impression of the Xbox One is less than stellar.
The price of Bitcoin was getting a little wonky again, which meant that the media must be covering some story about it. This time around the media has learned the real identity of Satoshi Nakamoto!
Australian entrepreneur Craig Wright has publicly identified himself as Bitcoin creator Satoshi Nakamoto.
His admission follows years of speculation about who came up with the original ideas underlying the digital cash system.
Mr Wright has provided technical proof to back up his claim using coins known to be owned by Bitcoin’s creator.
Prominent members of the Bitcoin community and its core development team say they have confirmed his claims.
Mystery solved, everybody go home! What’s that? Wright provided a technical proof? It’s based on a cryptographic signature? In that case I’m sure the experts are looking into his claim:
- Yes, this is a scam. Not maybe. Not possibly.
- Wright is pretending he has Satoshi’s signature on Sartre’s writing. That would mean he has the private key, and is likely to be Satoshi. What he actually has is Satoshi’s signature on parts of the public Blockchain, which of course means he doesn’t need the private key and he doesn’t need to be Satoshi. He just needs to make you think Satoshi signed something else besides the Blockchain — like Sartre. He doesn’t publish Sartre. He publishes 14% of one document. He then shows you a hash that’s supposed to summarize the entire document. This is a lie. It’s a hash extracted from the Blockchain itself. Ryan Castellucci (my engineer at White Ops and master of Bitcoin Fu) put an extractor here. Of course the Blockchain is totally public and of course has signatures from Satoshi, so Wright being able to lift a signature from here isn’t surprising at all.
- He probably would have gotten away with it if the signature itself wasn’t googlable by Redditors.
- I think Gavin et al are victims of another scam, and Wright’s done classic misdirection by generating different scams for different audiences.
Some congratulations should go to Wright — who will almost certainly claim this was a clever attempt to troll people so he doesn’t feel like a schmuck for being too stupid to properly pull off a scam — for trolling so many people. Not only did the media get suckered but even members of the Bitcoin community fell for his scam hook, line, and sinker.
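For anybody wondering what “lifting a signature from the blockchain” actually buys a scammer, here’s a minimal Python sketch. The byte strings are placeholders, not Wright’s actual material; the point is the check a careful verifier has to perform, because a signature only ever vouches for the exact bytes that were hashed and signed:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The hash Wright published was claimed to summarize a Sartre text
# (which he never published in full)...
claimed_document = b"(placeholder for the full Sartre text, never published)"

# ...but the signed hash actually came from a transaction that is already
# public in the blockchain, so anybody can copy the signature over it.
blockchain_message = b"(placeholder for a public transaction signed by Satoshi)"
published_hash = sha256(blockchain_message)

# The verifier's job: hash the claimed document yourself and compare.
# If the hashes differ, the signature proves nothing about that document.
assert sha256(claimed_document) != published_hash
print("signature does not cover the claimed document")
```

That single comparison is what the Redditors effectively did when they googled the signature and found it already sitting in the public blockchain.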
The difficult part about being a technophile and an anarchist is that the State often hijacks new technologies to further its own power. These hijackings are always done under the auspices of safety and the groundwork is already being laid for the State to get its fingers into self-driving vehicles:
It is time to start thinking about the rules of the new road. Otherwise, we may end up with some analog to today’s chaos in cyberspace, which arose from decisions in the 1980s about how personal computers and the Internet would work.
One of the biggest issues will be the rules under which public infrastructures and public safety officers may be empowered to override how autonomous vehicles are controlled.
When should law enforcers and safety officers be empowered to override another person’s self-driving vehicle? Never. Why? Setting aside the obvious abuses such empowerment would lead to, we have the issue of security, which the article alludes to towards the end:
Last, but by no means least, is whether such override systems could possibly be made hack-proof. A system to allow authorized people to control someone else’s car is also a system with a built-in mechanism by which unauthorized people — aka hackers — can do the same.
Even if hackers are kept out, if every police officer is equipped to override AV systems, the number of authorized users is already in the hundreds of thousands — or more if override authority is extended to members of the National Guard, military police, fire/EMS units, and bus drivers.
No system can be “hack-proof,” especially when that system has hundreds of thousands of authorized users. Each system is only as strong as its weakest user. It only takes one careless authorized user to leak their key for the entire world to have a means of gaining access to everything locked by that key.
In order to implement a system in self-driving cars that would allow law enforcers and safety officers to override them there would need to be a remote access option that allowed anybody employed by a police department, fire department, or hospital to log into the vehicle. Every vehicle would either have to be loaded with every law enforcer’s and safety officer’s credentials or, more likely, rely on a single master key. In the case of the former it would only take one careless law enforcer or safety officer posting their credentials somewhere an unauthorized party could access them, including the compromised network of a hospital, for every self-driving car to be compromised. In the case of the latter the only thing that would be required to compromise every self-driving car is the master key being leaked. Either way, the integrity of the system would be dependent on hundreds of thousands of people maintaining perfect security, which is an impossible goal.
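The master-key case is worth making concrete. Here’s a hedged Python sketch (the key handling and command names are hypothetical, purely for illustration) of what such a design boils down to: every vehicle in the fleet verifies override commands against the same shared secret, so a single leak defeats all of them at once:

```python
import hashlib
import hmac
import secrets

# Hypothetical fleet-wide master key, baked into every vehicle.
MASTER_KEY = secrets.token_bytes(32)

def sign_command(key: bytes, command: bytes) -> bytes:
    # Authenticate an override command with an HMAC tag.
    return hmac.new(key, command, hashlib.sha256).digest()

def vehicle_accepts(command: bytes, tag: bytes) -> bool:
    # Every vehicle verifies against the same key...
    expected = sign_command(MASTER_KEY, command)
    return hmac.compare_digest(expected, tag)

# ...so once the key leaks from any one of the hundreds of thousands of
# authorized users, an attacker can command the entire fleet.
leaked_key = MASTER_KEY
forged_tag = sign_command(leaked_key, b"pull over")
assert vehicle_accepts(b"pull over", forged_tag)
print("one leaked key overrides every vehicle")
```

Per-officer credentials don’t change the math much: the attacker only needs the single weakest credential out of the whole set.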
If self-driving cars are set up to allow law enforcers and safety officers to override them then they will become useless due to being constantly compromised by malicious actors.
My first Apple product was a PowerBook G4 that I purchased back in college. At the time I was looking for a laptop that could run a Unix operating system. Back then (as is still the case today, albeit to a lesser extent) running Linux on a laptop meant you usually had to give up sleep mode, Wi-Fi, the additional function buttons most manufacturers added on their keyboards, and a slew of power management features that made the already pathetic battery life even worse. Since OS X was (and still is) Unix-based and didn’t involve the headaches of trying to get Linux to run on a laptop, the PowerBook fit my needs perfectly.
Fast forward to today. Between then and now I’ve lost confidence in a lot of companies whose products I used to love. Apple on the other hand has continued to impress me. In recent times my preference for Apple products has been influenced in part by the fact that it doesn’t rely on selling my personal information to make money and displays a healthy level of paranoia:
Apple has begun designing its own servers partly because of suspicions that hardware is being intercepted before it gets delivered to Apple, according to a report yesterday from The Information.
“Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter,” the report said. “At one point, Apple even assigned people to take photographs of motherboards and annotate the function of each chip, explaining why it was supposed to be there. Building its own servers with motherboards it designed would be the most surefire way for Apple to prevent unauthorized snooping via extra chips.”
Anybody who has been paying attention to the leaks released by Edward Snowden knows that concerns about surveillance hardware being added to off-the-shelf products aren’t unfounded. In fact some companies such as Cisco have taken measures to mitigate such threats.
Apple has a lot of hardware manufacturing capacity and it appears that the company will be using it to further protect itself against surveillance by manufacturing its own servers.
This is a level of paranoia I can appreciate. Years ago I brought a lot of my infrastructure in house. My e-mail, calendar and contact syncing, and even this website are all being hosted on servers running in my dwelling. Although part of the reason I did this was for the experience, another reason was to guard against certain forms of surveillance. National Security Letters (NSL), for example, require service providers to surrender customer information to the State and legally prohibit them from informing the targeted customer. Since my servers are sitting in my dwelling, any NSL would necessarily require me to inform myself of receiving it.
In the ongoing security arms race researchers from Johns Hopkins discovered a vulnerability in Apple’s iMessage:
Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.
It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.
To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.
Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.
“And we kept doing that,” Green said, “until we had the key.”
A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.
With the key, the team was able to retrieve the photo from Apple’s server. If it had been a true attack, the user would not have known.
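The guessing procedure described above is a classic oracle attack. Here’s a toy Python model of it (the “phone” is simulated, and the real attack was far slower and messier than this) showing why confirming one character at a time collapses an impossible brute-force search into roughly a thousand guesses:

```python
import secrets

# Toy stand-in for the 64-digit decryption key on the target phone.
ALPHABET = "0123456789abcdef"
SECRET_KEY = "".join(secrets.choice(ALPHABET) for _ in range(64))

queries = 0

def oracle(guess_prefix: str) -> bool:
    # Stand-in for the target phone: it "accepts" a guess exactly when the
    # guessed prefix is correct so far, leaking one character per success.
    global queries
    queries += 1
    return SECRET_KEY.startswith(guess_prefix)

# Recover the key one character at a time, just like the researchers'
# repeated probing of the phone.
recovered = ""
while len(recovered) < len(SECRET_KEY):
    for candidate in ALPHABET:
        if oracle(recovered + candidate):
            recovered += candidate
            break

assert recovered == SECRET_KEY
# At most 16 guesses per position: about a thousand queries total,
# versus the 16**64 attempts a blind brute force would need.
print(f"key recovered in {queries} guesses")
```

The lesson is that an attacker never needs to guess the whole key at once if the system confirms partial progress; thousands of probes are cheap.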
There are several things to note about this vulnerability. First, Apple did respond quickly by including a fix for it in iOS 9.3. Second, security is very difficult to get right so it often turns into an arms race. Third, designing secure software, even if you’re a large company with a lot of talented employees, is hard.
Christopher Soghoian also made a good point in the article:
Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green’s attack highlights the danger of companies building their own encryption without independent review. “The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly,” he said.
The better approach, he said, is open design. He pointed to encryption protocols created by researchers at Open Whisper Systems, who developed Signal, an instant message platform. They publish their code and their designs, but the keys, which are generated by the sender and user, remain secret.
Open source isn’t a magic bullet but it does allow independent third-party verification of your code. This advantage often goes unrealized, as even very popular open source projects like OpenSSL have contained numerous notable security vulnerabilities for years without anybody noticing. But it’s unlikely something like iMessage would have been ignored so thoroughly.
The project would likely have attracted a lot of developers interested in writing iMessage clients for Android, Windows, and Linux. Since iOS, and therefore by extension iMessage, is so popular in the public eye, it’s likely a lot of security researchers would have looked through the iMessage code hoping to be the first to find a vulnerability and enjoy the publicity that would almost certainly entail. So open sourcing iMessage would likely have gained Apple a lot of third-party verification.
In fact this is why I recommend applications like Signal over iMessage. Not only is Signal compatible with Android and iOS but it’s also open source so it’s available for third party verification.
Were I asked, I would summarize the Internet of Things as taking one step forward and two steps back. While integrating computers into everyday objects offers some potential, the way manufacturers are going about it is all wrong.
Consider the standard light switch. A light switch usually has two states. One state, which closes the circuit, turns the lights on while the other state, which opens the circuit, turns the lights off. It’s simple enough but has some notable limitations. First, it cannot be controlled remotely. Having a remotely controlled light switch would be useful, especially if you’re away from home and want to make it appear as though somebody is there to discourage burglars. It would also be nice to verify if you turned all your lights off when you left to reduce the electric bill. Of course remotely operated switches also introduce the potential for remotely accessible vulnerabilities.
What happens when you take the worst aspects of connected light switches, namely vulnerabilities, and don’t even offer the positives? This:
Garrett, who’s also a member of the Free Software Foundation board of directors, was in London last week attending a conference, and found that his hotel room has Android tablets instead of light switches.
“One was embedded in the wall, but the two next to the bed had convenient looking ethernet cables plugged into the wall,” he noted. So, he got ahold of a couple of ethernet adapters, set up a transparent bridge, and put his laptop between the tablet and the wall.
He discovered that the traffic to and from the tablet is going through the Modbus serial communications protocol over TCP.
“Modbus is a pretty trivial protocol, and notably has no authentication whatsoever,” he noted. “Tcpdump showed that traffic was being sent to 172.16.207.14, and pymodbus let me start controlling my lights, turning the TV on and off and even making my curtains open and close.”
He then noticed that the last three digits of the IP address he was communicating with were those of his room, and successfully tested his theory:
“It’s basically as bad as it could be – once I’d figured out the gateway, I could access the control systems on every floor and query other rooms to figure out whether the lights were on or not, which strongly implies that I could control them as well.”
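To see why Garrett could “start controlling my lights” with nothing more than pymodbus, it helps to look at what a Modbus/TCP request actually is. This sketch builds a “write single coil” frame by hand (the transaction ID and coil number are arbitrary examples); note that no field anywhere in the frame identifies or authenticates the sender:

```python
import struct

def write_single_coil(transaction_id: int, coil: int, on: bool) -> bytes:
    """Build a raw Modbus/TCP 'write single coil' request frame."""
    function_code = 0x05                       # Modbus "write single coil"
    value = 0xFF00 if on else 0x0000           # protocol's ON/OFF encoding
    pdu = struct.pack(">BHH", function_code, coil, value)
    # MBAP header: transaction id, protocol id (always 0), remaining
    # length (PDU plus the unit id byte), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, 1)
    return mbap + pdu

# Turning on coil 4 (say, a room light) is twelve bytes on the wire,
# and not one of them is a credential.
frame = write_single_coil(1, 4, True)
assert len(frame) == 12
print(frame.hex())
```

Anyone who can reach the controller’s TCP port can send that frame, which is exactly the situation the hotel created by putting the tablets and the control gateway on a guest-reachable network.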
As far as I can tell the only reason the hotel replaced mechanical light switches with Android tablets was to attempt to look impressive. What they ended up with was a setup that may look impressive to the layman but is every troll’s dream come true.
I can’t wait to read a story about a 14 year-old turning off the lights to every room in a hotel.
With the recent kerfuffle between Apple and the Federal Bureau of Investigation (FBI), the debate between secure and insecure devices is in the spotlight. Apple has been marketing itself as a company that defends users’ privacy and this recent court battle gives merit to its claims. Other companies have expressed support for Apple’s decision to fight the FBI’s demand, including Google. That makes this next twist in the story interesting.
Yesterday Christopher Soghoian posted the following Tweet:
Google announces new Android messaging app designed to "allow compliance with legal interception procedures". https://t.co/YiGz9NtENV
— Christopher Soghoian (@csoghoian) February 22, 2016
His Tweet linked to a comment on a Hacker News thread discussing Google’s new Rich Communication Services (RCS) client, Jibe. What’s especially interesting about RCS is that it appears to include a backdoor as noted in the Hacker News thread:
When using MSRPoTLS, and with the following two objectives allow compliance with legal interception procedures, the TLS authentication shall be based on self-signed certificates and the MSRP encrypted connection shall be terminated in an element of the Service Provider network providing service to that UE. Mutual authentication shall be applied as defined in [RFC4572].
It’s important to note that this doesn’t really change anything from the current Short Message Service (SMS) service and cellular voice protocols, which offers no real security. By using this standard Google isn’t introducing a new security hole. However, Google also isn’t fixing a known security hole.
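The difference between this design and end-to-end encryption can be sketched with a toy model. The XOR “cipher” below is a trivial stand-in for TLS, purely for illustration: the point is that when the encrypted connection terminates at the service provider, the provider holds the session key and can read everything it relays, which is the entire “legal interception” hook in the spec quoted above:

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Trivial stand-in for a stream cipher, just to make the data flow visible.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The session key is negotiated between the client and the *provider*,
# not between the two people communicating.
provider_session_key = secrets.token_bytes(16)

message = b"meet at noon"
on_the_wire = xor_cipher(provider_session_key, message)

# The provider terminates the encrypted connection, so it recovers the
# plaintext before relaying it onward. End-to-end designs like iMessage
# or Signal deny the relay this key entirely.
seen_by_provider = xor_cipher(provider_session_key, on_the_wire)
assert seen_by_provider == message
print("provider can read:", seen_by_provider.decode())
```

Encryption that terminates at the carrier protects you from eavesdroppers on the wire, but not from the carrier itself or anybody who can compel it.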
When Apple created iMessage and FaceTime it made use of strong end-to-end encryption (although that doesn’t protect your messages if you back them up to iCloud). Apple’s replacement for SMS and standard cellular calls addressed a known security hole.
Were I Google, especially with the security debate going on, I would have avoided embracing RCS since it’s insecure by default. RCS may be an industry standard, since it’s managed by the same association that manages Global System for Mobile Communications (GSM), but it’s a bad standard that shouldn’t see widespread adoption.