Archive for the ‘Computers Aren’t Magic’ tag
The Internet remains one of the few communication tools that has avoided falling entirely under the state’s control. This is likely due to its decentralized nature. Unlike communication systems of yore, which relied on centrally managed infrastructure, the Internet was designed to avoid centralization. Anybody can set up and run a web server, e-mail server, instant messenger server, etc. As it currently stands, one of the central points of failure that still remains is the Domain Name System (DNS). DNS is the system that translates human-readable domain names, such as christopherburg.com, into the numeric addresses understood by computers.
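To make that translation concrete, here is a minimal Python sketch (standard library only, function name is mine) that asks whichever resolver your system is configured with to perform the name-to-address lookup:

```python
import socket

def resolve(name):
    """Ask the system's configured DNS resolver to translate a
    domain name into an IPv4 address."""
    return socket.gethostbyname(name)

# Which answer you get depends entirely on which resolver your
# system is pointed at -- which is exactly why control of DNS matters.
# resolve("christopherburg.com")  # returns that host's IPv4 address
```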
Most people rely on the DNS servers provided by centrally managed authorities such as their Internet service provider (ISP) or companies like Google or OpenDNS. Unfortunately, these centralized services are choke points the state can use to censor or otherwise control the Internet. The United States government has already exploited this vulnerability to enforce copyright laws, and it is likely it will exploit it again to censor other content it deems undesirable. Thankfully there is no reason we have to rely on centralized DNS servers. DNS, like every other protocol that makes up the Internet as we know it, was designed in a way that doesn’t require central authorities. Enter OpenNIC, a decentralized DNS.
I haven’t had much time to experiment with OpenNIC, so it may not even be a viable solution to the centralized nature of DNS, but it looks promising. OpenNIC is a network of DNS servers that not only resolve well-known top-level domains (TLDs) but also resolve OpenNIC-specific TLDs such as .pirate. Since the system is decentralized there is no single point of failure that can be easily exploited by the state. I plan on experimenting with OpenNIC to see how well it works and, if it meets my needs, switching over to it entirely. I’ll also write a followup post covering my experience with the system and whether or not I can recommend it for general usage. It is my hope that OpenNIC will serve the purpose of avoiding the state’s influence over DNS and thus assist those of us who are actively fighting against the state.
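For anyone who wants to experiment along with me: switching to OpenNIC is essentially just pointing your system’s resolver at an OpenNIC server instead of your ISP’s. On a typical Linux box that means editing /etc/resolv.conf. The address below is a placeholder from the documentation range, not a real OpenNIC server; substitute one from their public server list:

```
# /etc/resolv.conf -- replace the placeholder address below with an
# actual OpenNIC server from their public server list
nameserver 203.0.113.53
```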
That should have been the headline of this story:
At around 3:45 a.m. on March 24, someone in Fort Lauderdale, Fla., used a mobile phone to Google “chemicals to passout a person.” Then the person searched Ask.com for “making people faint.” Then Google again, for “ways to kill people in their sleep,” “how to suffocate someone,” and “how to poison someone.”
The phone belonged to 23-year-old Nicole Okrzesik. Later that morning, police allege, she and her boyfriend strangled 19-year-old Juliana Mensch as she slept on the floor of their apartment. The Google searches, along with incriminating text messages between Okrzesik and her boyfriend, came to light as authorities investigated Mensch’s death. But what if they could have been alerted to the suspicious-sounding searches immediately? Could they have rushed to the apartment and saved the girl’s life?
Can you guess where this is going? Yup, Slate is hypothesizing the use of search data to effectively go pre-crime on people’s asses:
Web search data, by contrast, contains information about specific individuals’ thoughts and plans. In theory, Google or Ask.com could have flagged Okrzesik’s search queries as suspicious and sent the cops her device’s IP address. In the Hollywood script, a vigilant officer would notice the alert, rush to the scene, and knock on the door just as Mensch’s assailants were about to do her in.
In reality, there are a few obstacles to that scenario. For starters, police would need instant access to the search data and a way to connect it to a physical address. These days they usually get electronic records only after a crime has been committed and they’ve built up enough evidence to obtain a warrant. They use the data not to prevent crime but to build their case for arrest and conviction.
There is also another obstacle to Slate’s scenario: people search for shit they have no intention of doing all the time. I’ve never smoked marijuana, yet I often search for marijuana-related topics for blog posts, historical information, and genuine curiosity. I’ve searched for terms like “can X kill somebody,” where X is a random chemical, simply because I was curious. If somebody went through my search history they would probably think I’m quite the suspicious individual. I’m sure the search string “how long can a person survive without oxygen” would raise a few red flags in a law enforcement database (for those of you who are curious, the answer appears to be somewhere between three and five minutes).
Yet the next three Google searches on Okrzesik’s phone—“ways to kill people in their sleep,” “how to suffocate someone,” and “how to poison someone”—seem to clearly indicate that someone has a strong curiosity about how to kill someone. One can also imagine other searches—say, a series of queries about the ingredients used to make anthrax—that law enforcement agents might like to know about.
Yeah, because law enforcement’s time should be wasted looking into a kid who was just wondering what anthrax is and how it’s made.
Computers aren’t magic; they can’t predict crime. Using search terms to predict crime is absurd. People search for strings that appear criminal in nature all the time. Sometimes they’re looking for a recent story related to a crime, sometimes they’re interested in case histories surrounding such crimes, and sometimes they’re just curious whether such a crime could even be perpetrated.
It’s unfortunate that people who don’t understand computers have bestowed these wonderful electronic devices with mythical powers. Articles like this remind me of those yahoos who think the Venus Project is a good idea. If you’re not familiar with the Venus Project, count your blessings; it’s an idiotic idea to centrally control economics with a supercomputer in order to bring forth utopia. How we’re supposed to build a computer that can control an entire economy when we can’t even build one that can accurately predict the stock market remains unanswered.
Computers, like any other tool, are very good at performing the tasks they were built for. If you need to crunch numbers, a computer is the right tool for the job. If you need to predict human behavior, computers are all but entirely worthless. In order to build a computer that can do something, we must first know how to do it ourselves. Since we can’t predict human behavior, there is no way we can build a computer to do it. These people who talk seriously about using computers to solve crimes before they happen are living in a fantasy land made possible by a complete ignorance of computer technology.
Unfortunately other people who are ignorant of computer technology will latch onto this and think it’s a good idea, and thus this idiocy will continue to spread.
You know whose birthday it is? Obviously you do if you read the title. A big happy birthday goes out to the Father of Computer Science.
One rule I have here is that any comment containing a link that uses a URL shortening service gets removed, no questions asked. I do this because, as Bruce Schneier shows us, those shortened URLs are a huge security risk. Cory Doctorow recently got screwed by a phishing attack via a good old URL-shortened link:
I opened up my phone, fired up my freshly reinstalled Twitter client, and saw that I had a direct message from an old friend in Seattle, someone I know through fandom. The message read “Is this you????” and was followed by one of those ubiquitous shortened URLs that consist of a domain and a short code, like this: http://owl.ly/iuefuew.
Never click on a URL from a URL shortening service. You have no idea where it will lead you or what the page it links to will contain.
I mentioned some time ago a situation where a school was caught spying on students via webcams built into laptops that were issued to the students. Well, apparently there is nothing to see here:
An “independent” investigation into the Lower Merion School District laptop scandal has concluded that there’s no evidence that students were being spied on. This is despite the existence of 58,000 photos surreptitiously taken of students on or around their computers and e-mails between district IT people commenting on the entertainment value of the photos. The 72-page report (PDF) from law firm Ballard Spahr claims, however, that most of the photos were not seen by anyone and that the district merely failed to implement proper record-keeping procedures.
Yeah, obviously there was no spying. Sure, they had 58,000 pictures of kids doing who-knows-what, but most of the pictures weren’t actually seen by anybody, Scout’s honor. After all, I’m sure the Ballard Spahr law firm has evidence proving none of the pictures were viewed:
Ballard Spahr admits that there is no way to determine how often the images were viewed, but says it found no evidence that the IT staff had viewed any of the images. Additionally, it says there was no evidence that district administrators knew how TheftTrack worked or even understood that large numbers of images were being collected in the first place.
Oops, I guess not. Oh wait, there was evidence… of the pictures being viewed:
This, of course, is the problem: because there was very little record-keeping going on and no official policies, there are few ways to know who knew what and when. However, claiming that there’s no evidence whatsoever that IT staff saw the images seems disingenuous, considering the fact that e-mail records were dug up last month that showed at least two IT administrators chatting about the photos. One staffer that has since been put on leave, Carol Cafiero, described the pictures as “a little [Lower Merion School District] soap opera,” while another staffer responded, “I know. I love it!”
Yes, the school gave all the students laptops and installed spyware (in the most literal sense) on the machines, but didn’t really document it or put any policies in place for when the cameras were to be used. That doesn’t at all scream trying to cover your tracks because you knew what you were doing was going to land you in very hot water.
But the fact of the matter is the school went to great lengths to ensure an outside party chose an independent entity to carry out the investigation, so no possibility of bias could have entered into the equation:
One detail of note is that Ballard Spahr was hired by the Lower Merion School District itself to carry out the investigation, casting doubts on the true “independent” nature of the report.
Fuck me. I’m still hoping that school district gets sued right into oblivion.
I know quite a few parents who don’t want their kids having a laptop with a webcam because pedophiles may be able to access the camera without anybody’s knowledge and watch everything going on. I usually write such concerns off as over-the-top paranoia, but I guess when the school is providing the laptops you should be worried about such things (the cameras being activated remotely without anybody’s knowledge, and possibly the pedophiles doing it, depending on the true reason for installing that spyware).
My ultimate question here is who requested the installation of the spyware? Did the IT people do it without asking the school administrators or did the school administrators ask the IT people to do it? This will ultimately show the guilty party.
We’ve all seen movies where the main star creates a super virus by aligning three-dimensional cubes on a 10-monitor display in order to destroy the bad guys’ computers. Hollywood believes computers are magic, and I found a good list of Hollywood’s favorite computer sorcery. My pet peeve is on there:
3. You can zoom and enhance any footage
This has long been the staple of the lazy writer (particularly those working for CSI): a security camera or photo is put on a screen, someone asks for zone G4 to be zoomed and enhanced, then as if by magic stunning detail appears from nowhere and the criminal is identified.
For this system to work it either requires every camera and CCTV system to use Gigapixel resolutions, or such incredible computing technology that Hollywood could throw away all of its expensive HD cameras and shoot everything using £50 camcorders.
As we all know, all zooming into a poor-quality image would do is give a muddled blurry mess on the screen. This technique was recently brilliantly parodied in Red Dwarf.
A while back I mentioned that I dropped Google Chrome and returned to Firefox. My reasoning revolved around features unavailable in Chrome which were available in Firefox through extensions. Well, the two features I wanted most have since been added to Chrome: the ability to block all scripting except on pages I whitelist, and better cookie management. Yet I’m still on Firefox. Why? Because Chrome’s script blocking and cookie management features are severely lacking in my opinion.
In Chrome’s advanced settings you can choose to block all scripting and cookies from sites not on your whitelist. This is exactly what I want, as scripting is the de facto method of exploiting a computer these days and cookies are tools for tracking you across the sites you visit. The problem is that Chrome’s interface for its script blocking sucks. If a site has scripts that are being blocked, an icon appears in the address bar. If you click on this icon you have two options: keep blocking scripts or whitelist the site. NoScript on Firefox gives a third option I’m very fond of: temporarily allow scripting. I only whitelist sites I trust and visit frequently, but oftentimes I find myself visiting websites that require scripting to be enabled in order to glean information from them. In this case I temporarily allow scripting, get the information I need, and know that scripting will be disabled automatically for that site when I close my browser. It’s a great feature.
So yes, Firefox is a much slower browser and a hog on resources. But the power extension developers have in Firefox means you can make the browser extremely secure, whereas in Chrome you can’t enhance its security outside of the methods Google allows. Due to this I’m still on Firefox and will be for the foreseeable future. Since I’m here, I thought I’d let everybody know what security-related extensions I’m using.
NoScript: I love this extension. I will go so far as to say this extension is the primary reason I’m still using Firefox. What it does is block all scripting on all websites unless you add the site to your whitelist. You can add a site to your whitelist either permanently or only temporarily if it’s a site you don’t plan on visiting again. It complicates web browsing and therefore isn’t for everybody (or even most people, I’d venture to say). As a benefit, most of those annoying flashing advertisements get blocked when using NoScript. This extension is constantly being updated with new security-related features.
CookieSafe: CookieSafe is a plugin that allows you to manage website cookies. There are three options available for each website. The first, and default, setting is to block cookies altogether. The second option is to temporarily allow cookies (they will be wiped out upon closing your browser), and the third option is to add the website to your whitelist, which will allow cookies for that domain. The plugin only allows cookies from specific domains, meaning you don’t have to worry about third-party cookies getting onto your system (although this feature is available in most major browsers, the implementations generally suck).
Certificate Patrol: I’ve mentioned a research paper I read recently that talks about SSL security and its ability to be exploited by governments. Although there is no surefire way to detect and prevent this kind of exploit, you can strongly mitigate it. Certificate Patrol is an extension that displays all major certificate information for a secure web page the first time you visit it or when the certificate changes. So when you visit www.example.com, the certificate information (we’ll assume it’s a secure site) will be promptly displayed by Certificate Patrol the first time you navigate your way there. If the certificate has changed when you visit the site again, the new certificate information will be displayed, including what has changed. One mechanism for catching a forged certificate is looking at the issuer. For instance, Internet Explorer trusts the root certificate of the Hong Kong Post Office. If you visit www.example.com and Certificate Patrol notifies you that the certificate has changed and the new one is provided by a different root authority, you know something could be up. If the site’s certificate was previously provided by VeriSign and the new one is provided by the Hong Kong Post Office, you know something is probably fishy. This could point to the fact that the site is not actually www.example.com but a site made by the Chinese government in order to capture information about dissidents who visit www.example.com (obviously some DNS spoofing would be required to redirect visitors to their site as well).
Those three extensions help mitigate many common web-based attacks. This is not to say none of this can be done in Chrome. For instance, you can manually check for certificate changes in Chrome, but you will have to do it every time you visit a site to see whether the certificate changed; Certificate Patrol simply automates that task. Likewise you can block cookies and scripting in Chrome, but the interface for doing either is more cumbersome than using CookieSafe and NoScript.
Personally I value security over performance and that is why I’m still sticking with Firefox.
A difficult question has been put forth in regards to Apple’s recently released iPad (you may have heard about it):
Doing a little coding, we’ve discovered that iPad apps only have access to 256MB of RAM and the processor thinks it is a single core (probably ARM Cortex A8) processor.
So how does Apple get applications to run so fast? Thanks Thomas!
Considering the device can only run one third-party application at a time, I’d say you have your answer. If developers have gotten so bad that they can’t get a small application aimed at mobile devices to run on a single-core processor with 256MB of RAM, then they have failed as developers. Seriously, my old Palm PDA opened and ran applications instantly, and it had a paltry 16 MHz processor and 512KB of RAM, which was split between storage and application use.
So I just learned something that probably everybody else already knew: there is a Unicode character for the old Soviet hammer and sickle. Note that for some of you it may show up as a question mark or a box with numbers in it. That’s just poor Unicode support in action.
U+262D prints ☭ (size increased to show detail). Apparently communism prevailed.
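For the curious, you can produce the character yourself straight from its code point; a trivial Python example:

```python
# U+262D is the Unicode code point for the hammer and sickle.
symbol = "\u262d"
print(symbol)            # prints ☭ (font permitting)
print(hex(ord(symbol)))  # prints 0x262d
```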
Note that I didn’t say security hole or security flaw; that was intentional. The nerd part of my brain has been working in overdrive as of late, which means I’ve been looking into geeky things. One thing that always intrigues me is the field of security. Well, I found the following story on Wired that talks about a security issue in SSL/TLS (the security mechanisms used prominently by web browsers to secure web pages). The article leads to a “no duh” paper that shows how government entities can use their power to subvert SSL/TLS security by coercing certificate authorities into issuing valid certificates (anybody who knows how SSL/TLS works already knew this was a possibility).
The part that interested me most was an excerpt from one of the cited sources in the paper. See, back in the day there was some kerfuffle over the fact that Microsoft included a couple hundred trusted root certificates in their operating system. Root certificates are what ultimately get used to validate a certificate issued to a website; thus root certificates are the ultimate “authority” in determining whether a website you are visiting is valid or not. The more root certificates you have, the larger the possibility of a malicious certificate being certified as trusted (statistically speaking, of course; this assumes that with more root certificates, the possibility of one of those root certificate “authorities” being corruptible increases). Anyway, Microsoft eventually trimmed down the number of root certificates included in their operating system. But they didn’t actually cut down the number of certificates, because according to their own developer documentation:
Root certificates are updated on Windows Vista automatically. When a user visits a secure Web site (by using HTTPS SSL), reads a secure email (S/MIME), or downloads an ActiveX control that is signed (code signing) and encounters a new root certificate, the Windows certificate chain verification software checks the appropriate Microsoft Update location for the root certificate. If it finds it, it downloads it to the system. To the user, the experience is seamless. The user does not see any security dialog boxes or warnings. The download happens automatically, behind the scenes.
Microsoft just pulled some security theater here. They didn’t cut down the number of trusted certificates; they just moved them somewhere people wouldn’t see them. If you connect to a web page that has a certificate that can’t be validated against a local root certificate, Windows will automatically go out to Microsoft’s servers and see if a root certificate there will validate the website’s certificate. If one of those root certificates will, it is downloaded onto your machine automatically and the site is listed as trusted. In essence Windows trusts more root certificates than it lets on.
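You can ask your own TLS stack how many roots it trusts up front; on Windows the number reported will understate reality, given the on-demand fetching described above. A quick check using CPython’s ssl module:

```python
import ssl

# Load the platform's default trusted root certificates and count them.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(len(roots))  # each entry is one root authority you implicitly trust
```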
So what does this mean? Well, it means the window for having corrupted root certificate authorities is larger. With the exception of Firefox, all major web browsers depend on the underlying operating system’s root certificate store to validate web pages (Firefox actually ships with its own trusted root certificates and uses its own store as opposed to the underlying operating system’s). This also gives an attacker two potential locations to place a malicious root certificate. If an attacker were able to gain access to Microsoft’s online root certificate store and upload their own root certificate, any SSL/TLS page they created using that root certificate for validation would show as trusted in all versions of Windows (Firefox would still show the site as untrusted). Granted, the window for this attack would be small, as Microsoft would most likely find the certificate almost immediately and remove it. Likewise, the likelihood of such an attack occurring is very small considering the short time frame it would be valid for. But it’s an interesting thing to ponder regardless. Additionally, the same attacker could create a binary of Firefox with the same malicious root certificate included and make it available for download, causing the same problem for Firefox users.
No matter what operating system or browser you use, the validity of SSL/TLS connections eventually requires that you trust somebody (which goes against the “trust no one” security motto). The question here is whom you are willing to trust. Only you can determine that, but knowing how a security system works and how it’s implemented is important in making that decision. Anyway, I just thought that was interesting.