I think it is probably important to direct attention to this post, which contains the only convincing explanation of PRISM I’ve yet seen, including the tiny budget (if it only cost $20m to process everything at Apple, Google, Facebook, etc., what do they need all those data centres for?), the overt denials, and the denial of any technical backdoor.
Basically, the argument is that PRISM is an innovation in the technology of law rather than the technology of computing, some sort of expedited court order programmed in Lawyer requiring the disclosure of specified data, and perhaps providing for enduring or repeated collection. This would avoid the need to duplicate vast amounts of infrastructure or trawl every damn thing, would stick to the letter of the law, and would help engineers sleep, as it wouldn’t imply creating a vulnerability that could be used by both the NSA and God-knows-who. It would also permit the President and such folk to deny that everyone was being monitored, as of course they are not.
That said, data could be requested on anybody whom the court could be convinced was of interest. As the legalities seem quite permissive and the court is in any case a bit of a flexible friend, this means a lot of people. And in an important sense it doesn’t matter: the fact that surveillance is possible is important in itself. Bentham’s panopticon combined overt surveillance – the prisoners knew that there was a guard watching them – with covert surveillance – they didn’t know at any given moment whom the guard might be watching, and therefore could never be certain they were not being observed.
The degree to which this was an aim of PRISM must be limited, because it was after all meant to be secret. But it is hard to avoid the conclusion that it’s there.
Something else. I’ve occasionally said that the Great Firewall of China should be seen as a protectionist trade-barrier as much as an instrument of censorship. Huge Chinese Internet companies exist that probably wouldn’t if everyone there used Facebook, Google, etc. Here you see another benefit of it – the Public Security Bureau gets to spy on QQ, but it’s harder for the Americans (or anyone else) to poke around. This may explain why the NSA seems to pick up lots of data from India and much less from KSA or China; you can PRISM for terrorists trying to affect the Indo-Pak nuclear balance and you can’t for Chinese targets.
Borders are always interesting, and this is today’s version.
Iran, of course, does another twist on this. It has a vigorous internal ISP industry, but monopolises international interconnection through a nationalised telco, DCI, that practises serious censorship. However, the same company also sells unfiltered, real Internet connectivity to actors outside Iran, notably in Oman, Pakistan, Iraq, and Afghanistan, almost certainly following Iranian foreign policy goals. DCI has even gone so far as to invest heavily in a new Europe-Middle East submarine cable to add capacity and improve quality (notably by taking a shorter route to Europe, and adding path-diversity against Cap’n Bubba and his anchor). Back in 2006, supposedly, the best Internet service in Kabul was in the cybercafe they installed in the Iranian embassy’s cultural centre.
(A starter-for-ten. Has anyone else noticed that the major cloud computing providers, Amazon Web Services, Salesforce/Heroku, Rackspace et al, aren’t mentioned?)
Update:
Yahoo! has not joined any program in which we volunteer to share user data with the U.S. government. We do not voluntarily disclose user information. The only disclosures that occur are in response to specific demands. And, when the government does request user data from Yahoo!, we protect our users. We demand that such requests be made through lawful means and for lawful purposes. We fight any requests that we deem unclear, improper, overbroad, or unlawful. We carefully scrutinize each request, respond only when required to do so, and provide the least amount of data possible consistent with the law.
The notion that Yahoo! gives any federal agency vast or unfettered access to our users’ records is categorically false. Of the hundreds of millions of users we serve, an infinitesimal percentage will ever be the subject of a government data collection directive. Where a request for data is received, we require the government to identify in each instance specific users and a specific lawful purpose for which their information is requested. Then, and only then, do our employees evaluate the request and legal requirements in order to respond—or deny—the request.
Yahoo!’s top lawyer, spinning like a top, but basically confirming the notion of PRISM as a surveillance technology implemented in Lawyer.
This is basically my best guess as to what’s happening, too. That “direct access to servers” line is clearly bullshit, at least in this context. They may or may not be able to get root access to internal servers at will, but it doesn’t make sense to implement this sort of programme using those powers, even if they have them.
Obviously the $20m figure gives this away, for a start. But more than that, it just doesn’t make sense. They’d want to be processing information as high up the abstraction layers as possible, and in a nice well-defined format, not forever patching connectors to keep up with the latest internal Google API rev. Let alone grubbing around in an actual filesystem or database!
Doing things through nice clean interfaces also minimises need-to-know. As far as the sysadmins are concerned it is just another internal service user. If you think about just how many people would have to know to allow direct access, again, it just doesn’t make sense. Not to stereotype the community or be naive, but also consider the type of person who’d have to know.
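To make the “nice clean interface” idea concrete, here is a purely hypothetical sketch – every field name and function here is invented for illustration, and comes from no real system – of what a legal-process disclosure request handled as “just another internal service user” might look like: the request must name specific accounts and a legal basis, and the response discloses the minimum data consistent with it.

```python
# Purely hypothetical sketch: a disclosure request handled through a
# well-defined internal interface, rather than raw server access.
# All field names and data are invented for illustration.

def handle_disclosure_request(request, user_records):
    """Return only the records for the specific users named in the request.

    Mirrors the 'specified users, specified data' model: a request that
    fails to identify concrete accounts and a legal basis is rejected.
    """
    if not request.get("named_users") or not request.get("legal_basis"):
        raise ValueError("request must name specific users and a legal basis")
    fields = request.get("fields", [])
    # Disclose the minimum: only the named users, only the requested fields.
    return {
        user: {f: user_records[user][f] for f in fields if f in user_records[user]}
        for user in request["named_users"]
        if user in user_records
    }

# Example: a request covering two named accounts' login counts only.
records = {
    "alice": {"logins": 3, "email": "a@example.com"},
    "bob": {"logins": 7, "email": "b@example.com"},
    "carol": {"logins": 1, "email": "c@example.com"},
}
request = {
    "named_users": ["alice", "bob"],
    "legal_basis": "hypothetical court order 123",
    "fields": ["logins"],
}
disclosed = handle_disclosure_request(request, records)
```

From the operator’s side this is an ordinary service call with ordinary access controls, which is exactly why it minimises need-to-know compared with root access.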
> A starter-for-ten. Has anyone else noticed that the major cloud computing providers, Amazon Web Services, Salesforce/Heroku, Rackspace et al, aren’t mentioned?
NANOG? They are talking about the absence of Level 3, Global Crossing, etc., anyway. As I’m sure you are aware 😉
However, I wouldn’t be surprised if there are separate programmes for telcos and for cloud computing/hosting providers. ISTM that the aims, techniques and to some degree technologies involved would be quite different in all three cases.
Let’s see what else the Guardian has lined up for us!
Telcos of course already have CALEA*. Cloudsters, I don’t know; I think CALEA is linked to the condition of telco-ness. Is AWS providing Internet service? In what way isn’t it? But it’s not an ISP…
*I know a San Franciscan privacy advocate whose first name is homophonous with this fine piece of legislation. Ironic. Don’t you think?
I thought CALEA was for traditional targeted wiretaps rather than the mirror-all-traffic-into-this-big-sealed-room approach? I don’t follow this stuff too closely, though. I suppose it could do both.
I wonder about AWS, hosting providers & similar services. To target EC2 instances and other VPSes the provider would need to literally provide direct backdoor access to the (virtual) hardware. I don’t think this would work at panopticon scale; individual targets are a different story.
The further up the stack you go the less that holds, though. I could easily believe that web hosting providers are mirroring out their Apache logs, for example.
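The point about logs is that they need almost no infrastructure to exploit: an Apache access log is plain text in a fixed format, so filtering a mirrored copy is a one-regex job. A minimal illustration (the log line below is a made-up example, not real data):

```python
import re

# Apache "common/combined" access-log lines are plain text, so mirroring
# and filtering them needs almost no infrastructure -- a regex and a pipe.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*"'
)

def parse_access_line(line):
    """Extract client IP, timestamp, method, and path from one log line."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

# A made-up example line in Common Log Format.
line = '203.0.113.5 - - [07/Jun/2013:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 1234'
rec = parse_access_line(line)
```

Anyone sitting on the mirrored stream gets who fetched what, and when, without ever touching the hosted application itself.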
Re the irony: reminds me of the story that (IIRC) Danny O’Brien tells about the EFF’s privacy lawyer continually getting snapped by everyone’s streetview cameras.
“You are not even aware of what is possible. The extent of their capabilities is horrifying. We can plant bugs in machines. Once you go on the network, I can identify your machine. You will never be safe whatever protections you put in place.”
Edward Snowden doesn’t seem to be talking about lawyers.
He sounds like he’s talking about NSA capabilities in general. If t’other side have physical access to your stuff, nothing can be ruled out.
I think this distinction is critical.
On one hand we have general, continuous, blanket surveillance, and some level of automated processing and long-term storage, of a significant proportion of the world’s electronic communications. This is what the released material to date has been about, and we’ve heard quite a lot about it before from earlier whistleblowers like Mark Klein, and even from official sources.
Then there is specific, targeted monitoring of known individuals and groups. We’ve not heard a lot about that, but it seems to be very much on Edward Snowden’s mind. Understandably. That is what he is talking about in that part of the interview.
Obviously over time the capabilities of panopticon surveillance will improve, as will the capacity to engage in targeted surveillance. Nonetheless they are, and will remain, very different beasts. As I understand him, Alex was only talking about the panopticon in this post.
I doubt there are any practical limits on what the NSA (and others) could do if they took a specific, focused interest in you personally, and knew where to find you.
Sorry, I think I need to follow my own advice better; on reflection that was rather muddled.
PRISM isn’t part of the NSA’s panopticon, it is access to commercial equivalents. The question is: what sort of access? Can they, in effect or in actual practice, run arbitrary MapReduce jobs over it?
The speculation is that they cannot. That they need specific, identified targets. The denials would seem to rule it out, and efficiently processing that much data would require considerably more computing infrastructure than their budget allows.
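The distinction between the two access models can be shown with a toy contrast (all data here is invented; this illustrates the scale of access, not any real system): a targeted query touches the records of one named account, while an arbitrary map-reduce job necessarily touches every record in the corpus.

```python
from functools import reduce

# Invented example corpus -- illustrative only.
messages = [
    {"user": "alice", "text": "hello"},
    {"user": "bob", "text": "meeting at noon"},
    {"user": "alice", "text": "call me"},
]

def targeted(user):
    """Targeted model: fetch records for one named account only."""
    return [m for m in messages if m["user"] == user]

def message_counts():
    """Panopticon model: an arbitrary map-reduce over *every* record,
    here counting messages per user across the whole corpus."""
    mapped = [(m["user"], 1) for m in messages]   # map: emit (user, 1) pairs
    def fold(acc, kv):
        k, v = kv
        acc[k] = acc.get(k, 0) + v
        return acc
    return reduce(fold, mapped, {})               # reduce: sum per user

counts = message_counts()
```

The first scales with the size of the answer; the second scales with the size of everything, which is where the infrastructure (and budget) argument bites.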
On the other hand, how much weight to put on the denials is…debatable, and maybe the funding comes from a different budget (possibly the companies themselves if they piggy-back on their infrastructure).
The budget figures don’t really make sense. Edward Snowden’s salary alone would account for 1% of it. Clearly there is a lot that it isn’t counting.
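The arithmetic behind that 1% is straightforward, taking the roughly $200,000 salary Snowden reported in the Guardian interview against the reported $20m programme cost:

```python
# Back-of-envelope check of the "1%" claim.
salary = 200_000           # Snowden's self-reported annual salary, USD
prism_budget = 20_000_000  # reported annual PRISM cost, USD
share = salary / prism_budget   # one analyst's salary as a fraction of the budget
```

One analyst’s salary already being 1% of the stated budget is what makes the figure look like an accounting line rather than a total cost.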
And further to that, I don’t see how those effects couldn’t be achieved in Lawyer. The power of the state over entities in its territory is considerable.
Not convinced. There’s nothing in the interview with Glenn Greenwald to suggest he’s talking about (a) a legal hack (the interpretation of a couple of bloggers) rather than (b) a technical ditto (the interpretation of absolutely everyone else), and you’d think if it was (a) he’d take the opportunity to clear up the confusion.
Not impressed by argument of form “everyone else” vs “a couple of bloggers”.
No, I’m not saying “everyone else” is right – I’m saying that “everyone else”’s interpretation is very widely publicised, and if it had been drastically wrong I’d have expected Snowden to correct it.
This guy is wrong in his analysis, but right about the costing. Storing all of it would mean replicating all the storage, and also all the processing needed to get at it – in effect, replicating the IT industry’s infrastructure. Much better to leave the data where it is, and force disclosure as required.