John Morgan Salomon, information security consultant, professional chainsaw juggler and South American tyrant in his spare time.

Mar 05, 2014

I recently read a post on a forum frequented by systems administrators titled “I’m scared”.

The text of the post contained something like “they all have local admin access.  All of them.”

Oops.

A few days ago I had a conversation with a colleague about what a small company actually needs in terms of information security.  I am used to working in places like international banks and big pharmaceutical enterprises – companies that require a lot of data confidentiality and integrity, and which work under significant regulatory pressure.  But what about your typical SME or startup?

It occurred to me that there are three basic categories in the security space that such companies need:

1. “Security management” – this comprises setting security policy, understanding the “what” and “why”, performing audits, and generally keeping track of things.
2. “Dedicated security” – the slightly more advanced elements of securing a firm.  This includes firewalls, VPN concentrators, intrusion detection systems, threat management, incident response, etc.
3. “Security operations” – the bread and butter stuff.  Authentication, data encryption, etc.  A good example of this would be login passwords, or Windows group policy.

Category #1 is a control function that oversees the company’s security, from policy and architecture to reviewing implementation and results.  Although it’s sensible to have a dedicated person in charge of this, most SMEs will not be able to afford the kind of full-time expert they would need to do it right (which raises the question of whether such a person or function really needs to be full-time in a small firm).

Category #2 – An SME or other small organization will not, beyond certain very narrow limits (e.g. putting black-box appliances on its network), have the know-how, time, or money to deal with high-end security technology in-house, which makes this a natural candidate for outsourcing.  Companies hosting their email with an external provider already do this, as do those with managed networks, cloud hosting, etc.

Category #3 consists of tasks and capabilities which really should be integrated into any firm’s day-to-day IT work.  It’s part of what a systems administrator must cover as part of their daily responsibilities.

And therein lies the problem for many small companies.  Because category #1 is often missing, policy, enforcement, and review are usually weak.  People become accustomed to doing things by themselves, leading to a lack of control and a diminished ability to deal with things when they go wrong.

Let’s say you’re the poor systems admin whose post I read.  You start work at a firm with a few dozen employees, a couple of servers, not much in the way of IT budget and a product that, while it relies on IT working, isn’t yet profitable enough to hire a full-blown professional IT organization.  Let’s also assume that the first guys to start the company did everything themselves (hooray for startups) and that this idea of “if you want it done right, do it yourself” still permeates your informal culture.

What do you do?

The short answer:  risk management 101.

Part of your job as the IT guy is to make sure (a) you’ve identified what’s missing, (b) you’ve suggested ways to fix it, and (c) you’ve made the Powers That Be aware of this and the potential consequences in a way that they understand.  If you’re lucky, (c) will lead to a bunch of budget and a mandate, resulting in (d), actually making things better.  After all, that’s what you do, right?

This is nothing more than a simplified description of what a proper risk management organization does – make sure there are sensible rules, make sure everyone’s following the rules, and if someone’s not following the rules, show the people with the money why this is bad and how they can fix it.

Step 1 Policy and Rules.  Figure out whether there are adequate security policies in place.  These don’t have to be overly complex.  Use best practices you find online, shamelessly plagiarize from others, but keep it simple and to the point.  Is someone in charge of these?  Are they aware of this?  Who is allowed to access what?  Are you following principles like least privilege and need-to-know for access control?  What are the consequences for not following the policy, are HR and management aware of these, and do they support them?

Make sure it’s short, sweet, understandable, and sensible (i.e. don’t make up stupid, complicated rules for no reason other than the security policy seems too short.)  Provide concise reasons – this will also help you avoid making rules for their own sake.

Basically this should help your organization define its “risk appetite”.  This is a fancy term for how much risk it’s willing to accept and how much it’s willing to spend to reduce it.  The key is to make management understand exactly how much risk they face so they can make an educated and informed decision.

Microsoft has some examples to work with.  So does NIST.   A quick Google search for “small business security policies” or “small business security best practices” will turn up any number of guides.

From this, you can derive your processes – again, don’t overcomplicate things.  Who is responsible for what?  How do you handle joiners / leavers?  How do you regularly review access rights?  How is data destroyed when no longer needed?  Does everyone have a deputy, and is everyone aware of their rights and responsibilities?
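Several of these process questions lend themselves to trivial automation.  Here is a minimal sketch of a leaver check – the file names, column names, and CSV format are hypothetical placeholders for whatever your HR export and directory dump actually look like:

    # Flag accounts that belong to nobody on the current HR roster.
    # File names and column names below are hypothetical placeholders.
    import csv

    def load_column(path, column):
        """Return the set of values from one column of a CSV file."""
        with open(path, newline="") as f:
            return {row[column].strip().lower() for row in csv.DictReader(f)}

    current_staff = load_column("hr_roster.csv", "email")      # who works here
    active_accounts = load_column("ad_accounts.csv", "email")  # who can log in

    for account in sorted(active_accounts - current_staff):
        print(f"REVIEW: active account with no matching employee: {account}")

Run something like this on a schedule and file the output with your review records; it also gives you the beginnings of an audit trail for Step 3.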

Step 2 Document and Classify.  Create a mechanism for classifying your assets.  Again, not overly difficult.  You can use the standard CIA (Confidentiality, Integrity, Availability) way of looking at information, services, applications, and infrastructure.  Measure it from 1-3, 1-5, low to high, whatever.  Just make it realistic and consistent, come up with a sensible methodology for rating and signing off on this, and make sure everyone knows about it.
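As an illustration only – the assets, owners, ratings, and the “take the worst of the three” rule are assumptions, not recommendations – a classification register can be as simple as:

    # A bare-bones CIA asset register, rated 1 (low) to 3 (high).
    # Assets, owners, and ratings are purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        confidentiality: int  # 1-3
        integrity: int        # 1-3
        availability: int     # 1-3
        owner: str            # who signs off on the rating

        @property
        def overall(self) -> int:
            """Simple worst-case rating: the highest of the three dimensions."""
            return max(self.confidentiality, self.integrity, self.availability)

    register = [
        Asset("Customer database", 3, 3, 2, owner="Head of Sales"),
        Asset("Public website", 1, 2, 2, owner="Marketing"),
    ]

    for a in register:
        print(f"{a.name}: C{a.confidentiality} I{a.integrity} A{a.availability} -> overall {a.overall}")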

Step 3 Audit.  Compare what is to what should be.  Who has access to what?  Is this stuff being reviewed regularly?  By whom?  What happens with the results?  Who checks logs and who reports anomalies to whom?  If you see something wrong, document (1) what’s bad, (2) how bad it is, and (3) why it’s bad.

Does stuff violate your policy, best practice, or just common sense?  How?  What are the consequences of not doing anything about it?  Is it not even possible to find this stuff out (Donald Rumsfeld’s “unknown unknowns”)?  Or does your security policy just plain suck?  All of these are risks and should be clearly and succinctly documented.  The risk severity is nothing more than some function of an asset’s value (Step 2), the likelihood of something happening to it, and the severity of the consequences if something does happen to it.

This can be an educated guess.  In fact, most risk management is as much subjective art as science.  Just make sure you use consistent logic, rely on policies and standards as much as possible, and be prepared to back up your reasoning.
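To make that concrete, here is a minimal sketch of the “some function of value, likelihood, and consequences” idea; the 1-3 scales, the multiplication, and the example findings are all assumptions to be replaced by whatever you defined in Step 2:

    # Toy risk scoring: each factor on a 1 (low) to 3 (high) scale.
    # The scale, the multiplication, and the findings are illustrative only.
    def risk_score(asset_value: int, likelihood: int, impact: int) -> int:
        return asset_value * likelihood * impact

    findings = [
        # (description, asset value, likelihood, impact)
        ("Everyone has local admin on their laptop", 2, 3, 3),
        ("Seven-year-old Access database isn't backed up", 3, 2, 3),
        ("Guest Wi-Fi password taped to the meeting room wall", 1, 2, 1),
    ]

    for description, value, likelihood, impact in sorted(
            findings, key=lambda f: risk_score(*f[1:]), reverse=True):
        print(f"{risk_score(value, likelihood, impact):>2}  {description}")

The numbers matter far less than consistent logic and the fact that the ranking is written down where it can be challenged.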

Step 4 Recommend Solutions.  Recommend what needs to be done to fix this.  How much will it cost, how much time will it take, and what will the consequences be?  E.g. Bob from accounting may have to wait a day for someone to commit changes to his database rather than being able to do it directly himself, but as a result, you’ll have a properly backed up and redundant centralized system rather than a crappy seven-year-old Access installation on a PC in the corner of the office.

Step 5 Obtain Sign-Off.  The best policy or audit is useless if it’s not made adequately clear to someone who has the authority to make a decision about it.  This is the part where you have to explain clearly, simply, and understandably what you came up with in the above steps – why stuff is broken, why it’s bad, and what should be done to fix it.  Do you understand this, boss?  Can you repeat it back to me in your own words?  Great.  Now can you please sign it and confirm that you get it?  Wonderful.  And if not, escalate up the food chain, and create a paper trail that shows you’ve done everything you can to make those ultimately accountable thoroughly aware of all of the above.  Because once you have done this, it’s time for

Step 6 Sleep Comfortably.  Your ass is covered.  You can’t save the world, but you’ve done your best.  If your boss wants to stick his head in the sand and refuses to let you help, you’ve now got proof that you really, really tried.

Sometimes you can’t win, and it’s usually very difficult to make people care who don’t want to care – congratulations, this is now no longer your problem.

Mar 01, 2014

A colleague pointed out an article from CSO online entitled “The Death of Windows XP”.  Upshot:  slightly less than 30% of the global installed base of operating systems is Windows XP, support/updates for Windows XP are going away except for large firms willing to cough up a lot of money, and you really should migrate, now.

Why, though?

First, there’s the usual issue that unpatched systems put your own enterprise at risk.  This is blindingly obvious. They’re not just subject to active attack, but are perfect incubators for user-borne/-downloaded exploits.

Second, and more significantly, infected systems affect not just their owners, but everyone else.  A few assumptions and some back-of-the-envelope math:

  1. Company X has ca. 10,000 Windows XP systems
  2. The company has Internet connectivity commensurate with its size (i.e. lots of bandwidth)
  3. Assuming a somewhat flat internal topology, one vulnerable system infected with an active trojan quickly leads to a full-blown outbreak

This means that, pretty much in one fell swoop, you’re dealing with ten thousand potential botnet zombies on fast Internet links.  For context, the Srizbi botnet is estimated to be the world’s largest at present, at ca. 450,000 hosts.  ~2% of that is…significant, if it’s one organization.
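For the record, the back-of-the-envelope math, using only the figures quoted above:

    # Back-of-the-envelope check of the "~2%" figure above.
    srizbi_hosts = 450_000      # estimated Srizbi botnet size quoted above
    company_xp_hosts = 10_000   # Company X's hypothetical XP fleet
    print(f"{company_xp_hosts / srizbi_hosts:.1%}")  # -> 2.2%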

Of course this is all hypothetical, but still potentially a serious issue for other organizations.  Unpatched systems create risk for everyone, even if it’s only by helping chew up more bandwidth with distributed denial of service.

Granted, many vulnerable systems are owned by individuals – but these are neither as concentrated nor as easy to remediate as those grouped inside organizations.

If companies aren’t motivated to upgrade systems by now, inherent risk won’t drive them to do so.  At one firm I worked for, I spent an entire year (!) attempting to have a system upgraded from Windows XP to the next OS release – a grand total of 6 tickets, despite active chasing, complaining, and escalating, were canceled or mistakenly marked as resolved by the outsourced (!!) helpdesk.  This appears to have been fairly typical, with no action by the security risk management organization.  That’s bad.

Uninsured drivers are fined if discovered, because their negligence can result in economic loss for others.  Coming up with a policy, mandate, and framework for identifying, classifying, and remediating negligence by enterprises would be a major bureaucratic undertaking, but given the losses incurred by security exploits annually, may be worth looking into.

 

Jan 27, 2014

An oft-quoted truism is that the majority of security breaches originate with internal employees.

I recently attended a presentation that astutely broke these down into three categories:

  1. Inadvertent acts — e.g. someone mails a confidential document to their private email address to work on it at home
  2. Malicious acts — e.g. a disgruntled employee willfully committing sabotage
  3. Whistleblowers — e.g. an employee leaking data to expose wrongdoing

To this I would add

  4. Espionage — e.g. an employee stealing trade secrets to sell to a higher bidder (or take along to a new employer).

We can simplify these four as “stupid, angry, well-intended, greedy”.

For example, Forrester Research’s Understand The State of Data Security and Privacy Report (summarized by CSO online here) claims that more than a third of incidents stem from “stupid”.

Much focus lies on data leakage protection – prevention, detection, and analysis, through techniques such as digital rights management (DRM), intelligent log monitoring, and filtering – and on mechanisms for allowing follow-up to violations, including better integration with human resources and more industry cooperation in investigations.

As a vast generalization, information security has often struggled to work as effectively as possible with the business – the stakeholder who pays for security.  It is frequently viewed as a necessary evil, an expensive condition of doing business that hinders a firm’s maximum effectiveness.  Part of this stems from the business’s refusal or inability to try to understand the need for, and value of, a strong security capability; part of it is due to security’s inability to guide the business towards asking the right kinds of questions, and to communicate its own value and necessity in a way that the business can understand.

So…the security people really need to provide “the business” with analysis that helps stakeholders understand why bad things are happening, and why they are worth paying attention to, rather than just invest in expensive preventive tools or fight fires.  And that includes asking some potentially uncomfortable questions when employees do bad things for the reasons we listed above:

  1. “Stupid” – why are they sending stuff home?  Are they insufficiently trained?  Are we not giving them the right tools to do their job?
  2. “Angry” – why is this person angry?  Granted, every company has an expected background noise of discontent, but when it causes an employee to act maliciously, perhaps it goes beyond “this person is bad / crazy”.  Perhaps HR and senior management should look into the root causes of what has driven someone to commit sabotage – and learn from it.
  3. “Well-intended” – why are we being reported to the tax authorities / environment agency / labor relations board?  Let’s be honest – companies, particularly in financial services, do a lot of things that a reasonably ethical person should not be comfortable with, including some downright illegal things.  Granted, whistleblowing rules have seen strong growth in many firms, but rather than prevent it, maybe someone needs to sit the traders down for a long hard talk on why they’re giving their database administrator reason to leak details of questionable transactions to the government or the press.
  4. “Greedy” – the why is self-explanatory – loss of competitive advantage needn’t have any basis in wrongdoing or neglect.  Nonetheless, this is the point where information security risk needs to tie into business risk, so that the firm as a whole can understand its potential exposure to loss from leaks and espionage.

Before writing someone up on a rule violation, maybe we should ask them to explain why they did what they did.  For this to be effective, someone has to be responsible for collecting, analyzing, communicating, and tracking such outcomes – as with any security incidents.

That requires strong accountability.  And as with anything else, management is hard if you want to do it right.

Dec 21, 2013

I just learned about the existence of an ongoing effort to publicly audit TrueCrypt, which I’ve started using to replace all my PGP disks.

The latest progress update is at

http://blog.cryptographyengineering.com/2013/12/an-update-on-truecrypt.html

This is awesome, and in my opinion an important aspect of the future of cryptography – “open source”, crowdfunded, transparent audits of crypto implementations are the way to rebuild confidence in technology as a vital element for ensuring trust and privacy.

In this sense, I believe that the NSA revelations may actually have had a net positive effect, in finally dispelling any lingering doubts that even open and standardized crypto implementations are subject to being undermined – and as we all know, once a back door is in place, anyone can hypothetically use it.

Oct 25, 2013

I recently found a link on reddit to a post describing a step-by-step analysis to find out whether downloaded TrueCrypt binaries might be backdoored or if their integrity was still intact.

The first thing that struck me in the article was this line:

TrueCrypt is a project that doesn’t provide deterministic builds. Hence, anyone compiling the sources will get different binaries, as pointed by this article on Privacy Lover, saying that “it is exceedingly difficult to generate binaries from source that match the binaries provided by Truecrypt.”

The reddit comments section had a few interesting insights on that note – e.g. the idea that someone could hypothetically serve maliciously customized binaries from dynamically built download pages to different classes of users, including backdoors as desired and generating new cryptographic hashes for the downloadable source/binary packages on the fly.

One common theme was the impracticability of “just check the source” – the above issues would render such a process useless, and it was pointed out that, for the average user, it’s simply not practical.  This reply makes an assertion (which I disagree with) that you don’t necessarily have to resort to such measures:

But that’s okay. You implicitly trust a bus driver to not drive the bus over a cliff when you get on it, so you can trust that most software writers have no malicious intent toward you. Don’t drive yourself mad reading source code for days. Look out for warning signs, exercise common sense, and realize that if somebody with enough resources really wants you specifically, they will get you.

Unfortunately this is only partially true.  I had a thought about this exact problem – even for large enterprises, building and auditing source is a pain in the ass.  In one of my jobs, I ran a pretty sizable team dealing with both manual code review and automated static analysis, as well as pen testing and other ways of providing some degree of assurance that there weren’t avoidable security holes (either bugs or backdoors) in both in-house and externally obtained applications.

It’s partially right to (generally) assume “no malicious intent” – but this is where the fact that most people haven’t the slightest clue of how “risk” really works comes in.  You perform a risk assessment every time you make a decision like the one described above.  In big companies, it’s bad because most have fairly complex frameworks detailing how to do this, which often end up as formulaic processes that don’t really provide risk transparency, but rather a sort of tick-box fig leaf alibi that lets them say “yeah, we understand the risk of not doing this”.  No, you really don’t.  You understand a number within a certain set of parameters without (usually) understanding what you don’t understand – Donald Rumsfeld’s famous “known unknowns” and “unknown unknowns”, which, in retrospect, aren’t nearly as stupid as they first sounded.

Ultimately, it comes down to some variation of probability * impact – and for any company of even moderate complexity, there’s always enough software with the chance of a hole having massive impact to make it necessary to test code.

It’s a hugely expensive process – but for me, a positive side effect of the whole NSA clusterfuck is that companies have been forced to finally start systematically confronting the elephant in the room that they’ve been ignoring for years, that being, can you trust your vendors / open source?  No, not really, and you’re going to have to spend a lot of money and time to do it right.

So I wonder – there are e.g. static and dynamic analysis tools out there, but most are barely usable due to the highly variable nature of code or the opacity of compiled binaries.  They, like other testing tools, are typical of the most common approaches to testing – i.e. looking at the final product.  Some vendors, e.g. Veracode, take the approach of systematically evaluating vendor-provided software, thus basically creating a test factory that a consumer then pays to be able to trust.

However, nobody is doing what Xavier de Carné de Carnavalet, author of the original post, is trying to do – evaluating whether you’re getting software whose integrity is intact.  Reading through his article shows the complexity and cost in time and expertise of doing this.

But why not try to streamline the process of checking integrity itself?  A hypothetical approach would be a standardized framework for ensuring compiled code integrity – starting with the already commonly-used formats for source package signatures.  Most code packages already come with hashes/signatures, so you can verify those.  And many of the steps in the article could be automated – the number of false positives and reports for review in comparing certain hex strings would surely be significantly lower than with the average static analysis tool.  And it’d provide a strong additional layer of assurance for consumers of commercially built open source tools.
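As a minimal sketch of the part that is already easy to automate – checking a download against a published digest – something like the following works.  The file name and the expected value are placeholders, and in practice the expected digest must come from a trusted, separately obtained source (e.g. a signed release announcement), not the download page itself:

    # Verify a downloaded package against a published SHA-256 digest.
    # The file name and EXPECTED_SHA256 are placeholders for this sketch.
    import hashlib

    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    actual = sha256_of("truecrypt-7.1a-setup.exe")
    if actual == EXPECTED_SHA256:
        print("Hash matches the published value.")
    else:
        print(f"MISMATCH: got {actual} - do not trust this file.")

None of this proves that the published hash itself is trustworthy, or that the binary matches the source – that is exactly the harder, currently manual part the article tackles – but it is the obvious first stage of any standardized integrity-checking framework.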

Oct 03, 2013

The classic security view of the enterprise is that of an eggshell – a lot of IT infrastructure and data inside a perimeter protected by access control, firewalls, malware defenses, and other “monolithic”, centralised technologies.

The logical progression to a more distributed set of protections was the subsequent step in the early 2000s – hardening IT assets to be able to survive being exposed to the Internet, even if they were still located behind a protective perimeter.  Hardening costs money and tends to reduce functionality, so any increase in security has traditionally been viewed as a tradeoff against cost/flexibility – hence the need to tag data assets (e.g. with CIA ratings) and to assign security accreditation ratings to infrastructure components and applications, so that high-value data is only handled by systems and applications with appropriate levels of protection.

Particularly for banks, the past 5-6 years have seen an increase in mechanisms, such as data leakage protection, that assume all information must be subject to controls like cryptographic digital rights management and role-based access control – trying to close as many avenues as possible for data to leave the perimeter, with predictable waggish comments about how a malicious employee can still take a digital camera to his laptop.

This fundamentally misses the reasons for such controls – ensuring that any leakage of information beyond the “protected” space has to be malicious and wilful, that only truly authorized persons have access to critical data, and that accidental data leaks are made much less likely.

Nonetheless, few organisations systematically pay much attention to what happens to their assets outside their traditional perimeter.  Some examples of what’s often overlooked:

- Legal data handled by external law firms
- Third-party manufacturing
- Connections to stock exchanges
- Financial analysis purchased from external suppliers
- Externally hosted servers and services
- Inadvertent disclosure of classified information via social media
- Threat / vulnerability data about externally purchased software (hello to my friends at the NSA)

Why?  Among the fundamental reasons I can think of is the sad growth in compartmentalisation of IT responsibilities, driven by cost pressures – any organisation that’s put under enough existential pressure by the bean counters naturally goes into cover-your-ass mode.  Managers refuse to take on any responsibilities that are not specifically in their measurable and mandated objectives, even if it means turning a blind eye to obvious and widely known problems.

Many larger companies maintain pretty good contacts with others in industry and government as part of what a former colleague of mine calls “fight clubs” – a really cool term for discreet, fairly exclusive, highly trust-based organizations and networks, formal and informal, that exchange information about security trends, threats, potential “gotchas”, and other security topics.  Likewise, larger outfits in particular tend to have some sort of internal communication flows, based on experience, through which various parts of the business are kept informed of potential threats.  But is everyone who should be aware of this actually informed?  I’ve frequently tried to get clients and employers involved in such bodies, only to find out that someone’s already doing it – good!  But are they telling everyone internally who should know about these activities?

Then there’s the problem of coverage – I’ve often heard the phrase “yes, we know, it’s an issue, but there’s so much more to do / we don’t have funding for it / it’s not in our goals and objectives”.  Does the company consistently foster a culture of spotting problems and reporting them?  Is there an IT security consulting function that can talk to both tech and business managers, and that has a lot of freedom to do the right thing and ask hard questions?

Another reason for this is a “Maginot thinking” about corporate perimeters – always preparing for the last war.  This is not to imply that the idea of the perimeter is dead; just because an IT system could withstand being placed on a public network doesn’t mean it should be.  But clichés about the increasingly networked / interdependent nature of modern business aside, widespread practices like outsourcing common IT or business functions are potential sources of risk.  Does the company know what and whom it’s connected to?

A third big driver for this limited view is the focus on certain kinds of information assets, such as intellectual property or customer identifying data, without looking at all the other parts of the company that have a stake in making sure external threats are taken seriously.

Is there a link between the IT security risk organisation and someone with overall business risk competence?  Do the public relations people know where to go with questions about security – and are they even aware that they should be asking them?  Does senior management have a holistic overview of everything it should consider an “asset”?  Does the organization regularly do threat modelling?  Is there an organization that is responsible for making sure anything reported is analyzed, tracked, and if necessary, followed up on?

In conclusion, there’s no single solution, but an upper management that encourages a few basic things is already far ahead of the game:

1. Create a competent, funded, flexible function, such as a consulting team, filled with competent, motivated people, with the freedom to ask difficult questions – and make sure they’re proactively going out and talking to all areas of the business
2. Make someone responsible for ensuring that reports of threats, vulnerabilities, trends, etc. are properly analysed and followed up on
3. Encourage all employees and managers to think, think, think, and to talk to others in the organisation about their security-related ideas and concerns
4. If anyone suggests putting an accountant in charge of IT or specifically security, feed them to the bear

Oct 18, 2011

French magazine 01net reports (article in French) that researchers from ESIEA, a French engineering school, have found and exploited some serious vulnerabilities in the TOR network.

According to the article, they performed an inventory of the network, finding ca. 6,000 machines, many of whose IPs are accessible “publicly and directly with the system’s source code” (?), as well as a large number of hidden nodes.

There’s a lack of detail, but supposedly the attack involves creating a virus (?) and using it to infect such vulnerable systems in a laboratory environment, and thus decrypting traffic passing through them – again via an unknown, unmentioned mechanism.  Finally, traffic is redirected towards infected nodes by essentially performing a denial of service on clean systems.

I’m skeptical, as the piece contains just too much “oh, and then you hack component x and compromise component y and voilà, you’re in” to necessarily be plausible.  Furthermore, the ESIEA page has a large video presentation on French backwardness in “cyberwarfare” – any time a reputable institution uses such terms, it makes me wonder how much it’s angling for more funding from buzzword-prone politicians, with resulting pressure on researchers to provide supporting, news-grabbing headlines.

However, if it is real, details are to be presented at Hackers to Hackers in São Paulo on October 29/30.  TOR is no more than an additional layer of obfuscation and should not be relied upon for anonymity or security.  Like any darknet, it is a complement to application-layer encryption and authentication, no more.

Jan 31, 2011

A lot of recent discussion has focused on the idea of the “Internet Kill Switch”, introduced in 2010 as part of United States Senate bill S.3408 (PDF), the “Protecting Cyberspace as a National Asset Act of 2010”, and its implications for a government-imposed blackout of the United States and its Internet communications as mandated by the President.

In particular, the ability of the Egyptian government to coerce backbone providers into essentially dropping the country off the Internet, most likely by withdrawing BGP announcements, has been interpreted as a frightening precedent for enabling the United States government to shut off dissent.

The Communications Act of 1934 (full text here) already gives the President very broad powers over communications infrastructure in cases of “war or emergency” (Sec. 606).  This act, while obviously focused on radio communication, does not limit itself to any particular communications medium.  Sec. 606(a) and (c) pretty much specify that, in case of war or emergency, the President can effectively do as he sees fit with American communications infrastructure.  No distinction is made between private and government communications.

I had a look at S.3408.  As far as I can tell, it establishes a Director of “Cyberspace Policy” who basically oversees most US non-military resources.

Most of it seems eminently reasonable (e.g. advising the President on security issues, coming up with risk management and incident response methods, helping to coordinate development and implementation of standards, making sure one hand knows what the other is doing, etc.)  The law also defines the responsibilities of US-CERT, which already exists.

Where it gets a bit weird is Sec. 244(g)(1) – I may be misinterpreting this, but it says that the Director of US-CERT can obtain “any…information…relevant to the security of…the national information infrastructure necessary to carry out the duties, responsibilities, and authorities under this subtitle” (the editing is non-destructive, i.e. I tried not to change the meaning of the phrase).  It’s very ambiguous, and implies to me a seeming total lack of control over what information (including confidential, personal data) the Director can access from anyone, anywhere.  The bill does specify data protection/privacy requirements, but these appear to be often unclearly worded (a lot of use of subjective terms like “as necessary” or “reasonable”).

Sec. 248 seems very sensible (i.e. cooperate with other agencies, private companies, and foreign governments when dealing with vulnerabilities and attacks, but only in terms of “recommendations”).  248(b)(2)(C) basically seems to say “come up with a plan in case shit gets real”.  Fair enough.

Sec. 249 “National Cyber Emergencies” is where I assume the problem lies.  The Director, when the President declares this, can require owners of “critical infrastructure” that’s covered by 248(b)(2)(C) to take emergency measures that are the “least disruptive means feasible”. Such emergencies have a 30 day runtime, but seem to be extendable indefinitely. “Critical infrastructure” is defined as relating to section 1016(e) of the USA PATRIOT Act (42 U.S.C. 5195c(e)) — i.e.

the term “critical infrastructure” means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.

On the other hand, Sec. 3 (14) refers to “Federal information infrastructure”, Department of Defense systems, “national security systems”, and “national information infrastructure”:

(A)(i) that is owned, operated, or controlled within or from the United States; or

(ii) if located outside the United States, the disruption of which could result in national or regional catastrophic damage in the United States; and

(B) that is not owned, operated, controlled, or licensed for use by a Federal agency.

This is extremely vague, and despite the reference to the USA PATRIOT Act definition above, no information is provided as to who defines this.  Good luck getting AT&T staff in Germany to shut off lines when ordered to by headquarters.  Similarly, under the above definition, the Zambian national Cobalt Thorium G mining corporation’s email servers could be construed to fall under (B).  Talking Points Memo had an interesting run-down on the “Kill Switch” issue, but unfortunately glossed over the aspect of defining what is covered.

The rest of the bill deals with definitions of agency responsibilities, mainly on how to secure government information infrastructure and information.

Interestingly, S.3408 also spends a lot of time discussing the responsibilities of US-CERT and the Director of Cyberspace Policy with regards to risk management, communication (e.g. ensuring that the left hand knows what the right hand is doing), establishment and application of standards, vulnerability and threat response, and generally things that the industry has been screaming about for years.

The Russian Federation, Chinese People’s Liberation Army, and Israel, among others, have established significant information warfare capabilities, variously specializing in sabotage, espionage, denial of service, and other aspects.

The United States, by comparison, maintains the National Security Agency, US-CERT and numerous public-private partnerships, various military units specialized in “cyberwarfare”, and branches of several government agencies for preventative, offensive, and defensive operations.  Coordination among these makes sense, especially if it involves single points of contact and distribution for vulnerability and threat information (beyond the Dept. of Homeland Security‘s asinine “color” threat scheme, which is being discontinued in any case).

The idea of any government being able to “shut down” communications is egregious in itself, but that’s a political, ideological concern.  However, on a purely practical level, I don’t see it as feasible for the United States to do so.

Ars Technica has a discussion about the “hows” of disconnecting a country.  They make a good point:

Like in Egypt, in Europe almost all interconnection happens in the capitals of the countries involved. Not so in the US: because the country is so large, and traffic volumes are so high, large networks may interconnect in as many as 20 cities. Numerous intercontinental sea cables land in the Boston, New York, Washington DC, Miami, Los Angeles, and Seattle regions. So in Egypt or many medium-sized countries, killing the connections between ISPs wouldn’t be too hard. In the US, this would be quite difficult.

Likewise, DNS is out due to the distributed nature of root servers.

More importantly, though, Constitutional issues aside, the U.S. is simply too distributed.  Too many commercial interests are involved (shut down the NYSE’s connectivity, and Goldman Sachs bankers will show up on the White House lawn with shotguns), and U.S. law enforcement and regulatory bodies are too decentralized to reliably enforce a shutdown at the ISP level like the one that happened in Egypt.

Read the bill, draw your own conclusions, but don’t panic-monger.
