Nov 27, 2006
 

In March 2005, Brazilian federal police raided the offices of a Credit Suisse subsidiary in São Paulo. In addition to arresting the local management (and from what I understand, keeping them incommunicado for something like 48 hours), the Brazilians probably also had unfettered access to at least parts of CS’ worldwide corporate network. Similar harassment of international companies, both legitimate and arbitrary, has occurred in Russia and other countries.

Not all governments welcome the presence of high net worth private banking client advisors, or representatives of competitors to national champions in their jurisdiction, and such individuals should worry about industrial espionage. Furthermore, countries such as the United States have enacted restrictive immigration policies during the past few years, which allow border officials pretty much unfettered access to the laptops of business travellers. Several colleagues of mine entering the US for security conferences have been harassed upon identifying themselves as security consultants or experts.

Additionally, the maintenance of a fixed office may simply not be economical in certain regions, especially when individual sales reps, consultants or other professionals cover large geographical areas. The laptop should thus function as a completely secure, untraceable, self-contained office environment, giving the user access to all functionality he would have at a fixed, secure office but without endangering company data, user safety or corporate infrastructure.

One of my on-again off-again projects is the design of a secure mobile platform. Beyond the usual full-disk crypto + VPN solutions used by most companies with executives on the road, I believe that a true mobile architecture involves several more parts that are often ignored.

The Jericho Forum proposes ‘de-perimeterization’ as the removal of fixed barriers to the core network; they do not, however, offer a great deal of substance or concrete suggestions. For this they are occasionally criticized, although in this particular case I can’t agree with the author. IPSEC may have its weaknesses, but solution elements such as Check Point’s Secure Domain Logon (which uses a “shim GINA” to establish a VPN connection to a corporate internal network before Windows authentication credentials are actually passed through from the XP GINA) both make PKI and smart-card authentication feasible and allow full integration of most types of mobile clients.

As the above hints at, the goal of a mobile solution must be to extend the corporate perimeter without sacrificing any security or usability, something which I believe is entirely doable and which, in fact, we’ve gotten nicely working on a number of occasions. However, just as important as the integrity and security of a given laptop are the maintenance of anonymity and discretion, and the manageability of a given workstation. Neither the laptop nor a user’s regular everyday actions should give any clues as to the provenance of the laptop, and in no way should the compromise of a laptop constitute a threat to the integrity of data on the laptop or to corporate infrastructure.

In Part II, I’ll discuss individual elements of such a solution.

 Posted by at 8:10 pm
Nov 25, 2006
 

I recently stumbled onto the Security Officers Management & Analysis Project and thought I’d share. It’s an attempt to create a community-supported repository of risk analysis processes and best practices, and a pretty cool idea at that.

The handbook is currently in version 1.0, downloadable from the site, although some of the other resources (risk analysis guide, repository) are in development. The SOBF (Security Officer’s Best Friend), a risk reporting and analysis tool, is also in beta, but looks to be nearly usable. I am going to have a closer look at this to see how it can work together with more technology-specific risk analysis tools like Symantec’s, or with methods such as CERT’s OCTAVE. As it stands, it might be an interesting start for companies who’re out to build their own custom tools and methodologies anyway.

I hope this doesn’t go the way that the OpenCA/OpenPKI project did a while ago (published a handbook that was a great start, but stagnated for a long time, although there seems to have been some work going on recently.) Check it out.

Edit: Adrian Wiesmann from SOMAP just wrote me a nice note to tell me that they’re working on the second version of SOBF, and are, at present (25.11.2006), looking for a project manager in Switzerland. “To make it less possible that the project stops again right after starting, we are actively looking for a project manager which would coordinate the contributors, manage timelines and subprojects.” So if you’re interested, drop them a line via their website.

 Posted by at 3:45 pm
Nov 25, 2006
 

Just a quick note that Kevin at http://www.vulnerabilityassessment.co.uk has a new version of his excellent framework diagram –

http://www.vulnerabilityassessment.co.uk/Framework.png

Good stuff, check it out. I’m a big fan of the pen testing processes he puts up, very methodical.  Also check out his ace default password list:

http://www.vulnerabilityassessment.co.uk/passwords.htm — wish I’d had that at Buenos Aires airport, as their wireless routers didn’t seem to be particularly well configured (and of course I’d forgotten to re-build nessus on the Powerbook, joy.)

 Posted by at 1:48 am
Nov 23, 2006
 

Security Focus had an article yesterday about the virus attack that hit Second Life last Sunday. Apparently, this was a self-replicating exploit of the ability to create objects in SL, which bogged down servers.

A few years ago in a fit of mental masturbation, some colleagues and I postulated an online environment incorporating elements of Neal Stephenson’s Metaverse, Freenet, grid computing, various virtual currency incarnations such as e-gold, and various obfuscation, security and communications technologies. Underlying the concept was the nature of a computer: a processor, a bus and storage. And if you combine distributed computing, distributed storage and the Internet, voilà, a big computer.

With this in mind, the idea was basically to create a totally non-judgmental, uncontrolled, secure, anonymous and failure-resistant platform for online transactions — for legitimate business, tax evaders, kiddie pornographers, whoever. However, the parallel with the Metaverse doesn’t stop at its distributed nature. Given the seemingly rising trend in attacks hitting MMORPGs and online communities, the villain Raven’s actual “Snow Crash” virus in Neal Stephenson’s book is something I can see being prototypical for a pretty big problem.

Picture this: just like with telephones and the Internet, commerce will adopt any new medium as a functional part of its business technology. So let’s say you have a totally decentralized, purely reputation-based, entirely secure transactions network of the sort that we’re postulating. For argument’s sake, let’s assume someone figures out how to exploit weaknesses in some of the protocols and/or client software used by participants in this kind of environment.

Given that the idea is to create a generally lawless state (i.e. not run by a company or controlled by a government agency, but designed to allow a green field for pure commerce), someone _will_ figure out a way to grief — be it for reasons of gain, sabotage, or pure vandalism. How do you respond to this? You have no recourse to Linden Labs, WIPO or the FBI. A community at large may not be sympathetic to, say, a Citibank under concentrated attack, and even then the response may be slow and ineffective.

One solution that comes to mind is a variation on William Gibson’s “Black ICE” (i.e. the sort of strikeback capability that’s often pooh-poohed and illegal in the real world.) However, in most virtual communities, there’s not enough of a “pay to play” mechanism to make vandals fear retribution through the loss of their investment, and even if such a thing existed, there’s too much room for abuse (remember, who controls this? Even if there is a governing body, do you trust them?)

Just some thoughts.

 Posted by at 2:28 am
Nov 16, 2006
 

This is a bit past its sell-by date, but Crypto-gram recently carried a story from the Neue Zürcher Zeitung (German article) about a supposed plan by the “Special Tasks Service” (DBA) of the Swiss communications ministry (Uvek) to require Swiss ISPs to assist in infecting Voice-over-IP endpoint PCs with trojans that would enable interception of VoIP communications such as Skype, Vonage and other services.

According to the NZZ, the Swiss company ERA IT Solutions is behind the trojan’s development, although no technical information is given. I especially love the claim that “it’s designed to be undetectable by firewalls or virus scanners.” Or Macs, or tripwire on Solaris, but maybe they can have a chat with Joanna Rutkowska about how to do it. Regardless, F-Secure probably won’t cooperate, and seemed to take a dim view of this toy’s chances of success.

The DBA, created as the Uvek’s “dirty tricks and espionage” department, lists wiretapping among its core tasks. According to Swiss telco law, when to deploy such toys is still within the purview of the local authorities, although data protection and warrant mechanisms are not mentioned. The trojan may apparently be either surreptitiously installed by the police, or through ISPs. Under the threat of coercion, I assume.

More information is at PC Pro. I honestly can’t imagine what the hell ERA’s marketing director was thinking; if I were him, I’d be doing PR damage control like mad now. Needless to say, Keystone Kop trojans don’t seem to be listed on their products page.

 Posted by at 10:16 pm
Nov 13, 2006
 

In a brief follow-up to yesterday, I stumbled on Microsoft’s Windows Vista Security Guide, released November 8.

Interestingly, the thing differentiates between regular workstations (ECs or Enterprise Clients) and “other, more secure stuff” (SSLF — Specialized Security, Limited Functionality.) I guess they’ve figured out that a lot of outfits use XP for “non-end user” types of things *cough* medical data stations *cough*. They do mention that SSLF assumes a Vista-only environment (i.e. no mixed XP-Vista-2000-whatever networks.)

Some customer sites incorporate multiple vendor platforms, and even vendors will probably want to manage multiple platform types, assuming someone comes up with a way to do this reliably via central domain management. One of the problems of medical / IVD device maintenance is the vast variety of customer IT operations policies regarding external vendors. Remember that some of these guys are the ones screaming about adding medical devices (data stations, laboratory information servers, etc.) operated and maintained by vendors’ support staff to their own NT domains.
Some problems I can see right off the top:

  • BitLocker/VEK usage expects either a TPM to be present, or the user to insert/remove an external USB device at boot (at risk of overwriting the startup key)
  • As mentioned previously, good luck on the homogeneous OS environment
  • The domain management structure assumes a sensibly laid out directory structure before you actually start deploying systems and policies. That’s going to be difficult with such a wide variety of environments and machines, see above

To be fair, Microsoft seems to have put a bit of thought into user privilege separation, which will be of use in the maintenance of system integrity with different levels of user access (developers, field service staff, users, etc.) I’m not sure how I feel about Defender; Microsoft itself says that it needs to be checked against other security components for interoperability. That’s going to be interesting in the face of one of medical IT’s main problems: managing large numbers of diverse hosts on different networks. It’s still pattern-based to a large degree, so how that works remains to be seen. And unfortunately, Vista’s firewall doesn’t seem to have made much progress towards being an easily configurable stateful (ipchains-style) firewall that secures against outgoing crap as much as against incoming badness.

Looking more closely at the SSLF baseline, it resembles the sort of “hardening” that frustrated engineers have been doing with XP-abused-as-anything-besides-a-workstation for years. Disabling a bunch of services and providing pre-defined GPOs is a good start, but one that will need auditing itself to make sure it’s not breaking anything.

One of my major dislikes of XP hardening scripts & packages so far is that there seems to be no concise listing anywhere of registry keys that should be disabled or changed, at least when I wrote my last hardening package about 10 months ago. This includes information on what all security-relevant keys do, and which components rely on them. That doesn’t appear to have changed, and will continue to make things difficult for anyone wanting a closer look at how Vista is hardened, including by the pre-defined MS policies. At this point, I think it’s a good idea to continue to treat Windows boxes as something that needs to be protected from remote bad mojo by physically distinct, external network security products.
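What I’d want is exactly that kind of concise, machine-checkable list. As a sketch of the idea, here’s roughly what an auditable checklist could look like; the two registry values shown are real XP/Vista-era hardening settings, but the expected values are illustrative examples from my own notes, not an authoritative baseline:

```python
import sys

# Illustrative (and deliberately tiny) checklist of security-relevant
# HKLM registry values. Not an authoritative hardening baseline.
HARDENING_CHECKS = {
    r"SYSTEM\CurrentControlSet\Control\Lsa": {
        "RestrictAnonymous": 1,     # limit anonymous SAM/share enumeration
        "LmCompatibilityLevel": 5,  # NTLMv2 only; refuse LM and NTLM
    },
}

def audit(checks=HARDENING_CHECKS):
    """Compare live HKLM values against the checklist (Windows only)."""
    if sys.platform != "win32":
        raise RuntimeError("registry audit only runs on Windows")
    import winreg  # imported lazily so the module loads elsewhere too
    findings = []
    for path, values in checks.items():
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            for name, expected in values.items():
                actual, _type = winreg.QueryValueEx(key, name)
                if actual != expected:
                    findings.append((path, name, expected, actual))
    return findings  # empty list means every checked value matched
```

The point is less the two values than the shape: a declarative list you can diff against a live box, audit, and extend as you learn what each key actually does.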

 Posted by at 4:06 pm
Nov 13, 2006
 

This is about 3 months out of date (announced in June — hey, I’m just catching up on my reading) but a colleague just pointed me to an interesting technique designed to subvert Windows Vista security when running on AMD64 CPUs. Named “Blue Pill“, it was developed by Joanna Rutkowska of Singapore security firm COSEINC and circumvents the Vista requirement for runtime code to be signed by moving the running OS inside a hypervisor via AMD Pacifica SVM hardware virtualization, and either disabling OS signature checking entirely or, in the case of what she refers to as “level 2″, completely hiding the memory region where Blue Pill sits.

According to Rutkowska, this is OS-independent; the malware can be injected at runtime through a privilege weakness in how Vista handles paged memory, and is persistent across reboots. Theoretically, this could be ported to Intel VT as well.

George Ou has a ZDNet blog entry that raises the interesting question of detecting this by running timing analysis — apparently, there is a possibility of the malware hibernating itself if a timing analysis is detected. He doesn’t address the possibility of something like just hitting the host in question with constant, random semi-DoS attacks to generate load and thus obfuscate the results of a system timing check. On second thought, I assume any such well-written process would take this into consideration (as the network stack would just be handling additional load within its design parameters.) But as he points out, any malware could just diddle with the system clock anyway.
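The timing-detection idea boils down to a simple statistical comparison. A hypothetical sketch, not real hypervisor detection code (a real check would time an instruction the hypervisor must intercept, such as CPUID, using a trusted clock source):

```python
import time

def timed_samples(op, n=1000):
    """Collect n wall-clock timings (in seconds) of a probe operation."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return samples

def looks_trapped(baseline_mean, probe_mean, factor=10.0):
    """Naive heuristic: an operation the hypervisor must intercept runs
    far slower than its bare-metal baseline. Attacker-generated load (or
    a diddled system clock) degrades exactly this comparison, since it
    inflates both the mean and the variance of the samples."""
    return probe_mean > factor * baseline_mean
```

Which is precisely why the load-generation and clock-diddling objections bite: the heuristic depends on a clean baseline and an honest clock, and the attacker controls both.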

Virtualization.info has an interview with Anthony Liguori titled “Debunking Blue Pill Myth” that doesn’t really go very far towards debunking anything — part of his point is that virtualization under Vista will rely on TPM-based attestation, which is interesting, seeing how a lot of enterprises I’m familiar with actually turn off TPM functionality, especially in laptops due to management issues.

We’ll see, I guess. Very cool though.

More links at

Computerworld

Enterprise IT Planet

 Posted by at 3:01 am
Nov 11, 2006
 

Users are the weakest link of any security solution that is not directly connected to a company network and thus under constant control and surveillance. Most mobile users will not be experienced IT staff with a high degree of technical knowledge, but managers, consultants, investment advisors and other professionals who expect a solution to “just work”. It is safe to say that these users will accept and work with a platform that follows two general guidelines, submitted for your approval:

No unreasonable complications are thrown in the way of the user’s work in the name of security

An excessively strict password policy is a classic example of such a hurdle. Security requirements that inconvenience users, such as having to repeatedly enter long pass phrases, tend to desensitize individuals to the need for information security. Users generally accept and understand the need to comply with security policies. However, at some point many employees start refusing to follow procedures that interfere with their productivity, and write down passwords or take sensitive data home on USB pen drives.

Users are confronted with familiar terminology and interfaces

Most of us prefer to work in familiar surroundings, using tools we are familiar with. Almost all technical experts have been confronted by staff from “the business” wanting to learn as little as possible about new interfaces and functionalities. A typical business computer user is accustomed to certain interface “memes”, and will usually think in terms of these. John’s girlfriend, for example, is a strategy consultant; while she is a highly intelligent and experienced professional (hi!), she understandably prefers to work with the tools that she is accustomed to, which are widely used in her field, and expects a certain look and feel from her laptop and applications.

Users seem generally willing to work with security measures if they are integrated into their accustomed working environment, and not made excessively difficult. The following is an example contrasting technical ideals of stereotypical users and security engineers:

User ideal: Single sign-on, one easy-to-use authentication medium for everything

Why not: Security breach of all assets if SSO credentials become compromised, difficulty of administration

Security pro’s ideal: Different, strong passwords for all applications, frequently expired, users will remember everything

Why not: Users will not remember anything, will write down passwords

In my experience, two-factor authentication, such as an RSA SecurID card or a Smart Card combined with a PIN code, is an example of an acceptable compromise for this particular problem. These can be made to use the Microsoft Windows GINA, or any number of other authentication prompts that use a “look and feel” that users are comfortable with, such as the CheckPoint VPN-1 authentication prompt.

Windows Smart Card GINA

Users “get” this

Furthermore, I’ve seen that multiple authentication steps are accepted if they use a “single set of credentials” (as opposed to single sign-on.) If a user is repeatedly required to enter a PIN code that authenticates a smart card-stored certificate or SecurID card, for access to various applications or data, the task of entering credentials becomes nearly mechanical and is not nearly as odious as the use of multiple passwords. A PIN code is far easier for users to remember than a password, and as an element of two-factor as opposed to single-factor authentication, more secure overall.

 Posted by at 8:46 pm
Nov 11, 2006
 

[Image: cedula-small.JPG] Before arriving in Chile, where I’m spending a year with my girlfriend, I did a bit of research on the information security and compliance landscape in South America. I came up with a single short law in Chile governing the security and integrity of information–”Ley 19628″, dating back to 1999. Uh-oh.

On 28 August 1999, Chile adopted privacy protective legislation. Law 19628 provides a set of detailed guidelines, principles and rules relating to the gathering, use, processing, storage and export of personal data. To be legal, all the above acts require the person’s written consent. The law does not create a data protection authority. Its application is monitored by ordinary courts. Personal data registrars are bound to respect professional secrecy rules. Data subjects are entitled to access and correct the data relating to them and to claim compensation where loss or damage is suffered as a result of the use or disclosure of such data. Infringement of the legislation entails administrative, civil and penal liability. Special provisions apply on financial, commercial, banking and medical data.

Gotta love the absence of a data protection authority. The law also does not specify penalties the way the UK Data Protection Act or Swiss law do. To be fair, I think the Argentines also only have something basic on the books.

Why is this fun? Well, like everyone here, we are in possession of a “cédula de extranjeros”, or a “papers pliss” kind of mandatory national ID card. The “RUT”, which I can only assume was originally some sort of pension information, serves as a universal identifying number. All government agencies are tied into the database containing these — companies also have these, as well as some contracts. It’s used for taxes, pensions, passports, etc. etc. etc.

(Yes, that is a Cedula above; the smudged bit is my RUT, and I’m not going to put you through the agony of my ugly mug more than once on this page.) So, what’s the deal?

The RUT isn’t just used by the government, but by your bank, insurance and other organizations as an ID. Sounds good, except that it’s also your supermarket loyalty ID, your video club membership number, and your identifier for anything you can possibly imagine–it’s given openly over the phone, the Internet (often via unencrypted authentication elements even in SSL-protected pages), to the pizza delivery guy, you get the idea. As it turns out, everyone who asks for your RUT (i.e. everyone) has full access to the RUT database (or whatever it’s called).
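To make matters worse, the RUT’s verifier digit is a public modulo-11 checksum, so valid-looking identifiers can be generated at will; there’s no secret in the number itself. A sketch of the standard algorithm (the sample number below is made up):

```python
def rut_check_digit(number: int) -> str:
    """Compute the modulo-11 verifier digit for a Chilean RUT body.
    Digits are weighted 2,3,4,5,6,7 from right to left, cycling."""
    total = 0
    for i, d in enumerate(reversed(str(number))):
        total += int(d) * (2 + i % 6)  # weights cycle 2..7
    r = 11 - total % 11
    return {11: "0", 10: "K"}.get(r, str(r))

# A sequential scan yields one valid identifier after another:
print(rut_check_digit(12345678))  # "5", i.e. the full RUT 12.345.678-5
```

Since the bodies are handed out more or less sequentially, enumerating the entire plausible RUT space is a loop of a few million iterations, which puts the “RUT as shared secret” practice in perspective.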

Bills of participating enterprises are payable online via two websites, one of which, when I logged in (using my RUT as user ID and a 6-digit numeric password; no more digits are possible, and it only works under IE), let me check out my entire phone history for the month. What’s interesting is that at first I typed in the wrong phone number — and got someone else’s entire call history, along with their name, address and, you guessed it, RUT.

At risk of sounding like I’m scoffing — I’m not, just incredulous — this is in an environment where I’m asked to put two pen dashes across the face of a signed check “for security” because, as we all know, once you’ve written over a check, it can’t be forged. When confronted with the incongruity of this, at least two people I spoke with responded with some variation on “but this is South America / Chile.” It could never happen here.

In the absence of enough time to put together a properly thought-through post, I’ll leave it to you, dear reader, to come up with your own conclusions as to the potential for identity theft once someone cottons onto the fact that English (and extremely poor Dutch and German) aren’t the only languages in which a lot of gullible, not-terribly-technical people do business online.

 Posted by at 2:06 am
Nov 09, 2006
 

Between the two of us, Arjo and I have helped design, set up or run on the order of a dozen public key infrastructures (PKIs). Looking back, every single one of them was somehow embroiled in administrative or organizational infighting, bogged down by uncooperative management, or less-than-optimally functional due to design decisions taken for some sort of political reason. To use a colleague’s phrase, these things seem to attract managers like FOS — flies on you-know-what.

We were recently asked to put together a requirements paper for a PKI that was being re-engineered from the ground up. It occurred to me that this might actually be a stellar opportunity to help a client get things right from the start. So, why do we always get bogged down in PKI-related organizational or management issues?

Obviously, the PKI’s main purpose is management of digital certificates for persons, organizations and technical entities. Certificate keys fall into three general categories, which I’ll use Entrust nomenclature to describe: encryption (obvious enough), signing (you can be sure that what you got from me is what I intended to give you) and non-repudiation (you can be sure that I cannot disavow what I gave you.) So, let’s look at what you might actually use these for:

  • User authentication (such as web sites or to workstations via smart cards)
  • Email and file encryption
  • Secure key exchange (e.g. for phase II IPSEC negotiation) between devices
  • Source code, binary and other data signing (e.g. financial transaction information)

Why is this important? Two reasons: first, your certificate templates, which govern the formats of the certificates and their usage, depend on what you intend to do with certificates. In most certificates, for example, there is a key usage extension (a bit field, usually displayed as a hex value) that encodes this.
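To illustrate how that hex value maps to the Entrust-style categories above, here’s a sketch decoding the first byte of the X.509 keyUsage bit string (bit names are from the PKIX profile, RFC 3280; how vendors label them varies):

```python
# X.509 keyUsage bit names, most significant bit first (RFC 3280 order).
KEY_USAGE_BITS = [
    "digitalSignature",  # 0x80 - "signing" in Entrust terms
    "nonRepudiation",    # 0x40 - non-repudiation
    "keyEncipherment",   # 0x20 - "encryption" (key transport)
    "dataEncipherment",  # 0x10
    "keyAgreement",      # 0x08
    "keyCertSign",       # 0x04 - CA certificates only
    "cRLSign",           # 0x02
    "encipherOnly",      # 0x01
]

def decode_key_usage(flags: int) -> list:
    """Decode a one-byte keyUsage value into its named bits."""
    return [name for i, name in enumerate(KEY_USAGE_BITS)
            if flags & (0x80 >> i)]

# 0xA0 is the classic end-entity combination: sign + encrypt.
print(decode_key_usage(0xA0))  # ['digitalSignature', 'keyEncipherment']
```

Getting these bits wrong in a template is exactly the sort of mistake that only surfaces months later, when some application refuses a certificate for a usage its flags don’t permit.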

Second, and more importantly, your LDAP structure must reflect your organizational structure, certificate usage and a number of other factors from the outset. This is usually the largest bone of contention in PKI design; the classic example is the Jane Doe who marries Bob Smith and changes her last name. So a distinguished name (DN) like C=Albania,O=Bob’s Widgets Inc,OU=Research,CN=Jane Doe is now kind of messy. This can be dealt with by using a friendly name, but you get the idea. Changes in requirements for LDAP attributes and DN structure affect other areas, like host naming, validity of signing and encryption keys, and so on. It gets even more fun when you start doing things like cross-certification, setting up a metadirectory for multiple directories, or integrating existing directory structures.
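The Jane Doe problem is really just the old database sin of using a mutable attribute as a primary key. A toy sketch (the names, DNs and attribute values are invented) of why binding certificates to the CN breaks on a rename, while an immutable identifier survives it:

```python
# The "obvious" design: certificates indexed by full DN.
certs_by_dn = {
    "C=AL,O=Bobs Widgets Inc,OU=Research,CN=Jane Doe": "serial-1234",
}

# Jane marries and her CN changes; the DN-keyed lookup silently fails.
new_dn = "C=AL,O=Bobs Widgets Inc,OU=Research,CN=Jane Smith"
assert new_dn not in certs_by_dn

# Keying on an immutable attribute (an employee number, say) survives
# the rename; the mutable CN becomes just a display ("friendly") name.
certs_by_id = {
    "employeeNumber=4711": {"cn": "Jane Smith", "cert": "serial-1234"},
}
assert certs_by_id["employeeNumber=4711"]["cert"] == "serial-1234"
```

Which is why deciding up front what the stable key in your directory is, before any certificates are issued against it, saves so much grief later.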

The upshot of all this is that a PKI affects all aspects of an enterprise that are in some way involved in authentication, security, encryption, verification of data integrity, etc. — and it has to pretty much be done right from the outset, or you run into problems of application integration, user acceptance and overall cost. We know of several directory projects that had their scope drastically cut down after they were built, when it was discovered that they simply wouldn’t work for some of what they were intended for; someone obviously hadn’t done their homework.

Most tragically, sometimes design decisions are taken or changed late in the development phase. I will recount one infamous experience of a client who, based on a few casual observations our team made, called a stakeholder meeting consisting of a room full of 35 people shouting at each other, and reversed his entire enterprise directory strategy _twice_ in the course of two hours. We’d done the underlying design and recommended a certain set of project technology several months previously, but were ignored due to political reasons. In this meeting, our group just tried to remain rational and calm, and to give information when prompted, but in the end we realized there was just nothing to be done and snuck out. It was telling that nobody noticed.

Probably, everyone who’s affected will want a piece of the discussion, and we all know how well design by committee works. Many managers also seem naturally threatened by (or nervous about) any technology that effects a lot of blanket organizational and procedural change — a perfect characterization of an enterprise PKI. It doesn’t help that such products sometimes carry the stigma of taking away autonomy from other groups; at some level, any infrastructure component using PKI certificates must follow certain standards and procedures, there’s just no way around that. This is often seen as a loss of self-determination, or it may require technical effort, and who’s going to pay for that?

So, how does one deal effectively with this? A lot of good PKI design practice involves some pretty common-sense elements:

  • A knowledgeable, strong project manager who can organize management buy-in and support
  • A small design team who’ve done this before and have a clue what they’re doing (good luck!)
  • Individual discussions with potential stakeholders about what they want and how their concerns can be addressed
  • A _lot_ of time spent on research and design — use the “hire a 6-year-old to spot any obvious flaws in your plan” model liberally here
  • A clear idea of budget responsibilities in PKI deployment — but not to the end that every affected team will try to pawn off even minor changes on the PKI budget

And last but definitely not least, a really good LDAP guy; you won’t regret it.

p.s.:  If you’re interested in reading more about non-repudiation, including how it differs from the usual technical usage of “signing”, have a look at Non-Repudiation in the Digital Environment.

 Posted by at 8:21 pm