Information security

Keeping things secure is often associated with keeping things secret. "Don’t tell anyone how the locking mechanism of the vault works, that will make it harder to break into it". The smart thief preparing a bank heist will of course take one of the engineers who designed the vault out for drinks and get him to spill the beans after a few bottles of something. The idea of keeping things secret to keep things secure is known as ‘security by obscurity’ and it never works. This is because it is very hard to keep secrets when many people who know the secret (because they designed the vault for instance, or maintain it, or operate it) are just walking around being their normal human selves. People like talking about their work or may have a grudge against a former employer or colleague. Obtaining classified information is often a matter of just asking nicely (possibly while pretending to be somebody else). This is known as social engineering.

Because it seems counter-intuitive to say that the idea of "keeping things secret to keep them secure" does not work, the concept is almost impossible to eradicate.

When the steel vaults became communications devices and computers, these old ideas persisted even though they have been thoroughly disproven time and time again. In 1883 the Dutch cryptographer Auguste Kerckhoffs von Nieuwenhoff published a series of ideas about intrinsically secure information storage and communications by telegraph (the high-tech device of the day). His basic position was that the only secret in a secure system should be the key, and that all other components must be open for audit by as many experts as possible. This idea has been proven to work time and time again and remains true to this day.
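This principle is exactly how modern cryptography is built: the algorithms are published and audited by anyone who cares to look, and all of the secrecy is concentrated in the key. As a minimal sketch (assuming the third-party Python ‘cryptography’ package is installed; the message text is of course made up):

```python
# Kerckhoffs' principle in practice: the cipher (here Fernet, built on the
# openly specified and heavily audited AES) is public; only the key is secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the ONLY secret in the whole system
cipher = Fernet(key)             # the 'recipe' itself is open source

token = cipher.encrypt(b"attack at dawn")
print(token)                     # safe to show an attacker: useless without the key
print(cipher.decrypt(token))     # b'attack at dawn'
```

Publishing the algorithm costs nothing, because its security never depended on being hidden; losing the key, on the other hand, is fatal.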

History’s lessons

The German Navy apparently did not read von Nieuwenhoff’s work, because the design of their cipher machine Enigma was based on the premise that its inner workings could be kept a secret from the Allies. This may be possible when there are only two or three devices and all are kept inside military installations, but once you start putting hundreds of them on board submarines the chance of one of them being captured steadily increases.

The story of the capture of Enigma by British intelligence, and of how the Allies fooled the Germans about the cracking of the Enigma codes, is one of the great unsung stories of how World War II was won. After misleading German intelligence into thinking the submarine U-110 had sunk with its Enigma on board (in reality the machine was retrieved by the crew of HMS Bulldog), British intelligence was able to keep the Germans convinced that their system was secret and thus secure. The German Navy and Wehrmacht kept using the system for several more years while the Allies were reading their mail. This interception and decryption happened pretty much in real time thanks to the early computers that were being built by the people at Bletchley Park (aka Station X). The ability to intercept and decrypt most German communications shortened the war by an estimated two years and was key to the success of D-Day. For the Germans, of course, trusting security-by-obscurity pretty much cost them the battle for the Atlantic and thus the war on the Western front. (A great introduction to both the basics of cryptography and the history of WWII information warfare is Neal Stephenson’s page-turner Cryptonomicon.) Since the secret has been out for a while, you can now download your own paper Enigma.

This rather lengthy introduction and history lesson is relevant today because companies and governments have learnt nothing from all this and continue to make the same mistakes as the Germans 65 years ago. And we get stuck with insecure systems that cannot protect us, our information or our money.

Some recent examples of the consequences of this kind of thinking are serious; others are merely funny.

So can we make systems secure, or at least secure enough? The answer is maybe. It depends on the application, the acceptable cost and, most of all, the end users of the system.

Open security, it’s the only way

It is now very broadly agreed by security experts worldwide that the only way to create reasonably secure systems is to have an open design and development process. This is the exact opposite of the vault manufacturer trying to keep the inner workings of the locking mechanism secret. In an open process all available data on the design and its actual implementation are shared as quickly as possible with as many experts as possible. This allows all those experts to study both the design and the implementation and point out possible mistakes and weaknesses to the people building the system. With many more brains working on the problem, the end result is generally better than when a few isolated individuals are working alone.

In software engineering this method has become known as ‘Open Source’. This refers to the public availability of the ‘source code’ of a computer program – the ‘recipe’ for making the actual software. Eric S. Raymond, one of the founders of the Open Source Initiative, formulated it in his essay ‘The Cathedral and the Bazaar’: "given enough eyeballs, all bugs are shallow". The idea being that any software engineering problem can be solved if enough different software developers work on the issue.

What Eric Raymond did was to reformulate a much older method for solving tough problems: the ‘scientific method’, or ‘peer review’. This is the formal method by which scientists keep tabs on each other’s work and challenge each other’s thinking. It is by no means a perfect system, but the scientific method generally gets results. As a reader you are using dozens of them right now.

Information security, like many scientific problems, is very, very hard. Getting many people to work on the problem with you or for you is still the best way to ensure your system has a fighting chance. As von Nieuwenhoff suggested 125 years ago: the only thing that needs to be secret about an information system is the key one uses to gain access; everything else should be open to peer review to enable permanent scrutiny.

The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-reviewed set of testing methodologies that can be used as a framework for assessing the strengths and weaknesses of information systems, protocols, or physical things like buildings.

Practical application

Knowing theoretically how secure systems should be put together is just a first step towards actually securing anything.

In order to do information security properly, it is first and foremost important to understand what it is we are trying to protect. One of the most used models is that of the CIA triad. This sounds a lot more ominous than it is. The letters C, I and A stand for Confidentiality, Integrity and Availability. These are the three critical aspects of information that need to be balanced in order for non-public information to have value for an organisation.

Firstly, the information needs to remain confidential. This may be because it’s a strategic trade secret, private data about an organisation’s client (say their bank balance), or a military secret such as real-time troop deployment information. But this confidential information is only valuable if the people using it to make decisions can trust its validity. This is where Integrity comes in: the information needs to be complete and correct.

Both of these are easy enough to achieve. Simply store the information with the best encryption available, then switch off the computer, unplug it from the network and bury it somewhere. No-one will be able to copy or alter the data, so its confidentiality and integrity are guaranteed. The problem, of course, is that if no-one can access it, it might as well not be there at all. The third aspect, availability, brings all kinds of headaches. For the information to have value it must be available to the people who need it when they need it. The three aspects are in permanent conflict with each other, and good IT security means making the right trade-offs depending on the type of information and the way it is utilised.
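To make the triad a little more concrete, here is a minimal sketch (standard-library Python only; the record contents, key handling and policy are illustrative assumptions) of how the Integrity leg can be checked with an HMAC. Confidentiality would additionally require encrypting the record, and availability is about keeping the verified copy reachable (backups, replication, redundant systems), which no single snippet can demonstrate:

```python
# Integrity check with an HMAC: any change to the stored record invalidates
# the tag, so readers can detect tampering before trusting the data.
import hmac
import hashlib
import secrets

key = secrets.token_bytes(32)                 # secret key, stored separately
record = b"balance=1000;owner=alice"          # made-up example record

# Tag computed when the record is written.
tag = hmac.new(key, record, hashlib.sha256).digest()

def is_intact(data: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(is_intact(record, tag, key))                        # True
print(is_intact(b"balance=9999;owner=alice", tag, key))   # False: tampered with
```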

This classic and somewhat static view has been augmented by the work of security guru Bruce Schneier, who has stated that information security is about protecting information as well as is reasonably possible, detecting breaches in that protection, and ensuring a timely and adequate response to any breach that is detected. This additional view allows for a much more flexible approach to securing data ‘well enough’ at a certain cost, while taking calculated risks that are considered acceptable in terms of the cost trade-off.

Combining both models gives a good framework for implementing technology to safeguard information as required, while keeping it available to those who need it.

Awareness and training

Having the best technology does not make anything secure if the people using it do not know how to use it properly or are not motivated to follow the necessary procedures. Awareness among the users of any system about the importance of security is the foundation of any secure environment. Without it, all other efforts are useless. The most expensive network security equipment on the market can be defeated by passwords being shared (loudly!) with entire departments, including people not employed by the organisation in question. If people do not know or care about the importance of procedures, no technology can save you.

Beyond being motivated to do the right thing, people need to know what the right thing is. Often end-user training is bought on the cheap because the budget has been spent on state-of-the-art technology. In my opinion at least half of any security budget should be spent on awareness creation and training of the end users of any system. It’s the combination of good technology and people who are empowered, able and motivated that leads to secure environments. Privacy and security are the result of having the CIA triad supported by good behaviour and well-implemented and maintained technology (so even the technology side is at least partly about people, since keeping the tech working properly depends on skilled and motivated people too).

Audits and what to do with them

When all this technology, these procedures and this knowledge are in place, there need to be regular checks to see whether they are being used properly. In other words: audits. Some of these may be automated (such as checking whether users have sufficiently strong passwords and whether they change them often enough), while other aspects may have to be checked by internal or externally-hired human auditors. A good way of exposing weaknesses in the total combination of technology, procedures and people is often to let outsiders test the defences by trying to defeat them. This is referred to as penetration testing, and you can hire specialised companies to do it for you. These types of audits are meant to be educational, so organisations should only do them if they are willing and able to spend the time to learn from them. Otherwise they are a waste of time (or a very expensive form of entertainment).
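As an illustration of how simple the automated part of an audit can be, here is a hypothetical sketch (the account records and the 90-day policy are made-up assumptions, not a real tool) that flags accounts whose passwords are older than the policy allows:

```python
# Hypothetical automated audit rule: report accounts whose passwords have not
# been changed within the maximum allowed age.
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)   # assumed policy; adjust to your own

# In a real audit this data would come from the directory or identity system.
accounts = [
    {"user": "alice", "last_changed": date(2024, 1, 10)},
    {"user": "bob",   "last_changed": date(2023, 6, 2)},
]

def overdue(account: dict, today=None) -> bool:
    """True if the account's password is older than the policy allows."""
    today = today or date.today()
    return today - account["last_changed"] > MAX_PASSWORD_AGE

for acct in accounts:
    if overdue(acct):
        print(f"AUDIT: {acct['user']} last changed their password on "
              f"{acct['last_changed']}")
```

The point is not the code itself but that such checks can run continuously and cheaply, leaving the human auditors free for the parts that cannot be automated.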

All this does not mean there should be no consequences to failing audits. If the impact of breaking rules is severe, there should be severe consequences. This needs to be clearly communicated to all stakeholders and be made a part of terms-of-employment and other relevant legal documents. It’s no good telling employees afterwards that they should have paid more attention to this procedure or that – tell them upfront.

Some leadership required

Our litmus test when talking to any organisation about security is always: do the rules apply to everyone? Or only to the people making up to twice the minimum wage? In many places basic protocol is being violated by highly-paid (and scarce) professionals without any corrective action by senior management – even when the possible consequences of this behaviour are known. The logic appears to be that it’s just impossible to herd cats anyway and those individuals are crucial to the primary process of the organisation, so they are allowed to get away with it. This encourages everyone else to start circumventing the rules as well, and soon things start falling apart until there may as well be no rules at all.

So security guidelines need to be implemented from the top down and those at the top need to lead by example. Leading by example is generally a good approach if you need to motivate people to put up with a little inconvenience to achieve an abstract goal.