Just one week after Sir Iain Lobban, the head of GCHQ, told BBC Radio 4 that UK government and industry networks are subjected to 70 cyber espionage operations per month, we hear that the London 2012 Olympics could have been another target. Oliver Hoare, head of Olympic cyber security, told the same programme he was woken at 04:45 on the morning of the opening ceremony by a call from GCHQ warning of a "credible attack" on the electricity infrastructure supporting the Games.
Happily it turned out to be a false alarm, but not one to be dismissed lightly because, according to Hoare, they had “tested no less than five times the possibility of a cyber attack on the electricity infrastructure”. Attacks thus avoided are reassuring, but can also encourage complacency while the stakes keep rising.
When South Houston's water supply network was hacked, at first it looked like a return to the good old days – when systems were hacked not as a cyber war threat or for criminal gain, but simply because a youngster wanted to show off. The Department of Homeland Security was able to claim: “At this time there is no credible corroborated data that indicates a risk to critical infrastructure entities or a threat to public safety.”
The hacker did not agree: “This was stupid… no damage was done to any of the machinery; I don't really like mindless vandalism. It's stupid and silly. On the other hand, so is connecting interfaces to your SCADA machinery to the Internet… This required almost no skill and could be reproduced by a two-year-old with a basic knowledge of Simatic,” he posted.
The hacker’s aim was not to cause mayhem but to spotlight the vulnerability of industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems deployed across critical networks: not just public utilities but also chemical plants and other vulnerable operations. This is not just about financial loss – water has a direct impact on a community’s health, and a compromised supply could lead to loss of life; gas leaks could cause widespread damage; and the social impact of a chemical or nuclear plant leakage could be devastating.
Bear in mind also that the South Houston incident was a very simple attack compared with Stuxnet – a sophisticated worm designed to wreak havoc with SCADA systems. Stuxnet was reckoned to have caused severe physical – not just data – damage to Iranian uranium enrichment facilities, setting back the nation's nuclear weapons programme by years. This was because the attack meant that actual physical repair work was needed, not just rebooting IT systems.
One of the main challenges in protecting these critical networks is the fact that they are designed to fulfil a purpose – usually to replace widespread manual operations with central control – and so were not initially planned with cybersecurity in mind. Instead, security solutions tend to be layered on in a piecemeal fashion after the networks become operational – with a bias towards keeping out intruders or non-approved access, rather than more subtle threats from the Internet. In addition, routers may be left running with factory settings, such as the default “administrator” password, that work just fine but leave the system vulnerable.
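The default-credential problem is one of the easiest to audit for before a network goes live. A minimal sketch of such an audit is below; the device inventory, field names and list of well-known defaults are invented for illustration, not data about any real installation or vendor.

```python
# Sketch: audit a device inventory for factory-default credentials.
# KNOWN_DEFAULTS and the inventory format are illustrative assumptions.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("administrator", "password"),
    ("root", ""),
}

def find_default_credentials(devices):
    """Return the names of devices still using a well-known factory login."""
    return [
        d["name"]
        for d in devices
        if (d["user"].lower(), d["password"]) in KNOWN_DEFAULTS
    ]

if __name__ == "__main__":
    inventory = [
        {"name": "substation-router-1", "user": "Administrator", "password": "password"},
        {"name": "scada-gateway-2", "user": "ops", "password": "correct horse battery"},
    ]
    # Flags substation-router-1 as still on its factory login.
    print("unchanged defaults:", find_default_credentials(inventory))
```

In practice such a check would run against exported device configs rather than a hand-written list, but the principle – compare every live login against a database of known defaults – is the same.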
Adding layers of protection results in complexity, and it demands very thorough testing to reveal the inevitable gaps in security, as well as possible unforeseen performance glitches that could also arise and even cause as much damage as a hacking attack.
In theory a critical network could be totally independent of the Internet, running on its own dedicated cabling across the nation. In practice it often makes sense to use existing telecommunication lines, and when it comes to connecting individual homes and far-flung sites it makes better sense still to connect via the ubiquitous Internet – broadband on tap as a “utility” – than to lay new cables. In fact the latest installations increasingly bypass wired connections altogether in favour of the flexibility and simplicity of connecting wirelessly via mobile networks.
In the case of public utilities there is another reason to connect via the Internet rather than a closed network: in keeping with anti-monopoly legislation, this allows the customer the flexibility to switch between providers without notice. The smart grid must also allow direct and immediate access for the new provider of choice. A similar problem has emerged in the enterprise, where previously independent critical systems such as fire and burglar alarms, smoke detection, and industrial control systems increasingly run across the corporate IT network.
So connectivity via public networks is exactly where the greatest vulnerability can arise. Even if an attempt is made to quarantine the smart grid from the Internet, it is not easy to maintain that state. Corporate IT networks have a similar problem: for years they have been facing the challenge of “network permeability”. Whereas the earliest networks consisted of isolated computers linked by cables, today’s networks have to cope with mobile staff plugging in their laptops anywhere on the network, as well as wireless access from smartphones and data transfer via USB memory sticks.
So, instead of trying to seal the network from the Internet, the main focus has been on ways to run secure services over the Internet. A host of solutions are available, including firewalls, intruder detection systems and deep packet inspection devices to examine all the traffic on a network and look for anomalies and cyber threats.
These forms of protection are very necessary, especially for a nation’s critical infrastructure where so much is at stake. Don’t be fooled by the argument that public utilities rely on highly customized systems, with no two alike, so that hacking them would be impossible without insider knowledge. Hackers have long known how to get such knowledge and, according to US Defence Secretary Leon Panetta: “We know of specific instances where intruders have successfully gained access to these control systems. We also know they are seeking to create advanced tools to attack these systems and cause panic, destruction, and even the loss of life.”
However, as everyone with experience in IT networks knows, every addition to a network, however necessary, increases its complexity and makes it harder to predict. So the real challenge for any national smart grid is this: how can we secure a highly complex system?
It is easy to underestimate this challenge, unless the engineers managing the grid have long experience with complex IT networks. This is exactly the sort of experience that telecommunications and IT network engineers have gained over decades. Their approach is first to design in all the safety and security features that are needed, and then to submit the whole system to rigorous testing under realistic operating conditions, extreme loads and attack situations – making sure it is secure both against attack and against unforeseen circumstances. The same process also allows fine-tuning of the network for optimal performance.
A similar level of testing is critically important for the sort of smart grids being planned for our national utility infrastructure. The good news is that there are companies that have long experience in testing IT and telecoms networks, and sophisticated tools are available to facilitate testing of highly complex networks under “real world” conditions.
The first lesson from their experience has already been suggested: security testing is not enough – you also need performance and scalability testing. This is because a complex network can develop surprising problems under diverse loads. A telecoms network, for example, might be able to handle hundreds of gigabits of data per second during file transfers and yet fail at a much lower bandwidth when handling a mix of different types of traffic – eg video and voice over IP. So it is necessary not just to test the network’s maximum data capacity but also to test how it performs under a whole range of realistic traffic scenarios and combinations of traffic – and then, for example, set QoS or traffic policies to optimise and secure performance.
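The point that a mix can fail below the raw bandwidth limit can be illustrated with a toy model. The sketch below uses a simplified queueing-delay formula and invented numbers; real test tools replay genuine protocol mixes rather than computing a formula, so treat this purely as an illustration of why latency budgets, not just throughput, decide feasibility.

```python
# Toy model: why a traffic mix can fail below the raw bandwidth limit.
# The delay formula and all numbers are illustrative assumptions.

PACKET_BITS = 12_000  # roughly one 1500-byte packet

def queueing_delay_s(capacity_bps: float, offered_bps: float) -> float:
    """Approximate delay to drain one packet of residual capacity."""
    residual = capacity_bps - offered_bps
    if residual <= 0:
        return float("inf")
    return PACKET_BITS / residual

def mix_feasible(capacity_bps: float, flows: list) -> bool:
    """Total load must fit, and every latency-sensitive flow must meet its budget."""
    total = sum(f["bps"] for f in flows)
    if total >= capacity_bps:
        return False
    delay = queueing_delay_s(capacity_bps, total)
    return all(delay <= f.get("delay_budget_s", float("inf")) for f in flows)

CAPACITY = 10e9  # a 10 Gbit/s link

# A pure bulk transfer at 9.5 Gbit/s passes: there is no latency budget to break.
bulk_only = [{"bps": 9.5e9}]

# The same link fails on a *lighter* mix: 9.4 Gbit/s video plus 0.1 Gbit/s
# voice with a tight 10-microsecond delay budget.
mixed = [{"bps": 9.4e9}, {"bps": 0.1e9, "delay_budget_s": 10e-6}]

if __name__ == "__main__":
    print("bulk only feasible:", mix_feasible(CAPACITY, bulk_only))
    print("mixed traffic feasible:", mix_feasible(CAPACITY, mixed))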
Beginning with security testing: this should include three main stages. First comes the sort of assessment you would expect from security solution vendors: an experienced eye looking over the existing or planned grid for obvious weak points or vulnerabilities, with suggested ways they can be protected.
The second stage is to simulate actual attacks under real-world operating conditions. Today’s sophisticated test tools can not only simulate all combinations of normal operating conditions but also combine these with state-of-the-art malware attacks. Especially relevant to a national utility grid are the so-called Denial of Service attacks that could cut off users from the service and cause widespread panic. Today’s most advanced test tools are integrated with a cloud database that is kept up to date with every new attack or virus as they occur – rather than waiting days or weeks for patches to be distributed.
The third stage is to explore further for unknown vulnerabilities, and today’s smart test solutions have the flexibility to allow very detailed testing around the boundaries of normal operation. For example: what happens when a system requires a long pass-code to be input and an operator mistakes a capital ‘O’ for a zero? Does it simply report an error, or does the wrong type of character crash the system? The best test solutions allow for “fuzz testing” – testing such variations from normal behaviour to anticipate problems that might accidentally occur.
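The ‘O’-for-zero example above can be sketched as a tiny fuzz harness: mutate a valid pass-code with the kinds of slips an operator actually makes and check that the validator rejects them gracefully rather than crashing. `validate_passcode` here is a hypothetical stand-in, not any real grid component's API.

```python
# Minimal fuzz-testing sketch: confusable-character mutations of a pass-code.
# The validator and the sample code "X9O-41lB" are invented for illustration.

CONFUSABLES = {"O": "0", "0": "O", "l": "1", "1": "l", "I": "1"}

def mutations(code: str):
    """Yield every single-character confusable swap of a pass-code."""
    for i, ch in enumerate(code):
        if ch in CONFUSABLES:
            yield code[:i] + CONFUSABLES[ch] + code[i + 1:]

def validate_passcode(code: str) -> bool:
    """Hypothetical stand-in validator: accepts exactly one known code."""
    return code == "X9O-41lB"

def fuzz(valid_code: str) -> list:
    """Feed each mutation to the validator; a crash is itself a test failure."""
    rejected = []
    for candidate in mutations(valid_code):
        try:
            if not validate_passcode(candidate):
                rejected.append(candidate)
        except Exception as exc:  # a wrong character must never crash the system
            raise AssertionError(f"validator crashed on {candidate!r}: {exc}")
    return rejected

if __name__ == "__main__":
    bad_inputs = fuzz("X9O-41lB")
    print(f"{len(bad_inputs)} near-miss inputs, all rejected cleanly")
```

Commercial fuzzers generate far richer mutations (truncation, encoding tricks, protocol-level malformation), but the pass/fail criterion is the same: wrong input must produce a clean error, never a crash.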
When we come to performance testing: today’s networks have to carry many types of traffic – data, video, voice, control signals etc – with a range of protocols. It is not enough just to know the maximum bandwidth capacity; you also need to know how the network behaves under a whole spectrum of operating conditions. It is also necessary to allow for problems that arise with scale, as a smart grid across a nation is an enormous system comparable with the Internet itself, and this can raise issues around timing control across the system. However, the right test solution in the hands of an experienced network test engineer will be able to test the network to all its limits, and provide clear reports to show where problems could occur.
The ideal is, of course, to have a grid that can handle every operating condition and survive any type of attack or local fault, but this is hardly realistic. The real value of a comprehensive network test report is often that it spells out the system's limits, rather than saying it is perfect. So, during a crisis, when a certain type of traffic is surging, the grid operators know where the danger point lies and can take precautionary steps before that point is reached.
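Turning a test report's measured limits into operational alarms is straightforward once the limits are known. A minimal sketch, with invented limit figures standing in for what a real report would supply:

```python
# Sketch: alarm before traffic reaches a measured danger point.
# LIMITS and the 80% headroom factor are illustrative assumptions; a real
# deployment would take these numbers from the network test report.

LIMITS = {"voip_sessions": 50_000, "video_gbps": 8.0}
HEADROOM = 0.8  # raise the alarm at 80% of the measured limit

def check(metrics):
    """Return the metrics that have crossed their precautionary threshold."""
    return [k for k, v in metrics.items()
            if k in LIMITS and v >= HEADROOM * LIMITS[k]]

# 42,000 VoIP sessions is past 80% of the 50,000-session limit; video is not.
print(check({"voip_sessions": 42_000, "video_gbps": 3.1}))
```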
The moral is this: critical infrastructure is being laid down across the nation using technology that has been developed over decades by the telecoms and IT network industry. The incentive is that it works, it saves money, it allows rapid response and it offers the potential to develop smart, highly efficient services. So networking is stepping out into the public arena in a big way – but what is still being overlooked is that the network industry has also gained a lot of experience in anticipating vulnerabilities, and in testing and securing such networks.
These are skills that need to be applied to these public, and critical, networks prior to deployment. It’s the only sure way to avoid “deploy and pray”.

Tim Cruddas is Director EMEA at Spirent Communications.