
The Future of Cyber-Warfare and Cyber-Security – Part II

Wikipedia defines Cyber Warfare as “politically motivated hacking to conduct sabotage and espionage.” In its simplest form it can be an attempt to degrade service on another nation’s digital assets using what is known as a Denial of Service (DoS) attack. Other, more malicious attacks fall into the category known as malware, the best-known example of which is Stuxnet, a virus discovered in 2010 that mainly affected Iran’s nuclear centrifuges. More recently, Symantec reported a piece of malware named Regin, which collects information on companies, government agencies, and other targets and had gone undetected for the past five years.

According to DARPA (the U.S. Defense Advanced Research Projects Agency), “Cyberspace is now recognized as a critical domain of operations by the U.S. military and its protection is a national security issue.” (Plan X) The United States is treating Cyber Warfare just like any other military operation, and other nations are undoubtedly doing the same. In fact, the requirements for DARPA’s Plan X read more like the specification for an advanced weapons system than for a computer system. But of course, in reality it is both.

Among other things, Plan X has been prototyping a tool that allows an operator (think hacker) to visualize a network in a virtual reality environment. The idea is to make the hacking experience more immersive: instead of sitting at a keyboard, the hacker attacks enemy resources as if in a video game. This is the future of cyber warfare. Once again, science fiction books and Hollywood have predicted the future sooner than everyone else. Films such as Swordfish depicted hackers using visualization techniques to crack the world’s best crypto. Before that, Tron told the story of a hacker who gets immersed in a video game. In 1982!

The U.S. Cyber Command is maturing, and although it is less than five years old, it is quickly becoming the hub of new cyber technologies for the U.S. government. Plan X gives us a tiny yet illuminating glimpse into the future.

The Future of Cyber-Warfare and Cyber-Security – Part I

I am speaking at the Cloud Expo in Santa Clara this week on The Future of Security in the Cloud, so I have decided to lay out what I believe will be a few of the biggest concerns in internet security over the next few decades. I will return to my quest for intelligence in a few weeks.

The world we live in is changing rapidly, and the pace is only going to accelerate. The impact of these changes will be immense; the changes in technology and the way we use it will affect each and every one of us. One of the biggest will be the impact on our online security: the ubiquity of online devices, coupled with the use of intelligent machines, will change the security landscape forever.

Cyber Warfare has been around for years. It has probably existed since we started connecting computers to the Arpanet back in 1969, when that federally funded forerunner of the internet carried the first message across a distributed network. Back then we didn’t have firewalls; there was nothing to monitor suspicious activity. It was a trusted environment in which nobody had any reason to believe their data might be compromised or their facility broken into by pranksters, criminals, or government agencies. Cryptology had been advancing since before World War II, far beyond the complexity of the ciphers used in ancient Rome. Over the last three decades of the twentieth century we saw incredible advances in technology, giving rise to the current generation of the internet and what we now know as the World Wide Web. At the same time we saw the development of complex machines capable of waging war from thousands of miles away, conducting surveillance from hundreds of miles above the earth, and reaching deep into the innermost thoughts of companies and private citizens via the information stored on their computers.

At this rate there will be virtually no limit to what can be known about anyone or anything. If someone wants to know what food you have in your refrigerator, they will skim that information off the next-generation net. What some people refer to as SkyNet, as an homage to the Terminator films and suggestive of the perils it may bring to humankind, will bring with it vast potential: potential power, potential productivity, and potential abuse. Guarding our digital assets by guarding a single endpoint, like the drawbridge to a castle, will no longer be feasible (it’s actually not working all that well now). As long as our defenses rely on trying to identify the bad guys and stop them as they come through the door, failure will be inevitable. Almost all security breaches in the past few years have been due to vulnerabilities in web or mobile applications; best estimates suggest about 86%. That means most breaches could be avoided simply by writing application software that didn’t have bugs in it. Of course this isn’t as easy as it sounds. It is generally assumed that all software has flaws; there is no such thing as bug-free software. After all, we are only human.

But what if software wasn’t written by humans? And what if networks weren’t configured by humans? We have already seen widespread use of computers and other machines to improve quality in many industries, from heavy manufacturing to electronics. Surely we can bring the many years of knowledge and experience gained from quality engineering in other fields to the software industry. As computer-driven engineering becomes more pervasive, we are able to build products of all shapes, sizes, and degrees of complexity with better results: better quality, better time from drawing board to production, better flexibility and customizability. We have used computers for years to gain productivity in software design, implementation, and even testing, and the advantages are astounding. One person writing code line by line, at a rate of less than one hundred lines per day, might take a year to write even a relatively simple non-graphical application. Today, through extensive use of modular, well-specified APIs, one person with a good understanding of software development can design and create a small but useful application in a day.

As this continued evolution of creating applications through highly automated and very mature toolsets begins to integrate design, implementation, and testing, we will see a new level of maturity in the field of application security assurance. No longer will we need to write code while checking a list of dos and don’ts for secure coding. The need to have someone test our code as a last gateway before it rolls off the production line will be a footnote in history. No doubt this sounds incredible. Sending your newest mobile application up to the online store without running it through a final Quality Assurance (QA) test to be blessed sounds like taming lions while wearing a blindfold, right? But if we know the software is built right, we really don’t need to test it one more time, do we? After all, our QA process doesn’t do anything but test for known vulnerabilities. We have a long list of ‘things that could be wrong’ and we try to identify whether any of these mistakes have made it into our application, as the sketch below illustrates. Isn’t there a better way to accomplish this?
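To make that concrete, here is a minimal sketch, in Python, of what such a checklist-style QA scan looks like: walk the source tree and flag anything that matches a short list of known-bad patterns. The rule names and patterns are hypothetical illustrations of my own, not a complete or authoritative rule set, and any real scanner is far more sophisticated.

```python
import re
from pathlib import Path

# Hypothetical "list of things that could be wrong" for Python code.
# Real rule sets are much larger and far more nuanced.
KNOWN_BAD_PATTERNS = {
    "eval() on dynamic input": re.compile(r"\beval\s*\("),
    "subprocess call with shell=True": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, rule name) for every match under root."""
    findings = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in KNOWN_BAD_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, rule))
    return findings

if __name__ == "__main__":
    for file, lineno, rule in scan_tree("."):
        print(f"{file}:{lineno}: {rule}")
```

This is exactly the “last gateway” model: the whole application is checked, after the fact, against a list of mistakes we already know about.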

Now I must confess to a bit of sleight of hand here. I’ve been saying that in the future it won’t be necessary to send software to that final all-inclusive QA test before releasing it. But I didn’t say the software wouldn’t be tested at all; it just won’t be tested the way we test software today. The key is to validate our code as it is written.

Think of it this way. If we create an application and it has a security vulnerability that we can identify during QA testing, then that vulnerability exists because of a specific fragment of code. Before that fragment was introduced, the vulnerability didn’t exist; as soon as we add it, the vulnerability does. So all we have to do is identify that fragment of code the moment it is added to our application. It’s that simple. And this simple but arduous task is precisely what computers are good at, and they are getting better at it all the time. For any known security vulnerability (remember, that’s all we have been testing for) we simply check every fragment of code as it is added to our application, as sketched below. It couldn’t be simpler.

OK, once again I have made a statement that is not quite accurate. Security vulnerabilities aren’t generally the result of a single self-contained fragment of code. More often they are due to the way multiple fragments of code are connected to each other; in other words, they depend on the context of the code fragment. But that doesn’t change anything except the number of fragment combinations the computer needs to identify and the difficulty of specifying those combinations. As the ability to automate software development improves, including testing for security flaws, we will see less and less need for human involvement in writing and testing code. In fact, computers will be much faster and produce better results, making the manual aspects of software development an anachronism.
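A minimal sketch of that incremental check, again in Python and again with hypothetical rules, is a pre-commit-style hook that inspects only the lines being introduced by the current change rather than re-scanning the whole application. It assumes it runs inside a git repository; a real analyzer would also look at the surrounding context, not just single lines.

```python
import re
import subprocess
import sys

# The same kind of hypothetical rule set as before, applied only to new code.
RULES = {
    "eval() on dynamic input": re.compile(r"\beval\s*\("),
    "string-built SQL query": re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b.*(%s|\+)", re.IGNORECASE),
    "hard-coded password": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
}

def added_lines() -> list[str]:
    """Return the lines added in the currently staged change."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]

def main() -> int:
    violations = [
        (rule, line.strip())
        for line in added_lines()
        for rule, pattern in RULES.items()
        if pattern.search(line)
    ]
    for rule, line in violations:
        print(f"blocked: {rule}: {line}")
    return 1 if violations else 0  # a non-zero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```

Installed as a pre-commit hook, a check like this flags the offending fragment at the moment it enters the code base instead of at the end of the release cycle.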

There may be someone reading this who has written some sort of code-checking program, or perhaps a full-blown scanning engine that searches for security vulnerabilities in code, and who is saying right now, “That’s not possible. It isn’t that easy! This is a very complex problem.” You are correct. This, like many challenges that arise in developing good, robust software, is a problem that seems beyond the reach of a fully automated solution. But there is only one reason for that: we are only human. As the responsibility for creating software and the tools that test it shifts more and more to computers, we will see a shift in the ability to create bug-free software.

Automating application security testing will not be an option; it will be a necessity. Face it, computers are better at some things than humans. Hiring security testers to manually test your application will be a thing of the past; they simply will not be able to keep pace with the rate of technological change. The only way to thoroughly test applications is to leverage the application security expertise of a human empowered by best-of-breed automated software testing tools.
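As one last hedged illustration of that pairing, the snippet below takes the findings produced by the scanner sketched earlier (scan_tree is assumed to live in a hypothetical module named checklist_scan) and summarizes them by rule, so a human security expert can spend review time on the noisiest problem areas instead of reading raw output line by line.

```python
from collections import Counter

# Hypothetical module containing the scan_tree() sketch shown earlier.
from checklist_scan import scan_tree

def summarize(root: str) -> None:
    """Group automated findings by rule so a human can triage them quickly."""
    findings = scan_tree(root)
    by_rule = Counter(rule for _, _, rule in findings)
    print(f"{len(findings)} findings across {len(by_rule)} rules")
    for rule, count in by_rule.most_common():
        print(f"  {count:4d}  {rule}")

if __name__ == "__main__":
    summarize(".")
```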

Automation will be key to success. Next week I will move on to Cyber Warfare.