
The Future of Cyber-Warfare and Cyber-Security – Part II

Wikipedia defines Cyber Warfare as “politically motivated hacking to conduct sabotage and espionage.” In its simplest form it can be an attempt to degrade service on another nation’s digital assets using what is known as a Denial of Service (DoS) attack. Other, more malicious attacks involve malware, the best-known example of which is Stuxnet, a worm discovered in 2010 which mainly affected Iran’s nuclear centrifuges. More recently, Symantec reported a piece of malware named Regin which collects information from companies, government agencies, and other targets and had gone undetected for roughly five years.

According to DARPA (the U.S. Defense Advanced Research Projects Agency), “Cyberspace is now recognized as a critical domain of operations by the U.S. military and its protection is a national security issue.” (Plan X) The United States is treating Cyber Warfare just like any other military operation, and other nations are undoubtedly doing the same. In fact, reading through the requirements of DARPA’s Plan X feels more like reading the requirements document for an advanced weapons system than for a computer system. But of course in reality it is both.

Among other things, Plan X has been prototyping a tool which allows an operator (think hacker) to visualize a network in a virtual reality environment. The idea is to make the hacking experience more immersive. Instead of sitting at a keyboard, the hacker attacks enemy resources as if they were in a video game. This is the future of cyber warfare. Once again, science fiction books and Hollywood have predicted the future ahead of everyone else. Films such as Swordfish depicted hackers using visualization techniques to crack the world’s best crypto. Before that, Tron told the story of a hacker who becomes immersed in a video game. In 1982!

The U.S. Cyber Command is maturing and although it is less than five years old, it is quickly becoming the hub of new cyber technologies for the U.S. government. Plan X gives us a tiny yet illuminating glimpse into the future.

The Future of Cyber-Warfare and Cyber-Security – Part I

I am speaking at the Cloud Expo in Santa Clara this week on The Future of Security in the Cloud, so I have decided to lay out what I believe will be a few of the biggest concerns in internet security over the next few decades. I will return to my quest for intelligence in a few weeks.

The world we live in is changing rapidly, and the pace is only going to accelerate. The impact of these changes will be immense; the changes in technology, and in the way we use technology, will touch each and every one of us. One of the biggest will be the impact on our online security: the ubiquity of online devices, coupled with the use of intelligent machines, will change the security landscape forever.

Cyber Warfare has been around for years now. It has probably existed since we started connecting computers to the ARPANET back in 1969, when that federally funded forerunner of the internet carried the first message across a distributed network. Back then we didn’t have firewalls; there was nothing to monitor suspicious activity. It was a trusted environment in which nobody had any reason to believe their data might be compromised or their facility broken into by pranksters, criminals, or government agencies. Cryptology had been advancing since before World War II, far beyond the complexity of the ciphers used in ancient Rome. Over the last three decades of the twentieth century we saw incredible advances in technology, giving rise to the current generation of the internet and what we now know as the World Wide Web. At the same time we saw the development of complex machines capable of waging war from thousands of miles away, conducting surveillance from hundreds of miles above the earth, and reaching deep into the innermost thoughts of companies and private citizens via the information stored on their computers.

At this rate there will be virtually no limit to what can be known about anyone or anything. If someone wants to know what food you have in your refrigerator, they will skim that information off the next-generation net. What some people refer to as SkyNet, as an homage to the Terminator films and suggestive of the perils it may bring to humankind, will bring with it vast potential: potential power, potential productivity, and potential abuse. Guarding our digital assets by guarding a single endpoint, like the drawbridge to a castle, will no longer be feasible (it’s actually not working all that well now). As long as our defenses rely on trying to identify the bad guys and stop them as they come through the door, failure will be inevitable. Almost all security breaches in the past few years have been due to vulnerabilities in web or mobile applications; best estimates put the figure at about 86%. That means most breaches could be avoided simply by writing application software that didn’t have bugs in it. Of course this isn’t as easy as it sounds. It is generally assumed that all software has flaws; there is no such thing as bug-free software. After all, we are only human.

But what if software wasn’t written by humans? And what if networks weren’t configured by humans? We have already seen widespread use of computers and other machines to improve quality in many industries, from heavy manufacturing to electronics. Surely we can bring the years of knowledge and experience gained from quality engineering in other fields to the software industry. As computer-driven engineering becomes more pervasive, we are able to build products of all shapes, sizes, and degrees of complexity with better results: better quality, better time from drawing board to production, better flexibility and customizability. We have used computers for years to gain productivity in software design, implementation, and even testing, and the advantages are astounding. One person writing code line by line, at a rate of less than one hundred lines per day, might take a year to write even a relatively simple non-graphical application. Today, through extensive use of modular, well-specified APIs, one person with a good understanding of software development can design and create a small but useful application in a day.
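As a toy illustration of that last claim (my own sketch, not an example from Plan X or any product mentioned here), the little command-line tool below is built entirely from Python’s standard-library APIs. Because argparse, pathlib, and collections already do the heavy lifting, something genuinely useful can be written, tested, and shipped in well under a day.

```python
# A small but useful tool assembled from well-specified standard-library APIs:
# report the most common words in a text file.
import argparse
from collections import Counter
from pathlib import Path


def top_words(text: str, n: int) -> list[tuple[str, int]]:
    """Return the n most common words in the given text."""
    words = (w.strip(".,;:!?\"'()").lower() for w in text.split())
    return Counter(w for w in words if w).most_common(n)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Report the most common words in a file.")
    parser.add_argument("path", type=Path, help="text file to analyze")
    parser.add_argument("-n", type=int, default=10, help="number of words to show")
    args = parser.parse_args()
    for word, count in top_words(args.path.read_text(encoding="utf-8"), args.n):
        print(f"{count:6d}  {word}")
```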

As this continued evolution of creating applications with highly automated and very mature toolsets begins to integrate design, implementation, and testing, we will see a new level of maturity in the field of application security assurance. No longer will we need to write code while checking a list of do’s and don’ts for secure coding. The need to have someone test our code as a last gateway before it rolls off the production line will be a footnote in history. No doubt this sounds incredible to you. Sending your newest mobile application up to the online store without running it through a final Quality Assurance (QA) pass sounds like taming lions while wearing a blindfold, right? But if we know the software is built right, we really don’t need to test it one more time, do we? After all, our QA process doesn’t do anything but test for known vulnerabilities. We have a long list of ‘things that could be wrong’ and we try to identify whether any of these mistakes have made it into our application. Isn’t there a better way to accomplish this?

Now I must confess to a bit of sleight of hand here. I’ve been saying that in the future it won’t be necessary to send software through that final all-inclusive QA testing before releasing it. But I didn’t say that software wouldn’t require testing at all, just not in the way we test software today. The key is to validate our code as it is written. Think of it this way: if we create an application and it has a security vulnerability that we can identify during QA testing, then that vulnerability exists because of a specific fragment of code. Before that fragment was introduced, the vulnerability didn’t exist; as soon as we add the fragment to the application, it does. So all we have to do is identify that fragment of code as soon as it is added to our application. It’s that simple. And this simple but arduous task is precisely what computers are good at, and getting better at all the time. For any known security vulnerability (remember, that’s all we have been testing for) we simply check every fragment of code as it is added to our application. It couldn’t be simpler.

OK, once again I have made a statement that is not quite accurate. Security vulnerabilities aren’t generally the result of a single self-contained fragment of code. They are more often due to the way multiple fragments of code are connected to each other; in other words, they depend on the context of the code fragment. But that doesn’t change anything except the number of fragment combinations the computer needs to identify and the difficulty of specifying those combinations. As the ability to automate software development improves, including testing for security flaws, we will see less and less need for human involvement in writing and testing code. In fact, computers will be much faster and produce better results, making the manual aspects of software development an anachronism.
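To make the idea concrete, here is a minimal sketch in Python of what checking each newly added code fragment against a list of known vulnerability signatures might look like. The patterns and the sample diff are made up for illustration; real tools are far more sophisticated and context-aware, and this is only a caricature of the approach, not the system described above.

```python
import re

# Illustrative (hypothetical) signatures for a few well-known vulnerability
# classes; real scanners use far richer, context-aware rules.
KNOWN_BAD_PATTERNS = {
    "possible SQL injection (query built from request data)":
        re.compile(r"\+\s*request\.|execute\(\s*f[\"']"),
    "use of eval()":
        re.compile(r"\beval\s*\("),
    "hard-coded credential":
        re.compile(r"(password|secret|api_key)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE),
}


def scan_added_fragments(diff_text: str):
    """Check only the lines being added in a diff (those starting with '+')
    against the known-bad patterns, mimicking a pre-commit fragment check."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+") or line.startswith("+++"):
            continue  # skip context lines, removals, and the file header
        for description, pattern in KNOWN_BAD_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, description, line[1:].strip()))
    return findings


if __name__ == "__main__":
    sample_diff = (
        "+query = \"SELECT * FROM users WHERE name = '\" + request.args['name']\n"
        "+cursor.execute(query)\n"
        " unchanged_line = True\n"
        "+api_key = \"abc123-do-not-commit\"\n"
    )
    for lineno, issue, code in scan_added_fragments(sample_diff):
        print(f"added line {lineno}: {issue}: {code}")
```

Run as a pre-commit hook, a checker like this flags the offending fragment the moment it is added rather than months later in QA; the context-sensitivity discussed above is what makes the real problem so much harder than this sketch.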

There may be someone reading this who has written some sort of code-checking program, or perhaps a full-blown scanning engine that searches for security vulnerabilities in code. Right now they are saying, “That’s not possible. It isn’t that easy! This is a very complex problem.” And they are correct. This, like many challenges that arise in developing good, robust software, is a problem that seems beyond the reach of a fully automated solution. But there is only one reason for this: we are only human. As the responsibility for creating software and the tools that test it shifts more and more to computers, we will see a shift in the ability to create bug-free software.

Automating application security testing will not be an option; it will be a necessity. Face it, computers are better at some things than humans. Hiring security testers to manually test your application will be a thing of the past; they will not be able to keep pace with technological advances. The only way to thoroughly test applications will be to combine the application security expertise of a human with best-of-breed automated software testing tools.

Automation will be key to success. Next week I will move on to Cyber Warfare.

Surviving the Rise of the Machines: Partnering with the Enemy

It is easy to see the new breed of machines as the enemy. In many areas they are faster, stronger, and better than we are. Over the past century many jobs have been lost to machines, and it appears that the trend will continue. In the face of increasing competition for jobs between machines and humans, two things are apparent:
1.  There are some jobs that a human will never be able to compete with machines for.
2.  Some jobs cannot be done effectively without the aid of tools such as computers and other electromechanical machines.
Any task that is relatively simple or easily replicated can almost always be done faster, more efficiently, and at a lower cost by a machine. Since the 1940s we have used what is broadly known as CNC (computer numerical control) automation. Modern CNC systems allow us to take the design for a simple component and have the system use tools such as lathes, mills, cutters, hole-punches, and welders to produce it. These types of jobs benefit greatly from the increased productivity of a machine, and for this reason fewer and fewer humans will be needed to create machined components.
Many jobs are still done by humans but have benefited from the increased productivity afforded by some sort of machine assistance. Early examples include jackhammers, electric drills, and nail guns. As computers have become cheaper and more capable, their use in the workplace has become commonplace. In the business environment it is rare to see anyone using a typewriter for letters, purchase orders, or any other document; the word processor has become the norm and is present in one form or another on every laptop, tablet, and even mobile phone. Technical fields such as medicine, biology, and astronomy depend heavily on the power of the computer for processing immense amounts of data and performing complex calculations that would never have been possible before. The degree to which computers have become integrated into today’s careers is evident in the curriculum of any modern educational institution: learning to use a computer in one’s trade has become as necessary as a carpenter learning to use a hammer. As the use of machines and computers increases, so does their value, but in some cases this decreases the value of the human worker. As the skills and complexity required to do a job shift from human to machine, the value shifts with them. This is especially evident where the role of the human becomes so depleted of specialized skills as to move them into the category of unskilled worker.
The key to surviving the silicon takeover, at least for now, will be to avoid jobs which fall into the first category and take sanctuary in jobs which fall into the second. But you may ask yourself, “As computers become more intelligent, won’t more and more jobs fall into the first category?” Well, yes, but it may not be as simple as dividing work into jobs at which computers are better and jobs at which humans are better. Let’s look at an example that isn’t about jobs but has been one of the most often cited contests between human intelligence and machine intelligence: the game of chess.
For years, computers have been rather good at playing chess. They can assess the many possible moves with lightning speed and can remember countless tricks, traps, and gambits along with many historic games played by the very best chess champions in the world. By the 1990s, chess programs could defeat all but the best players in the world, and in 1997 the IBM computer Deep Blue beat Garry Kasparov, the reigning world chess champion. Since then computers have left human players in the dust.
It might seem that in the game of chess, and perhaps in the job market, humans will never be able to compete with these super-intelligent monsters which never sleep and make few demands. But the story took an interesting turn a few years ago when a new form of chess tournament emerged: freestyle chess. Freestyle chess is a tournament between humans who are allowed “to make use of any technical or human support for selecting their moves.” It turns out that while no human playing alone can defeat even a mediocre chess engine, a person assisted by a computer program that evaluates options and helps make decisions can beat even the best chess-playing computers. An even more astounding result came out of a freestyle tournament in 2005. In “The Chess Master and the Computer,” Garry Kasparov describes what happened:

Human strategic guidance combined with the tactical acuity of a computer was overwhelming. The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.

By complementing each other’s strengths, the humans and computers formed the ultimate team, unbeatable even by the best of the best from either side alone.
The future is both inevitable and very clear. In at least some fields, the only way to survive the continuing migration of jobs from human worker to automated machine is to form an alliance. The machines will continue to improve in speed, efficiency, and intelligence. But the ultimate team will be the team that best utilizes the strengths of both machine and human. By leveraging the machine’s capacity for processing immense amounts of data and choosing the best options from millions of possibilities, guided by well-trained humans with experience in their domain of expertise, such teams will leave everyone else . . . in the dust.

Man vs. Machine – The Struggle for Superiority in the Past, Present, and Future

In an earlier post I mentioned that we all like to think that there is something superior about humans. We don’t just think we are superior; we believe that we humans have some quintessential element that machines do not, and never can, possess. We write books about it, make movies about it, and even write songs celebrating it. One such song features the American folk legend John Henry.

John Henry

John Henry was a steel-driver. His job was to hammer holes into rock; the holes were packed with explosives to blast away the rock and build tunnels. When the new steam-powered hammer threatened to replace men such as John Henry, he fought back. He was sure he could work faster and better than the new machine, and this culminated in a face-off in 1870. At the site of a new tunnel in West Virginia known as Big Bend, John Henry and the steam-powered hammer spent almost two days demonstrating their abilities. John Henry worked without rest and in the end succeeded in besting the machine, but the victory came at the expense of his life. He died, either immediately or shortly thereafter, by some accounts because his heart gave out after the prolonged effort to beat his nemesis. Regardless of the details, or even the accuracy of the accounts, the message is obvious: machines possess certain advantages over humans. They can work without breaks, they don’t get tired, they don’t sleep. They work “like a machine.”

Technology has been a threat to labor ever since the industrial revolution. Over the past century we have seen great strides in automating tasks formerly carried out by humans, and the increases in productivity that come with them have become even more pronounced with the standardization and formalization of processes used in many industries. Work that formerly required skilled artisans and laborers was decomposed into specific tasks that could easily be taught to an unskilled person. The best-known instance of this was Henry Ford’s creation of the assembly line for the efficient production of the motor car. By decomposing the building of an automobile into discrete tasks, he was able to define the specific skills required at each step of the process. No longer did the manufacture of an automobile require a team of people with skills acquired over many years; Ford could hire anyone off the street and, with a minimal amount of training, make them a productive worker on the assembly line. This was the dawn of mass production.

In the second half of the twentieth century our ability to improve efficiency through the use of more advanced tools and machinery accelerated, and towards the end of the century we began to see machines take over many jobs completely. By the turn of the century, automobile assembly lines had become almost completely automated, and advanced robots were capable of moving quickly through warehouses and picking inventory for shipment. This was the first time we got a real glimpse of the future of the worker. Whereas John Henry was being replaced by a machine that still had to be in the hands of a human being, this new generation of machines could operate autonomously. While machines still rely on humans for supervision and maintenance, they are taking on more responsibility and more difficult tasks while requiring less oversight.

In the next twenty-five years we will see a rapid increase in both the capabilities and the responsibilities of machines. With every year that passes we trust machines more, and that trust leads us to give them more responsibility. We have seen cars capable of driving themselves, even though we aren’t yet ready to give up the driver’s seat to them. We have seen drones used first in military applications, and now they seem ready to enter the business world as delivery drones. The twentieth-century advances that replaced human jobs were characterized by electro-mechanical progress and, to some extent, the electronics that control it. In this century the machines seeing the most rapid advances are the intelligent ones. The physical capabilities of machines are still advancing, but the real magic is their ability to do the things we have always thought only a human was smart enough to do. The next generation of computers, robots, and machines will be superior to humans not just physically but in their ability to process enormous amounts of information, solve complex problems, and react more quickly than their human predecessors.

Where does this leave us, the primitive human? Will we be relegated to cleaning up after our mechanical successors? Will the world degenerate into the final chapter of The Terminator? The list of sci-fi movies about this type of struggle is a long one. Is this life imitating art? Perhaps the real fiction is that in the movies the humans always win.

How the Acceleration of Technology Will Allow Computers to Take Over the World

The idea that computers will advance at a rate which allows them to surpass the capabilities of humans is not a new one. One of the thought leaders in this area, Ray Kurzweil, has long been a champion, if not the originator, of the notion that technological advances increase at an exponential rate. This is not simply an observation that technology is advancing faster and faster, but that, because of the nature of the advances, the improvements delivered by technology build on one another. Kurzweil has laid out in detail what he calls the law of accelerating returns (LOAR) in several of his books, most notably The Singularity Is Near. He explains that the hierarchical nature of technology is what enables this exponential growth: the evolution of technology occurs at increasing levels of abstraction, resulting in exponentially increasing complexity and, with it, capability. A set of technological advancements is built upon to form a new and more complex innovation with far more impact than the individual components have on their own.

Consider the evolution of electronics as an example. Just as we were completing development of the ENIAC, one of the world’s first modern computers, three physicists at Bell Labs were inventing the transistor, the fundamental component of all modern-day electronics. The transistor allowed us to build electronics such as radios, calculators, and computers at a fraction of the size and cost of the same devices formerly built with vacuum tubes. Roughly a decade later the integrated circuit was invented, placing a handful of transistors on a single chip. Fifty years on, microchip technology had advanced to the point where we could put hundreds of millions of transistors and other electronic components on a chip the size of a fingernail. Within the space of fifty years we went from room-sized computers built with tubes and wires that could perform only relatively simple calculations to machines small enough to carry around with us every day that outperform even the most powerful computers of the twentieth century.
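To see why this kind of compounding improvement outruns everyday intuition, here is a tiny back-of-the-envelope calculation in Python. The fixed two-year doubling period is an assumption in the spirit of Moore’s law, not a figure taken from Kurzweil or from the text above.

```python
# Illustrative arithmetic only: the two-year doubling period is an assumed,
# Moore's-law-style parameter, not a number from the post.
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Total multiplication factor after `years` of steady exponential doubling."""
    return 2 ** (years / doubling_period_years)


if __name__ == "__main__":
    for years in (10, 25, 50):
        print(f"{years:>2} years -> roughly {growth_factor(years):,.0f}x improvement")
    # 10 years -> roughly 32x improvement
    # 25 years -> roughly 5,793x improvement
    # 50 years -> roughly 33,554,432x improvement
```

Fifty years of steady doubling multiplies capability by more than thirty million, which is why the jump from a handful of transistors on a chip to hundreds of millions feels so abrupt in hindsight.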

In their book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee discuss many of these same issues, primarily from a socio-economic perspective, stating, “Rapid and accelerating digitization is likely to bring economic rather than environmental disruption, stemming from the fact that as computers get more powerful, companies have less need for some kinds of workers.” This is not a case where we gradually lose ground to automation; the advances in machines are accelerating and have been ever since the introduction of the transistor. The impact was not evident at first and has been underestimated for some time, but just as the industrial revolution enabled vastly increased productivity within a few decades, this second revolution will increase productivity in a different way. The industrial revolution addressed the physical limitations of humans, such as speed, strength, and consistency. The current advance in technology addresses the non-physical: the power of the human mind. Computers can already outdo the human mind in many areas, such as processing large amounts of data and performing lengthy calculations. We are now starting to see computers predict our wants and needs when we visit a website or tell us the fastest way to drive to the mall, and we carry devices which tell us if we are getting enough exercise or eating too much. This next generation of computing devices will benefit us greatly, assuming much of the burden of everyday life and doing a better job of it. But the benefit will not come without sacrifice. The emergence of assembly lines allowed for faster and cheaper production, but at the cost of the extinction of certain jobs. In the same way, many of the tasks for which we rely on humans will be performed faster, better, and cheaper by our silicon assistants.

Just how far will this wave of succession extend? Will we ultimately find ourselves at the mercy of a society of robots, relying on them for every one of our needs? The story has been the subject of countless works of science fiction, from “I, Robot” to “The Matrix.” Much of the future is still unknown, and we haven’t lost control just yet. But the shape of things to come is evident and undeniable. As Brynjolfsson and McAfee summarize, “In short, we’re at an inflection point—a point where the curve starts to bend a lot—because of computers. We are entering a second machine age.”