The Death and Birth of Innovation in Networking



Netnostics, Inc. 6 Jun 02

A Sorry State of Affairs

Even though the business press and every other media outlet have lauded the success of the Internet and the companies associated with it as limitless engines of innovation, those who know the field best have known for some time that all was not well. Yet if one had the temerity to voice these concerns to anyone outside the field who had been reading those accounts, one was met with disbelief: how could you make such claims when there was a constant flow of new companies and products? But you knew that most of that flow rested on Moore’s Law and on innovative work done 20 – 30 years ago. You knew that the pipeline of new ideas had very little in it. Worse, the world had come to rely on the ’Net, and some very serious unsolved problems were being kept at bay only by hard work and Moore’s Law. While we have gained much experience and knowledge over the past thirty years, the translation of that experience into deeper insights, new ideas, and solutions has not followed. This is not to say that solutions have not been proposed or that new technologies have not come to the marketplace. They have. But all have the flavor of stopgaps, not the satisfying completeness of an answer. University research no longer leads private-sector research by 5 – 10 years as it has in the past. In fact, the lack of progress has become so acute that the National Academy of Sciences commissioned a panel composed of both insiders and outsiders to the field to look at the problem. The report came to a shocking conclusion. The NAS study found fault with the fundamentals of networking research: “A reviewer of an early draft of this report observed that this proposed framework – measure, develop theory, prototype new ideas – looks a lot like Research 101. . . . 
From the perspective of the outsiders, the insiders had not shown that they had managed to exercise the usual elements of a successful research program, so a back-to-basics message was fitting.” Essentially, a lack of “science.” But the National Academy study focused too narrowly and failed to recognize that the problems were much more systemic. There is evidence that the situation may be far more dire. Networking was founded on the idea that the requirements for data were very different from those for voice. This led to embracing concepts that ran counter to those of traditional telecommunications: non-determinism, distribution, end-to-end-ness, and connectionless operation, with layering as a means of managing complexity. In contrast, traditional telecommunications was based on determinism, hop-by-hop operation, circuits, centralization, and beads-on-a-string as the means of managing complexity. After an initial explosion of new ideas in the mid-70s, the field settled into a period of enhancements and incremental changes in which it has remained. But as everything became “data,” i.e. digital, networking faced new problems, and the new concepts have not seemed to rise to the challenge. More and more, the proposals to solve these problems take on the appearance of the old solutions: telecom, not networking. This is perhaps the most unnerving effect of this dearth of new ideas for those who have led the field: an apparent turning to the dark side. As new problems have arisen and the new concepts have failed to provide satisfying solutions, the private sector has turned more and more to traditional approaches. Had the new ideas run out of gas? Were they a dead end? Or was the old guard just that? 
Like a true believer faced with a challenge to his faith, the old guard furiously wrote papers arguing that the new concepts must be preserved, while failing to provide the innovative solutions that leveraged those concepts and hence would have demonstrated their value and preserved them.

What is Good for the Goose may not be Good for the Gander

Recently, when trying to explain why there were no new breakthroughs coming in this field and struggling to characterize the forces at work, I realized that two things had occurred when I wasn’t looking: first, we were living in a technological hegemony, and second, the networking private sector and research funding had begun to act like second-generation management. It is well known in business that first-generation (founder) management are dynamic, visionary risk takers. As a company matures, the later generations are drawn from marketing and finance, especially those who rose through the ranks. These leaders are exactly the opposite: conservative, favoring the status quo, and possessed of a sense of personal insignificance. Among networking companies today, the idea of “shared risk” is popular (“don’t take initiatives that aren’t being taken by others”), as is the attitude that “we don’t do research, we buy start-ups.” They lack confidence that their own research groups can have ideas that are ahead of the pack. This is not surprising. If the CEOs have little sense of personal worth, why would they believe their own researchers are that good? They will argue that small incremental changes sell more product. And they do, but they also lead to greater complexity, inefficiency, greater cost of ownership, and, unless one is very careful, to a technological dead end. So new ideas are not going to come from the captains of industry. But that is not news! No one expected innovation to come from these companies. The engine of American capitalism is the small company, the start-up. That is what the media tell us. But over the last 10 years or more, the VCs have begun to behave like second-generation CEOs as well. VCs learned a long time ago, and were recently reminded by the Internet bubble, that they don’t understand technology well enough to pick winners. 
This, combined with the “get rich quick” mentality dressed up as “two-year ROI,” led them to develop a formula for the kinds of “new ideas” they will back. The result is that by the time VCs will invest in something, the products are less than truly innovative. In other words, they don’t take risks either. But we don’t really expect them to. High-risk, high-payoff efforts are the purview of research; research funding agencies do that. However, the highly publicized success of VCs in the 90s has led the sources of research funding to imitate them. A director of a research program at one of these agencies is constantly trying to impress his management that he is on the “bleeding edge.” Consequently, there is a tendency to look for quick results, declare victory, and move on to the next hot topic. A large body of research shows that time is more effective at solving problems in science than money. Money is important, but increasing money by a factor of 10 does not shorten the time to solutions by anything close to the same amount. Most of the time, the attention span of the late 20th C has failed to match the pace at which the human process of science moves. In the US, there are two primary sources of network research funds: the National Science Foundation and the military. Most disruptive innovations have come from military research, and there are good reasons for this. Any truly innovative (disruptive) concept will by its nature gore a few sacred cows. Civilian research funding is peer reviewed, and a committee of peers is unlikely to approve projects that gore any of their sacred cows. In contrast, during the heyday of military research funding, agencies like ONR, AFOSR, and ARPA tended to put top-tier scientists in charge of their research programs with few strings attached. These people were free to pursue the research they thought best. They were able to take a long view and to take risks. 
They were essentially able to act like founders. But all good things must come to an end. During the Vietnam War, Congress passed the Proxmire amendment, which forced the DoD to fund only work directly related to military applications. Even though it is possible to construe any plowshare as a sword, this has had a dampening effect on fundamental research. It may well be that not a single disruptive innovation has come out of government research since the passing of the Proxmire amendment.

A 500 Year Old Historical Precedent

This may all sound disconcerting, but it hardly warrants the doom and gloom of the title, unless you are a reader of history. We have seen one other example in the history of science where a rich and successful scientific tradition stagnated and actually lost information. We have also seen examples of what has been necessary for new ideas to germinate. For millennia, China had such a successful scientific tradition that it was ahead of Europe, sometimes by centuries. That the Chinese developed Pascal’s Triangle 300 years before Pascal, and that Western medicine did not surpass the cure rate of Chinese medicine until the beginning of the 20th C, are only two examples. But ultimately, Chinese science stagnated. In his multi-volume magnum opus, Science and Civilization in China, Joseph Needham concludes that the reason for the stagnation was that the merchant class was very low in the social hierarchy. Without merchants to create demand for new innovations, the need for progress declined once the power structure lived as well as it thought it could. Half-jokingly, one could turn Needham’s argument around to say that it was the strong reliance on government funding that caused science to stagnate in China. China was a hegemony, and a very strong one. If you were in favor with the Emperor, everything was golden; if not . . . Europe, in contrast, had no hegemony. When Galileo first got into trouble with the Pope, he went to teach at Padua, then under the control of the Venetians. Venice was a major power and could afford to be independent; there was little the Pope could do. In Europe, if new ideas caused political problems and made things a bit untenable, there was always someplace to go. My enemy’s enemy is my friend. But there is more to the stagnation of science in China. It is sometimes difficult to compare Western science with what one finds in China. 
Much of what is in Needham’s comprehensive survey (7 volumes, some in multiple books, all quite thick) is more technology than science. Many of the advances came out of an artisan tradition, as they did in the West. But unlike Western science, they stayed there. This points to what is perhaps the greatest difference between Western and Chinese science. While it was recognized by Needham, neither he nor other historians have assigned much importance to it as a factor in maintaining a vibrant scientific tradition. The most significant difference is that China had no Euclid. There was essentially no tradition of theory in Chinese science. Needham points out that the body of knowledge represented by Chinese science was more a set of individual techniques than an organized corpus of knowledge. There was no attempt to develop the kind of over-arching theory that has characterized Western science, integrating a set of distinct results into a comprehensive whole. It is hard to find much evidence of attempts (outside of astronomy) to develop predictive theories, or anything analogous to the discipline of proof found in Western science. The Holy Grail of every scientist is to do for his field what Euclid did for geometry: reduce it to a small number of concepts from which everything can be derived. Newton did it for mechanics, Maxwell did it for electricity and magnetism, and modern physics is striving mightily today toward a “theory of everything” that unites the subatomic and the cosmological. Working toward a comprehensive theory, even if one is not found, is always beneficial. As different models are proposed and tested, a deeper understanding of the problem domain is achieved even if the proposed model is wrong. A good theory is, at the very least, a good mnemonic: there is less to remember. One need only remember the central concepts, a few intermediate principles, and roughly how things relate, and everything else can be derived. 
Many major results of Chinese science were forgotten, some because they weren’t needed very often. For example, when Matteo Ricci first entered China at the end of the 16th C, he was convinced that he had brought the knowledge that the earth was round. As it turns out, the Chinese had determined this 300 years earlier, but the knowledge had been lost. Without a unifying theory to simplify knowledge, the amount of information eventually became overwhelming. But theory is much more than just a mnemonic. A theory, even a partial one, leads to deeper understanding of the phenomena being studied and often to further insights. Once there is a theoretical framework, results can be derived that were far from obvious before. Theory not only provides a simpler, more logical explanation; it also tends to simplify individual techniques, making them easier to understand and apply. Many techniques coalesce into degenerate cases of a more general method. To see this, one need only read accounts of electricity and magnetism before Maxwell, chemistry before the periodic chart, or geology before plate tectonics. It is the individual discoveries combined with the search for a comprehensive or simpler theory that is the essential tension that gives science direction and allays stagnation. Theory questions the meaning of the observations and techniques that make it up. Theory points at experiments that test its veracity and attempt to invalidate it. Theory points at its own limits. Theory becomes a map of our knowledge and thus pushes us toward better theories.

All Technique and No Theory makes Jack a Dull Boy

So what does 500-year-old science have to do with innovation in networking in the early 21st C? The processes that have been operating on our science are creating a situation similar to the one found in the China of the Ming Dynasty. Although network research has come to this juncture by a somewhat different route, hopefully it will not have the same end. While we have innovation being driven by the merchant class, they are looking for a quick return on investment, i.e. technique, not great leaps forward. To compound the problem, research funding is flitting from fad to fad every few years, generating techniques but giving theory little time to gestate. This, in turn, has a strong effect on researchers. Ross Ashby noted that the longer a state machine operated, the more its output became independent of the input and began to reflect the structure of the machine itself. We are conditioning researchers to produce techniques, not science. There is no theory. Therefore, it should be no surprise that even among the illustrious members who produced the National Academy study, the basics of Research 101 are unfamiliar. As anyone who has taken a science course knows, the first step in science is not to measure, as the NAS study says. The first step is to state a hypothesis. To state a hypothesis, one must start with a theory to be invalidated. As Einstein is often quoted, “it is the theory that determines the data.” Without theory, you don’t know what questions to ask, you don’t know what data is relevant, and you don’t know how to measure it. I once asked a Nobel Laureate in physics about Galileo as a theorist and got a strong reaction: “NO! Galileo was the great experimentalist!!” True, Galileo’s experimental method was exemplary and important to the development of science. But his method would have meant nothing had he not had the insight to see the model that was at the core of the problem. 
Galileo’s brilliance was in picking the right model on which to base his experiments: falling bodies in relatively controlled conditions. If Galileo had attacked a problem of practical value, like predicting where cannonballs would land (and I am sure that APRAF would have funded him), he would have found his equations useless. In fact, people did try to apply his results to cannonballs and found his equations were off by as much as 50%. But Galileo did not start by trying to find equations that would take into account wind drift, air resistance, or the fact that cannonballs were not perfect spheres. The solution to where a cannonball lands requires far more complex equations, at least a system of three-dimensional second-order partial differential equations, whereas Galileo’s model could be described with a simple polynomial and confirmed by relatively simple, well-controlled experiments. Once the basics of the theory had been worked out, the results for cannonballs could be reasonably achieved by incremental enhancements to Galileo’s equations. But it would have been almost impossible to start with a clean sheet of paper. Galileo had the insight to pick an initial model that had a small number of independent variables, that could be tested experimentally, and that would form the basis for solving more complex problems. Needham recognized that a strong central government as the only patron of science could lead to stagnation. Not surprisingly, he did not foresee that the short ROI demanded by a successful merchant class could have the same effect. It would seem that we have created the conditions for stagnation. Depending on your degree of pessimism, it is now up to us to avoid it or to get ourselves out of it.
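The gap between the two models can be made concrete with a minimal numerical sketch in Python. The launch speed and drag constant below are assumptions chosen purely for illustration, not historical data; the point is only that the drag-free “Galilean” range and the range under quadratic air resistance diverge badly, while the drag model is a small incremental enhancement of the simple one.

```python
import math

# Illustrative sketch only: v0 and k below are assumed values, not measurements.
g = 9.81                      # gravitational acceleration, m/s^2
v0 = 150.0                    # launch speed, m/s (assumed)
angle = math.radians(45)

# Galileo's drag-free model: range = v0^2 * sin(2*angle) / g
range_vacuum = v0 ** 2 * math.sin(2 * angle) / g

# The same shot with quadratic drag, dv/dt = -g*e_y - k*|v|*v,
# integrated with a small forward-Euler step.
k = 0.001                     # drag constant per unit mass, 1/m (assumed)
dt = 0.001                    # integration step, s
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt
    vy -= (g + k * speed * vy) * dt
    x += vx * dt
    y += vy * dt
range_drag = x

shortfall = 1.0 - range_drag / range_vacuum
print(f"vacuum: {range_vacuum:.0f} m, with drag: {range_drag:.0f} m, "
      f"shortfall: {shortfall:.0%}")
```

Note how the drag version reuses the simple model wholesale and adds one term; starting instead from the full aerodynamic problem would offer no such foothold.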

Finding a Way Out

As usual, suggesting a positive course of action is a lot more difficult than pointing out what is wrong. Avoiding the proliferation of technique, or more positively, encouraging the consolidation of technique by theory, is difficult. Although the private sector is more interested in technique, or as they call it, product differentiation, theory is not just the responsibility of the research community. Such consolidation is just as important to the private sector for decreasing cost, improving performance, and creating new opportunities. Furthermore, it occurs at many levels, some only the purview of those building product. At the same time, we must keep theory from its flights of fancy and close to “its proper soil,” the practical. There are many platitudes for encouraging theory. As previously mentioned, it is exceedingly hard to rush theory. It often seems that all one can do is create a fertile environment and hope for the best. But there are some things that will help create fertile ground. One can argue that research programs and funding should do more to stress theory. But how many researchers have a sufficiently broad exposure to multiple paradigms and to the history of science to be able to think about it from the outside? Clearly, keeping an eye on the fundamentals is important, always holding proposals up to those fundamentals. A better understanding of the fundamentals is never a waste of time or money. But what are the fundamental principles that can be taken from theory in computer science? Fundamental principles are relations that are invariant across important problem domains; the more fundamental, the greater the scope of their use. To have principles, we need good theory. But developing theory in computer science is much more difficult than in any other field because

We build what we measure.

It is very hard to know whether the patterns we see in our experiments are fundamental principles or artifacts of our engineering choices. Since there are few principles to rely on, we often see new efforts going all the way back to the beginning. Hence, many different decisions are made in each new effort, which in turn makes it more difficult to compare different approaches to the same problem, which further complicates our ability to discern principle from artifact. But we can leverage this unique nature of computer science to get our arms around the problem of separating theory from artifact. There are really two parts to CS: a part that is mathematical and a part that is scientific. Mathematics is not a science. In mathematics, a theory must be “merely” logically consistent. In science, a theory must be logically consistent and fit the data. Many aspects of CS are purely mathematical: automata theory, complexity theory, algorithms, even to a large extent programming languages, etc. While they are rooted in mathematics, it is the “systems” disciplines of CS that are more scientific: computer system design, operating systems, networks, database systems, etc. For theory to consolidate knowledge, it must find models that emphasize the invariants in the logical structure. As we indicated, mathematics is independent of the data. This can provide us with a means to develop theory in the systems disciplines of CS. In a very real sense, there are principles in these fields that are independent of technology and independent of the data, i.e. mathematical: principles that follow from the logical constraints, i.e. axioms, that form the basis of the class of systems. This is the architecture. Within this structure, there are additional principles that are dependent on the data and independent of the technology or implementation. Specific choices in this space yield specific designs. These two form the content of university-level texts in a field. 
And finally, there are the “principles,” or relations, that are dependent on the technology. These form the basis of product manuals and technical books. The first two are pure theory. Here is where the pressure to emulate Euclid is most felt: one wants to find the minimal set of principles needed for a model that yields the greatest results with the simplest concepts. The general principles derived from the empirical will often take the form of trade-offs, principles that operate within certain bounds, or relations between certain measures. Goodput in networking is a good example of this. We have seldom distinguished these three forms of knowledge. We have not subjected ourselves to the same discipline that other sciences follow. This tendency contributes to the proliferation of techniques and ultimately to the stagnation of our field. What we need is recognition of these distinctions in our research programs, perhaps a journal dedicated to publishing technologically independent results. This is hard work. It requires disciplined experimental work, and it isn’t as likely to lead to immediate monetary reward, but it is possible, and it beats working in a stagnant field.
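The goodput example can be sketched in a few lines of Python. The distinction is technology-independent: throughput counts every bit that crossed the wire, while goodput counts only unique application payload. All the numbers below (packet sizes, counts, interval) are assumed values for illustration, not measurements from any real link.

```python
# Illustrative sketch only: all figures below are assumed, not measured.
payload_bytes = 1460        # useful data per packet (a typical TCP MSS)
header_bytes = 40           # IP + TCP header overhead per packet
packets_sent = 10_000       # total packets that crossed the wire
retransmissions = 400       # duplicates carrying no new application data
elapsed_s = 10.0            # observation interval, seconds

# Throughput counts every bit sent: headers, payload, and retransmissions.
throughput_bps = packets_sent * (payload_bytes + header_bytes) * 8 / elapsed_s

# Goodput counts only unique application payload actually delivered.
unique_packets = packets_sent - retransmissions
goodput_bps = unique_packets * payload_bytes * 8 / elapsed_s

efficiency = goodput_bps / throughput_bps
print(f"throughput: {throughput_bps / 1e6:.2f} Mb/s, "
      f"goodput: {goodput_bps / 1e6:.2f} Mb/s, efficiency: {efficiency:.1%}")
```

The relation holds whatever the link technology underneath, which is what makes it a candidate principle of the second kind rather than a fact for a product manual.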