Technology Roadblocks

Something I discussed in my last post is that every forecast of a technology (well–every forecast in general, but we’re focusing on a particular field here) has limitations and associated laws that impact it, and that it’s important to identify what those are. With Moore’s Law, the breakdown of Dennard Scaling meant that you could look to Amdahl’s Law to see the limitations of parallelization.

When I was working in technology consulting last year at Prokalkeo, my partner and I came up with what we called ‘Technology Roadblocks’ while trying to characterize the issues that various instances of science and technology ran into. To return to the last article:

“Every technology (that we know of) has roadblocks as well. Roadblocks are what I call ‘any obstacle to progress in the development of a technology’. There are a variety of types of these roadblocks, and they can impact forecasting accuracy (macro) or simply describe problems that need to be/will be overcome in the pursuit of development (micro). In the case of Amdahl’s Law, it follows from mathematical axioms and is thus what I would call an ‘Axiomatic Roadblock’. This associates with the impossibilities mentioned in ‘Possible, Probable, Personal’, specifically the axiomatic impossibility–indicating that the limitation is put in place due to mathematical reasons more than physical laws (a semantic distinction that dissolves if looked at closely enough, but useful for identification purposes).”

So, similar to how I isolated various types of fallacies, we found it useful to isolate various types of roadblocks. What are they?

 

Types of Roadblocks

The first and simplest roadblock is the ‘Independent Roadblock’. This is any case in which the obstacle to the development of a technology is intrinsic to that particular technology. As an example, this might be figuring out how best to design a new virtual reality headset to address the problems that form factor poses, or what sort of algorithms are needed for better data compression.

If you look closely enough at most ‘Independent’ cases, however, you’ll find that many of them are really the second type of roadblock, the ‘Dependent’ roadblock. This is best described as a case where development of a technology requires advances in another area of science or technology–shrinking power sources, higher-resolution/lower-weight screens, better laser diodes (to name a few off the top of my head). In many cases what appears to be an independent roadblock is actually the conjunction of many dependent roadblocks; in other cases it isn’t.

Finally, the third type of roadblock we identified is the ‘Physical Roadblock’. This is any roadblock that will not be overcome simply by finding a new trick, a new combination, or a new way to improve your toolchain–things like the size of atoms, the second law of thermodynamics, and other physical laws. This ties in closely with the Physical Impossibilities discussed in my article on Castles in the Sky.

 

How they Fit Together

It’s interesting to me, though, that mapping these out returned me to a web of interacting technologies. When independent roadblocks really are a number of dependent roadblocks, dependent roadblocks are in turn each dependent on other roadblocks, and at the root of many of these sit physical roadblocks, you begin to see how it all interacts.

At a very simple level, it’s like the lathe: with a lathe, you can jump-start civilization. But the complex interplay of parts, tracing back through different dependencies, seeing how they block you from advancing, and solving each in turn, shows just how complicated everything gets.

This might not be immediately obvious to someone working in a field day to day–because you are focused on your own work, the advances in the fields surrounding you are part of a changing environment, not necessarily things you notice in their own right. Improving computer speeds are something everyone is aware of, but a project that could not even have been started without those advances isn’t necessarily seen as dependent on them once it’s underway, especially if it’s finished within a single generation of hardware.

This is a useful way to look at problems, though to avoid drawing too complicated a web you need to restrict it to one or two degrees of separation. When we evaluated technology markets at Prokalkeo, one of the things we looked at was what sort of roadblock a given technology faced. I’ll do a worked problem in the next article.

 

10 Rules of Technology Forecasting

A Liberal Decalogue

Bertrand Russell, the famous philosopher and mathematician, once shared what he called a ‘Liberal Decalogue’ at the end of an article titled ‘The Best Answer to Fanaticism: Liberalism’. It embodied the commandments he thought a teacher might wish to propagate, modeled after the Ten Commandments. As originally listed, the decalogue ran:

  1. Do not feel absolutely certain of anything.
  2. Do not think it worth while to proceed by concealing evidence, for the evidence is sure to come to light.
  3. Never try to discourage thinking, for you are sure to succeed.
  4. When you meet with opposition, even if it should be from your husband or your children, endeavor to overcome it by argument and not by authority, for a victory dependent upon authority is unreal and illusory.
  5. Have no respect for the authority of others, for there are always contrary authorities to be found.
  6. Do not use power to suppress opinions you think pernicious, for if you do the opinions will suppress you.
  7. Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric.
  8. Find more pleasure in intelligent dissent than in passive agreement, for, if you value intelligence as you should, the former implies a deeper agreement than the latter.
  9. Be scrupulously truthful, even if the truth is inconvenient, for it is more inconvenient when you try to conceal it.
  10. Do not feel envious of the happiness of those who live in a fool’s paradise, for only a fool will think that it is happiness.

 

Technology Forecasting Rules

This resembles my own efforts last year to come up with a list of commandments for forecasting and futurism, to avoid being influenced by politics, wishful thinking, or bias (though some is inevitable, obviously). Russell’s list is quite fantastic, and worth modeling off of. How can we change it to more closely match the virtues we wish to encourage in our own field?

  1. Never make a claim you cannot defend.
  2. Be honest with yourself and others in the strength of your predictions.
  3. Never discard a possibility without investigating it first.
  4. Discard all investigations into non-falsifiables.
  5. When you meet with forecasts that don’t match yours, understand why and what they’re rooted in.
  6. Do not assume fame means accuracy in forecast.
  7. Do not fear disagreement, because axioms vary from micro-thede to micro-thede.
  8. Do not fear to make radical statements, if you can defend them.
  9. Do not conflate your dreams of the future with the likelihood of the future.
  10. Do not let your politics influence your forecasts. Every political ideology under the sun has claimed that emerging technologies will advance its cause, and only so many of them can be right.

 

With minimal editing, I feel this still holds closely to the intent of the original, and provides a nice decalogue for technology forecasting.

Dennard, Amdahl, and Moore: Identifying Limitations to Forecasting Laws

Will Moore’s Law Hold Up?

As the hallmark law of technology forecasting (and often the only case people are familiar with), a debate rages around Moore’s Law and its validity–will it hold true? Will it fail? Will it plateau and then see breakthroughs? The fact of the matter is that all of these are true statements…depending on exactly which metric you’re measuring. The precise metric and how to choose one will be the focus of a later post, but for now I’d like to address what most people think of when they say Moore’s Law and what they expect: computers seeing drastic gains in raw speed at the processor level (disregarding improvements from other parts of the system, like the speed gains from solid state drives).

If you go by that metric, Moore’s Law has failed to keep up. There’s no two ways about it. I’m not saying the sky is falling, and I’m certainly not saying this won’t change. All I’m saying is that, for now, raw speed improvements in computers have failed to keep pace. Why is that?

Well, there’s a companion observation to Moore’s Law called ‘Dennard Scaling’. Simply put, Dennard Scaling states that as transistors get smaller their power density stays constant; equivalently, the power per transistor falls in proportion to its area. This means that if you cut the linear size of a circuit in half in both dimensions, its total power draw falls to a quarter while the power density stays the same. If this weren’t the case, 3 Moore’s Law doubling cycles (i.e. an 8x increase in the number of transistors in a given area) would mean an 8x higher power density.
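
As a rough sketch of the arithmetic (this is the textbook constant-field scaling argument, with κ as the linear shrink factor–the symbols C, V, f, and A are my own shorthand, not anything from the sources above): capacitance and voltage scale down by 1/κ while switching frequency scales up by κ, so

\frac{P}{A}=\frac{CV^2f}{A}\;\longrightarrow\;\frac{(C/\kappa)(V/\kappa)^2(\kappa f)}{A/\kappa^2}=\frac{CV^2f}{A}

Hold the per-transistor power fixed instead of scaling it, and packing 8x the transistors into the same area gives you roughly 8x the power density, which is the scenario described above.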

Dennard Scaling is what’s broken down. More details are explained elsewhere, but the gist of it is that the smaller transistors get, the more static power loss (leakage) there is. The more static power loss there is, the more the chip heats up, leading to even more static power loss, a self-reinforcing cycle called thermal runaway. Another problem occurs when that leakage becomes comparable to the gate voltage meant to switch the transistor, leading to errant activation of transistors and faulty operation.

To avoid this, manufacturers began producing multicore chips (as you may have observed over the last few years). This is a valid approach, and it also led to the push for parallelized code. However, while there are a number of architectural issues above my head here, there is one important fact about building multicore instead of single-core systems. What is it?

 

The Problem

For a multicore system to work, a task has to be distributed to different cores and then gathered again for a result. This is a drastic simplification, but it works for the purpose of this argument. Say you have a program composed of 100 tasks that need to be accomplished, 40 of which can be parallelized and 60 of which can’t, and you run these tasks on a single-core processor that does 1 task per ‘tick’ (a generic unit of time). It will take you 100 ticks to finish the operation. Now, if you replace your single-core processor with a quad-core processor, what changes? Well, the 40 parallelizable tasks can be split across your four cores. That leaves you with 60 tasks that have to be done in sequence–so even though you might have 4 times the number of transistors in your system, it will still take you 70 ticks to finish the operation: 10 ticks (40 tasks / 4 cores) plus 60 ticks (one core handling the non-parallelized tasks).
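
As a quick sanity check of that arithmetic, here is a minimal Python sketch of the same tick counting (the task counts and the one-task-per-tick assumption are just the illustrative numbers from the paragraph above, and the function name is my own):

    def ticks(total_tasks, parallel_tasks, cores):
        """Ticks to finish when the parallelizable tasks are split across
        the cores and the remaining serial tasks run one per tick."""
        serial_tasks = total_tasks - parallel_tasks
        parallel_ticks = -(-parallel_tasks // cores)  # ceiling division
        return parallel_ticks + serial_tasks

    print(ticks(100, 40, 1))  # 100 ticks on a single core
    print(ticks(100, 40, 4))  # 70 ticks on a quad core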

This is a general result known as Amdahl’s Law, which states that the time T(n) an algorithm takes to finish when executed on n threads, with a fraction B of the algorithm that is strictly serial, is:

T(n)=T(1)(B+\frac{1}{n}(1-B))

[Figure: Amdahl’s Law at 50%, 75%, 90%, and 95% parallelizable code. Source: http://en.wikipedia.org/wiki/Amdahl’s_law#mediaviewer/File:AmdahlsLaw.svg]

As can be seen in the graph, even if your code is 95% parallelizable, as n approaches infinity (an infinite number of processors) you only get a 20x speedup…equivalent to just over 4 Moore’s Law doubling cycles (8-10 years) worth of gains.
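
To make that ceiling concrete, here is a small Python sketch of the speedup form of the formula above (the function name and the particular n values are mine; the 95% figure is just the best case shown in the graph):

    def speedup(n, parallel_fraction):
        """Speedup over a single thread, per Amdahl's Law."""
        b = 1.0 - parallel_fraction  # strictly serial fraction
        return 1.0 / (b + parallel_fraction / n)

    print(speedup(4, 0.40))      # ~1.43x: the 100-task example above (100 vs 70 ticks)
    print(speedup(10**6, 0.95))  # ~19.999x: a million cores barely approach the ceiling
    print(1.0 / (1.0 - 0.95))    # the hard limit as n goes to infinity: 20x

No amount of extra hardware gets past the serial fraction; the only way to raise the ceiling is to shrink B itself.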

This article isn’t meant to convince you that these issues won’t be solved. In fact, for what it’s worth, I’m strongly of the opinion that they will be: new computing architectures and substrates mean that we will likely resume some form of rapid growth soon (this may be influenced by a degree of hope, but there are certainly enough alternatives being explored that I find it somewhat likely). While it’s an interesting problem in its own right, I think it’s more useful as an example of how every technology forecasting law has associated theorems and roadblocks, and of how finding these is important to a forecast.

Associated Laws and Roadblocks

Forecasting laws have associated laws. That’s a pretty simple sentence with a lot of meaning, but what exactly is it saying? Exactly this: for every statement you make about a capability changing over time (transistor counts, laser capabilities, etc.), there are associated laws relating to associated capabilities. Dennard Scaling associates with Moore’s Law: it’s an observation that power density stays the same, meaning power requirements per transistor must be dropping, which is what allows Moore’s Law to continue. There are any number of these, and in some ways you might even consider different formulations of a forecasting law (depending on which forecasting method you’re using) to be very closely associated laws.

Every technology (that we know of) has roadblocks as well. Roadblocks are what I call ‘any obstacle to progress in the development of a technology’. There are a variety of types of these roadblocks, and they can impact forecasting accuracy (macro) or simply describe problems that need to be/will be overcome in the pursuit of development (micro). In the case of Amdahl’s Law, it follows from mathematical axioms and is thus what I would call an ‘Axiomatic Roadblock’. This associates with the impossibilities mentioned in “Possible, Probable, Personal”, specifically the axiomatic impossibility–indicating that the limitation is put in place due to mathematical reasons more than physical laws (a semantic distinction that dissolves if looked at closely enough, but useful for identification purposes).

While identifying the issues facing naive Moore’s Law forecasting is important, and I hope I’ve clarified things somewhat for my readers, it’s just as important to give a good example of how associated laws that might otherwise be passed over can lead to new limitations. I personally think these issues will be overcome and that Moore’s Law will continue (or will need to be reformulated if a different substrate has different research patterns associated with it). All the same, being able to identify when axiomatic and physical impossibilities and roadblocks will arise is absolutely necessary for assessing the validity of a forecast.