10 Rules of Technology Forecasting

A Liberal Decalogue

Bertrand Russell, the famous philosopher and mathematician, once shared what he called a ‘Liberal Decalogue’ at the end of an article titled ‘The Best Response to Fanaticism: Liberalism’. Modeled after the Ten Commandments, it embodied what he thought might represent the commandments a teacher would wish to propagate. As originally listed, the decalogue reads:

  1. Do not feel absolutely certain of anything.
  2. Do not think it worth while to proceed by concealing evidence, for the evidence is sure to come to light.
  3. Never try to discourage thinking for you are sure to succeed.
  4. When you meet with opposition, even if it should be from your husband or your children, endeavor to overcome it by argument and not by authority, for a victory dependent upon authority is unreal and illusory.
  5. Have no respect for the authority of others, for there are always contrary authorities to be found.
  6. Do not use power to suppress opinions you think pernicious, for if you do the opinions will suppress you.
  7. Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric.
  8. Find more pleasure in intelligent dissent than in passive agreement, for, if you value intelligence as you should, the former implies a deeper agreement than the latter.
  9. Be scrupulously truthful, even if the truth is inconvenient, for it is more inconvenient when you try to conceal it.
  10. Do not feel envious of the happiness of those who live in a fool’s paradise, for only a fool will think that it is happiness.

 

Technology Forecasting Rules

This resembles my own efforts last year to come up with a list of commandments for forecasting and futurism, meant to avoid the influence of politics, wishful thinking, and bias (though some is inevitable, obviously). Looking at Russell’s list, it’s quite fantastic, and worth modeling off of. How can we change it to more closely match the virtues we wish to encourage in our own field?

  1. Never make a claim you cannot defend.
  2. Be honest with yourself and others in the strength of your predictions.
  3. Never discard a possibility without investigating it first.
  4. Discard all investigations into non-falsifiables.
  5. When you meet with forecasts that don’t match yours, understand why and what they’re rooted in.
  6. Do not assume fame means accuracy in forecast.
  7. Do not fear disagreement, because axioms vary from micro-thede to micro-thede.
  8. Do not fear to make radical statements, if you can defend them.
  9. Do not conflate your dreams of the future with the likelihood of the future.
  10. Do not let your politics influence your forecasts. Every political ideology under the sun has claimed that emerging technologies will advance its cause, and only so many of them can be right.

 

With minimal editing, I feel this still holds tightly to much of the intent of the original, and provides a nice decalogue for technology forecasting.

Dennard, Amdahl, and Moore: Identifying Limitations to Forecasting Laws

Will Moore’s Law Hold Up?

As the hallmark law of technology forecasting (and often the only case people are familiar with), a debate rages around Moore’s Law and its validity. Will it hold true? Will it fail? Will it plateau and then see breakthroughs? The fact of the matter is that all of these are true statements, depending on exactly what metric you measure. The precise metric, and how to choose one, will be the focus of a later post; for now I’d like to address what most people mean when they say Moore’s Law, and what they expect: computers seeing drastic gains in raw speed at the processor level (disregarding improvements in other parts of the system, like the speed gains from solid-state drives).

If you go by that metric, Moore’s Law has failed to keep up. There are no two ways about it. I’m not saying the sky is falling, and I’m certainly not saying that this won’t change. All I’m saying is that, for now, the raw speed improvements in processors have failed to keep up. Why is that?

Well, there’s a corollary of Moore’s Law called ‘Dennard Scaling’. Simply put, Dennard Scaling states that as transistors get smaller, their power density stays constant: the power drawn by each transistor falls in proportion to its area. This means that if you cut the linear size of a transistor in half, its area shrinks by a factor of 4, and so does its power draw, leaving the power density unchanged. If this weren’t the case, 3 Moore’s Law doubling cycles (i.e. an 8x increase in the number of transistors in a given area) would mean an 8x higher power density.
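As a quick sanity check on that arithmetic, here’s a minimal sketch in Python, using made-up unit values (the specific numbers are illustrative, not real transistor figures):

```python
# Dennard scaling, illustrated: halving the linear size of a transistor
# quarters both its area and its power draw, so the power per unit area
# (power density) stays constant.
def power_density(power_per_transistor, area_per_transistor):
    return power_per_transistor / area_per_transistor

# Baseline transistor: 1.0 unit of power spread over 1.0 unit of area.
base = power_density(1.0, 1.0)

# After one scaling step: linear size halved, so area drops 4x and,
# under Dennard scaling, power per transistor drops 4x too.
scaled = power_density(1.0 / 4, 1.0 / 4)

print(base, scaled)  # 1.0 1.0 — density unchanged
```

If power per transistor did not fall with area, `scaled` would come out 4x higher than `base`, and three doubling cycles would multiply that gap to 8x.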

Dennard Scaling is what’s broken down. More details are explained here, but the gist of it is that the smaller transistors get, the more static power is lost as leakage. The more static power loss there is, the more the chip heats up, leading to even more static power loss, a self-reinforcing cycle called thermal runaway. Another problem occurs when leakage becomes significant relative to the gate voltage, leading to errant activation of transistors, meaning faulty operation.

To avoid this, manufacturers began producing multicore chips (as you may have observed in the last few years). This is a valid approach, and it also led to the push toward parallelized code. However, while there are a number of architectural issues above my head here, there is one important fact about building a multicore instead of a single-core system. What is it?

 

The Problem

For a multicore system to work, a task has to be distributed to different cores and then gathered again for a result. This is a drastic simplification, but it works for the purposes of this argument. Say you have a program comprised of 100 tasks that need to be accomplished, 40 of which can be parallelized and 60 of which can’t, and you run these tasks on a single-core processor that does 1 task per ‘tick’ (a generic unit of time). It will take you 100 ticks to finish the operation. Now, if you replace your single-core processor with a quad-core processor, what changes? The 40 parallelizable tasks can be split across your four cores, but the remaining 60 tasks still have to be done in sequence. So even though you might have 4 times the number of transistors in your system, it will still take you 70 ticks to finish the operation: 10 ticks (40 tasks / 4 cores) plus 60 ticks (one core handling the non-parallelizable tasks).
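The worked example above can be sketched in a few lines of Python. This is a toy model (real schedulers and cache effects are far messier), but it reproduces the 100-tick and 70-tick figures:

```python
# Toy model of the example: tasks take 1 tick each, parallel tasks are
# split evenly across cores, serial tasks run one after another on a
# single core.
def ticks(total_tasks, parallel_tasks, cores):
    serial_tasks = total_tasks - parallel_tasks
    return parallel_tasks / cores + serial_tasks

print(ticks(100, 40, 1))  # 100.0 ticks on a single core
print(ticks(100, 40, 4))  # 70.0 ticks on a quad core
```

Notice that adding cores only ever shrinks the first term; the 60 serial ticks are untouchable, which is exactly what Amdahl’s Law formalizes.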

This is a general law called Amdahl’s Law. Amdahl’s Law states that the time T(n) an algorithm takes to finish when executed on n threads of execution, with a fraction B of the algorithm that is strictly serial, corresponds to:

T(n)=T(1)(B+\frac{1}{n}(1-B))


Amdahl’s law at 50%, 75%, 90%, and 95% parallelizable code. Source: http://en.wikipedia.org/wiki/Amdahl’s_law#mediaviewer/File:AmdahlsLaw.svg

As can be seen in the graph, even if your code is 95% parallelizable, as n approaches infinity (an infinite number of processors) you only get a 20x speedup, or just over 4 Moore’s Law cycles (8-10 years) worth of improvement.
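That 20x ceiling falls straight out of the formula above. Here’s a small Python sketch (B is the serial fraction, so 95% parallelizable code means B = 0.05):

```python
# Amdahl's Law: T(n) = T(1) * (B + (1 - B)/n), so the speedup relative
# to a single thread is 1 / (B + (1 - B)/n).
def speedup(B, n):
    return 1.0 / (B + (1.0 - B) / n)

# 95% parallelizable code: even a million cores can't beat 1/B = 20x.
print(round(speedup(0.05, 4), 2))          # 3.48
print(round(speedup(0.05, 1_000_000), 2))  # 20.0
```

As n grows, the (1 - B)/n term vanishes and the speedup converges to 1/B, which is where the hard 20x limit for B = 0.05 comes from.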

This article isn’t meant to convince you that these issues won’t be solved. In fact, for what it’s worth, I’m strongly of the opinion that they will be: new computing architectures and substrates mean that we will likely resume some form of rapid growth soon (this may be influenced by a degree of hope, but there are certainly enough alternatives being explored that I find it somewhat likely). While it’s an interesting problem in its own right, I think it’s more useful as an example of how every technology forecasting law has associated theorems and roadblocks, and of why finding these is important to a forecast.

Associated Laws and Roadblocks

Forecasting laws have associated laws. That’s a simple sentence with a lot of meaning, but what exactly is it saying? Exactly this: for every statement you make about a capability changing over time (transistors, laser capabilities, etc.), there are associated laws relating to associated capabilities. Dennard Scaling associates with Moore’s Law: it’s the observation that power density stays the same, meaning power requirements per transistor must be dropping, allowing Moore’s Law to continue. There are any number of these, and in some ways you might even consider multiple versions of a forecasting law to be very closely associated laws (differing, say, in which forecasting method you’re using).

Every technology (that we know of) has roadblocks as well. A roadblock is what I call ‘any obstacle to progress in the development of a technology’. There are a variety of types of roadblocks, and they can impact forecasting accuracy (macro) or simply describe problems that need to be, and will be, overcome in the pursuit of development (micro). Amdahl’s Law follows from mathematical axioms and is thus what I would call an ‘Axiomatic Roadblock’. This associates with the impossibilities mentioned in “Possible, Probable, Personal”, specifically the axiomatic impossibility, indicating that the limitation is in place for mathematical reasons more than physical laws (a semantic distinction that dissolves if looked at closely enough, but useful for identification purposes).

While identifying the issues facing naive Moore’s Law forecasting is important, and I hope I’ve clarified things somewhat for my readers, this case is just as valuable as an example of how easily-overlooked associated laws can lead to new limitations. I personally think that the issues will be overcome, and that Moore’s Law will continue (or will need to be reformulated, if a different substrate has different research patterns associated with it). All the same, being able to identify when axiomatic and physical impossibilities and roadblocks will arise is absolutely necessary for judging the validity of a forecast.

Feature Demo: Emerging Technology Articles Collection


As I mentioned in my introduction, I spend a good portion of my time sorting through science news and emerging technology articles. I’m working on a way for you to easily sort through my recent readings on these topics, with a synopsis of my views attached, so you can easily catch up on what different technologies mean, how they work, and where they might go.

 

In the meantime, however, I’ve put together a quick demo showing everything that makes it into my favorites list, sorted out from all the articles I see every day. There’s no explicit tagging on this yet, but hopefully there will be in the future. Feel free to peruse and enjoy! Feedback is welcome on how many articles per page I should display. I’m also going to try to hack in some CSS to format things so they’re a bit easier to read.

 


Global economic losses from cyclones linger for decades, study finds

Around the world, economic losses due to hurricanes continue for decades after disastrous storms strike, and the losses are not alleviated by spending on reconstruction and may climb with storms that are intensified by climate change.
Learn more

The perfect atom sandwich requires an extra layer

(Phys.org) —Like the perfect sandwich, a perfectly engineered thin film for electronics requires not only the right ingredients, but also just the right thickness of each ingredient in the desired order, down to individual layers of atoms.
Learn more

Study links GI symptoms and autism in children

(Medical Xpress)—Five-year-old Veer Patel was diagnosed with autism spectrum disorder (ASD) in October 2010. Typical of children "on the spectrum," he manages best with a rigid, unchanging daily routine.
Learn more

Twin hearing study helps discover gene that influences hearing ability

The largest ever genome wide association study on hearing ability has identified the salt-inducible kinase 3 (SIK3) gene as a key influencer in how well we can hear, particularly at high frequencies.
Learn more

Researcher creates bioinspired and biofunctional materials for widely diverse applications

In one project, Brad Olsen's lab seeks to engineer soaps that can be sprayed onto a toxic chemical release and not only wash off the chemical, but detoxify it.
Learn more

Racing game proves effective in teaching scientific reasoning

An online game that has students race through a course and learn about scientific argumentation during pit stops has proven effective at a crucial time in American education.
Learn more

Serotonin receptor structure revealed

The structure of a serotonin receptor has been completely deciphered for the first time using crystallography.
Learn more

Is cosmic radiation the dawn of new physics or statistical slip-up?

Recent observations suggest that there is something not quite right with our view of our universe – that something is skewing our view of the oldest radiation arriving at our telescopes.
Learn more

Common variation genes behind the risk of autism

A number of relatively common gene variations combined may increase the risk of autism. These are the findings of a new study from Swedish and American researchers published in Nature Genetics.
Learn more

Cheap and compact medical testing

Harvard researchers have created an inexpensive detector that can be used by health care workers in the world’s poorest areas to monitor diabetes, detect malaria, discover environmental pollutants, and perform tests that now are done by machines costing tens of thousands of dollars.
Learn more


Possible, Probable, Personal: Arguing against Castles in the Sky

Introduction:

 

I labeled this article with the heading ‘Possible, Probable, Personal’ because I think a lot of failures in qualitative forecasting, and in putting boundaries on quantitative forecasting, result from an inability to differentiate which category new technologies and forecasts about technologies fall into. Take the flying car mentioned in the last post: people who were enthusiastic about it jumped straight from it being possible to it being personal, skipping the intermediary category of ‘pragmatically improbable’.

It is my hope that this framework, or one that evolves from it, will help people understand why some technologies take a long time to make it to market, some are adopted immediately, and some never see the light of day at all (barring other interests, which are beyond the scope of this blog).

 

Is a technology impossible?


Flying cities–depending on how you count them, improbable or impossible. Credit to http://solartistic.deviantart.com/art/Laputa-The-Flying-City-385811018

The first question to ask when evaluating the future of a new technology or scientific discovery is simply whether it’s possible. Rather than establish all the different ways something could be determined to be possible, we can establish the ways technologies can be easily determined to be impossible. There may be specifics of certain scientific and technological fields that I don’t cover here, but I believe these are the major ones.

  • Axiomatic impossibility: This is a scientific or technological discovery that violates the absolute most fundamental principles that we understand about the universe. This might include things like the values attached to fundamental forces, entropy, or the fact that there are no integers between 3 and 4.
  • Physical impossibility: This is a scientific or technological discovery that is completely ungrounded in what we currently understand about the universe, but that doesn’t violate an axiomatic statement. This might include FTL travel, vacuum energy systems, anti-gravity, etc.
  • Conditional impossibility: This is a scientific or technological discovery that is impossible only in a relational sense; in other words, one technology is described as requiring higher capabilities than a different technology over which it has never been shown to have an advantage.

Now, as with both of the previous articles in this series, none of these are absolute statements. As someone who studied physics in my undergraduate years and still follows discoveries with the eye of a keen hobbyist, I’m well aware that there are still a number of interesting inconsistencies. All the same, just because we don’t know the answer doesn’t mean we can arbitrarily declare some result likely; an outcome you can’t even comprehend or guess at is just as probable, which is to say the choice is completely made up. It’s even worse in cases like FTL, where relativity is one of the most thoroughly confirmed results in all of physics, and no amount of wishing will get us around that. Choosing one outcome over another for no reason other than that you like what it might mean is intellectually dishonest.

 

Is a technology improbable?

This question is much harder than that of a technology’s impossibility, and rightly so, as it relies significantly more on opinion than anything else. Well, perhaps not opinion, but arguments are likely to be driven by opinion, which leads to cherry-picked facts. All the same, we can still attempt to break it down further.

  • Theoretically improbable: the strictest form of the term, a theoretically improbable technology is unlikely to be seen even in a laboratory or purely research sense, due to some restriction on the part of funding or even ethics. An example of the first would be experimenting with large equipment made out of rare earth metals (an arbitrary example; I do not know, at the time of this writing, whether there’s any point to making large objects out of rare earth metals), and an example of the second might include human cloning, or any number of experiments likely to leave their subjects maimed, dead, or out of their minds.
  • Pragmatically improbable: while not burdened by a hard cut-off like theoretically improbable technologies, a pragmatically improbable technology is one that could theoretically be constructed, but is unlikely to be implemented on any larger scale. This relates back to the discussion of flying cars in the previous post; I’d consider flying cars to fall into this category. Notably, not even the military (which is well known for spending significant quantities of money on devices that wouldn’t necessarily be worth it outside that context) has used ‘flying cars’, even though technologies that might be considered wackier, such as the Osprey, have been the focus of significant development effort because they were pragmatic.
  • Individually improbable: the last filter establishes that while a technology may see some form of large-scale use (on an industrial, military, organizational, or government level, i.e. supporting a large number of individuals, or requiring the support of a large group for one individual’s use on behalf of that organization), it is unlikely to ever reach the hands of one person for their own sake, either via purchase or as a piece of equipment used on their own behalf rather than on behalf of an organization.

Much of this blog will be about discussing the probability/improbability of various technologies and their implementations, though that specific discussion is likely to be far in the future considering the material yet to be discussed.

So, we’ve established impossible and improbable categories of technology. Many technologies will sit in one of these categories forever (in the case of impossible technologies) or for a very long time (in the case of improbable technologies). As our capacities advance, though, the scope of what is pragmatic is pushed back in some cases (transistors), though not all (flying cars), and things move to ‘personal’.

 

Is a technology personal?

So if a technology isn’t impossible, and it isn’t improbable, even on an individual scale, we can say that it may become a personal technology. In my opinion, technologies that are individually improbable (i.e. the loosest class of improbable technology implementation) are the most likely to eventually become personal, as they are often limited simply by scaling or development (mainframes, genetic sequencing, etc.).

Personal technologies are not necessarily technologies for private use. They may still be restricted to certain organizations; an example might be a certain firearm. The distinguishing feature is that they are used by an individual on behalf of themselves (and possibly additional individuals as a side effect, but not as the point). Exoskeletons in warfare might go from ‘individually improbable’ to ‘personal’ when they become standard issue, as opposed to being issued to squads (which is itself a forecast, though one I hope to cover in depth eventually).

 

Using this Framework

To conclude, this gives us a rough framework in which we can place technologies to evaluate their likelihood, at least as a very high-level sketch. Axiomatic, physical, and conditional impossibilities can be examined first. If none of those prevent a technology, then theoretical, pragmatic, and individual restrictions on implementation can be examined. If the forecasted technology isn’t restricted for any of those reasons, then it likely is (or will be) a personal technology.
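The evaluation order above can be sketched as a toy decision procedure. The category names come from this article; the yes/no flags are hypothetical stand-ins for the real arguments an evaluation would require:

```python
# A minimal sketch of the Possible/Probable/Personal evaluation order:
# check the impossibility filters first, then the improbability filters,
# and only if a technology clears both do we call it (potentially) personal.
def classify(impossibilities, improbabilities):
    # impossibilities: subset of {'axiomatic', 'physical', 'conditional'}
    # improbabilities: subset of {'theoretical', 'pragmatic', 'individual'}
    for kind in ("axiomatic", "physical", "conditional"):
        if kind in impossibilities:
            return f"impossible ({kind})"
    for kind in ("theoretical", "pragmatic", "individual"):
        if kind in improbabilities:
            return f"improbable ({kind})"
    return "personal (or likely to become so)"

print(classify({"physical"}, set()))   # FTL travel: impossible (physical)
print(classify(set(), {"pragmatic"}))  # flying cars: improbable (pragmatic)
print(classify(set(), set()))          # personal (or likely to become so)
```

The ordering matters: an impossibility verdict short-circuits any discussion of probability, which mirrors how the framework is meant to be applied.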