—especially in their notoriously complex chip design practice—but Intel managed by taking it a step further. Some quotes from a TomsHardware article about Intel’s Tera-scale processor technology, along with my commentaries on innovation:
Intel used mostly off-the-shelf logic components for its prototype. This means that…[every component]…was either used exactly as it had already been developed, or with the barest minimum of customized changes. This technology re-use enabled Intel to take a research project from drawing board to prototype in less than a year.
Generally, innovation doesn’t happen with things you don’t already have in your possession (be they material or mental), so stop looking and start working.
One of the most powerful features of Tera-scale is [that]…it does not really matter what compute engines are inside each core. In fact, when Intel was designing the overall system, the actual contents of the compute cores were literally of almost no importance. First and foremost was the scalable bus architecture, which allowed any one of the cores to communicate directly with any of the others. Bautista called this a “one to any” communication method.
This is very important. The nature of the building blocks/elements of an innovation actually matters less than the relationship between them. Intel’s Tera-scale technology isn’t so much about the cores themselves as it is about the communication between the cores. This is why Link En Fuego talks about anything, “from Dada to dabberlocks, Dacron to ducksauceology.”
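The principle can be sketched in code. The following is a hypothetical illustration, not Intel’s actual design: the class names (`Core`, `Interconnect`), the `send`/`receive` methods, and the mailbox model are all my own stand-ins for the “one to any” idea, where the fabric between cores matters and the cores’ contents don’t.

```python
class Core:
    """A placeholder compute engine; its internals don't matter to the fabric."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.inbox = []  # messages delivered by the interconnect

    def receive(self, sender_id, payload):
        self.inbox.append((sender_id, payload))


class Interconnect:
    """A generic 'one to any' fabric: any core can reach any other directly."""
    def __init__(self, num_cores):
        # The fabric only knows cores by id; what each core computes is irrelevant.
        self.cores = {i: Core(i) for i in range(num_cores)}

    def send(self, src, dst, payload):
        self.cores[dst].receive(src, payload)


fabric = Interconnect(80)        # could just as easily be 50 or 200
fabric.send(0, 79, "hello")      # core 0 talks directly to core 79
print(fabric.cores[79].inbox)    # [(0, 'hello')]
```

Notice that `Core` could be swapped for anything with a `receive` method; the design lives entirely in `Interconnect`.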
The prototype itself used 80 homogeneous cores. We were told it could have used any number, and they did not have to be homogeneous. The reason Intel chose 80 cores was because the design specs allowed for a certain number of transistors. And basically with the memory/logic tradeoff they had in mind, the company settled on the 80-core number because it provided enough memory and compute cores to prove the new idea works.
You don’t necessarily need to go all out proving that your innovation rocks the house. Why? Well, besides the obvious reason of limited money/resources, you’ll see why in a second.
It could have just as easily been 200 cores, 50 cores, or any other number because of the on-board communication system, Bautista said.
Why? Because if your innovation indeed rocks, it’ll be good enough to be scalable. Just pick a good, manageable prototype size and work on it quickly, because you don’t ever want to lose the initial mojo/momentum/enthusiasm. Intel announced this technology in March 2006 and got a working prototype in less than a year. So if you’re going to spend your money and brain power on anything, spend it on establishing a solid ground of relationships on which the entire innovation can thrive.
Bautista stressed that this generic routing system (NOTE: it was called the “‘one to any’ communication method” and “on-board communication system” in the quotes above) is the highlight of Tera-scale. It allows anything within a node to communicate with anything else on chip…[In fact, the cores] can be of any specialized design. As far as the design goes, each node could be anything…The cookie-cutter nature of this design allows flexibility in compute abilities that we are not used to.
Ultimately, because the extent to which this technology can be applied is so far-reaching, it can be based on nothing (not even a standard “compute core”) but its “one to any” relationship principle.
How elegant is that?
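To make the “each node could be anything” point concrete, here is a hypothetical sketch (again, my own names, not Intel’s): heterogeneous compute engines that share nothing but a common `compute` interface, plugged into one fabric that never looks inside them.

```python
class AdderCore:
    """One kind of compute engine."""
    def compute(self, data):
        return sum(data)


class MultiplierCore:
    """A completely different engine behind the same interface."""
    def compute(self, data):
        result = 1
        for x in data:
            result *= x
        return result


class Fabric:
    """Cookie-cutter design: any number of nodes, any mix of engines."""
    def __init__(self, engines):
        self.nodes = dict(enumerate(engines))

    def dispatch(self, node_id, data):
        # The fabric depends only on the interface, never the engine's internals.
        return self.nodes[node_id].compute(data)


fabric = Fabric([AdderCore(), MultiplierCore()])
print(fabric.dispatch(0, [2, 3, 4]))  # 9
print(fabric.dispatch(1, [2, 3, 4]))  # 24
```

Swapping in a new engine type, or changing the node count, requires no change to `Fabric` at all, which is the flexibility the article is pointing at.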