The Reconfigurable Computing Paradox



Reconfigurable computing represents a fundamental paradigm shift away from the von Neumann machine paradigm to the anti-machine paradigm (also called the Xputer paradigm), which uses data counters instead of a program counter. Many research papers report massive speed-ups from software-to-configware migration (software-to-FPGA migration); see fig. 1. The reported speed-up factors range up to more than four orders of magnitude, and the reported reductions in electricity consumption reach almost four orders of magnitude. This is a paradox, since the clock frequency of FPGAs is substantially lower, and other technological parameters such as microchip layout area efficiency tend to lag massively behind those of classical microprocessors.

One main reason is reconfigurability overhead. For example, of each hundred transistors, maybe about 3 serve the application, whereas the other 97 are needed for routing and other reconfigurability features. (I only know the order of magnitude, not exact figures.) This means that a technology which is up to about 4 orders of magnitude worse delivers performance results which are up to about 4 orders of magnitude better, a combined difference of up to about 8 orders of magnitude. This is really a paradox: the Reconfigurable Computing Paradox! Also see "Computers are facing a seismic shift"
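The back-of-the-envelope arithmetic behind this claim can be summarized as follows (a sketch only: both factors of roughly 10^4 are the order-of-magnitude estimates quoted above, not measured values):

```latex
% Order-of-magnitude arithmetic for the Reconfigurable Computing Paradox.
% Both factors are rough estimates taken from the text above.
\[
\underbrace{\frac{T_{\text{software}}}{T_{\text{configware}}}}_{\text{observed speed-up}} \;\approx\; 10^{4},
\qquad
\underbrace{\frac{\eta_{\text{microprocessor}}}{\eta_{\text{FPGA}}}}_{\text{technology handicap}} \;\approx\; 10^{4}
\]
\[
\text{effective gain of the paradigm shift} \;\approx\; 10^{4} \times 10^{4} \;=\; 10^{8}
\]
```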

This paradox is due to a paradigm shift which avoids the von Neumann syndrome, caused by the memory-cycle-hungry instruction streams needed to cope with the huge overhead phenomena within software packages of up to astronomic dimensions. "Nathan's Law" says that software is a gas which completely fills all available storage space. Due to Patterson's Law the memory bandwidth gap grows by 50% per year and has reached much more than a factor of 1000.
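To illustrate how quickly such a gap compounds (a simple calculation assuming the 50% annual growth rate quoted above stays constant):

```latex
% Compound growth of the memory bandwidth gap, assuming a constant
% 50% increase per year as quoted above.
\[
\text{gap after } n \text{ years} = 1.5^{\,n},
\qquad
1.5^{17} \approx 985, \quad 1.5^{18} \approx 1478
\]
% i.e. 17 to 18 years of such growth already exceed a factor of 1000.
```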

The fundamental model of the data-stream-based anti-machine (also called Xputer) is the counterpart of the instruction-stream-based von Neumann machine paradigm. A non-heterogeneous, straightforward FPGA-based implementation of an application does not need any instruction streams at run time, so its operation does not suffer from the von Neumann syndrome. Also see "The Hard Ceiling". A conceptual sketch contrasting the two execution models follows below.
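The following C sketch is only a conceptual illustration of the difference, not an Xputer or FPGA implementation; the function names, the toy opcode set, and the element-wise addition example are invented for this illustration. The von Neumann variant fetches and decodes an instruction for every step via a program counter, while the anti-machine variant merely steps a data counter through memory and feeds a fixed, pre-configured operation.

```c
/* Conceptual sketch only: contrasts instruction-stream execution
 * (program counter) with data-stream execution (data counters feeding
 * a fixed datapath). All names and the toy opcode set are invented for
 * illustration; this is not an Xputer or FPGA implementation.         */
#include <stdio.h>

#define N 8

/* von Neumann style: a program counter fetches and decodes an
 * instruction from memory at run time for every single operation.     */
enum op { LOAD_A, LOAD_B, ADD, STORE };

static void run_von_neumann(const enum op *program, int prog_len,
                            const int *a, const int *b, int *out)
{
    int ra = 0, rb = 0;                       /* registers               */
    for (int i = 0; i < N; i++) {             /* one pass per data item  */
        for (int pc = 0; pc < prog_len; pc++) /* program counter         */
            switch (program[pc]) {            /* fetch + decode          */
            case LOAD_A: ra = a[i];      break;
            case LOAD_B: rb = b[i];      break;
            case ADD:    ra = ra + rb;   break;
            case STORE:  out[i] = ra;    break;
            }
    }
}

/* Anti-machine style: no instruction fetch at run time; a data counter
 * steps through memory while the pre-configured datapath repeats the
 * same fixed operation on every data item.                             */
static void run_anti_machine(const int *a, const int *b, int *out)
{
    for (int dc = 0; dc < N; dc++)            /* dc acts as data counter */
        out[dc] = a[dc] + b[dc];              /* fixed datapath          */
}

int main(void)
{
    const enum op program[] = { LOAD_A, LOAD_B, ADD, STORE };
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    int out_vn[N], out_am[N];

    run_von_neumann(program, 4, a, b, out_vn);
    run_anti_machine(a, b, out_am);

    for (int i = 0; i < N; i++)
        printf("%d %d\n", out_vn[i], out_am[i]);
    return 0;
}
```

The point of the contrast is that the second loop contains no instruction fetch or decode at run time; the "program" has been moved into the configuration of the datapath, which is what a straightforward FPGA implementation does in hardware.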

Reinvent Computing (also see the Reinvent Computing page). The first electrical computer ready for mass production, the Hollerith tabulator of 1884, was based on a data-stream paradigm, i.e. it was an anti-machine. The data stream came in as a stream of punched cards. Relative to the technology available at that time it was highly efficient, and it was only the size of two small kitchen refrigerators. Around 1946 computing was reinvented by a paradigm shift to instruction-stream-based computing. The first prototype was the ENIAC, an extremely inefficient, huge monster, although by then the vacuum tube and magnetic tape storage had been invented. This paradigm shift was the biggest mistake in the history of computing. Because of this massive inefficiency we are again forced to reinvent computing. We also have a second paradox: the FPL market paradox [10] (fig. 2). For more than 10 years the FPGA share of the semiconductor market has been only 1.5% or less.
We have to cope with two paradoxes (fig. 2).

[1] - [9]: see The Anti-Machine Page
[10] Are FPGAs Suffering from the Innovator’s Dilemma?
[11] C. Bobda: Introduction to Reconfigurable Computing: Architectures, Algorithms, and Applications; Springer, 2007
