Using Residue Arithmetic to Simplify VLSI Processor Arrays for Dynamic Programming
We introduce a technique for transforming dynamic programming algorithms into equivalent ones that require asymptotically less space and time under the logarithmic cost criterion. When mapped into silicon, the transformed algorithms yield VLSI processor arrays that can be significantly smaller and faster. We characterize the condition necessary for such savings, and illustrate the discussion with a general case of serial dynamic programming and a nonserial pattern-matching problem. Finally, we show that our technique does not apply to every instance of dynamic programming.
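To make the core idea concrete, the following is a minimal sketch (not taken from the paper) of how residue arithmetic can be used in a dynamic programming computation: each DP value is represented by its residues modulo small, pairwise-coprime moduli, so the recurrence is evaluated with narrow-word operations, and the full-width result is recovered at the end via the Chinese Remainder Theorem. The moduli and the sample recurrence (a Fibonacci-style serial DP) are illustrative assumptions.

```python
# Illustrative sketch: evaluating a serial DP recurrence in a
# residue number system (RNS). Each value is a tuple of residues
# modulo small pairwise-coprime moduli; additions are componentwise
# and narrow, and the integer is reconstructed only at the end.

MODULI = (13, 17, 19, 23)  # illustrative pairwise-coprime moduli


def to_rns(n):
    """Represent an integer as a tuple of residues."""
    return tuple(n % m for m in MODULI)


def rns_add(a, b):
    """Componentwise (carry-free) addition of two residue tuples."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))


def from_rns(residues):
    """Reconstruct the integer with the Chinese Remainder Theorem."""
    M = 1
    for m in MODULI:
        M *= m
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return total % M


def fib(n):
    """A simple serial DP recurrence, evaluated entirely in RNS.

    Valid as long as the true result stays below the product of
    the moduli (here 13 * 17 * 19 * 23 = 96577).
    """
    a, b = to_rns(0), to_rns(1)
    for _ in range(n):
        a, b = b, rns_add(a, b)
    return from_rns(a)
```

Because each residue channel is independent, the channels can be computed by separate narrow processing elements in parallel, which is the source of the area and time savings the abstract refers to.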