It was GDDR2 that was too hot and slow; DDR2, on the other hand, is a good budget solution and it actually consumes less power than GDDR3. GDDR2 has more in common with DDR1 than with DDR2, and in fact GDDR2 is older tech than plain DDR2, which is the chip of choice for budget GFX today; the exact same chips are used as main system memory on most recent AMD/Intel platforms. DDR3 is not the same as GDDR3. GDDR3 is a DDR2 chip modified for use as high-bandwidth texture buffer memory where latency is not an issue.
And btw, the 64bit/128bit/256bit/512bit memory interface on graphics cards is not a feature of a single memory chip, nor is it limited by the type of chip used (DDR2/GDDR3/etc.). In fact, you could build a 1024bit bus using chips of any given generation. Memory bus width is the chip count multiplied by the chip width.
For example, the X1900XT I had some time ago had 8 Samsung GDDR3 chips (K4J52324QC-BJ11). As you know, the X1900XT has a 256bit wide memory bus. The "32" in the part number is the width of the chip; chip width is basically the number of parallel cell arrays inside the chip, so chip width is the number of bits the chip can send or receive within a single clock cycle.
One "x32" chip forms a 32bit bus - 8 such chips form a 256bit wide bus together (8*32bit=256bit), as long as the GPU's memory controller has a dedicated bus for each of the chips.