I got to thinking again how marvelous these little graphing calculators are. You know, the ones that can add 1/3 to 2/3 and get exactly 1 instead of 0.999999999~. They can do that because they handle such numbers as exact fractions, a numerator over a denominator, rather than as binary floating point. So why haven't Intel and AMD learned from Texas Instruments instead of relying on those dodgy, general-purpose floating-point units? They're fine for games, graphics, etc., but when precision is more important than performance, why not use a specialized fractional unit? Until then, my $150, five-year-old TI-89 is still better at math (specifically, division) than all my $2000+ computers.
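For what it's worth, the calculator's trick is generally software rational arithmetic (a CAS keeping exact numerator/denominator pairs) rather than special silicon, so the same exactness is already available on any PC. A minimal sketch using Python's standard `fractions` module:

```python
from fractions import Fraction

# Binary floating point has to round: 0.1 + 0.2 is not exactly 0.3
print(0.1 + 0.2)          # 0.30000000000000004

# Rational arithmetic stores an exact numerator and denominator,
# much like a calculator's exact mode, so no rounding ever happens:
a = Fraction(1, 3)
b = Fraction(2, 3)
print(a + b)              # 1
print(a + b == 1)         # True
```

The trade-off is speed: each operation may need a GCD reduction and arbitrary-size integers, which is part of why general-purpose CPUs ship floating-point hardware instead.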