In this presentation Intel gives us a rough idea of what's going on in its research labs for the day-after-tomorrow's technologies.
Since this is all ongoing research, the final products in a few years may differ from what is shown here.
Hmm... what if you put a big computer on a spaceship and accelerated it to near light speed? According to Einstein, time on the ship would pass much more slowly than on Earth, so if you left the computer at home and took the trip yourself, the results would be waiting for you when you got back, the computation practically instant from your point of view. But you wanted to read about what Intel is researching, so back on topic.
Tera-scale, as the name suggests, is a term that covers all technologies related to computing at a trillion operations per second: 1,000,000,000,000.
As the last few years have shown, you can't just ramp up clock speeds indefinitely, and certainly not into the terahertz range. That's why Intel is spending a ton of money (out of a total research budget of $5B a year) on alternative ways to get there.
If Moore's Law, which says that the number of transistors doubles every 24 months, holds (and it still does today), then we will have 32 billion transistors on a CPU in 2010 (versus 291 million today on a Core 2). What this slide also reveals is that the next process sizes after 45 nm are probably going to be 32 nm and 22 nm.
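The doubling rule is easy to turn into a formula: count(t) = count(t0) * 2^((t - t0) / 2) with t in years. Here is a minimal sketch; the 291-million Core 2 figure is from the article, while the 2006 baseline year and the function name are my own illustrative assumptions:

```python
def projected_transistors(base_count, base_year, target_year, doubling_years=2):
    """Project a transistor count under Moore's Law:
    the count doubles every `doubling_years` years."""
    return base_count * 2 ** ((target_year - base_year) / doubling_years)

# Core 2 baseline from the article: 291 million transistors.
# Assuming 2006 (the Core 2 launch year) as the baseline, for illustration.
core2 = 291e6
print(f"2008: {projected_transistors(core2, 2006, 2008):,.0f}")  # one doubling, ~582 million
print(f"2010: {projected_transistors(core2, 2006, 2010):,.0f}")  # two doublings, ~1.16 billion
```

Note that naive doubling from a single desktop CPU's count lands well below the slide's 32-billion figure for 2010, so that projection presumably assumes a different baseline (total cache, larger dies, or a different class of chip) rather than a straight extrapolation from one Core 2.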
Tera-scale research is not only about building a CPU that can do a trillion operations per second; it also tackles problems like memory, communication and algorithms.
A processor 5-10 years from now will be very different from what we are using today. The cores will be much simpler and far more energy-efficient in design, and since the whole concept is built around multi-threaded computing, the cores will be optimized for that. Please also note that each of these small cores still uses the x86 architecture (IA in the slides, and I asked to confirm). When asked why not research such hardware for other platforms like ARM, the reply was: "Maybe think the other way round? We could use IA not only in PCs."

What you can also see is that there will be different kinds of caches: a local one that only a single core can access, and another that is shared by multiple cores. It does not make much sense to share one single cache among all cores, because the differing latencies might hurt performance. A multi-level cache hierarchy sounds very plausible though.
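The latency argument can be made concrete with the standard average-memory-access-time (AMAT) formula. All hit rates and cycle counts below are illustrative assumptions on my part, not Intel figures; the sketch just shows why a small fast per-core cache in front of a shared cache tends to beat one big shared cache for every access:

```python
def amat(l1_hit_cycles, l1_miss_rate, l2_hit_cycles, l2_miss_rate, mem_cycles):
    """Average memory access time for a two-level cache hierarchy:
    AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * memory time)."""
    return l1_hit_cycles + l1_miss_rate * (l2_hit_cycles + l2_miss_rate * mem_cycles)

# Illustrative numbers: a small, fast per-core L1 backed by a shared L2,
# versus a hypothetical single big cache shared by all cores (higher latency
# for every access, even though its larger size gives a better hit rate).
per_core = amat(l1_hit_cycles=3, l1_miss_rate=0.05,
                l2_hit_cycles=20, l2_miss_rate=0.10, mem_cycles=200)
one_big = amat(l1_hit_cycles=25, l1_miss_rate=0.02,
               l2_hit_cycles=0, l2_miss_rate=1.0, mem_cycles=200)
print(f"per-core L1 + shared L2: {per_core:.1f} cycles")  # 5.0
print(f"single shared cache:     {one_big:.1f} cycles")   # 29.0
```

With these made-up parameters the hierarchy averages 5 cycles per access against 29 for the single shared cache, because the common case (an L1 hit) stays cheap.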