
Floadia Develops Memory Technology That Retains Ultra-high-precision Analog Data for Extended Periods

Floadia Corporation, headquartered in Kodaira-shi, Tokyo, has developed a prototype 7-bit-per-cell flash memory chip that can retain analog data for 10 years at 150 degrees Celsius, thanks to a newly devised memory cell structure and control method. With the existing memory cell structure, charge leakage caused significant characteristic drift and cell-to-cell variation, limiting data retention to only about 100 seconds.

Floadia will apply the memory technology to a chip that performs AI (artificial intelligence) inference with dramatically lower power consumption. The chip is based on an architecture called Computing in Memory (CiM), which stores neural network weights in non-volatile memory and executes a large number of multiply-accumulate operations in parallel by passing current through the memory array. CiM is attracting worldwide attention as an AI accelerator for edge computing because it avoids shuttling large amounts of data between memory and processor, consuming far less power than conventional accelerators that perform multiply-accumulate calculations on CPUs and GPUs.
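The in-memory multiply-accumulate principle described above can be sketched numerically. This is an illustrative model, not Floadia's actual circuit: weights are held as cell conductances, inputs are applied as word-line voltages, and each bit line sums the resulting currents, so one analog read performs a whole dot product.

```python
import numpy as np

# Sketch of analog Computing-in-Memory (CiM) multiply-accumulate.
# Each cell at row i, column j has conductance G[i, j] (the stored weight).
# Applying voltage V[i] to row i makes the cell pass current V[i] * G[i, j]
# (Ohm's law); the bit line for column j sums those currents
# (Kirchhoff's current law), yielding the dot product V . G[:, j].

rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
G = rng.uniform(0.0, 1e-6, size=(n_inputs, n_outputs))  # conductances (S)
V = rng.uniform(0.0, 0.5, size=n_inputs)                # input voltages (V)

# All bit-line currents at once: I_j = sum_i V_i * G_ij
bitline_currents = V @ G

# The same result computed digitally, one multiply-accumulate at a time
reference = np.array([sum(V[i] * G[i, j] for i in range(n_inputs))
                      for j in range(n_outputs)])

assert np.allclose(bitline_currents, reference)
```

The power advantage comes from the physics doing the arithmetic: every multiply and every add happens simultaneously inside the array, with no weight data ever read out to a separate processor.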

This memory technology is based on the SONOS-type flash memory that Floadia developed for integration into microcontrollers and other devices. Floadia made numerous innovations, such as optimizing the structure of the charge-trapping layer (the ONO film), to extend data retention when storing 7 bits per cell. Combining two cells stores up to 8 bits of neural network weight, and despite its small chip area the device achieves a multiply-accumulate performance of 300 TOPS/W, far exceeding that of existing AI accelerators.
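The article does not say how two 7-bit cells combine into one 8-bit weight. One common scheme in CiM designs, shown here purely as a hypothetical illustration, is differential encoding: a signed weight is stored as the difference between two non-negative cell values, so two 7-bit cells (0-127 each) cover a signed 8-bit range.

```python
# Hypothetical two-cell differential weight encoding (an assumption for
# illustration; Floadia's actual scheme is not described in the article).
# A signed weight w in [-127, 127] is split across a "positive" and a
# "negative" 7-bit cell so that w = pos - neg.

def encode_differential(w: int) -> tuple[int, int]:
    """Split a signed weight into two non-negative 7-bit cell values."""
    if not -127 <= w <= 127:
        raise ValueError("weight outside signed differential range")
    return (w, 0) if w >= 0 else (0, -w)

def decode_differential(pos: int, neg: int) -> int:
    """Recover the signed weight from the two cell values."""
    return pos - neg

for w in (-127, -5, 0, 42, 127):
    pos, neg = encode_differential(w)
    assert 0 <= pos <= 127 and 0 <= neg <= 127
    assert decode_differential(pos, neg) == w
```

In an analog array, the subtraction falls out naturally: the two cells drive complementary bit lines whose currents are differenced, so signed weights cost no extra compute steps.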
