How to efficiently scale AI compute?
Chips that do better math, faster

More operations, fewer bits

SciSci Research is an accelerated-computing chip design company with a novel architecture for radically faster and more robust arithmetic, the kind needed to drastically scale AI compute.
SciSci has a novel instruction set architecture (ISA) for what it calls an Exact Processing Unit (EPU), an accelerator chip whose operations are based not on floating-point or fixed-point, but on a procedure from modern number theory called p-adic arithmetic.
This modern way of doing arithmetic will give the EPU operation speeds 15× faster than floating-point, unlock much deeper parallelization, and offer exact number representation (i.e., no rounding), enabling more numbers to be represented with fewer bits. Speed, parallelism, bit budgets: these are the ingredients for the future of AI compute scaling.
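To illustrate the "exact, no rounding" claim, here is a minimal sketch of finite-segment p-adic arithmetic (a Hensel-code-style encoding), assuming p = 5 and 8 digits. The names and parameters are illustrative only and say nothing about SciSci's actual EPU design:

```python
# Illustrative sketch: finite-segment p-adic (Hensel-code-style) encoding.
# Assumed parameters (not from SciSci): base P = 5, K = 8 digits.
P, K = 5, 8
M = P ** K  # all arithmetic is done modulo p^k

def to_padic(num, den):
    """Encode the rational num/den exactly, assuming den is coprime to p."""
    return num * pow(den, -1, M) % M

def digits(x):
    """Little-endian base-p digit expansion of an encoded value."""
    out = []
    for _ in range(K):
        out.append(x % P)
        x //= P
    return out

third = to_padic(1, 3)
print(digits(third))     # a fixed number of digits, no rounding error
print(third * 3 % M)     # multiplying back by 3 recovers exactly 1
```

In binary floating-point, 1/3 can only be stored approximately; in this encoding it occupies a fixed digit budget and round-trips exactly, which is the flavor of "exact representation with fewer bits" described above.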
For over 10 years, GPU FLOPS have scaled not because of Moore's law but thanks to more efficient number representations, namely lower-precision floating-point and integer (INT) formats, which sped up GPU arithmetic. That scaling has now reached 4-bit formats; there are no bits left to shave, and a new paradigm for arithmetic is desperately needed. SciSci has it.

