UK AI-chip designer Graphcore is building an "ultra-intelligence" AI computer slated for release in 2024. The company claims the machine will exceed the parametric capacity of the human brain.
Graphcore has dubbed the ultra-intelligence computer Good, after Jack Good, the mathematician and early computing pioneer who first described an "ultraintelligent machine".
The company is developing next-generation IPU technology to power Good, which is expected to cost about USD 120 million apiece. The machine will deliver over 10 exaflops of AI floating-point compute and up to 4 petabytes of memory with over 10 petabytes/second of bandwidth. It will support AI models of up to 500 trillion parameters and is fully supported by the company's Poplar SDK.
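Those headline figures can be sanity-checked with simple arithmetic. In the sketch below, the memory-per-parameter division uses only the claimed specs; the 6-FLOPs-per-parameter-per-token figure is an outside rule of thumb for transformer training, not anything Graphcore has stated:

```python
# Back-of-envelope check on the claimed Good specs.
memory_bytes = 4e15        # 4 petabytes of memory (claimed)
params = 500e12            # 500 trillion parameters (claimed)
compute_flops = 10e18      # 10 exaflops of AI compute (claimed)

bytes_per_param = memory_bytes / params
print(f"{bytes_per_param:.0f} bytes of memory per parameter")  # 8 bytes

# Rough training throughput, assuming ~6 FLOPs per parameter per
# token (a common transformer estimate, not a Graphcore figure).
flops_per_token = 6 * params
tokens_per_sec = compute_flops / flops_per_token
print(f"~{tokens_per_sec:,.0f} tokens/second at peak")
```

Eight bytes per parameter is notable: it is just enough for a 32-bit weight plus a 32-bit optimizer term, which suggests the 500-trillion-parameter claim is memory-bound rather than compute-bound.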
Earlier, Graphcore announced updated models of its IPU-POD multi-processor computers built on the Bow chip. The company claimed the new models are five times faster than comparable NVIDIA DGX machines at half the price.
Graphcore is leveraging Wafer-on-Wafer 3D stacking technology, developed with TSMC, to deliver higher bandwidth between silicon dies. As a result, Bow can speed up neural-network training by up to 40 percent while using 16 percent less energy than the previous generation.
The Bow Pod256 delivers more than 89 petaFLOPS of AI compute, and the superscale Bow Pod1024 produces 350 petaFLOPS. Bow Pods deliver superior performance at scale across a wide range of AI applications, from GPT and BERT for natural language processing to EfficientNet and ResNet for computer vision to graph neural networks, the company claimed.
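The Bow claims combine into two derived figures worth noting. A minimal sketch, using only the company's claimed numbers (the assumption that a Pod1024 holds four times the IPUs of a Pod256 follows from the product naming):

```python
# Sanity arithmetic on Graphcore's Bow claims (inputs are the
# company's claimed numbers, not independent measurements).
speedup = 1.40           # up to 40% faster training vs prior generation
energy_ratio = 0.84      # same job at 16% less energy
work_per_joule = 1 / energy_ratio
print(f"~{work_per_joule:.2f}x work per joule")  # ~1.19x

pod256_pflops = 89.0     # claimed Bow Pod256 AI compute
pod1024_pflops = 350.0   # claimed Bow Pod1024 AI compute (4x the IPUs)
scaling_eff = pod1024_pflops / (4 * pod256_pflops)
print(f"~{scaling_eff:.1%} scaling efficiency at 4x scale")  # ~98.3%
```

Near-linear scaling (about 98 percent) from Pod256 to Pod1024 is the more striking of the two figures, since multi-node training efficiency usually degrades well below that.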
Microsoft has built a supercomputer for OpenAI to train huge AI models. The move is seen as a first step toward making such training infrastructure available as a platform. The supercomputer has over 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.
NVIDIA’s Selene is the seventh-fastest supercomputer in the world. Built in three months, it is the fastest industrial system in the US and the second-most energy-efficient system ever.
Meanwhile, Meta recently announced its AI Research SuperCluster (RSC) supercomputer, which will accelerate its AI research and help it build the metaverse. Meta’s researchers are already using RSC to train large language models.
Graphcore’s competitors include Habana Labs, whose chips power AWS’ DL1 instances, and SambaNova, which raised USD 676 million to develop AI training and inference chips.
Last year, Cerebras Systems began its quest for brain-scale AI, designing an external memory system that lets a cluster of machines train neural networks with trillions of parameters. Today, the most advanced AI clusters support around one trillion parameters (the rough machine equivalent of synapses) and require megawatts of power to run.
Cerebras claimed it could support AI models with 120 trillion parameters. The company’s WSE-2 processor packs 2.6 trillion transistors and 850,000 AI cores across a massive 46,225 mm² of silicon. To put that in perspective, the next-largest AI processor on the market measures 826 mm², with 0.054 trillion transistors. Cerebras also claims 1,000x more onboard memory, with 40 GB of SRAM, and says its next-generation processors can “unlock brain-scale neural networks”.
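The scale gap in that comparison can be made concrete with the article's own figures; a quick sketch using only the claimed numbers:

```python
# Comparing the claimed WSE-2 figures against the 826 mm^2,
# 0.054-trillion-transistor chip cited as the next largest.
wse2_transistors = 2.6e12
wse2_area_mm2 = 46225
rival_transistors = 0.054e12
rival_area_mm2 = 826

area_ratio = wse2_area_mm2 / rival_area_mm2
transistor_ratio = wse2_transistors / rival_transistors
print(f"~{area_ratio:.0f}x the silicon area")       # ~56x
print(f"~{transistor_ratio:.0f}x the transistors")  # ~48x
```

The area advantage (about 56x) slightly outpaces the transistor advantage (about 48x), reflecting that the wafer-scale die also spends area on inter-core fabric and its 40 GB of on-chip SRAM.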
According to Nick Bostrom, if Moore’s law holds, human-level intelligence could be achieved between 2015 and 2024. Ray Kurzweil expects computers to crack human-level intelligence by 2029. “I have predicted that in 2029, an AI will pass a valid Turing test and successfully achieve human levels of intelligence,” he said.
Interestingly, Graphcore’s timeline is more or less on par with the predictions made by renowned scientists.