"To get to a future state of 'AI everywhere,' we'll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it's collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel Vice President and General Manager, Artificial Intelligence Products Group.
Built from the ground up to train deep learning models at scale, the Intel Nervana NNP-T (Neural Network Processor for Training) pushes the boundaries of deep learning training.
It is built to prioritise two key real-world considerations: training a network as fast as possible, and doing so within a given power budget.
This deep learning training processor is built with flexibility in mind, striking a balance among computing, communication and memory, said Intel.
The Intel Nervana NNP-I, code-named Springhill, is purpose-built for inference and designed to accelerate deep learning deployment at scale. It introduces specialised deep learning acceleration and leverages Intel's 10nm process technology with Ice Lake cores to deliver industry-leading performance.
Additionally, the Intel Nervana NNP-I offers a high degree of programmability without compromising performance or power efficiency.
"Data centres and the cloud need to have access to scalable general purpose computing and specialized acceleration for complex AI applications," said Rao.