The AI Hardware Problem: How the Silicon Industry Will Develop Chips in the Future
AI hardware is built around the multiply-accumulate (MAC) function: it multiplies two numbers and adds the product to a running accumulator. The majority of AI applications are inference engines running a neural network model, which is composed of layers of neurons interconnected by weight parameters; every connection requires a multiply-accumulate. Even a relatively small network such as MobileNet has about 4.2 million weights and requires 569 million multiply-accumulate operations per inference. In a digital implementation, the weights and input data are stored in system memory and must be fetched and stored over and over across the millions of MAC operations in the network. Most of the power is consumed not by the arithmetic itself but by moving model parameters and input data to and from the arithmetic logic unit, the part of the CPU where the actual multiply-accumulate takes place. A single multiply-accumulate consumes about 5.03 pJ of energy, while the energy spent transferring data to and from the operation exceeds 0.08 kWh per gigabyte.
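To make the arithmetic concrete, here is a minimal Python sketch of a fully connected layer written as explicit multiply-accumulate operations, followed by a back-of-the-envelope energy estimate using the two figures quoted above. The layer dimensions are hypothetical, and the estimate mixes a per-operation figure with a per-gigabyte transfer figure, so treat it as an illustration of scale rather than a precise model.

```python
# A minimal sketch of the MAC pattern behind a dense layer.
# Energy constants come from the text; layer sizes are hypothetical.

MAC_ENERGY_PJ = 5.03          # energy per multiply-accumulate (from the text)
TRANSFER_KWH_PER_GB = 0.08    # energy per gigabyte moved (from the text)

def dense_layer(inputs, weights, biases):
    """One fully connected layer written as explicit MACs."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        acc = bias
        for x, w in zip(inputs, neuron_weights):
            acc += x * w      # one multiply-accumulate
        outputs.append(acc)
    return outputs

print(dense_layer([1.0, 2.0], [[0.5, -1.0], [0.25, 0.75]], [0.0, 0.1]))
# -> [-1.5, 1.85]

# Back-of-the-envelope energy for a hypothetical 1024 x 512 layer.
n_inputs, n_neurons = 1024, 512
macs = n_inputs * n_neurons                   # one MAC per weight
compute_pj = macs * MAC_ENERGY_PJ
bytes_moved = macs * 4                        # float32 weights fetched once
transfer_pj = bytes_moved / 1e9 * TRANSFER_KWH_PER_GB * 3.6e18  # kWh -> pJ
print(f"{macs:,} MACs: compute {compute_pj:.2e} pJ, movement {transfer_pj:.2e} pJ")
```

Even with these rough numbers, the data-movement term dwarfs the compute term, which is exactly the imbalance the section describes.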
AI Accelerators
A computational system has three components: memory/storage, communication, and computation. Memory and storage now consume a disproportionate share of power and area because designs over the past decades concentrated almost entirely on computational power and neglected them. So the chips need to be re-architected: systems should be designed to be more intelligent, drawing on ML principles and principles of human learning, and should take advantage of data and metadata to improve the architecture.
Existing architectures deal poorly with data. We must exploit the properties of data, such as its security characteristics: if we understand the characteristics of the data, we can improve both algorithms and architecture across many metrics. We must move from processor-centric to data-centric design, because the existing architecture takes no advantage of the data itself, and machines can be designed to take simple decisions on their own. Concretely, that means low-latency, low-energy access to data; low-cost storage and processing through compression, de-duplication, and hybrid memories (a sketch of de-duplication follows below); and intelligent data management that also addresses the many technology-scaling issues tied to data.
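As one example of the data-reduction techniques mentioned above, here is a minimal sketch of content-based de-duplication. The fixed block size and the use of SHA-256 are illustrative assumptions, not any specific system's design.

```python
# A toy content-based de-duplication store: split data into fixed-size
# blocks and keep only one copy of each unique block.
import hashlib

BLOCK_SIZE = 4096  # bytes; a common but arbitrary choice

def dedup_store(data: bytes):
    """Return a store of unique blocks and the layout to rebuild the data."""
    store = {}          # hash -> block contents
    layout = []         # sequence of hashes describing the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep each unique block once
        layout.append(digest)
    return store, layout

data = b"A" * 16384 + b"B" * 4096         # highly redundant input
store, layout = dedup_store(data)
print(f"blocks written: {len(layout)}, unique blocks kept: {len(store)}")
# -> blocks written: 5, unique blocks kept: 2
```

The design choice is the classic space-for-indirection trade: redundant data costs only a hash entry, at the price of a lookup table to reconstruct it.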
We also need to do computation near memory. This is not a new idea: a logic-in-memory system was published as far back as 1970, but the idea never took off at the time. We must keep scaling memory technologies such as DRAM, and work on the points above, to arrive at a new generation of AI-compatible hardware.
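A toy model makes the near-memory argument concrete: compare the bytes that must cross the memory bus when a reduction runs on the CPU versus right next to the DRAM. The workload and the per-byte bus-energy figure below are hypothetical, chosen only to show the shape of the saving.

```python
# Toy energy model for processor-centric vs. near-memory reduction.
BUS_ENERGY_PJ_PER_BYTE = 20.0   # assumed cost of moving one byte off-chip

def cpu_centric_sum_energy(n_elems, elem_bytes=4):
    # every element travels over the bus to the ALU
    return n_elems * elem_bytes * BUS_ENERGY_PJ_PER_BYTE

def near_memory_sum_energy(n_elems, elem_bytes=4):
    # the reduction runs beside the DRAM; only the result crosses the bus
    return elem_bytes * BUS_ENERGY_PJ_PER_BYTE

n = 1_000_000
print(f"processor-centric: {cpu_centric_sum_energy(n):.2e} pJ")
print(f"near-memory:       {near_memory_sum_energy(n):.2e} pJ")
```

Under these assumptions the data-movement energy shrinks by a factor of the input size, which is why logic-in-memory keeps resurfacing as memory traffic becomes the dominant cost.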