Edge computing is getting more and more media coverage, and BRN’s fundamental ideas appear to be well supported, but the big question is still whether the big players can find a way around BRN’s IP to do the same thing. MIT and Intel are pushing software. BrainChip is mentioned in the same sentence as IBM and Qualcomm. Intel recognises the need for an accessible developer community, which is exactly where the Akida developer kits are targeted. BRN has a product on the market, whereas the others still seem to be at earlier stages of development.
MIT are working on an algorithmic approach backed by some big players.
“Learning on the edge
A new technique enables AI models to continually learn from new data on intelligent edge devices like smartphones and sensors, reducing energy costs and privacy risks.”
“Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don’t need to be stored in memory.
Their second solution involves quantized training and simplifying the weights, which are typically 32 bits. An algorithm rounds the weights so they are only eight bits, through a process known as quantization, which cuts the amount of memory for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. Then the algorithm applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.”
From my understanding, these processes sound like they are chasing the same sort of result as BRN’s spiking neural networks, just via software algorithms rather than dedicated hardware.
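To get my head around what “sparse update” means in practice, here is a toy PyTorch-style sketch (my own illustration, not MIT’s code): freeze layers one at a time, stop once a validation check dips below a set threshold, and only hand the still-trainable weights to the optimiser.

```python
# Toy illustration of the "sparse update" idea (my own sketch, not MIT's code).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Stand-in validation scores, pretending accuracy drops a little per freeze.
fake_accuracies = iter([0.90, 0.87])
def validate(m):
    return next(fake_accuracies)

accuracy_floor = 0.88   # the "set threshold" from the article

for layer in [m for m in model if isinstance(m, nn.Linear)]:
    for p in layer.parameters():
        p.requires_grad = False        # frozen: no gradient for these weights,
                                       # so their activations need not be cached
    if validate(model) < accuracy_floor:
        for p in layer.parameters():   # undo the freeze that hurt accuracy
            p.requires_grad = True
        break

# Only the remaining (unfrozen) weights get updated during on-device training.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-2
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad),
      "trainable parameters")
```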
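And a toy numpy sketch of the quantisation side (again my own simplification, not the paper’s exact quantization-aware scaling rule): round the 32-bit weights down to 8-bit integers with a per-tensor scale, then apply a multiplier to the gradient so the weight-to-gradient ratio isn’t thrown off when updating in the integer domain.

```python
# Toy sketch of quantized training with a gradient multiplier (my simplification,
# not the published QAS rule).
import numpy as np

rng = np.random.default_rng(0)
w_fp32 = rng.normal(scale=0.1, size=8).astype(np.float32)  # full-precision weights

# Quantization: 1 byte per weight instead of 4.
scale = np.abs(w_fp32).max() / 127.0
q = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
print("int8 weights:", q)

grad = rng.normal(scale=0.01, size=8).astype(np.float32)   # pretend gradient w.r.t. w
lr = 0.1

# A naive step on the integer weights (q -= lr * grad) would move the real-valued
# weights by only lr * grad * scale, i.e. far too little. Dividing the gradient
# by the scale acts as the "multiplier" that restores the intended ratio.
q_updated = np.clip(np.round(q - lr * grad / scale), -127, 127).astype(np.int8)

print("updated int8 weights:", q_updated)
print("dequantized:", q_updated.astype(np.float32) * scale)
```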
“On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han’s group has shown great progress in demonstrating the effectiveness of edge devices for training,” adds Jilei Hou, vice president and head of AI research at Qualcomm. “Qualcomm has awarded his team an Innovation Fellowship for further innovation and advancement in this area.”
This work is funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.”
SOFTWARE, NOT HARDWARE, WILL DRIVE QUANTUM AND NEUROMORPHIC COMPUTING
“But as Intel noted this week at its Intel Innovation 2022 show, while the hardware is important to bringing quantum and neuromorphic to life, what will drive adoption is the accompanying software.”
“Until Lava, it’s been very difficult for groups to build on other groups’ results even within our own community because software tends to be very siloed, very laborious to construct these compelling examples,” Davies told journalists. “But as long as those examples are developed in a way that cannot be readily transferred between groups and you can’t design those at a high level of abstraction, it becomes very difficult to move this into the commercial realm where we need to reach a broad community of mainstream developers that haven’t spent years doing PhDs in computational neuroscience and neuromorphic engineering.”
Lava is an open-source framework with permissive licensing, so the expectation is that other neuromorphic chip manufacturers – which include the likes of IBM, Qualcomm, and BrainChip – will port Lava to their own frameworks. It’s not proprietary, though Intel is the major contributor to it, Davies said.
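For anyone curious what “a high level of abstraction” looks like, this is roughly what a tiny spiking network looks like in Lava, loosely following the public lava-nc tutorials. Exact parameter names can differ between Lava releases, so treat the signatures here as assumptions rather than gospel.

```python
# Minimal two-population network, loosely following the lava-nc tutorials;
# exact parameter names may vary between Lava releases.
import numpy as np
from lava.proc.lif.process import LIF          # leaky integrate-and-fire neurons
from lava.proc.dense.process import Dense      # dense synaptic connections
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif_in = LIF(shape=(3,), du=0, dv=0, vth=10)   # input population
dense = Dense(weights=np.eye(3))               # 3x3 identity weight matrix
lif_out = LIF(shape=(3,), du=0, dv=0, vth=10)  # output population

# Wire spikes out of the first population into the second via the Dense layer.
lif_in.s_out.connect(dense.s_in)
dense.a_out.connect(lif_out.a_in)

# Run 10 timesteps on the CPU simulator; the same process graph is meant to map
# onto Loihi hardware (or, per the article, other vendors' backends in future).
lif_out.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
lif_out.stop()
```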
Disc: held in RL and SM