#Industry/competitors
Added 3 years ago

I was recently asked to comment on how memory technology like 4DS' could benefit AI, a subject that gets mentioned often but is rarely addressed in any detail. The following are a few layman thoughts, and IMO only.

To begin, AI is a massive field. Anything I describe might be relevant to only a small percentage of the AI opportunities currently identified.

"Old" AI solutions are based on convolutional technology.  They rely on massive amounts of data centre computing to achieve Deep Learning.  But that is a misnomer, actually Training is happening, not Learning. Think of old school education, where children recited times tables and had spelling beaten into them through repetition. Not efficient and not conducive to self-starting scholars.

To take one of the simpler cases, training in a vision/identification sense involves providing many (in some cases, millions of) instances, pictures and images of the things to be identified, from as many angles as possible, so that the application can achieve the highest possible level of successful identification. In the process it consumes preposterous amounts of electricity and machine cycles.
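
To make the scale of that repetition concrete, here is a minimal, illustrative sketch of a supervised training loop (my own toy example in PyTorch; the dataset, model and parameter values are placeholders, not anyone's actual pipeline):

```python
# Illustrative only: a bare-bones supervised training loop.
# Every labelled image is pushed through the network many times (epochs),
# which is where the enormous compute and electricity bill comes from.
from torch import nn, optim
from torch.utils.data import DataLoader

def train(model: nn.Module, dataset, epochs: int = 50):
    loader = DataLoader(dataset, batch_size=256, shuffle=True)
    optimiser = optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):            # the whole dataset, repeated many times
        for images, labels in loader:      # potentially millions of labelled examples
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                # every pass burns GPU cycles and power
            optimiser.step()
```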

NVIDIA found that its graphics processors and beefed-up gaming GPUs were ideal for this kind of heavy, repetitive processing, and grew an incredibly lucrative business from that insight. Along with the in-house processing engines built by Alphabet, Amazon, Tesla and the other hyperscalers, there is no doubt a Deep Learning opportunity today for better memory stacks, to get those repetitive training jobs done more quickly.

But this convolutional approach is increasingly being noted for its inefficiencies, and for its hideous consumption of resources in the process. To deliver a workable solution, an approach which more closely follows the (much more efficient) behaviour of the brain is only now coming to market. This neuromorphic approach relies on sparsity within Spiking Neural Networks to achieve orders-of-magnitude efficiencies. Sparsity means observing everything, but only reporting when something changes, and then only acting on/processing the part (of the picture) that has changed. So only a small number of pixels needs to be processed in a neuromorphic vision case.
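
To picture the difference, here is a toy comparison I put together (plain NumPy with made-up numbers, not any actual neuromorphic API): a frame-based system touches every pixel of every frame, while an event-driven one only touches the pixels that changed since the last frame.

```python
import numpy as np

def dense_update(frame: np.ndarray) -> int:
    """Frame-based: every pixel is processed, every frame."""
    return frame.size

def sparse_update(prev: np.ndarray, frame: np.ndarray, threshold: int = 10) -> int:
    """Event-based: only pixels that changed meaningfully are processed."""
    changed = np.abs(frame.astype(int) - prev.astype(int)) > threshold
    return int(changed.sum())

# A 1-megapixel scene where only a small object moves between frames:
prev = np.zeros((1000, 1000), dtype=np.uint8)
curr = prev.copy()
curr[500:520, 500:520] = 255           # 400 changed pixels out of 1,000,000

print(dense_update(curr))               # 1,000,000 pixels touched
print(sparse_update(prev, curr))        # 400 pixels touched
```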

New memory technology, including ReRAM, could certainly help reduce the resource needs of the current crop of implementations. But just as 4DS continues to fly under the radar, so too does the neuromorphic revolution. Meanwhile, the likes of Tesla continue to talk up full automotive autonomy, based on their massive databases and some user-experience progress (to level 2 of 5) to date. But at the same time, accidents keep happening, and unclassified objects apparently continue to be a problem for the convolutional automotive implementations.

Soon enough (next month, in the case of Brainchip, which I follow) neuromorphic hardware will start to appear. It will take various forms of analog and digital implementations of Spiking Neural Networks, so as to whether ReRAM adds value there, I'm just not in a position to make that call.

But for me, I continue to see plenty of reasons to believe the original thesis behind 4DS can be delivered.

I just wish I knew when!