
Analog A.I.? It sounds crazy, but it might be the future

Drizzt

Forget digital. The future of A.I. is … analog? At least, that’s the assertion of Mythic, an A.I. chip company that, in its own words, is taking “a leap forward in performance in power” by going back in time. Sort of.


Before ENIAC, the world’s first room-sized programmable, electronic, general-purpose digital computer, buzzed to life in 1945, arguably all computers were analog — and had been for as long as computers have been around.

Analog computers are a bit like stereo amps, using variable range as a way of representing desired values. In an analog computer, numbers are represented by way of currents or voltages, instead of the zeroes and ones that are used in a digital computer. While ENIAC represented the beginning of the end for analog computers, in fact, analog machines stuck around in some form until the 1950s or 1960s when digital transistors won out.
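To make that representation concrete, here is a minimal sketch (an editorial illustration, not from the article) of analog addition: each number becomes a voltage, and feeding those voltages through identical resistors into a single node sums the resulting currents. All component values are made up.

```python
# Toy model of analog addition: numbers are encoded as voltages, and
# summing the currents they drive through equal resistors into one node
# adds them (Kirchhoff's current law), much like an op-amp summing
# stage with the sign ignored. All values are illustrative.

R = 1000.0  # ohms; identical input resistors

def encode(x):
    """Represent the number x as a voltage (1 unit = 1 volt)."""
    return float(x)

def analog_add(voltages, r=R):
    """Sum the currents each input voltage pushes into a common node,
    then convert the total back into a number."""
    total_current = sum(v / r for v in voltages)  # amps
    return total_current * r                      # volts = sum of inputs

print(analog_add([encode(2.0), encode(3.5)]))  # -> 5.5
```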

“Digital kind of replaced analog computing,” Tim Vehling, senior vice president of product and business development at Mythic, told Digital Trends. “It was cheaper, faster, more powerful, and so forth. [As a result], analog went away for a while.”

In fact, to alter a famous quotation often attributed to Mark Twain, reports of the death of analog computing may have been greatly exaggerated. If the triumph of the digital transistor represented the beginning of the end for analog computers, it may only have been the beginning of the end of the beginning.


Building the next great A.I. processor

[Image: Mythic AI logo on a chip graphic. Credit: Mythic]
Mythic isn’t building purposely retro tech, though. This isn’t some steampunk startup operating out of a vintage clock tower headquarters filled with Tesla coils; it’s a well-funded tech company, based in Redwood City, California and Austin, Texas, that’s building Mythic Analog Matrix Processors (Mythic AMP) that promise advances in power, performance, and cost using a unique analog compute architecture that diverges significantly from regular digital architectures.

Devices like its announced M1076 single-chip analog computation device purport to usher in an age of compute-heavy processing at impressively low power.

“There’s definitely a lot of interest in making the next great A.I. processor,” said Vehling. “There’s a lot of investment and venture capital money going into this space, for sure. There’s no question about that.”

The analog approach isn’t just a marketing gimmick, either. Mythic sees problems in the future for Moore’s Law, the famous observation made by Intel co-founder Gordon Moore in 1965, claiming that roughly every 18 months the number of transistors able to be squeezed onto an integrated circuit doubles. This observation has helped usher in a period of sustained exponential improvement for computers over the past 60 years, helping support the amazing advances A.I. research has made during that same period.

But Moore’s Law is running into challenges of the physics variety. Advances have slowed as a result of the physical limitations of constantly attempting to shrink components. Approaches like optical and quantum computing offer one possible way around this. Meanwhile, Mythic’s analog approach seeks to create compute-in-memory elements that function like tunable resistors, supplying inputs as voltages, and collecting the outputs as currents. In doing so, the idea is that the company’s chips can capably handle the matrix multiplication needed to enable artificial neural networks to function in an innovative new way.

As the company explains: “We use analog computing for our core neural network matrix operations, where we are multiplying an input vector by a weight matrix. Analog computing provides several key advantages. First, it is amazingly efficient; it eliminates memory movement for the neural network weights since they are used in place as resistors. Second, it is high performance; there are hundreds of thousands of multiply-accumulate operations occurring in parallel when we perform one of these vector operations.”
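A rough way to picture the operation Mythic describes, as a digital simulation rather than Mythic's actual design: store the weight matrix as conductances, apply the inputs as voltages, and read each column's summed current as one multiply-accumulate result. The matrix sizes and values below are invented for illustration.

```python
import numpy as np

# Sketch of the analog matrix-vector multiply described above (values
# illustrative, not Mythic's parameters). Neural-network weights are
# stored as conductances G (inverse resistances) of flash cells; inputs
# arrive as voltages V. By Ohm's law each cell passes current I = G * V,
# and wiring cells into columns sums those currents (Kirchhoff's current
# law), so each column computes one multiply-accumulate, all in parallel.

rng = np.random.default_rng(0)

weights = rng.uniform(0.1, 1.0, size=(4, 3))  # conductances, in siemens
inputs = np.array([0.2, 0.5, 0.8])            # input voltages, in volts

# The physics performs this in a single step; digitally we model it as G @ V.
output_currents = weights @ inputs            # amps: one MAC per cell

print(output_currents)
```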

“There’s a lot of ways to tackle the problem of A.I. computation,” Vehling said, referring to the various approaches being explored by different hardware companies. “There’s no wrong way. But we do fundamentally believe that the keep-throwing-more-transistors-at-it, keep-making-the-process-nodes-smaller — basically the Moore’s Law approach — is not viable anymore. It’s starting to prove out already. So whether you do analog computers or not, companies will have to find a different approach to make next-generation products that are high computation, low power, [et cetera].”

The future of A.I.

[Image: brain with computer text scrolling, artificial intelligence. Credit: Chris DeGraw/Digital Trends, Getty Images]
If this problem is not taken care of, it's going to have a big impact on the further advancement of A.I., especially when that A.I. runs locally on devices. Right now, some of the A.I. we rely on every day combines on-device processing and the cloud. Think of it like having an employee who's able to make decisions up to a certain level, but must then call their boss for advice.

This is the model used by, for instance, smart speakers, which carry out tasks like keyword spotting (“OK, Google”) locally, but then outsource the actual spoken word queries to the cloud, thereby letting household devices harness the power of supercomputers stored in massive data centers thousands of miles away.
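The division of labor might be sketched like this; the function names below are stand-ins invented for illustration, not a real smart-speaker API.

```python
# Hypothetical sketch of the edge/cloud split described above. The
# functions detect_wake_word and transcribe_in_cloud are made-up
# placeholders, not any vendor's actual interface.

def detect_wake_word(audio_frame: bytes) -> bool:
    """Cheap on-device model: only answers 'was the wake word said?'"""
    return audio_frame.startswith(b"OK")  # placeholder for a tiny local NN

def transcribe_in_cloud(audio: bytes) -> str:
    """Expensive model, imagined to run in a remote data center."""
    return "turn on the lights"  # placeholder for a network round-trip

def handle(audio: bytes) -> str | None:
    if detect_wake_word(audio[:16]):       # local, low power, always on
        return transcribe_in_cloud(audio)  # remote, heavyweight, on demand
    return None

print(handle(b"OK Google, lights"))
```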

That’s all well and good, although some tasks require instant responses. And, as A.I. gets smarter, we’ll expect more and more of it. “We see a lot of what we call Edge A.I., which is not relying on the cloud, when it comes to industrial applications, machine vision applications, drones, in video surveillance,” Vehling said. “[For example], you may want to have a camera trying to identify somebody and take action immediately. There are a lot of applications that do need immediate application on a result.”

A.I. chips need to keep pace with other breakthroughs in hardware. Cameras, for instance, are getting better all the time. Picture resolution has increased dramatically over the past decades, meaning that deep A.I. models for image recognition must be able to parse ever-increasing amounts of resolution data to carry out analytics.

Add onto this the growing expectations for what people believe should be extractable from an image — whether that’s mapping objects in real-time, identifying multiple objects at once, figuring out the three-dimensional context of a scene — and you realize the immense challenge that A.I. systems face.

Whether it’s for offering more processing power while keeping devices small, or the privacy demands that require local processing instead of outsourcing, Mythic believes its compact chips have plenty to offer.

The roll-out

[Image: Mythic AI logo on a chip graphic. Credit: Mythic]
“We’re [currently] in the early commercialization stages,” said Vehling. “We’ve announced a couple of products. So far we have a number of customers that are evaluating [our technology] for use in their own products… Hopefully by late this year, early next year, we’ll start seeing companies utilizing our technology in their products.”

Initially, he said, this is likely to be in enterprise and industrial applications, such as video surveillance, high-end drone manufacturers, automation companies, and more. Don’t expect that consumer applications will lag too far behind, though.

“Beyond 2022 — [2023] going into ’24 — we’ll start seeing consumer tech companies [adopt our technology] as well,” he said.

If analog computing turns out to be the innovation that powers the augmented and virtual reality needed for the metaverse to function … well, isn’t that about the most perfect meeting point of steampunk and cyberpunk you could hope for?

Hopefully, Mythic’s chips prove less imaginary and unreal than the company’s chosen name would have us believe.
https://www.digitaltrends.com/computing/mythic-ai-analog-artificial-intelligence/

Mythic Launches Industry First Analog AI Chip

https://www.forbes.com/sites/karlfr...ndustry-first-analog-ai-chip/?sh=1eda9b1c3e09

https://mythic.ai/
@jamahir
 
.

I watched this vid a few days after it was published. I have yet to understand artificial neural network elements like bias, but I do understand that the Mythic chip is very good for the specific application of live image processing, which in turn can be used for recognizing visually captured analog events (people walking by, an analog water meter, etc.), of which there are many. It suits these because it doesn't require a big form factor and doesn't draw much energy or expel it as waste heat, all of which are problems with GPU cards. However, I don't think it can do regular control or general-purpose computation. But a fantastic development. I think this will be used instead of quantum computing.
 
.
https://mythic.ai/products/m1076-analog-matrix-processor/
Do you think it's possible to repurpose a digital IC to work like this? Watch the video at 15:56.
For example, rewrite the drivers of digital flash storage cells to read the voltage level instead of a 1 or a 0.
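A toy illustration of that premise, with a made-up threshold and cell value: the same stored charge can be thresholded into a bit or read back as an analog level. Real flash controllers and drivers are far more involved than this.

```python
# Conceptual sketch only: the same stored cell level read two ways.
# The threshold and the cell value are invented for illustration.

THRESHOLD = 0.5  # volts; illustrative reference level

def read_digital(cell_voltage: float) -> int:
    """Conventional driver: compare against a reference, return a bit."""
    return 1 if cell_voltage >= THRESHOLD else 0

def read_analog(cell_voltage: float) -> float:
    """'Repurposed' driver: report the stored level itself, so the cell
    can act as a tunable weight rather than a single bit."""
    return cell_voltage

cell = 0.37  # whatever charge the cell happens to hold
print(read_digital(cell), read_analog(cell))  # -> 0 0.37
```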
 
.

I watched the vid from that point.

Well, the Flash storage cells here are being treated as plain electronic devices that store voltages (like a magnetic tape), whereas in digital computing they don't literally store a 0 or a 1 but are interpreted as doing so. I think a truly digital storage device is the rewritable CD. But I didn't understand the Mythic chip's use of the analog-voltage-storing Flash cells for analog matrix calculation, and for that we must first understand artificial neural networks, so I will ask @fitpOsitive if he can watch the vid and explain these things.

What I understand is that the Mythic chip does direct mathematical matrix multiplication using analog voltages arranged in simple circuitry, instead of using a large number of transistors to execute processor instructions within an ecosystem of program code, which seems to make Mythic a lot smaller and less power-hungry than a regular digital GPU. A hybrid combination of analog and digital is the future of electronic computing.
 
.
BTW, no chip in this world is purely digital. Every chip in this world is at most 50% digital. The digital part performs computation or control; the analog part generates signals or power.
Digital parts exist because of exactness, which analog can't match at the moment. Also, storage is almost all digital, again because of exactness and error (CRC) calculations.
But YouTube videos are usually biased and overhyped.
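For what "error (CRC) calculations" means in practice, here is a minimal CRC-8 sketch with an arbitrary polynomial and arbitrary data, showing how digital storage detects a flipped bit on read-back.

```python
# Minimal CRC-8 example (polynomial and data chosen only for
# illustration): a checksum stored alongside the data lets a single
# flipped bit be detected when the block is read back.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over data, polynomial x^8 + x^2 + x + 1 (0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

block = b"hello world"
stored_checksum = crc8(block)

# On read-back, corruption changes the checksum and is caught.
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
print(crc8(block) == stored_checksum)      # True  (clean read)
print(crc8(corrupted) == stored_checksum)  # False (bit flip detected)
```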
 
.
Also, storage is almost all digital, again because of exactness and error (CRC) calculations.

Yeah, in the bigger scheme of things, when the individual bit-storing units/sections on a storage device form understandable data combined with error-correction mechanisms, the storage overall becomes digital, but in storage devices like tape it is ultimately all analog, yes? :) The Flash cell here has been repurposed from something used in digital storage into a means of holding an analog voltage.

Digital parts exist because of exactness, which analog can't match at the moment.

I wonder about non-electronic means of computing. For example, some years ago there was talk of biological computers:
Biological computers use biologically derived molecules — such as DNA and proteins — to perform digital or real computations.

The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1-100 nanometers) and biologically based materials.[1] A more restrictive definition views nanobiotechnology more specifically as the design and engineering of proteins that can then be assembled into larger, functional structures.[2][3] The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer.

Scientific background

Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers.[4]

Biochemical computers

Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality.[5] Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input).
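As a loose illustration of that last point (a made-up pathway, not a real chemical system): a reaction that only yields product when both input chemicals are present above threshold behaves like an AND gate.

```python
# Toy model of a biochemical logic gate: a hypothetical pathway emits
# product only when both inputs A and B exceed activation thresholds.
# All concentrations and thresholds here are invented for illustration.

def pathway_output(conc_a: float, conc_b: float) -> float:
    """Product concentration from a made-up two-input pathway:
    catalysis proceeds only when both inputs clear their thresholds."""
    if conc_a > 0.5 and conc_b > 0.5:
        return min(conc_a, conc_b)  # yield limited by the scarcer reactant
    return 0.0

for a, b in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(a, b, "->", pathway_output(a, b))  # behaves as an AND gate
```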

Biomechanical computers

Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output. Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output.

Bioelectronic computers

Biocomputers can also be constructed in order to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output that is based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the nature of the electrical conductivity that is observed in the bioelectronic computer. This output comprises specifically designed biomolecules that conduct electricity in highly specific manners based upon the initial conditions that serve as the input of the bioelectronic system.

Network-based biocomputers

In network-based biocomputation,[6] self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al.,[6] mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments represent correct solutions to the algorithm. Exits not visited are non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin and kinesin, respectively, are attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The energy conversion from chemical energy (ATP) to mechanical energy (motility) is highly efficient when compared with e.g. electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step.
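A digital sketch of that network encoding, with a made-up element set: at each junction an agent either adds the next element or skips it, so the set of reachable exits is exactly the set of achievable subset sums. The device explores all paths at once with filaments; here we simply enumerate them.

```python
from itertools import product

# Sketch of the SUBSET SUM network described above (after Nicolau et
# al.). Each path through the network corresponds to one take/skip
# choice per element; the exit an agent reaches is its running sum.
# The element values are invented for illustration.

elements = [2, 5, 9]

exits_reached = {
    sum(e for e, taken in zip(elements, choices) if taken)
    for choices in product([False, True], repeat=len(elements))
}

print(sorted(exits_reached))  # [0, 2, 5, 7, 9, 11, 14, 16]
```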

Future potential of biocomputers

Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available non-bio computers. Some people believe that biocomputers have great potential, but this has yet to be demonstrated. The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts.
Maybe one should just jump to a hybrid of electronic digital computing with biological computing. What do you think?

@Drizzt
 
.
I think these analog ICs will be the key to creating true AI; our brains are also analog (neuron voltages). They are not a replacement for digital computers, because the application here is purely neural networks, just like digital signal processors (DSPs) are not a replacement for general-purpose processors.

Derek Muller is not an average YouTuber; he is a physicist with a PhD from the University of Sydney.
 
.
Any digital computer can simulate an analog computer.
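A minimal example of that claim: an analog integrator solving dx/dt = -x (essentially an RC discharge) can be reproduced by stepping the same equation digitally. The step size and initial value below are arbitrary.

```python
import math

# Digital simulation of an analog integrator solving dx/dt = -x.
# An analog computer would wire this as an op-amp integrator with
# feedback; here we step the same differential equation in time.

dt = 0.001
x = 1.0             # initial "capacitor voltage"
for _ in range(1000):
    x += dt * (-x)  # forward-Euler integration of dx/dt = -x

print(x, math.exp(-1.0))  # numerical vs exact solution at t = 1
```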
 
.
Thanks for sharing, very interesting indeed.
 
.