The Financial ROI of AI Hardware – Top-Line and Bottom-Line Impact

Ayn de Jesus

Ayn serves as AI Analyst at Emerj, covering artificial intelligence use cases and trends across industries. She previously held various roles at Accenture.

Episode Summary: At Emerj, we often talk about the software capabilities of AI and the tangible return on investment (ROI) of recommendation engines, fraud detection, and other AI applications. We rarely talk about the hardware side of the equation, and that will be our focus today. For hardware companies like Nvidia, stock prices have soared as new kinds of AI hardware are needed not only in academia but also among the technology giants. Increasingly, AI hardware is about more than just graphics processing units (GPUs).

Today we interview Mike Henry, CEO of Mythic AI. Mike speaks about the different kinds of AI-specific hardware, where they are used, and how they differ depending on their function. More specifically, Mike talks about the business value of AI hardware. Can specific hardware save money on energy, time, and resources? Where can it drive value? Where is AI hardware necessary to open new capabilities for AI systems that may not have been possible with older hardware? What is the right business approach to AI hardware?

This interview was brought to us by Kisaco Research, which partnered with TechEmergence to help promote their AI Hardware Summit on September 18 and 19 at the Computer History Museum in Mountain View, California.

Subscribe to our AI in Industry Podcast with your favorite podcast service.

(Note: Mike’s connection was a bit choppy in some sections of the audio, but it came through more than well enough to get the point – and to hear the examples.)

Guest: Mike Henry, Founder and CEO – Mythic AI

Expertise: ASIC, VLSI, Simulation, Circuit Design, Verilog

Brief Recognition: Mike holds a doctorate in Electrical and Computer Engineering from Virginia Tech (2011). He is the primary author of 11 publications related to low-power design in top Institute of Electrical and Electronics Engineers (IEEE) and Association for Computing Machinery (ACM) conferences and journals. Prior to founding Mythic, Mike was a visiting scholar at the University of Michigan and an instructor for microchip design courses at Virginia Tech.

Interview Highlights – The ROI of AI Hardware

The following is a condensed version of the full audio interview, which is available in the above links on TechEmergence’s SoundCloud and iTunes stations.

(00:20) We’re talking about the financial ROI of AI hardware. Before that, it will be useful for our audience of business leaders if we could first lay out the possibilities of AI hardware in general. How do we define it, and what kinds of AI hardware are currently available?

Mike Henry: Understanding the types of AI hardware emerging and already in the market is the first vital step in determining the ROI. We can split AI computation into two broad categories.

The first is training the AI algorithm, which entails force-feeding it data and teaching it to recognize the patterns behind that data. This is a very computationally heavy workload that almost always lives in the data center. It entails taking gigabytes of data and cycling them through the algorithm continuously, and it requires its own kinds of high-precision hardware.

The other side of AI hardware is inference, or deployment. Once the algorithm has been trained, how does it run? This can be split again: you can run the algorithm in the data center, or at the edge, where the data is generated. The work is lighter because the math is lower precision, but the scope of the deployment is wider, so the economics are different. I like to split the market into training and inference, and within inference, into server and edge.
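To make the precision split concrete, here is a minimal sketch (our illustration, not a description of Mythic's chips) of 8-bit quantization, one common way inference math ends up "lower precision" than the 32-bit floating point typically used during training:

```python
import numpy as np

# Weights as they might look after training: 32-bit floating point.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=4).astype(np.float32)

# Symmetric int8 quantization: map the float range onto [-127, 127]
# using a single scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Low-precision integer math runs on much smaller, lower-power circuits.
# Multiplying by the scale factor recovers an approximation of the originals.
recovered = weights_int8.astype(np.float32) * scale
print("worst-case quantization error:", np.abs(weights_fp32 - recovered).max())
```

Eight-bit multiplies need far less silicon and energy than 32-bit floating-point ones, which is a large part of why inference hardware can be so much cheaper than training hardware.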

(02:35) Can you walk us through an example of each so people can understand what kind of hardware would be needed for each kind of computation?

MH: Let’s take a simple example, like teaching an algorithm to tell the difference between a cat and a dog. Training the algorithm entails collecting a hundred thousand images of cats and a similar number of dogs. In the data center, you force-feed those images into the algorithm over and over again, and eventually it learns to distinguish the two.

On the inference side, if the business wants to sell this app that knows the difference between a cat and a dog to a hundred million users who each take 10 pictures a day, you can see that the workload has shifted: billions of images are now streaming into the data center where the algorithm was trained. That would be the data-center inference case.

Alternatively, the business could write the application to run on the phone itself, eliminating the need to send photos to the data center. That is the edge inference case, as the sketch below illustrates.
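As a toy illustration of that workflow (a NumPy logistic regression on synthetic data stands in for a real image model; the features and labels are invented), training is the repeated, expensive loop, while inference is a single cheap forward pass that can run on a server or a phone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the "hundred thousand images": 2-feature vectors,
# class 0 = "cat", class 1 = "dog" (entirely synthetic for illustration).
X = rng.normal(size=(1000, 2))
X[500:] += 2.0                              # separate the two classes
y = np.concatenate([np.zeros(500), np.ones(500)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0

# Training: cycle the whole dataset through the model again and again.
# This repeated, compute-heavy loop is the data-center workload.
for _ in range(200):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Inference: one cheap forward pass per new photo.
new_photo = np.array([2.1, 1.9])
print("probability of 'dog':", sigmoid(new_photo @ w + b))
```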

(04:58) In edge deployment, people often associate AI hardware with GPUs. They are aware that Nvidia’s stock is exploding. But are there inference examples that don’t run on GPUs? What other types of hardware could be used in the training phase?

MH: That raises a great point about the ROI of AI hardware. When you deploy in the data center, you can amortize the hardware across millions of users to lower the cost. If you are running at the edge, every user has to pay the hardware cost. The edge is far more sensitive to the power and cost of the hardware because you can’t amortize it.
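Rough, invented numbers make the amortization point obvious; none of these figures come from the interview:

```python
# Back-of-the-envelope hardware economics; every number here is an
# assumption made up for this sketch, not a figure from the interview.
server_accelerator_cost = 10_000      # one data-center inference card ($)
users_per_accelerator = 1_000_000     # users whose requests it can serve
edge_chip_cost = 20                   # inference chip shipped in each phone ($)

per_user_server = server_accelerator_cost / users_per_accelerator
per_user_edge = edge_chip_cost        # each user buys their own silicon

print(f"data center: ${per_user_server:.2f} of hardware per user")
print(f"edge:        ${per_user_edge:.2f} of hardware per user")
```

At those assumed prices, the data center spends a penny of hardware per user while the edge spends tens of dollars, which is exactly why edge deployments are so sensitive to chip cost and power.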

Today, the industry is trying to run AI on the processors inside mobile phones, which are relatively underpowered. A lot of improvement needs to happen in that area. There is not much ROI there yet, because there are few things you can do on the phone that generate meaningful value for the user.

Some trends in AI hardware can potentially lower those costs by a hundred million dollars. In the future, it may be possible to put more compute power on the phone, which is the kind of hardware our company is building. That is going to have a profound impact on what you can run on phones and what meaningful experiences users can have. Businesses that want to deploy applications on platforms such as cell phones should definitely pay attention to hardware trends.

You see this happening in other areas of the edge, such as autonomous vehicles and autonomous drones, where today you have to fill the trunk with $20,000 worth of GPUs. That is definitely not scalable. If we can get the cost down to $100 to $1,000, we can put autonomy into many more self-driving cars and robots.

Right now we are limited in what we can do. If we can get the cost and power down for inference at the edge, it will be profoundly transformative.

(09:25) Let’s talk about the financial justification of upgrading AI hardware. What examples are there where a switch to AI-specific hardware would mean less cost, time, or energy in a meaningful business case?

MH: Pick a customer and see how a server application can transform that business. In inference and deployment, servers are important, too, and businesses have to choose what is scalable. Take Twitch: 45 million users every day, with 100,000 gaming channels going at once. That is a massive amount of data flowing through.

Picture a pyramid of elements that AI can impact in a business; quality of experience sits at the bottom of that pyramid. This includes things like translating content into 50 languages, automatic features like green screening, automatic editing, and automatically creating highlight reels of a game. These are all quality-of-experience features that expand the platform.

The next level of the pyramid is about saving the business money. AI will be able to do mundane things with better quality at a lower cost, and that can save 50 to 70% of bandwidth. In content moderation, AI will be able to automatically detect offensive content, and we are talking about 100,000 channels at once.

At the top of the pyramid is revenue. Things like recommendation engines, ad serving, and personalization really matter there. In Twitch’s case, think of analyzing gameplay, automatically measuring the skill and style of a player, and the quality of the narration. You can use these signals to build a powerful recommendation system, but bringing in the money is the hardest thing to do.

(14:00) Could enabling new AI functionality drive revenue? Where might better AI hardware not just save money but open up new business possibilities?

MH: Yes. The most important is revenue, next is reducing costs, and last is the quality of the experience. AI will have a profound impact on all of those, but the toughest one is revenue, which is mostly generated by advertising or customer purchases. Because customers only stay on a platform as long as they find content that interests them, knowing the context of a specific user and what interests them is a powerful tool. That’s why recommendation, marketing personalization, and AI-based search are among the major frontiers of AI.

Cheaper inference will open a much wider volume of data for making those decisions. Imagine taking even a small number of features like those and feeding them into a powerful recommendation system. That’s where the cost of the AI becomes critical.
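As a toy sketch of that idea (the feature names and channel profiles are invented; this is not Twitch’s system), a handful of AI-extracted features can drive a simple relevance ranking:

```python
# Toy relevance ranking: a few AI-extracted viewer features (invented names)
# scored against channel profiles with a simple dot product.
viewer = {"skill": 0.9, "likes_fps": 0.8, "wants_commentary": 0.2}

channels = {
    "pro_fps_quiet":    {"skill": 0.95, "likes_fps": 1.0, "wants_commentary": 0.1},
    "casual_talk_show": {"skill": 0.30, "likes_fps": 0.2, "wants_commentary": 0.9},
}

def relevance(viewer, profile):
    # Higher score = better match between viewer tastes and channel traits.
    return sum(viewer[k] * profile[k] for k in viewer)

ranked = sorted(channels, key=lambda name: relevance(viewer, channels[name]),
                reverse=True)
print("recommendation order:", ranked)
```

The expensive part in practice is not the ranking itself but running inference over billions of hours of video to extract those features, which is where hardware cost dominates.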

(19:35) What are the consequences of using the wrong hardware, or hardware that is not optimized for the kinds of applications a business wants to produce?

MH: That’s a good question. Spending a lot of money on infrastructure to deliver a better user experience is hard to tie to ROI. With a recommendation system, it is easier to apply some rigor to the design of the algorithm and figure out the ROI.

For instance, how do you measure the ROI of language translation? You might spend a lot of money on something that doesn’t move the needle for the customer experience. Let’s say Apple spent $10 on the processing chips for facial recognition; it’s hard to say whether that will give them an ROI. You have to be careful with user experience features and make sure you apply that rigor and confirm they do move the needle.

Another example is voice interfaces, which are still frustrating to use at the moment. If a business spends money to scale up inference and create voice services, and customers are frustrated, that’s a lot of money gone.

On the training side, a lot of expensive training hardware is coming out. A business first has to consider whether it is working at high volumes. And at the end of the day, there might be issues that make the hardware hard to use.

This article was sponsored by Kisaco Research, and was written, edited and published in alignment with our transparent Emerj sponsored content guidelines. Learn more about reaching our AI-focused executive audience on our Emerj advertising page.

Header image credit: IEEE
