Managing the Risks of AI – A Planning Guide for Executives

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


This article is based on a presentation given by Emerj CEO Dan Faggella at a recent conference “Artificial Intelligence and Business Ethics: Friends or Foes?” held at the University of Notre Dame. 

The conversation about artificial intelligence and ethics ranges from the broad to the specific. At one end, there are governments making laws and policies at the national, state, and local levels. In the middle, there are industry-wide agreements on protocols for privacy expectations, ways of handling customer data, and other ethical considerations within the framework established by government.

At the other end of that spectrum are the specific executive decisions made to manage the risks of AI around ethical issues, again within the framework of established government and industry rules.

We at Emerj are most focused on the decisions that executives get to make almost entirely on their own and over which they have maximum control. Our audience is almost exclusively business executives, and most are not interested in the abstract. They want something actionable for addressing these issues.

In this particular instance, we are focusing on two things about ethics: (1) how executives can determine which ethical concerns are relevant and applicable to their businesses right now and (2) how to work through many of these relevant ethical issues with a simple planning guide or framework.

The goal here is to train business leaders’ antennas to pick up on what matters and ignore everything else. Business leaders might find this more difficult than they expect. AI and ethics is a hot topic now, so there will be a lot of tweets and emails about it in the coming years. Business leaders need a strong foundation for determining which parts of the conversation deserve their attention.

The insights contained within this article are not something we randomly pulled out of a hat. This is what we do for a living, and this is exclusively from the executive perspective. Our focus is on the possibilities and probabilities of AI in any given sector, including what has real traction now and what would most likely have a real impact on industry in the future.

We currently have the largest audience of AI-focused business and government executives, with roughly a quarter of a million people visiting our site each month. We give away 95% of our research, which is why we maintain a large email list and the biggest B2B AI podcast on the market. We are often commissioned to do market research for larger businesses or governments, such as a series of presentations we are giving on behalf of the World Bank for government officials in Thailand.

We are mostly talking about making the best use of present AI trends. There are advantages to early adoption as long as it is mindful of resource limitations, which often have the biggest impact on that transition. Business leaders can do that by defining their goals, consolidating their position, and knowing which AI technologies and stages of maturity are relevant to their business.

We also look at AI strategy development, in which ethics plays a big part. In our research, we have identified five major factors for framing the AI and ethics conversation. These are transparency, accuracy, accountability, man versus machine decisions, and job security.

AI and Ethics Tropes

However, before discussing these factors in depth, it would be a good idea to address the tropes that plague the AI and ethics space. Quite often, AI discussions involve tropes, or figures of speech masquerading as literal statements. Three of the most often used tropes are:

  • “AI needs to have human values”
  • “AI needs to be free from bias”
  • “AI should be used for good”

These are all worthwhile concepts, but they are vapid and meaningless if they are not applied to real circumstances. They become catchphrases that make anyone who says them seem virtuous and noble. However, these ethical considerations only matter when they play out in real-world applications.

“AI Needs to Have Human Values”

The underlying ethic behind this particular trope is ensuring that AI will always be beneficial to people, as embodied in the 23 Asilomar AI Principles. One of these is the Value Alignment principle, which states:

Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

If we could design an AI system to incorporate complex human values into its operations in the way we want, that would truly be a worthwhile undertaking. Unfortunately, it is more difficult than most people realize.

“AI Needs to be Free From Bias”

This sounds like a great idea, but the fact is that an AI system will always have some type of bias because the data it learns from is inherently biased. According to Deutsche Welle, researchers have been working for years to eliminate algorithmic bias, without much success.

In the real world, an AI system entirely free from bias is simply not possible. However, this is not necessarily a bad thing. For example, in loan applications, a financial institution may instruct the AI system’s developer to remove any preference for gender or race, but to keep in biases regarding age and credit score, as these have a direct impact on the applicant’s ability to pay.
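
As a rough illustration of that kind of instruction, the sketch below (in Python, with hypothetical column names and toy data) simply drops the protected attributes before training a loan-default model. Note that dropping columns does not remove proxy bias hiding in the remaining features, which is part of why a fully bias-free system is unrealistic.

```python
# Hypothetical sketch: exclude gender and race from the model's inputs,
# keep age and credit score. Data and column names are made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.DataFrame({
    "age":          [34, 52, 23, 45, 61, 29],
    "credit_score": [710, 640, 580, 690, 720, 600],
    "gender":       ["F", "M", "F", "M", "F", "M"],
    "race":         ["A", "B", "B", "A", "A", "B"],
    "defaulted":    [0, 1, 1, 0, 0, 1],
})

PROTECTED = ["gender", "race"]  # attributes the business chooses to exclude
features = applications.drop(columns=PROTECTED + ["defaulted"])
labels = applications["defaulted"]

model = LogisticRegression().fit(features, labels)  # trains only on age and credit score
```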

“AI Should be Used for Good”

People say this to feel good, but the fact is that what seems good to some people may not seem good to others. When it comes to AI applications, companies have to weigh many things when they make a decision, including financial risks, marketing risks, stakeholder returns, and so on.

The trick is to make decisions in a given circumstance and hope they represent the best possible good in the best of all possible worlds. Not everyone will be happy about it, but that is the reality.

Overall, there is just too much virtue signaling in the AI and ethics space. It is so easy to appear virtuous when there is no payroll to make. The ethics conversation, just like AI, cannot be divorced from the business conversation.

Business Context Comes First

One thing business owners and executives should understand by this point is that when making decisions regarding AI, ethics, or both, business context comes first. Anything that does not serve the business is ultimately a waste of time.

One of the biggest problems with AI when it first became a trend in the business world was toy applications. Just like the AI and ethics tropes, these applications represented something, but instead of virtue and nobility, AI stood for being cutting-edge and cool. However, toy applications do not add value to the business and may even cause serious problems, because they consume talent and data without delivering a strong return on investment. As a result, businesses that implement them may struggle to get buy-in for meaningful AI later. The same is true for bringing in ethics to feel good without considering it in the context of business value.

Strategic Direction of the Company

Before making any decision about AI, it is important to first determine the strategic direction of the company, such as profit growth and market positioning goals. This helps identify the current critical issues for reaching those goals. At that point, it may be a good time to consider how AI can help reach these goals by opening up new ideas or possibilities.

However, the conversation can take a risky turn when AI is brought to the table prematurely. It is easy to make the wrong choices with AI given how little most executives understand about the technology. It is much harder to convince stakeholders to take a chance on AI after initial forays have ended badly, and that can mean a loss of competitive edge in the future.

Executives need to proceed with caution when making decisions about AI at the start. Feedback can be a valuable source of information for making the most of any system adopted at the initial stages, and it gets easier from there.

After the third or fourth successful adoption of AI, a company might begin to think about other applications of existing AI systems, as well as the future of job roles and departments. Once there is an understanding of the realistic uses of AI applications, the conversation can turn to ethical concerns.

Determining Relevance

By the time ethics becomes a major issue, the company already has a thorough understanding of which AI applications will be a priority. As a consequence, executives will be able to tell which of the endless stream of AI ethics conversations could bring company values into the product, and which will not.

If the ethics conversation is about self-driving cars, and the company gives commercial loans, then there is no reason to continue with the conversation. If the discussion is about computer vision having a racial bias, but the company sees no version of the future where computer vision will become a part of it, it may be a waste of time to think about computer vision in great depth.

Ethical Considerations in AI

That said, there is much to discuss in the AI and ethics space that will be relevant to all businesses. Read on to learn the lenses business leaders may need to look through in order to bring ethics into their business, simple frameworks for working through these ethical concerns, and their implications for a company.

Transparency

Most people know that artificial intelligence and machine learning systems are capable of rapid and complex computations and are content to leave it at that. However, when choosing or designing an AI system, one of the decisions executives have to make is about transparency. When is transparency, also known as interpretability, needed?

This is an important factor because transparency is a big deal in machine learning and artificial intelligence. Machine learning, as an approach to having computers do what humans can do, is often very difficult to interpret.

When Transparency is Not Necessary

For example, if someone at Amazon wanted to know why the recommendation engine recommended a product to a person at a given time, he or she might be able to understand, roughly, the approach the computer is taking.

However, actually explaining the statistical set of feedback factors around clicks and purchases that led to a product recommendation is a tall order. Most of the time, the user will not be able to follow the process the machine takes. It is extremely challenging to achieve interpretability in a machine learning system. This is not a problem in circumstances where transparency is not necessary.

One such circumstance is finding articles to read on a large media site like the Huffington Post. As long as the machine gets people to click on an article, the process that led to the recommendation of certain articles does not require transparency. The same applies to Amazon when it comes to product recommendations. As long as the user clicks on a recommended product and buys it, nobody cares about the process itself.

When Transparency is Necessary

However, in some instances, transparency is necessary. Since interpretability is hard with machine learning, this poses serious and critical issues in such fields as healthcare and finance.

To illustrate, say a client has $2 million in mutual funds, and the system recommends allocating it in the market in a certain way. The fund manager should have a clear idea of why the system made that recommendation. The same applies to a doctor who recommends chemotherapy for a patient based on an AI-based medical diagnosis.

When Transparency Might be Necessary

In still other cases, transparency may be necessary for some parts of the process, but not for the entire process. One example is a loan application, where the applicant may want to know why their application was rejected. The loan officer should be able to follow the logic of the system for rejecting an application because of bad credit.

What we are really considering here is the basis for the decisions the machine makes. Having transparency means being able to pull out the reasons for a decision and find out what kinds of data influenced the machine to act one way or another.

This is important from an ethical perspective because it may have a direct impact on company values. Taking the above example of a loan application, a machine may consistently reject applications from a certain location because, based on previous data, loans originating from people living in that area tend to go into default. This type of bias may not be consistent with the company values of non-discrimination.
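
One minimal way to picture “pulling out the reasons” is with a model whose decisions decompose into per-feature contributions, such as a logistic regression. The feature names, weights, and values below are hypothetical, purely to show the kind of readout a loan officer would want to see.

```python
# Hypothetical sketch: list each feature's contribution to one applicant's score.
import numpy as np

feature_names = ["credit_score", "income", "zip_code_default_rate"]
weights       = np.array([-0.8, -0.5, 2.1])   # learned coefficients (made up)
applicant     = np.array([ 0.3,  0.1, 1.7])   # the applicant's standardized values

contributions = weights * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")
# A large positive contribution from zip_code_default_rate is exactly the
# location-driven bias described above, surfaced where someone can review it
# against company values.
```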

Accuracy

The second factor is about managing expectations. Most people think that machines are infallible and never make mistakes. Realistically, however, machine learning systems are not likely to deliver 100% accuracy. They are bound to have some margin of error because of the nuances of real-world data.

For example, if an AI-based email system trains on a million instances of spam email and ten million instances of non-spam email, the system will generate all kinds of patterns to identify spam, such as IP address, subject line, text, links, and so on. Overall, the inbox stays largely spam free.

However, most people will notice that every now and then, one or two spam emails still manage to get into the inbox. How does this happen? Consider the alternative first: a simple, hand-coded rule such as “if there is any mention of a Nigerian prince in the email, filter it out.”

Unfortunately, simple rules like that do not account for the complex nuances of the real world very well. Machine learning does a better job of that, but there will still be exceptions. Machines can only do so much to catch the infinite combinations of human behavior.
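
A toy sketch of that contrast is below, assuming a scikit-learn-style classifier and a made-up handful of emails; a real spam corpus and model would be far larger.

```python
# Hypothetical sketch: a brittle hand-coded rule versus a learned classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def rule_based_filter(email: str) -> bool:
    # The hand-written approach: one phrase, no nuance.
    return "nigerian prince" in email.lower()

emails = ["Claim your prize now!!!", "Meeting moved to 3pm",
          "Cheap meds, click here", "Quarterly report attached"]
labels = [1, 0, 1, 0]   # 1 = spam, 0 = not spam (toy data)

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

def learned_filter(email: str) -> bool:
    # The learned approach: patterns across many features, still not 100% accurate.
    return bool(classifier.predict(vectorizer.transform([email]))[0])
```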

Executives have to understand that 100% accuracy is not realistic for machine learning systems. These statistical systems take time to reach even an acceptable rate of accuracy, so certainty is definitely off the table.

Different levels of accuracy may also be necessary for specific situations. For example, a company called Digital Genius provides customer support-related services, and one of these is routing email tickets. When a ticket comes in, the software determines the ticket type so it can send it to the right department or person.

When AI Needs to be as Accurate as People

While machines can work much faster than humans can on some tasks, there is always a margin of error. In this instance, the degree of accuracy of the software must at least match that of a human to provide value to the company. If the machine can do that, this will justify re-allocating people to do things that are more productive.

This would represent massive savings for the company because the machine, while not 100% accurate, can sort through tickets with the same or better accuracy as people and do it much faster.

However, some of these systems have long feedback loops and require testing over time. Executives should be aware that considerable investment in machine learning may be necessary for complex processes, such as loan application processing and hiring or recruiting.

When AI Needs to be 100% Accurate

There are other instances where systems need to be 100% accurate, such as calculating taxes for filing or presenting financial data in reports to an executive. In such cases, machine learning is not the solution. The more appropriate software would be template structures with hard-coded if-then rules, which still fit under the purview of AI. Some experts claim that if it is not a neural net, it is not AI. There is a lot of debate there, but the fact remains that when 100% accuracy is required, the solution may be AI, but not machine learning.

One such solution comes from a French company called Easy Office. Easy Office has people input financial data into a dashboard, which turns it into a financial performance report in paragraph form for executives to read. These reports have to be error-free all of the time. It is not enough to be very good; it has to be 100% accurate.
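
The sketch below shows the general idea of that template-plus-rules approach; the figures, wording, and function are hypothetical and are not Easy Office’s actual product.

```python
# Hypothetical sketch: deterministic if-then rules filling a fixed template.
def quarterly_summary(revenue: float, prior_revenue: float) -> str:
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "increased" if change >= 0 else "decreased"
    # Same inputs always produce the same sentence, so the output can be
    # verified to be correct every time.
    return (f"Revenue {direction} {abs(change):.1f}% versus the prior quarter, "
            f"reaching ${revenue:,.0f}.")

print(quarterly_summary(revenue=1_250_000, prior_revenue=1_100_000))
# -> Revenue increased 13.6% versus the prior quarter, reaching $1,250,000.
```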

When AI Does Not Need to be Accurate

On the far end of this particular spectrum, there are some instances when accuracy is not required. The recommendation engine in Amazon is one example. It does not really matter if the recommendations made to a particular user are accurate as long as the customer lifetime value goes up.

The goal of Amazon is not accuracy, but sales. If the system notices that a user’s click-through rate is going down, it adjusts the recommendations until the click-through rate rises again. If spend goes up, the system will continue recommending the same types of products. Accuracy in this case is something of a non sequitur.
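
A crude sketch of that feedback logic follows; the candidate lists, threshold comparison, and click-through numbers are illustrative assumptions, not Amazon’s actual system.

```python
# Hypothetical sketch: steer recommendations by click-through rate, not "accuracy".
def pick_recommendations(similar_items, exploratory_items,
                         recent_ctr, baseline_ctr):
    if recent_ctr < baseline_ctr:
        return exploratory_items   # engagement is dropping: try different product types
    return similar_items           # engagement is holding: more of what already works

picks = pick_recommendations(["more hiking boots"], ["camping stove", "novel"],
                             recent_ctr=0.012, baseline_ctr=0.020)
```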

Generally, when a considerable amount of transparency and/or accuracy is required, a more complex AI application is needed.

That application will most likely take longer to build and train, and it will require more effort to reach acceptable levels of transparency and accuracy. In some cases, those levels may not even be realistic to achieve.

When working with a big business with multiple AI applications and varying requirements for transparency and accuracy, mapping them out in this way helps identify problem areas. The framework makes it immediately apparent which applications carry ethical issues so complex that it would not be realistic to expect to reach those goals.
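
One way to do that mapping is a simple worksheet-style scoring of each candidate application on how much transparency and accuracy it needs. Everything below, including the application names, scores, and threshold, is hypothetical.

```python
# Hypothetical sketch: flag the applications whose combined transparency and
# accuracy requirements suggest the hardest ethical and technical work.
applications = [
    {"name": "product recommendations", "transparency": 1, "accuracy": 1},
    {"name": "support ticket routing",  "transparency": 2, "accuracy": 3},
    {"name": "loan approval",           "transparency": 5, "accuracy": 4},
]

REVIEW_THRESHOLD = 7   # arbitrary cutoff for "needs early ethical review"
for app in sorted(applications, key=lambda a: a["transparency"] + a["accuracy"],
                  reverse=True):
    total = app["transparency"] + app["accuracy"]
    flag = "review early" if total >= REVIEW_THRESHOLD else "proceed normally"
    print(f'{app["name"]}: {total} ({flag})')
```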

However, many executives make purchasing decisions without thinking through these things. They get halfway into a $2 million project and realize they are never going to achieve the requisite level of transparency and accuracy.

For example, the existing data set may have an inherent bias toward a particular race or gender, making it impossible to achieve an acceptable level of transparency or accuracy in the results. The only choices at that point are to abandon the project altogether or spend even more money trying to fix it.

It is essential to think these things through before making any decisions regarding AI applications. The simple framework above should provide a clear picture of the realities of the situation.

Accountability

The third factor in managing the risks of AI in the planning stage is accountability. Who owns what kinds of issues? In other words, when something goes wrong, who will take the blame?

Generally, this lands squarely on the team that built the AI model, which is responsible for continually monitoring the algorithm to make sure it does not drift in the wrong direction. That team would be held accountable for any issues that come up at any point in adopting and integrating AI into the business.

This may include a third party, such as a vendor, if the AI application is being developed in partnership with one. There should be an agreement tying the vendor’s responsibilities to supporting company values and the business in general. This may take the form of ongoing maintenance, testing, and updating on the vendor’s part, along with accountability for addressing any issues with the software in those areas of responsibility.

However, the C-suite or decision-makers may also take on some of the blame in some cases. This happens when executives fail to provide the tactical teams with the right direction and a bounding box of ethical concerns to work within in the first place. It is not enough for the AI team to know the business objectives. They also need to know any ethical issues that pertain to the role of AI in the business, such as risks around privacy, transparency, and accountability.

When an AI application fails because the team lacked essential information, accountability goes straight to the executives. Providing the team, whether a vendor, an in-house group, or a combination of the two, with a set of potential ethical issues and concerns at the start will allow them to address and incorporate these concerns into the application at the DNA level.

This is the smart way for executives to manage the ethical risks of AI. It also spares them from being held accountable for issues that could have been anticipated up front.

Man Versus Machine Decisions

The fourth factor executives may consider for AI applications is determining which part of the business or which processes should be handled by humans and which should be handled by machines. In most cases, the line is not all that clear. There is a lot of middle ground in real life situations.

One might say a machine should handle product recommendations, and humans should handle cancer diagnosis. However, situations are seldom as clear-cut as this. It may be worth opening the minds of executives to the fact that there are multiple levels of middle ground when it comes to decisions made by man or machine.

Sometimes, a machine can handle something up to a certain point. For customer support tickets, for example, a system can reply to a ticket if its confidence is in the 90% to 95% range that the response will satisfy the customer. If it is below that level, it may be better to pass the ticket to a human. The machine can be frank about when it is more or less confident on certain matters, so there is no ego problem here.
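
In code, that handoff is little more than a confidence threshold. The model interface, threshold value, and queue below are illustrative assumptions, not a particular vendor’s product.

```python
# Hypothetical sketch: answer automatically above a confidence threshold,
# otherwise hand the ticket to a person.
CONFIDENCE_THRESHOLD = 0.90   # roughly the 90-95% range discussed above

def handle_ticket(ticket_text, model, human_queue):
    reply, confidence = model.suggest_reply(ticket_text)   # assumed model interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply                     # machine responds directly
    human_queue.append(ticket_text)      # below threshold: escalate to a human
    return None
```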

However, this middle ground is constantly shifting. A machine learning system can gradually become more capable over time by learning from its mistakes. In addition, machines can inform human decisions.

Take cancer diagnosis, which at this time is best handled by humans. Applications in this space are extremely nascent right now, so it is unlikely that an AI application will inform an existing treatment program the next time a patient goes to the doctor. That said, there may be some exceptions.

For example, given a patient’s genetic profile, medical history, and other information, a machine may be able to find a narrow set of relevant medical literature to support or improve a prospective treatment plan. The doctor will still be the one to make the treatment decision, but he or she will have the benefit of a few highly targeted research papers as a reference.

Of course, anyone can search online for the same medical literature without the use of AI. However, it would probably not be as efficient, because the user would have to cross-reference multiple data points against hundreds, if not thousands, of articles. In this case, the machine supports the decision but does not make it.

When we think about the sliding scale between man and machine, the general rule is that if a machine can handle more of a process or task than humans, it requires a less complex AI solution. On the other hand, if the task requires more human involvement than machine, it requires a more complex and resource-heavy application.

This is because when machines are in charge for the most part, there tend to be few or no inherent risks attached to the process or task that could lead to legal trouble or ethical issues. Determining what role machines will play in a particular situation requires breaking it down into its different components in another worksheet.

When considering the support ticket process, look at the current system and list the various steps in completing it from start to finish. This may include determining the nature of an inbound ticket, routing it to the proper department, and so on. For each of these steps, determine whether a person or a machine should do it, and include any qualifiers for the decision.

Determine all the human elements that will inform the system to make it more capable. The goal is to transfer responsibility more and more to machines over time.

For example, if a machine misdirects a ticket, a human would relabel it, and that data would be fed back into the system. The system would then reconsider this example and train itself to handle a similar situation the way a human would.
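
A minimal sketch of that relabel-and-retrain loop is below, assuming a scikit-learn-style classifier and vectorizer; the data structures are placeholders rather than a real ticketing system.

```python
# Hypothetical sketch: human corrections accumulate and are folded back
# into the training data so the router improves over time.
corrections = []   # (ticket_text, correct_department) pairs from human reviewers

def record_correction(ticket_text, correct_department):
    corrections.append((ticket_text, correct_department))

def retrain(classifier, vectorizer, original_examples):
    # original_examples: list of (ticket_text, department) the model started with
    texts, departments = zip(*(original_examples + corrections))
    classifier.fit(vectorizer.fit_transform(texts), departments)
    return classifier
```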

Mapping out the process in the man versus machine decision can make adoption go a little more smoothly, and the framework helps executives visualize it.

Job Security and AI

Finally, there is the issue of job security with regards to artificial intelligence, which is a touchy subject. For large enterprises, we may expect to see the following in terms of communications and considerations of job security:

  • Large companies will hide any conversations they may have about potential layoffs or firings as much as possible. The idea is to seem progressive by promoting a vision for the future of different roles in the company, overtly talking about retraining and retaining employees as the firm moves forward successfully.
  • Companies will definitely consider each of the roles that are likely to change with the adoption of AI, and how they will look in the next 5 to 10 years.
  • Companies will also consider new hiring practices.

Firms are going to make great efforts to manage perceptions. From a public relations perspective, it will be very important for companies to have at least a vision of the impact of AI on employees. Companies should have a plan for how departments and job roles might shift, and for retraining and marshaling human resources toward an exciting company goal.

While the narrative should not emphasize the negative aspects of a transition, such as layoffs, considering job security concerns brings a more realistic perspective for planning these transitions. It is possible to turn this into a good thing by preparing for the possible scenarios, managing employee expectations, and making employees part of a compelling vision.

We believe this is the right way to position a company adopting AI. Thinking it through before a crisis happens makes it more manageable. This is particularly important for companies that are likely to experience considerable employee shifts in the next decade, such as those in financial services, retail, and potentially transportation.

The Two Parts of Job Security Conversations

While it is essential to consider the ethical implications of job security in AI adoption, executives should be very skeptical of two types of people that talk about job security. One side understates the risks, and the other side overstates them.

The Understaters

Those who understate the job security risks of AI adoption tend to have certain things in common. They are generally large, multinational companies with many employees and stakeholders. They are not on the cutting edge of technology in all their departments and tend to be older firms. They use words and phrases such as “always,” “never,” “empower,” or “Our people are most important.”

These companies have a vested interest in emphasizing augmentation rather than automation, because they are looking to head off a potential revolt of employees and shareholders. The goal is to frame the very realistic risk of a shuffle of jobs as unimportant. Executives may want to take anything people on this side of the coin say with a grain of salt. In many cases, they should not even listen to them.

The Overstaters

People on the other side of the coin also deserve criticism. They tend to be in small, tech-oriented firms of 50 people or fewer. These may be slightly older, bootstrapped companies, or companies with a reasonable number of investors. They have yet to gain appreciable traction in the AI space, but they either claim to be using AI (and about 60% or more of them are not) or are actually using it.

These companies are likely to say AI is going to revolutionize job roles and completely take over departments. They use words such as “completely automate,” “disrupt,” and “push-button.” Their homepages are littered with references to AI in every sentence to make sure everybody knows they are using it.

The reason these firms overstate the impact of AI on job security is to sell the idea that they, and not the big companies, are the real innovators. They present themselves as knowing the future, and as doing something so important and revolutionary that it must be stated unequivocally.

Executives need to be extremely careful about whom they listen to and whom they accept into their social feeds. They need to identify the loudest voices in any conversation, because those are the ones with a stake in this space.

Stay with credible sources as much as possible, and even then take everything with a healthy amount of skepticism. Reputable companies are not immune to taking sides in these issues if doing so benefits their business.

Rules of Thumb

However, executives can make a realistic determination of how AI will affect job security by keeping in mind these rules of thumb:

  • Positions that require management skills and social connections (e.g., a kindergarten teacher) are highly unlikely to be in danger of automation
  • Positions that deal with broad context and multiple points of relevance (e.g., a plumber) are least likely to be taken over by machines
  • Positions that deal with minimal contextual factors and have fixed inputs and outputs (e.g., a welder on a manufacturing line) are at the highest risk of automation

To illustrate, compare the work of a plumber and a welder on a manufacturing line. A plumber comes into the home and engages in conversation with the homeowner. During this conversation, the plumber gets information about the nature of the problem. Plumbers have to deal with all kinds of context to diagnose and solve a problem.

On the other hand, a welder on a manufacturing line needs to understand nothing more than that a specific configuration of metal is coming in and needs to come out in a different configuration. That is not to say it is not a worthwhile or highly skilled job. However, the work is limited in context, so it is generally easier for a machine to take it over.

The same thing applies in a white-collar context. For example, a financial manager in procurement may need to understand the balance sheet and P&L for a variety of reasons, such as using them as leverage to negotiate with vendors and contractors, determining how to cut costs, or assessing how a decision will affect the business from a broader perspective, among other things.

An auditor, on the other hand, may simply scrutinize reports to find certain kinds of errors and other factors in order to produce a certain report. While this is quite important, it is mostly about inputs and outputs. This position has a lower barrier to automation than a financial manager in procurement does.

Takeaways for Business Leaders

The big takeaway from this article is the importance of keeping these conversations about ethics and AI couched squarely within the strategic planning process of the business, not held in isolation.

When executives engage with the AI and ethics space outside the context of doing business in the real world, they can end up making poor decisions. They either get fired up and carried away into adopting AI in risky and expensive ways, or they get distracted by conflicting opinions and miss out on important opportunities.

In terms of decision-making, this means imbuing company values into the technology and products, as well as determining the resource requirements to bring the value of AI to life.

Ethics is an essential factor when making decisions about AI. However, it should inform the AI rather than palliate the inevitable consequences of adopting it. Painting a pretty picture of the friendly, human values that can come out of AI may be soothing, but it does not do anything good for customers or stakeholders. When ethics is part of the real-world applications of AI, it can actually bring those values to life.

 

