The Institute of Electrical and Electronics Engineers (IEEE) comprises about 421,000 members in over 160 countries, and is known to most in the tech community for its publications, technology standards, conferences, and professional and educational activities. The organization has been developing internationally recognized technology standards for decades.
Since 2016, it has been actively developing standardization methods for autonomous and intelligent systems. It launched a high-level ethics program called the Global Initiative on Ethics of Autonomous and Intelligent Systems, which develops new standards, certifications, and codes of conduct for the ethical implementation of AI.
In this article, we cover various initiatives and councils the IEEE has launched to advance AI ethics, including:
- ECPAIS: An overview of the IEEE’s ethics certification program for autonomous systems, comprising its objectives and applications
- #AuroraAI: A pilot program carried out by the Government of Finland and the ECPAIS to gauge the effect of implementing ethics principles for AI’s use in the public sector
- Ethically Aligned Design, OCEANIS, AI Commons, and CXI: How the IEEE is currently addressing awareness challenges in incorporating AI ethics
Emerj takes AI ethics very seriously. We have previously covered the IEEE’s efforts in the AI ethics space in The Ethics of Artificial Intelligence for Business Leaders – Should Anyone Care? – and have covered the more long-term consequences of AI (see our interviews with Nick Bostrom and Ben Goertzel) since 2013.
Through a series of interviews and correspondence between IEEE’s AI ethics leadership and our founder Daniel Faggella, we’ve aimed to put together a view of where the “rubber meets the road” on AI ethics today – including some of the IEEE’s efforts to bring AI ethics into the public sector, beginning with ECPAIS:
Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)
Konstantinos Karachalios (previously an interviewee on our podcast), Managing Director at IEEE Standards Association, said in his interview with Emerj that the IEEE is developing processes for AI systems and products that will be verified by outside bodies and organizations. Essentially, these organizations will provide certification to entities that go through the processes that the IEEE sets up.
In October 2018, the IEEE announced the launch of the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). The ECPAIS’s declared goal is to develop certifications and processes for autonomous and smart systems, such as smart homes, companion robots, autonomous vehicles, and so on, which would deem them “safe” or “trusted” by a global body of experts.
Karachalios also mentioned that, since its launch:
The ECPAIS already has [started collaborating with] its first customers–more like partners–the Government of Finland and the City of Vienna, and the City of New York.
He claims that the IEEE’s ethical standards for AI systems are being widely referenced, including in IBM’s recently released guide for AI development.
According to Professor Ali Hessami, Chair & Technical Editor of IEEE P7000 Standard at IEEE Standards Association, the ECPAIS will address three focal points:
- Transparency: Adopting techniques that humans can trust and easily understand would address AI’s “black box” problem, in which even an AI program’s designers cannot explain why it made a particular decision. For example, if a fintech AI used for vetting customers for credit risk cannot explain why it flagged certain customers as risky, it would be hard for fintech companies to trust its decisions.
- Accountability: IEEE believes that organizations should take responsibility for the use, effects, and results of their programmed algorithms.
- Algorithmic bias: Any AI system is only as good as the data fed to it. More often than not, the input and training data present a skewed version of reality, which distorts the AI’s decision-making. Identifying and mitigating this data bias could help ensure objective decisions from the AI.
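The "garbage in, garbage out" point about algorithmic bias can be made concrete with a deliberately naive sketch (not part of any ECPAIS material; the loan-approval labels and the `train_majority_classifier` helper are hypothetical): a model trained on skewed data reproduces that skew in every prediction.

```python
from collections import Counter

def train_majority_classifier(labels):
    """A deliberately naive 'model': always predict the most common
    training label. Any skew in the training data becomes skew in
    every prediction the model makes."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical loan-approval history: 90 "approve" vs. 10 "deny" examples.
skewed_training_labels = ["approve"] * 90 + ["deny"] * 10
model = train_majority_classifier(skewed_training_labels)

# The trained 'model' now approves every applicant regardless of merit --
# the bias baked into the input data dictates the output.
predictions = [model for _ in range(5)]
print(predictions)  # ['approve', 'approve', 'approve', 'approve', 'approve']
```

Real systems fail in subtler ways, but the mechanism is the same: auditing the distribution of the training data (as the ECPAIS focal point suggests) is a precondition for trusting the model's outputs.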
Prof. Hessami clarifies that the ECPAIS is not a certification body in itself. It will help develop standards, not enforce them; it is essentially an extension of the IEEE standards. According to Prof. Hessami, there are definite deciding factors for any government or private organization to trust the ECPAIS. He says:
Some [AI] practices and businesses are self-certified. That could be a risk. It would be more respectable and credible to delegate [the ethics endorsement] to an independent party who have [already] developed standardized certifications. Another factor is the expert consensus. Expert groups tend to have much more focus on a specific area than any company or government. More focused expertise from a panel of experts for that particular area [would solidify the ethical aspects of an AI product]
In Prof. Hessami’s opinion, the ECPAIS will be useful for:
- Developing processes and metrics, and assisting with implementation of the AI ethics certification methodology
- Minimizing AI risks globally
- Securing endorsement from the stakeholder community that an organization’s AI product is ethically acceptable
Prof. Hessami thinks that the ECPAIS’s collaboration with businesses and governments would serve as a “gluing agent,” in its alignment with the IEEE’s standards. He admits that the AI ecosystem is rich and complex; therefore, credibility and some form of market acceptance lies with larger entities because of their access to a wide body of experts, regulations, infrastructure, and networking.
#AuroraAI

#AuroraAI is an artificial intelligence program in Finland, carried out by the Ministry of Finance in joint association with the ECPAIS; the Ministry is one of the ECPAIS’s founding members.
The #AuroraAI program aims to create a “people-oriented, proactive society” through the ethical use of AI. It also focuses on implementing AI in Finland’s public sector to accelerate nationwide digitization.
In the words of the program’s official description:
#AuroraAI is an artificial intelligence and autonomous applications network that helps create the conditions for a people-oriented, proactive society. The network is also designed to bring Finland’s public administration into the digital age more quickly. Intended for organizations that produce services for different life events and business events, the network facilitates interaction between artificial intelligences and other smart applications.
The Ministry of Finance (Finland) is currently coordinating the project. However, #AuroraAI is being built in a vast and open network of experts and officials. Now there are over one hundred of us working hard towards our mutual goal. We welcome you to join us to build people-oriented societies all over the world!
An official letter from the Ministry of Finance to the High-Level Expert Group on Artificial Intelligence at the European Commission (the EU’s executive) says that #AuroraAI will lead the way in the ethical use of AI by setting standards for the ethical design and development of AI in Finland’s public and private sectors. It also mentions that the program encourages openness, transparency, and collaboration among multidisciplinary specialists in identifying ethical issues and finding sustainable solutions, in line with the ECPAIS’s primary goals.
This program would require direct participation of Finnish citizens. In Karachalios’s opinion:
Finland is a small nation. People trust the government. Everything [product and services] is registered, unlike in the U.S. The government wants to use this database for allocation and planning [of algorithmic resources] for the future. They want transparency. [According to Saidot], every algorithmic system will have an identity, description, and [will be] certified. They are working with ECPAIS to implement that.
The letter to the High-Level Expert Group seems to agree with Karachalios. It argues that AI is only as good as the data or environment it learns from. It further insists that Finland’s public information and publicly funded research results be “made openly accessible, discoverable and reusable” both in Finland and abroad “with due consideration for protection of personal data and national security.”
The letter does not, however, address how it plans to carry out this data accessibility and availability without compromising security.
The Ministry of Finance in Finland launched a preliminary study on the #AuroraAI program in late 2018. The purpose of this study is to create a system that will enable interaction within the AI network for organizations that “provide services for people’s life events and for companies’ business-related events.” The project was active from September 2018 to February 2019.
Part of the preliminary study project involves preparing a 2019–2022 development and implementation plan for #AuroraAI.
The first trial version of Aurora will be implemented at the preliminary study phase by the end of February 2019. This trial will focus on select life events, such as moving to a place of study, improving life through competence development, and preserving the wellbeing of children and parents in changing family relationships.
Haataja says:

We drive the creation of trust-based culture for Finland’s AI era. Finland’s public sector wants to build its AI governance on world leading ethics best practice as well as be active in contributing to developing standards, certificates and other innovations for responsible AI. We truly strive to make Finland into an international testbed, leader and innovator in responsible AI.
The Ministry of Finance’s letter to the EU also echoes Haataja’s words: “We recognize #AuroraAI also as a possible unique testbed for High-Level Expert Group on AI Ethical Principles deployment. Now it is also time to discuss whether this kind of solution should be European wide.”
However, the preliminary study is, for now, more cautious about expanding this initiative across the EU. It says that the decision on extensive implementation of #AuroraAI will be made separately at a later stage, based on the results of the preliminary phase.
Ethically Aligned Design, OCEANIS, AI Commons, and CXI
According to Karachalios, there are two main challenges for the IEEE’s awareness and establishment initiative for AI ethics: a general lack of awareness, and keeping up with fast-moving technology. Karachalios mentioned that, initially, the idea of developing and investing in ethical elements for technology was met with resistance.
The critique was that we don’t need ethical aspects. We are engineers. Ethics are not our business. Ethics are for politicians and such.
However, from the above-mentioned active IEEE initiatives, it is clear that many organizations and individuals are increasingly aware of the need to incorporate ethical elements in the implementation of AI. Karachalios is also hopeful that this is just the start of the ethical era in AI and that the trend will continue. He says:
[In] Britain, Geneva, China, [and other countries] all around the world, these organizations admit that [they] cannot neglect or ignore this [AI ethics]. This is a transformative moment.
The IEEE appears to share Karachalios’s sentiments. Since late 2016, the organization has launched various initiatives and programs to raise awareness of incorporating AI ethics. Some of them are:
In 2018, the IEEE, along with 11 other standards-development organizations, founded the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). According to Karachalios, OCEANIS will serve as an awareness-building platform for AI ethics. It will focus initially on algorithms, sensors, big data, ubiquitous networking, and technologies used in autonomous and intelligent systems across all industry sectors.
The outputs of OCEANIS will be collectively determined by its participants and will include articles, white papers, and workshops on AI ethics.
Ethically Aligned Design (EAD) is a manual created by the IEEE comprising over a hundred practical recommendations for policymakers, technologists, and academics to advance public discussion and establish standards and policies for the ethical and social implementation of AI, “aligning them to moral values and ethical principles that prioritize human well-being.”
Prof. Hessami says:
IEEE doesn’t endorse standards. However, it has made an exception when it comes to the EAD. IEEE endorses the EAD.
According to an MIT editorial on the EAD, the report highlights five general principles:
- Ensuring that A/IS do not infringe on human rights
- Developing metrics to help prioritize human well-being
- Developing methods to assure that designers and operators are responsible and accountable
- Creating methods to ensure that A/IS are transparent
- Avoiding misuse of A/IS
Karachalios is a founding member of AI Commons, a nonprofit organization comprising AI practitioners, academics, NGOs, AI industry players, entrepreneurs, and “organizations/individuals focused on the common good.” AI Commons believes that technology must be made available to the largest possible community of such users so that they can develop “broader applications” addressing actual needs. Simply put, AI Commons connects problem owners with a community of solvers through the use of AI.
Council on Extended Intelligence (CXI)
In order to further increase awareness of incorporating ethics in AI development, the IEEE and the MIT Media Lab have launched a global Council on Extended Intelligence (CXI). This council comprises individuals who “prioritize people and the planet over profit and productivity.” It argues that there is no assurance that progress defined by “the next” intelligent technology is beneficial. Emerj founder Daniel Faggella is also a member of the CXI.
It also advises that “individuals, businesses and communities involved in the development and deployment of autonomous and intelligent technologies should mitigate predictable risks at the inception and design phase and not as an afterthought.”
Header Image Credit: RSA