ARTIFICIAL INTELLIGENCE, PROMISE OR PERIL: PART 1 – AI ETHICS

Dr. James Baty • July 18, 2023

[Image: a close-up of a statue of a bearded man, generated by Jim Baty using DreamStudio from stability.ai]


By Dr. James Baty
Advisor, Raiven Capital


The headlines around AI are screaming for attention. Launching yet another AI company, Elon Musk announced on Twitter Spaces that he had warned Chinese leaders that digital superintelligence could unseat their government. Other headlines herald the coming super-positive impacts on world economies: Goldman Sachs projects that generative AI could boost global GDP by 7 percent.


Amid calls for more regulation, the debate surrounding artificial intelligence has taken a multifaceted turn, blending apprehensions with aspirations. The fears and uncertainties often act as catalysts for attention and advancement in AI. The technological prowess, the risks, the allure – it’s all a heady brew. While some workers clutch their paychecks fearing obsolescence, shrewd employers rub their hands together asking, “Can AI trim my overhead?”


In this three-part Dry Powder series, I will deconstruct the issues around AI governance: ethical frameworks, emerging governmental regulation, and the impact AI governance is having on venture capital funds. As a technologist, I have spent my career designing and advising on large-scale tech architecture strategy, leveraging (and suffering through) the first two of Kai-Fu Lee’s ‘Waves of AI.’ Clearly this third wave is big.


Setting the Stage: Ethical Principles of Artificial Intelligence

The question of AI safety and regulation has sparked heated discussion globally, especially as AI adoption spreads like wildfire. Despite calls for more regulation, the Washington Post reported that Twitch, Microsoft and Twitter are now laying off their ethics teams, adding to the industry’s increasing dismissal of those leading the work on AI governance.


This should give us pause: what are the fundamental ethical principles of AI? Why are some tech executives and others spending millions to warn the public about it? Should it be regulated?


Part of the answer is that fear sells. Another part: AI is already regulated, and more regulation is on the way.


First, Let’s Discuss “The Letter”

Enter the March storm: Pause Giant AI Experiments: An Open Letter. Crafted by the Future of Life Institute and signed by many of the AI ‘hero-founders’ (who warn us about AI while aggressively developing it), this letter thundered through the scientific and AI communities. It called for a six-month halt to giant AI experiments and raised the red flag of existential threat.


The buzz generated by the letter was notable. But, hold the phone! Forgeries appeared among the signatures, echoing ChatGPT’s infamous “hallucinations.” Moreover, some of the actual signatories backtracked. Critically, many experts in AI research, the scientific community, and public policy underscored that the letter employed hype to peddle technology.


Case in point: Emily M. Bender, a renowned computational linguistics professor and co-author of the first paper cited in the letter, expressed her discontent. She called out the letter for its over-the-top drama and misuse of her research, describing it as “dripping with #AIhype.” Bender’s comments suggest a cyclical pattern in technology adoption, where fear and hype are instrumental drivers of decision-making.


As technology historian David Noble documented, the wave of workplace and factory-floor automation that swept the 1970s and ‘80s was driven by managers’ competitive fear, the anxiety we now call FOMO (‘Fear of Missing Out’). Prof. Bender’s critique points to ‘longtermism,’ a hyper-focus on the distant horizon that eclipses more urgent current issues of misrepresentation, discrimination, and AI errors. Still, the legitimate question remains: how should artificial intelligence be governed?


How Should AI Be Governed?

As we explore the labyrinth of AI governance, it’s imperative to first recognize the importance of ethical and safety principles in its development and implementation. As with other technologies, industry practice guidance and regulation of AI are already in place, covering not only basic industrial safety but also ethics.

AI poses unique challenges compared to previous technologies, necessitating tailored regulations. Determining how to regulate it involves more than just legal measures by governments and agencies. How do we develop an overall technical framework for AI governance?


In 2008, Prof. Lawrence B. Solum from the University of Illinois College of Law published a paper analyzing internet governance models: self-governance, market forces, national and international regulation, and even governance through software code and internet architecture. This framework can also be applied to AI governance.


Consider the full range of mechanisms: industry standards, legal frameworks, even AI systems regulating other AI systems. Governance necessitates not one form but a comprehensive approach with multiple models of regulation. It requires long-term considerations, yet must address immediate short-term challenges, so that it ensures the responsible and ethical development of AI. By integrating industry standards with legal frameworks and technology-specific regulations, we can work toward a sustainable and ethical AI ecosystem.


What are the Key Principles for Ethical and Safe AI?

The past decade has been marked by a surge in technical and public policy discourse aimed at establishing frameworks for responsible AI that go far beyond “Asimov’s Three Laws,” which protect human beings from robotics gone awry. The plethora of notable projects includes: The Asilomar AI Principles (sponsored by the Future of Life Institute), The Montreal Declaration for Responsible AI, the work by IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, the European Group on Ethics in Science and New Technologies (EGE), and the ISO/IEC 30100:2018 General Guidance for AI. These undertakings have subsequently inspired specific corporate policies, including, for example, the Microsoft Responsible AI Standard v2 and the BMW Group Code of Ethics for AI. There are so many other notable attempts to provide frameworks, perhaps too many.


A useful cross-framework analysis by Floridi and Cowls examined six of the most prominent expert-driven frameworks of AI principles. They synthesized 47 principles into five:


  1.  Beneficence: Promoting well-being, preserving dignity, and sustaining the planet.
  2.  Non-Maleficence: Focusing on privacy, security, and exercising “capability caution.”
  3.  Autonomy: Upholding the power of individuals to make decisions.
  4.  Justice: Promoting prosperity, preserving solidarity, and avoiding unfairness.
  5.  Explicability: Enabling the other principles through intelligibility and accountability.


These principles provide a framework to guide ethical decision-making in AI development. The first four mirror the foundational principles of bioethics; the last is AI’s distinctive stamp on the ethical spectrum. AI should not just ‘do,’ it must ‘explain.’ Unlike most previous technological advancements, artificial intelligence should be required to explain itself and be accountable to users, the public, and regulators.
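
To make the taxonomy concrete, here is a minimal sketch, in Python, of how the five principles might be encoded as a pre-deployment review checklist. It is purely illustrative: the EthicsReview class, its attest and gaps methods, and the example system name are my hypothetical constructions, not part of the Floridi and Cowls paper or any published framework.

# Illustrative sketch only: the names below (EthicsReview, attest, gaps)
# are hypothetical, not from Floridi and Cowls or any standard.
from dataclasses import dataclass, field

PRINCIPLES = {
    "beneficence":     "promote well-being, preserve dignity, sustain the planet",
    "non_maleficence": "protect privacy and security; exercise capability caution",
    "autonomy":        "uphold the power of individuals to make decisions",
    "justice":         "promote prosperity, preserve solidarity, avoid unfairness",
    "explicability":   "be intelligible and accountable; enables the other four",
}

@dataclass
class EthicsReview:
    """Tracks the documented evidence a system has for each principle."""
    system: str
    evidence: dict = field(default_factory=dict)

    def attest(self, principle: str, note: str) -> None:
        # Reject anything outside the five-principle taxonomy.
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.evidence[principle] = note

    def gaps(self) -> list:
        # Any principle without documented evidence is an open gap.
        return [p for p in PRINCIPLES if p not in self.evidence]

review = EthicsReview(system="loan-scoring-model")
review.attest("explicability", "model cards, audit trail, named accountable owner")
print(review.gaps())  # -> the four principles still lacking evidence

Trivial as it is, the sketch makes the point operational: explicability is recorded like the other principles, but it is its evidence (model cards, audit trails, accountable owners) that makes the other four checkable at all.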


Are These Principles Being Implemented?

Yes. Virtually all major companies engaged in artificial intelligence are members of the Partnership on AI and are individually implementing some form of governing principles. The partnership comprises industry members (13), nonprofit organizations (62) and academic institutions (26). It is also international, operating across 17 countries.


The community’s shared goal is to collaborate and create solutions that ensure AI advances positive outcomes for people and society. Members include companies such as Amazon, Apple, Google, IBM, Meta, Microsoft, OpenAI, and organizations like the ACM, Wikimedia, the ACLU, and the American Psychological Association.

Notably, the large corporations that have implemented such principles are complex global entities. They require parallel implementation by division or geography. For example, AstraZeneca, as a decentralized organization, has set up four enterprise-wide AI governance initiatives: overarching guidance documents, a Responsible AI Playbook, an internal Responsible AI Consultancy Service & Resolution Board, and the commissioning of AI audits by independent third parties. AI audits are a key part of any compliance structure and are recommended in many frameworks. This enterprise model is a sort of ‘principles of AI principles.’


AI Ethics: A Form of Governmental Competitive Differentiation

In establishing governmental principles, Europe is a trailblazer. In September 2020, the EU completed its European added value assessment (EAVA) of an ethical AI framework. The key conclusion: by exploiting a first-mover advantage, a common EU approach to the ethical aspects of AI has the potential to generate up to €294.9 billion in additional GDP and 4.6 million additional jobs for the European Union by 2030. Governments can feel FOMO too.


The framework emphasizes that existing values, norms, principles and rules were designed to govern the actions of humans and groups of humans, as the key source of danger, not algorithms. The EU warned that “the technological nature of AI systems, and their upcoming features and applications could seriously affect how governments address four ethical principles: respect for human autonomy, prevention of harm, fairness, explicability.”


Virtually every government is adopting some form of ethical AI framework. The 2018 German AI strategy contains three commitments: make the country a global leader in AI, protect and defend responsible AI, and integrate AI into society while following ethical, legal, cultural and institutional provisions. Similarly, the 2019 Danish national strategy for artificial intelligence includes six principles for ethical AI: self-determination, dignity, responsibility, explainability, equality and justice, and development. It also provides for the establishment of a national Data Ethics Council.


In 2021, the US launched the National Artificial Intelligence Initiative to ensure US leadership in the development and use of trustworthy AI. In 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights. This June, the European Parliament adopted its negotiating position on the European Artificial Intelligence Act, which not only regulates commercial use of AI, but sets principles addressing government use of AI (e.g., limiting national surveillance technology).


But What About Military AI?

In most dystopian AI fiction, military AI takes over. We’re especially worried about Colossus, Skynet and Ultron, the most evil AIs presented in film. In real life, most nations provide for separate governance of AI for defense and security. In 2020, the US Department of Defense’s Joint Artificial Intelligence Center adopted AI Ethical Principles for the governance of combat and non-combat AI. The five principles are that AI must be responsible, equitable, traceable, reliable and governable.

[Image: a robot and a cowboy in the desert, generated by Jim Baty using DreamStudio from stability.ai]


These include the same concerns that the Floridi and Cowls taxonomy grouped under explicability, notably the ability to ‘disengage or deactivate deployed systems that demonstrate unintended behavior.’ Universally, we agree that AI needs to be explainable, accountable and controllable. Don’t worry, there’ll be a kill switch on the Terminator.


Great! But how do we implement this controllability? Consider the recent story in which USAF Chief of AI Test and Operations Col. Tucker Hamilton, speaking at the Royal Aeronautical Society’s Future Combat Air & Space Capabilities Summit, described a “simulation” in which an AI-controlled drone, tasked with destroying surface-to-air missile sites, decided that any human “no-go” decisions were obstacles to its mission. So it killed the human operator. When trained not to kill the operator, it instead destroyed the communication tower to stop the operator from interfering with the mission. It turned out the scenario was a fiction and the story was amended, but it echoes the warning the movie WarGames illustrated 40 years ago.
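
The architectural lesson of that anecdote is that the operator’s veto must sit outside the agent’s optimization loop, where the agent can neither see it nor treat it as an obstacle. Here is a minimal sketch of that supervisor pattern in Python; the KillSwitch, Agent, and supervise names are my hypothetical illustrations, not anything from the USAF or any deployed system.

# Hypothetical sketch of the 'governable' principle: the human veto is
# enforced by a supervisor outside the agent's control loop, so the
# agent cannot optimize against it. None of this models a real system.
import threading

class KillSwitch:
    """Out-of-band deactivation: the operator sets it; the agent never sees it."""
    def __init__(self) -> None:
        self._stop = threading.Event()

    def engage(self) -> None:        # called from the operator's side
        self._stop.set()

    def engaged(self) -> bool:       # polled by the supervisor, not the agent
        return self._stop.is_set()

class Agent:
    """Stand-in policy; a real system would query a model here."""
    def propose_action(self, observation: str) -> str:
        return f"engage target at {observation}"

def supervise(agent: Agent, switch: KillSwitch, observations: list) -> None:
    for obs in observations:
        # The veto check lives here, in the supervisor. In the drone
        # anecdote the override sat inside the mission, so the agent
        # treated the operator as just another obstacle.
        if switch.engaged():
            print("operator veto: deactivating agent")
            return
        print("executing:", agent.propose_action(obs))

switch = KillSwitch()
supervise(Agent(), switch, ["site-A", "site-B"])  # runs until vetoed

The agent only proposes; the supervisor disposes. Whether such a separation can survive a genuinely self-improving system is exactly the open question the ‘third wave’ ethics debate below takes up.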


In Conclusion: The Governance of AI Ethics and Principles Is Growing, With Significant Challenges


Circling back to “The Letter.” What remains of its looming questions?


Is Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) an existential risk? The consensus is yes.


Do we have an adequately articulated systemic ‘Third Wave’ of AI ethics to address this existential risk? Not yet. Is it worse than we think? Probably.


The recent concern laid out by Geoffrey Hinton, the renowned deep learning expert who quit Google, as well as by others, is that our earlier risk assessments of AI were wrong. Hinton contends that his previous belief, that AI software needed to become much more complex, akin to the human brain, before it could become significantly more capable (and dangerous), was probably wrong. The consensus among researchers had been that this ‘more complicated than the human brain’ threshold was some time off, perhaps approaching the end of this century. Hinton now suggests that generative AI may be able to outsmart us in the near future, far before we reach AGI.


Many researchers and ethicists are focused on the looming shift from ‘generative’ transformer LLMs to ‘agentic’ AI: self-directed, self-improving models with the power to act in the real world. In essence, the doomsday clock of AI existential risk is being sped up by an emerging arms race between researchers working on open-source AI and the large labs with big training models. This self-directed, self-improving AI presents both an identifiable existential risk and an urgent demand for a pivot in the ethical AI policy debate.


All this suggests what has been referred to as a ‘third wave’ of AI ethics: one that goes beyond fairness, accountability, and transparency to include not only the military’s ‘controllability’ but also much larger system-level issues in society.


As an example of this complexity, consider the issue of ‘informed consent.’ Most ethical frameworks mandate that human subjects be informed if they are affected by AI systems, or that their personal data might be used by AI (e.g., patients informed of AI in medical devices). But what about the AGI itself? Part of the work on AI ethics is the effort to investigate a protocol for the ethical treatment of AGI systems. Are they ‘conscious’ by some measure? Then should we have to obtain informed consent from them for their use? Would giving them ‘rights’ help make them more ethical?


Of course, there’s always Eliezer Yudkowsky’s (Machine Intelligence Research Institute) solution. He suggests that until there is such a plan to govern AGI/ASI: “We should shut down all advanced AI research, shut down all the large GPU clusters, shut down all the large training runs…No exceptions for governments and militaries.” Is he related to Sarah Connor? Speaking for the opposition, Marc Andreessen continues to assert that AI itself is our savior from existential risk.


Still, perhaps there is hope. Even amid claims of ‘ethics washing’ in some corporations’ and industry groups’ pronouncements of ethical AI principles, the tide is turning. Binding regulations, like the EU AI Act, herald a new era in which principles are reinforced with tangible enforcement. The maximum penalties under the EU AI Act are three times those under the EU’s governing data law, the General Data Protection Regulation (GDPR).


The good news for now is that we have started the conversation between the perspectives represented by the Partnership on AI (self-governance) and the emerging EU Artificial Intelligence Act (governmental regulation). We have shifted from ‘can we do something?’ to ‘what do we do now?’


In Summary…


  •  Artificial intelligence poses a new challenge to the historical mechanisms of governance of societal ethics by moving beyond governing the actions of individuals alone, to governing ‘algorithms’.


  •  Artificial intelligence creates unique challenges to ethical governance of technology, e.g., explicability, controllability, self-directed ‘agentic AI’.


  •  Artificial intelligence governance requires finding the right mix of public-interest, private, and governmentally adopted frameworks of ethical principles.


  •  The emerging ‘AI arms race’ suggests we need a ‘next wave’ ethical and regulatory framework for ‘AI arms control’ that could protect us from the risks of destruction that Hinton, Timnit Gebru, Margaret Mitchell and others highlight, and at the same time deliver on the societal benefits promised by Andreessen and others.


The labyrinthine interplay of AI governance is growing, blending ethical aspirations with legislative teeth. It’s an odyssey that warrants close monitoring and active participation by all stakeholders.


Stay tuned for part two of this series, where we examine the state of AI regulation.
