Should we let Artificial Intelligence solve our political disagreements?
The 21st century is the century of Artificial Intelligence.
Artificial Intelligence (AI) has quickly evolved from a nascent technology limited to computational and mechanical manipulation into an integral part of humanity’s future, with machine learning and generative AI being integrated across multiple disciplines. A Chatham House report from 2018 deemed AI as being ‘mundane for the foreseeable future… with the field seeing relatively minor advancements that bring specific practical benefits in identified areas, rather than AI with general application’ – a prediction completely subverted by the progress of the last three years, with a slew of new AI products being developed and launched. ChatGPT, launched on November 30, 2022 by San Francisco-based OpenAI, is capable of anything from debugging code to writing poetry, taking yet another clear step towards AI that can perform like the human brain. Sora, unveiled in 2024, moves beyond traditional cut-and-splice video editors to incorporate an element mirroring human thinking and creativity, generating intricate narratives featuring multiple characters, specific movements, and precise details within the environment. Most notable of all, perhaps, is the Claude 3 family of general AI models: released in March 2024, the three models (Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus) stand out for achieving near-human comprehension and fluency in complex tasks across various AI evaluation benchmarks. Claude 3 Opus marks a breakthrough beyond previously known capabilities of generative AI, demonstrating exceptional performance in analysis, forecasting, content creation, code generation, and multilingual conversation. Its sophisticated vision capabilities allow it to process visual formats beyond photographs, including charts, graphs, and technical diagrams.
The pertinence of these latest developments lies in the fact that AI can now be applied to a much broader scope than the traditional STEM (science, technology, engineering and mathematics) fields of computer science, biomedical engineering, or logistical work. AI development has moved beyond narrow AI, which operates under a limited, pre-defined range or set of contexts, and reactive machines, which do not store memories or past experiences for future actions, to more powerful and robust models such as general AI, a type of AI endowed with broad human-like cognitive capabilities enabling it to discern, assimilate, and utilise its intelligence to resolve any challenge autonomously. Generative AI, which can create unique content and synthetic data, has been used in social science research to create simulations; machine learning – specifically deep neural networks – has been adopted by historians to examine and decipher historical documents; and diplomats have been using predictive AI to model on-the-ground situations and advise critical decision-making.
Artificial Intelligence For Politics: Powering The Future Of Dispute-Resolution And Diplomacy
It thus appears that there is a legitimate claim – and a rather convincing one, at that – that we ought to let AI solve our political disagreements. After all, the trajectory of AI development seems to promise a point where AI will possess the capabilities to completely simulate human thinking and decision-making. AI has already entered the sphere of politics – in the 2024 United States presidential race, campaign teams used AI to predict where voters are, what they care about, and how to reach them. Even in its older, simpler forms – i.e. barebones video-conferencing solutions – digital technology has aided political action, with Stephanie Williams, special representative of the United Nations chief in Libya, using a hybrid model integrating digital and personal interactions to lead mediation efforts and establish election roadmaps, reaching over a million people living in regions considered too dangerous to travel to. During the peak of 2020 COVID-19 restrictions, mediators unable to travel for in-person discussions with their interlocutors used remote communication software to facilitate dispute resolution and negotiations. Then-United States envoy Zalmay Khalilzad used Skype, a proprietary telecommunications platform, to engage in the Qatar-brokered talks between the United States and the Taliban in February 2020, eventually allowing the two parties to resolve certain differences and sign a peace accord – the Agreement for Bringing Peace to Afghanistan, or Doha accord – to work towards ending the 2001–2021 war in Afghanistan.
In its more sophisticated forms, AI can be useful in broadening political perspectives, reducing partisan bias, and strengthening evidence-based policy in resolving political disputes. Any attempt at resolving political disagreements requires a fundamental understanding of the various stakeholders: who they represent, what shapes their perspectives, what their priorities entail, et cetera. AI can perform the data analysis required for this purpose faster and more accurately than human analysts, processing and analysing data in a fraction of the time. A processor operating at 1 GHz can execute a single operation within 1 nanosecond; in contrast, a human neuron requires approximately 5 milliseconds to receive inputs from a synapse, process this information, and transmit it to the subsequent neuron – a raw speed difference of roughly five million times. The cumulative result of these differences could mean a swifter response in solving political disagreements, crucial to mitigating the detrimental effects of any particular situation on the general populace and avoiding further escalation. Often, political disagreements standing in deadlock are spurred towards compromise and solution by new information discovered or new perspectives introduced. The use of AI in this aspect can therefore contribute to more timely and effective resolutions of political disagreements.
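The speed comparison above is simple arithmetic on the two figures quoted in the paragraph (1 nanosecond per processor cycle at 1 GHz, roughly 5 milliseconds per neuron signalling step):

```python
# Back-of-envelope comparison of raw signalling speed, using the figures
# quoted above: one cycle of a 1 GHz processor takes 1 ns, while a neuron
# takes roughly 5 ms to receive, process, and forward a signal.
processor_cycle_s = 1e-9  # 1 nanosecond
neuron_step_s = 5e-3      # ~5 milliseconds

ratio = neuron_step_s / processor_cycle_s
print(f"{ratio:,.0f}x")  # 5,000,000x
```

This compares only raw switching speed, of course; it says nothing about the brain's massively parallel architecture, which is a different axis entirely.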
Furthermore, political decision-making is too often plagued by partisan biases – take, for example, the United States’ partisan divide on the issue of climate change: the Democratic and Republican parties take diametrically opposed approaches to climate policy, with the Democratic party acknowledging the scientific consensus on climate change while the Republican party adopts a more sceptical approach on the grounds that certain climate policies may hinder economic growth. Climate-related legislation often faces gridlock in Congress due to partisan disagreements. Efforts to pass comprehensive climate bills, such as carbon pricing or extensive renewable energy subsidies, frequently stall for lack of bipartisan support. When legislative action is blocked, Presidents may resort to executive orders to implement climate policies; however, these actions are often reversed when the opposing party gains control of the executive branch, leading to inconsistent policy implementation. For instance, President Obama's Clean Power Plan aimed to reduce carbon emissions, but it was rolled back by President Trump, who favoured deregulation and fossil fuel development; subsequently, President Biden re-entered the Paris Agreement and proposed new climate initiatives. This is a clear example of partisan biases detrimentally affecting political disagreements: the inability to reach a bipartisan consensus on climate policy results in fragmented and often ineffective responses to climate change, undermining long-term planning and investment in sustainable technologies and infrastructure. The lack of consistent policy direction even hampers international efforts, leaving other countries uncertain about the United States’ commitment to global climate agreements.
AI models, however, with their evidence-based decision-making frameworks, have the potential to offer objective, data-driven solutions free from the partisan biases and ideological conflicts that often plague human decision-making. This could lead to greater compromise in resolving political disagreements through more centrist and pragmatic policies that better serve the general public. Decisions based on robust data and evidence, rather than on political rhetoric or special-interest lobbying, could also improve the quality of public policy and increase public trust in government institutions.
Politics: By People, For People
Where there are claims for the use of AI in solving political disagreements, however, there are arguably even stronger deterrents against doing so. The entire notion of letting AI solve our political disagreements surfaces a two-pronged problem – the first being a dystopian surrender of human agency and autonomy to AI. Classic science-fiction horror stories often depict AI taking over the world (think the 1999 science-fiction action film The Matrix or the 2014 science-fiction thriller Ex Machina), and delegating human processes like political negotiation and decision-making to non-human entities is an alarming step in that perilous direction. The capacity of individuals to shape their own destinies is a fundamental aspect of human existence – one which allowing AI to make our political decisions and resolve our political disagreements irrevocably diminishes. Undermining human autonomy and the democratic process in this way will leave citizens feeling disempowered, believing that machines, rather than people, are making important political decisions that manifest in tangible impacts upon their lives.
The second lies in the simple fact that politics is an art of making common decisions for a group of people, where differing interests are conciliated by granting the different groups proportionate power, and political actors make decisions for the welfare and survival of the whole community. It is exciting to consider that AI may soon think like humans, write like humans, or draw like humans, but it is nearly impossible for AI to think for humans, feel like humans, or – perhaps most critically in politics – to feel for humans. Any sort of mediation, be it political, legal, or otherwise, relies on nuanced human skills: from elements as simple as how to make eye contact and listen carefully, to detecting shifts in emotion and subtle signals from opponents. More importantly, having and solving political disagreements necessarily involves the interests of the various groups of individuals being governed, and it is difficult to trust utilitarian models of AI to navigate these complex priorities, let alone to approach them from a human angle of empathy and morality. From the perspective of virtue ethics, the character of an AI ‘moral agent’ is near non-existent. Take, for example, the classic ethical dilemma of the trolley problem: autonomous vehicles trained to make split-second decisions in situations where harm is unavoidable, such as choosing between hitting a pedestrian or swerving and risking the lives of the passengers, are programmed to optimise for specific outcomes based on predefined criteria (for instance, minimising harm). However, these decisions involve complex moral judgments about the value of different lives and the ethical principles guiding such choices (evidenced in the differing commitments of utilitarianism and deontological ethics). In past motor incidents, autonomous-driving AI has lacked the ability to make these nuanced moral decisions, catastrophically resulting in ethically dubious outcomes.
AI lacks the capacity for virtue and moral development, and for navigating complex ethical considerations in making political decisions. It must also be considered that making the most evidence-informed decision, as AI models are built to do, may not always be the optimal approach; ultimately, politics serves the community of people, and balancing these citizens’ needs and interests is paramount. Furthermore, AI may not even always take the objective, evidence-informed approach its design seems to promise. AI systems are designed and trained by humans, and can inherit and even amplify biases present in their training data. In the training stages, AI models are scarcely discerning (for they have not yet been trained with the ability to discern) and instead absorb all the information provided in training sets, constructing their analysis methods from there. This process, and the biases introduced within it, could lead to unfair or discriminatory outcomes in political decision-making.
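The bias-inheritance point can be illustrated with a deliberately crude sketch: a model that simply reproduces the majority label of its training data. The policy labels and the 80/20 skew below are invented purely for illustration:

```python
from collections import Counter

# Toy illustration (labels and skew invented): a model that learns only the
# majority label of its training set turns an 80/20 imbalance in the data
# into a 100% imbalance in its outputs -- bias inherited, then amplified.
training_labels = ["fund_policy_A"] * 80 + ["fund_policy_B"] * 20

def majority_model(labels):
    """'Train' by memorising the most common label; then always predict it."""
    return Counter(labels).most_common(1)[0][0]

print(majority_model(training_labels))  # fund_policy_A, every time
```

Real models are far more sophisticated than a majority vote, but the mechanism is the same in kind: whatever skew exists in the training distribution shapes what the model treats as normal.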
Fundamentally, both John Locke’s and Thomas Hobbes’ social contract theories suggest that political legitimacy arises from the consent of the governed. With AI systems operating on behalf of humans in non-transparent ways that do not sufficiently cater for moral and ethical considerations, there is a risk of eroding the social contract and compromising the legitimacy of political authority.
Issues Of Practicality: AI Cannot Solve Our Political Disagreements
While AI has been left to handle increasingly complex processes – the majority of share, bond, and futures trades are now made by computer expert systems – these automated actions still depend upon human concepts and models. For AI to be trained on such massive datasets, it requires measurable, quantifiable outcomes to guide its decision-making. Indeed, AI models can now digest stock market data and develop strategies maximising the clear goal of profitable trading – but how can one comprehensively measure political outcomes? The prevailing social science studies on the effects of Britain’s withdrawal from the European Union, for example, can only quantify outcomes in specific sectors and investigate the impacts upon limited sets of demographics. If told to ‘resolve a political disagreement with the best possible outcome’, an AI model would first have to be taught what the ‘best possible outcome’ is. If AI has not yet proved capable of measuring opportunity cost in a definitive manner in business and economics – where the fundamental consideration is as simple as profit maximisation – it is hard to see a near future in which AI models will be able to do so in subjects as complex and multi-faceted as politics. In an area where even human experts, politicians, and voters are unable to come to a common agreement or objective standard, it is near-impossible for AI to weigh the considerations of all stakeholders and reach a ‘best possible outcome’ that all parties are, at the very least, accepting of. And if people do not accept the outcomes of political dispute resolution, the entire purpose of delegating the process to AI – a faster, better, fairer way of solving political disagreements – is moot.
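The contrast between a well-defined trading objective and an ill-defined political one can be made concrete. The second function's weights are arbitrary placeholders, invented here precisely to show that some human must choose them:

```python
# A trading objective reduces to one unambiguous number to maximise...
def trading_objective(trade_profits):
    return sum(trade_profits)  # total profit

# ...whereas any numeric "best possible outcome" for a political dispute
# must hard-code contested value judgements. These weights are arbitrary
# placeholders: there is no agreed, objective way to set them.
def political_objective(outcomes, weights):
    return sum(weights[k] * v for k, v in outcomes.items())

print(trading_objective([120.0, -35.5, 60.25]))  # 144.75
```

The first function is a genuine optimisation target; the second merely relocates the disagreement into the choice of `weights`, which is exactly the problem the paragraph describes.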
Consider the plot of Kubrick’s political satire Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, in which an unhinged Air Force general issues deranged orders to unleash an atomic attack on the Soviets – if even humans can go rogue and people can err, what gives us the confidence that AI will be bound to make apt and just decisions relying purely upon rows of code (which are, in fact, increasingly self-generated through machine learning rather than being built by human programmers)?
Towards A Brighter Human Future With AI
With AI, not by AI: thus far, the best possible incorporation of AI in politics appears to be an assistive rather than operational role. In such a scope, AI can be used in two critical functions: analytical and predictive.
Analytical AI may be used to assist political decision-makers in consolidating and organising relevant data, as well as recognising trends and patterns, so that they are able to make informed decisions in the face of political disagreements. Powerful tools such as Databricks and Microsoft Power BI may be used to rapidly build reports and dashboards for data analysis and reporting. Predictive AI may be used to help policymakers consider the short- and long-run impacts of their actions, allowing them to achieve more sustainable resolutions to disagreements that fulfil stakeholders’ interests and needs not only at the moment of inception but for years to come. Virtual reality (VR), for example, may be used to create immersive environments that provide politicians with a deeper understanding of the on-the-ground situation in geographically distant disputes that they are mediating.
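As a minimal sketch of the kind of trend-spotting such analytical tools automate (the weekly polling figures below are hypothetical placeholders):

```python
# Hypothetical weekly polling figures (% support). A dashboard-style trend
# flag: compare the recent average against the longer-run baseline.
support = [44, 45, 43, 46, 48, 49, 51, 52]

baseline = sum(support) / len(support)  # average over the whole series
recent = sum(support[-3:]) / 3          # average of the last three weeks

trend = "rising" if recent > baseline else "flat or falling"
print(trend)  # rising
```

This is the assistive role in miniature: the computation surfaces the pattern quickly, but deciding what to do about a rising or falling trend remains with the human decision-maker.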
Finally, it is also important to ensure the accessibility of these AI aids, so that different political actors are placed on a level playing field. Otherwise, the use of AI to put forth better arguments or solutions during political disagreements becomes an exclusive privilege of powerful or well-financed institutions and actors, further cementing the power imbalances that already exist within political spheres. It is with caution and consideration that humanity must approach AI and its integration into politics, to guarantee a beneficial effect that uplifts all communities and stakeholders in a society.
Bibliography:
Biswal, Avijeet. “7 Types of Artificial Intelligence That You Should Know in 2020.” Simplilearn.com, February 11, 2022. https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/types-of-artificial-intelligence.
Cummings, M, Heather Roff, Kenneth Cukier, Jacob Parakilas, and Hannah Bryce. “Artificial Intelligence and International Affairs Disruption Anticipated #CHAI Chatham House Report.” Chatham House Report, 2018. https://www.chathamhouse.org/sites/default/files/publications/research/2018-06-14-artificial-intelligence-international-affairs-cummings-roff-cukier-parakilas-bryce.pdf.
Donovan, Moira. “How AI Is Helping Historians Better Understand Our Past.” MIT Technology Review, April 11, 2023. https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/.
Fix Your Fin. “Top 10 Most Advanced AI Systems.” Medium, March 25, 2024. https://fixyourfin.medium.com/the-cutting-edge-of-artificial-intelligence-a-look-at-the-top-10-most-advanced-systems-in-2024-c4d51db57511#:~:text=1..
GW Media Relations. “AI in Political Campaigns: How It’s Being Used and the Ethical Considerations It Raises | Media Relations | the George Washington University.” Media Relations, April 22, 2024. https://mediarelations.gwu.edu/ai-political-campaigns-how-its-being-used-and-ethical-considerations-it-raises.
Hirsch, Alexander V. “Why Political Disagreements over How the World Works May Be Easier to Solve than Those over Goals.” USAPP, August 8, 2016. https://blogs.lse.ac.uk/usappblog/2016/08/08/why-political-disagreements-over-how-the-world-works-may-be-easier-to-solve-than-those-over-goals/.
Iype, Sujith. “Human Brain vs Artificial Intelligence Systems.” Ignitarium, June 5, 2018. https://ignitarium.com/human-brain-vs-existing-artificial-intelligence-systems/.
Salami, Mehdi. “Artificial Intelligence and the Future of International Relations.” IPIS, June 19, 2023. https://www.ipis.ir/en/subjectview/722508/artificial-intelligence-and-the-future-of-international-relations#:~:text=International%20relations%20have%20always%20been.
Shah, Kaavya. “How AI Data Analysis Enhances Analytics: Key Benefits & Top Tools.” ProServeIT, May 22, 2024. https://www.proserveit.com/blog/ai-data-analysis-benefits-and-tools#:~:text=On%20the%20other%20hand%2C%20AI.
United States Department of State. “Joint Declaration between the Islamic Republic of Afghanistan and the United States of America for Bringing Peace to Afghanistan.” www.state.gov, February 29, 2020. https://www.state.gov/wp-content/uploads/2020/02/02.29.20-US-Afghanistan-Joint-Declaration.pdf.
Xu, Ruoxi, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, and Xianpei Han. “AI for Social Science and Social Science of AI: A Survey.” Information Processing & Management 61, no. 3 (May 1, 2024): 103665. https://doi.org/10.1016/j.ipm.2024.103665.