As algorithmic and machine learning systems begin to proliferate in governmental and public sectors, might we be heading toward a form of governance by algorithm – or an “algocracy”?
Algocracy – a form of algorithm-driven governance related to technocracy and cyberocracy – systematically applies AI, blockchain, and algorithms to various aspects of law and society.
The term “algocracy” itself entered academic discourse in the mid-2000s, but examples of algorithmic governance date back to the 1960s and 70s.
While algorithmic processing and machine learning or AI are not the same, they do represent a continuum of technologies.
The core theme of algorithmic governance – relinquishing human control to computerized decision-making systems – has progressed from straightforward mathematical simulations to today’s advanced self-learning models.
Is it something we should welcome or be wary of?
Origins of algocracy
Algorithmic governance was born from an era of rapid digitization – the 1960s.
Here’s a brief timeline of how algorithmic and AI-integrated decision-making have featured in public and governmental projects:
- 1962: Alexander Kharkevich proposes a computer network for algorithmic governance in Moscow.
- 1971-1973: Project Cybersyn in Chile attempts to manage the national economy.
- 1970s: Development of early legal reasoning and tax law interpretation systems like the UK’s LEGOL project and the US’s TAXMAN project.
- 1993: Publication of “Towards a New Socialism,” discussing the potential of a democratically planned economy built on modern computer technology.
- 2000s: Algorithms begin to be utilized for surveillance video analysis.
- 2006: A. Aneesh introduces the concept of algocracy, discussing the impact of information technologies on public decision-making.
- 2013: Tim O’Reilly coins the term “algorithmic regulation,” advocating for using big data and algorithms in government.
- 2017: Ukraine’s Ministry of Justice conducts experimental government auctions using blockchain.
- Today: Machine learning systems are deployed across numerous governmental and public sectors, some of them making high-level, life-changing decisions with minimal human oversight.
One of the most fascinating projects is Cybersyn, in Chile, which ran from 1971 to 1973. Despite its short life, the project still serves as an example of how society – and people – can be successfully modeled by computer systems.
The mechanics behind it are fascinating, set against the backdrop of the CIA-backed fall of Salvador Allende and Chile’s descent into the devastating Pinochet era.
At the core of Project Cybersyn were four main components:
- Economic simulator: This module was designed to model the Chilean economy, allowing government officials to simulate the outcomes of various economic decisions.
- Software for factory performance: Custom software was developed to monitor and evaluate factory performance, focusing on key indicators like production levels and raw material supplies.
- Operations room (Opsroom): This was the physical hub of Cybersyn, where economic data was collected, processed, and displayed. The Opsroom enabled decision-makers to rapidly assimilate complex information and make informed choices.
- National network of telex machines: These were linked to a mainframe computer and formed a communication network (dubbed ‘Cybernet’) across state-run enterprises. This network facilitated the real-time transmission of economic data to the central government.
Project Cybersyn demonstrated its potential during a national truckers’ strike in 1972, where the government could rely on real-time data to mitigate the strike’s impact. The telex network was crucial in maintaining communication and coordinating resource distribution amidst the crisis.
However, the project ended abruptly after the military coup on September 11, 1973. The Opsroom and broader system were dismantled, and Cybersyn remained an unfinished vision of a technologically advanced and potentially socially responsible economic management system. The politics of Cybersyn remain much debated, and it stands as a revered example of technological innovation under socialism.
There are several other early examples of algocracy, including the LEGOL and TAXMAN projects in the 1970s, which set the stage for automating rule-based legal and tax processes. The subsequent decades witnessed the evolution of these technologies, with surveillance applications becoming prominent in the 2000s.
In 2006, A. Aneesh, in his book “Virtual Migration,” introduced the concept of algocracy, exploring how information technologies might constrain human participation in public decision-making, distinguishing it from bureaucratic and market-based systems.
Then, in 2013, Tim O’Reilly, founder and CEO of O’Reilly Media Inc., coined the term “algorithmic regulation,” articulating a vision for governance that harnesses big data and algorithms to achieve specified legal outcomes and calling for a shift toward more efficient, accountable governance.
More recently, in 2017, Ukraine’s Ministry of Justice ran experimental blockchain-based government auctions (i.e., public sector tendering), showcasing the potential of these technologies to enhance transparency and combat corruption in governmental transactions.
And that brings us right up to the burgeoning era of generative AI and today’s advanced frontier models.
Public sector experimentation with AI and algorithms
Public sector and government agencies have experimented considerably with algorithmic decision-making, AI, and machine learning in the past few years.
A comprehensive study conducted by Stanford University revealed the significant adoption of AI and ML tools in the US federal sector, with 45% of agencies experimenting with these technologies by 2020.
Palantir Technologies is a major commercial provider in this space, supplying many of these deployments.
Looking at specific agencies, the Office of Justice Programs leads with 12 use cases, followed by the Securities and Exchange Commission with 10 and NASA with 9. Other prominent agencies include the FDA, USGS, USPS, SSA, USPTO, BLS, and US Customs and Border Protection.
There is widespread evidence of bias among some of these systems. Healthcare algorithms in the US were found to be less likely to refer black patients for additional care, and risk-assessment tools like COMPAS have been accused of racial bias in predicting reoffending risk (recidivism) and informing sentencing.
A recent Guardian investigation into the UK public sector found that algorithms used in areas like welfare, immigration, and criminal justice have been implicated in cases of misuse and discrimination.
Notably, the Department for Work and Pensions (DWP) faced criticism for an algorithm that allegedly led to wrongful benefit suspensions, and the Metropolitan Police’s facial recognition tool showed higher error rates for black faces.
Justice by AI and algorithm
Another branch of algocracy is algorithmic or AI-influenced justice and law enforcement.
In addition to the US COMPAS system, in Australia, the “Split Up” software aids judges in determining asset division during divorce proceedings, and China has pioneered the establishment of internet courts, with a virtual AI judge assisting in basic litigation tasks.
The Dutch government’s experience with the System Risk Indication (SyRI) algorithm, designed to detect social welfare fraud, faced legal challenges due to its potential discriminatory effects and lack of transparency, leading to a landmark court ruling against its use.
Brazil’s adoption of facial recognition technology in São Paulo through the Smart Sampa project and other forms of predictive policing, including several programs in the US, indicates technology’s potential to influence the governance of people’s fundamental freedoms.
Palantir’s Gotham software has been used by the New Orleans Police Department since 2012 for predictive policing, and newer iterations have been declared a failure, reinforcing bias against marginalized communities and wasting police resources.
Additionally, law-bots are increasingly taking on tasks traditionally handled by paralegals, with technologies like ROSS Intelligence assisting US law firms in legal research.
There have been recent reports of lawyers using generative AI, including ChatGPT, to assist with case research – including a high-profile incident where a lawyer submitted legal claims with AI-fabricated cases.
AI politicians run for office
Completely replacing politicians with AI has been the subject of much speculation, including in a recent podcast by Joe Rogan and OpenAI CEO Sam Altman.
Joe Rogan brings up the topic, suggesting that AI, detached from financial and political influences, could serve people’s specific needs more objectively.
Altman agrees that conventional government decision-making is often swayed by corrupt motives but isn’t comfortable entrusting AI with important societal decisions.
In 2018, Michihito Matsuda ran for mayor of Tokyo’s Tama city as a human proxy for an AI program, a novel approach to political candidacy. Despite not winning, the campaign highlighted the potential of AI in politics.
Cesar Hidalgo introduced the concept of augmented democracy in 2018, proposing legislation through digital twins of individuals. Hidalgo said, “Democracy can be updated or improved using technology and new ideas, I’m absolutely convinced.”
In 2022, “Leader Lars,” a chatbot, was nominated to run in the Danish parliamentary election, representing The Synthetic Party.
Unlike its predecessors, Leader Lars led a political party and engaged in critical political discussions without claiming to be objective, adding a new dimension to the concept of virtual politicians.
Benefits and critiques of algocracy
Algocracy’s benefits typically fall under the umbrella of “efficiency.” Using AI and algorithms to make complex decisions is quicker than relying on humans.
This is particularly beneficial in complex areas of the public sector, which are already strained by delays.
The temptation to experiment with AI-powered decision-making is therefore immense, which is why so many notable public sector examples come from resource-challenged areas such as law, justice, economics, and welfare.
Gaining trust in these tools is challenging: governments are reluctant to expose their proprietary models, as doing so could leave sensitive data open to threats. As a result, many of these projects are kept highly secret, which compounds their risks.
Historian and author of bestsellers such as “Sapiens” and “Homo Deus,” Yuval Noah Harari, has pointed out that the ongoing conflict between democratic and authoritarian regimes could be seen as a battle between two data-processing systems, with AI and algorithms potentially tipping the balance towards centralization and control.
Harari highlights that AI’s ability to manipulate language and generate persuasive content could compromise democracy. The concern is not just about AI producing biased or false information but about its capacity to mimic human conversation and influence public opinion undetected.
Harari further describes how AI could create echo chambers, which authoritarian regimes might harness more effectively than democracies. Authoritarian governments, with their centralized control over data and less concern for privacy or ethical constraints, could deploy AI more aggressively to surveil and manipulate public opinion.
There are methods of remedying this. For instance, AI startup Anthropic has dedicated considerable research to “constitutional AI” and, more specifically, “collective constitutional AI,” which feeds people’s opinions into an AI model to ‘crowdsource’ its values from the general population.
These techniques could democratically inform AI governance, ensuring models are guided by public opinions rather than solely those of their creators.
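To make the crowdsourcing idea concrete, here is a deliberately minimal sketch (not Anthropic’s actual method – the function name, ballot format, and approval threshold are all illustrative assumptions) of how publicly submitted principles might be aggregated by approval voting into a ranked “constitution”:

```python
from collections import Counter

def aggregate_principles(ballots, threshold=0.5):
    """Toy aggregation: keep any proposed principle endorsed by more than
    `threshold` of participants, ranked by overall support.
    This is an illustrative sketch, not a real governance pipeline."""
    n_voters = len(ballots)
    # Count each principle once per participant who endorses it.
    counts = Counter(p for ballot in ballots for p in set(ballot))
    return [p for p, c in counts.most_common() if c / n_voters > threshold]

# Each ballot lists the principles one (hypothetical) participant endorses.
ballots = [
    ["be transparent", "avoid bias", "respect privacy"],
    ["avoid bias", "respect privacy"],
    ["be transparent", "avoid bias"],
]

constitution = aggregate_principles(ballots)
# "avoid bias" (3/3) ranks first; the other two (2/3) also clear the 0.5 bar.
```

Real systems would need far more than majority counting – deliberation, deduplication of near-identical principles, and protection against ballot manipulation – but the sketch captures the basic shift: the model’s guiding values come from an aggregate of public input rather than a single author.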
Public opinion of algocracy
Critically, what do the majority of people think about these applications of AI?
Those voices are the best indicator of how society really feels about being wholly or partially governed by algorithmic or AI systems. And let’s not forget that the public funds these projects through taxation.
A 2019 poll conducted by IE University’s Center for the Governance of Change in Spain revealed varying levels of public support across European countries for allowing AI to make significant national decisions. The Netherlands showed the highest approval at 43%, while Portugal had the lowest at 19%.
Researchers have found that disillusionment with political leaders or security providers can increase the public’s inclination toward artificial agents, perceived as more reliable.
In time, AI models may be able to act as a calibrator for socially responsible governmental action, taking opinions from the public and embedding them into AI models to counteract some of the less trustworthy manifestations of traditional democracy.
If executed correctly, these techniques could even transparently hold governments accountable to the people’s democratic wishes.
However, most examples of algocracy to date illustrate state, rather than democratic, decision-making.