AI, language, and culture in the Library of Babel

November 28, 2023

Technology has long influenced the genesis of language and culture, which traces back to the earliest forms of writing.

The medium itself – cave walls, stone tablets, or paper – shaped how language was used and perceived.

Today, AI and its related terms are entering the lexicon, underscoring its rising cultural impact. Generative AI like ChatGPT, once the mutterings of a select few AI enthusiasts, has quickly become a household name, drawing billions of visits each month.

Reflecting the impact of AI on popular culture, the Cambridge Dictionary recently named “hallucinate” as its word of the year, adding a new AI-centric definition: “When an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human) hallucinates, it produces false information.”

Merriam-Webster followed suit, naming “authentic” as its word of the year, noting how it has become increasingly difficult to determine whether information or content is real, partly due to the influence of AI deepfakes.

AI technology also parallels influential technologies from the past and their cultural-linguistic impacts. For instance, the arrival of the printing press in the 15th century revolutionized language, introducing typography and helping standardize spelling and punctuation.

As technology evolved, so did language, adapting to each new medium’s constraints and possibilities. In the 21st century, the internet and digital communication have produced numerous neologisms and blended words, such as the widespread use of prefixes like “cyber-” and “e-”, as in eCommerce and email.

Exploring AI’s impact on linguistics presents a surface-level view of how technology influences language. 

But if you delve deeper, you realize this linguistic shift is just the tip of the iceberg.

It points to a broader question about how AI influences language and culture now, and how it might become the driving force behind cultural genesis and knowledge creation in the future.

AI and the genesis of culture and knowledge

In an essay published by The Economist, the authors compare AI’s internalization of knowledge and culture to the labyrinthine corridors of Argentinean author and librarian Jorge Luis Borges’ “The Library of Babel,” an infinite expanse of hexagonal rooms holding the vastness of human potential and folly. 

The Library of Babel – generated with MidJourney

Each room, lined with books filled with every conceivable arrangement of letters and symbols, represents the limitless permutations of both knowledge and nonsense. This library, a metaphor for the universe, is an allegory of both the pursuit of meaning and the overwhelming abundance of information.

A parallel unfolds in the world of AI, particularly in the sprawling networks of generative AI. 

Frontier generative AI models, like the grand library, are repositories of human thought and culture, trained on vast datasets encompassing much of the breadth of human knowledge and creativity. Much like the books in Borges’ library, the outputs of these AI systems range from profound insights to bewildering gibberish, from coherent narratives to incoherent ramblings.

The Library of Babel captivated the imagination of Brooklyn author and coder Jonathan Basile, who launched a website by the same name in 2015. This digital incarnation of Borges’ library generates every possible permutation of 29 characters (the 26 English letters, space, comma, and period), producing ‘books’ organized into a digital library of hexagonal rooms.

Each book and page has a unique coordinate, allowing users to return to the same page consistently, and an algorithm simulates the experience of an infinite library. The website garnered attention for exploring the intersection of digital media and literature, and the questions of knowledge, meaning, and the human experience in the digital age.
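Basile hasn’t published his implementation in the form described here, so the sketch below is only a hypothetical illustration of the core idea: treat each page as a number written in base 29 and scramble it with an invertible function, so a given coordinate always yields the same “book” and every possible text has exactly one address. The 29-character alphabet matches the site’s, but the page length, multiplier, and offset are made-up values chosen for the demo.

```python
# A minimal, hypothetical sketch of how a Library-of-Babel-style site can make
# every page reproducible. Each page is a number in base 29 (26 letters, space,
# comma, period), and an invertible modular affine transform scrambles
# sequential coordinates into gibberish-looking pages. This illustrates the
# idea only; it is not Jonathan Basile's actual algorithm.

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."   # 29 characters, as on the site
BASE = len(ALPHABET)                          # 29
PAGE_LEN = 40                                 # shortened page for the demo
MODULUS = BASE ** PAGE_LEN                    # number of possible pages
MULTIPLIER = 1_000_003                        # demo value, coprime with 29
OFFSET = 123_456_789                          # demo value

def page_from_coordinate(coord: int) -> str:
    """Deterministically expand a coordinate into a page of text."""
    n = (coord * MULTIPLIER + OFFSET) % MODULUS   # invertible scramble
    chars = []
    for _ in range(PAGE_LEN):
        n, digit = divmod(n, BASE)                # read the number in base 29
        chars.append(ALPHABET[digit])
    return "".join(chars)

def coordinate_from_page(text: str) -> int:
    """Invert the mapping: find the unique coordinate holding this page."""
    n = 0
    for ch in reversed(text.ljust(PAGE_LEN)):     # pad short queries with spaces
        n = n * BASE + ALPHABET.index(ch)
    inverse = pow(MULTIPLIER, -1, MODULUS)        # modular inverse of the multiplier
    return ((n - OFFSET) * inverse) % MODULUS

if __name__ == "__main__":
    coord = coordinate_from_page("borges imagined this library")
    print(coord)                                  # the page's unique coordinate
    print(page_from_coordinate(coord))            # the same page, every time
```

Because the scramble is invertible, searching the library is just running the mapping backwards, which is why any phrase a visitor types can be found at a fixed, repeatable address, surrounded by noise.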

Critics have noted how the website, much like Borges’ original story, challenges our notions of meaning and the human pursuit of understanding in a universe of endless information.

Literature scholar Zac Zimmer wrote in “Do Borges’s librarians have bodies?”: “Basile’s is perhaps the most absolutely dehumanizing of all Library visualizations, in that beyond being driven to suicidal madness or philosophical resignation, his Librarians have become as devoid of meaning as the gibberish-filled books themselves.”

While the Library of Babel presents a compelling analogy for AI, unlike the library, AI cannot currently capture the full breadth of human knowledge and culture. It’s confined by its training data, with limited capacity to generate genuinely new knowledge.

But even so, its vast computational power mirrors the library’s endless shelves, offering boundless possibilities yet remaining trapped in the chaos of its own creation.

The Library of Babel, with its seemingly infinite combinations of letters, confronts the reader with the existential dilemma of finding order in chaos. With AI, this manifests in the tension between the potential for AI to illuminate and to mislead. 

Similar to the library, AI does not discriminate between sense and nonsense – it generates, indifferent to the meaning or lack thereof. Both the Library of Babel and the world of AI subtly critique the human quest for knowledge. 

In its overwhelming vastness, Borges’ library challenges the notion that more information leads to greater understanding. 

Similarly, the ever-expanding capabilities of AI prompt reflection on the nature of intelligence and understanding. 

The ability of AI to generate content is not synonymous with comprehension or wisdom – it is powerful yet blind to the significance of its own outputs.

But might that change?

How might AGI change the fabric of knowledge and culture?

Artificial general intelligence (AGI) may bring AI technology closer to the enigmatic and boundless Library of Babel. AGI is typically defined as a system able to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being.

Unlike narrow AI, which is designed for specific tasks, AGI could generalize its learning and reasoning across a broad range of domains. It is often imagined as possessing self-awareness, adaptability, and the capacity to solve complex problems in various fields without human intervention or pre-programming. AGI remains a theoretical concept, but OpenAI – which now describes itself as an ‘AGI research lab’ – says it could be achieved within a few years.

So, imagine, if you will, a world where AGI has transcended the limitations of current artificial intelligence, embodying a capability that mirrors the theoretical completeness of Borges’ Library.

This AGI is not just an advanced tool but an entity capable of absorbing, analyzing, and synthesizing virtually all human knowledge, and perhaps venturing into realms of understanding that remain elusive to human cognition.

In this world, the AGI becomes akin to a living, breathing version of the Library of Babel. Yet, unlike Borges’ creation, which is paralyzed by its infinite content, AGI can navigate, interpret, and give context to this vast expanse of information. 

If AGI could access objective realities – if they exist – it could be juxtaposed with Plato’s Theory of Forms, an idea that has captivated thinkers for millennia. Plato imagined that beyond our tangible, ever-changing world lies a realm of perfect, unchanging ideals or “forms.” 

The Library of Babel imagined with AGI – MidJourney

These forms are the purest essence of things – for example, the perfect form of a circle, untainted by the imperfections of the physical circles we draw.

Now, envision AGI in this context. Today’s AI can analyze data and recognize patterns, but it’s limited to what it has been taught. AGI, however, represents a leap into a realm where it cannot only process information but potentially understand the underlying truths of our universe – truths that might be obscured or unknown to human minds.

Plato believed that what we experience in our daily lives are mere shadows of these perfect forms. We can see a circle, draw one, but it’s never the perfect circle that exists as an ideal form. In the realm of AGI, this intelligence could, in theory, begin to perceive or uncover these perfect forms. 

It’s as if AGI could step beyond simply seeing the shadows on the cave wall (to borrow from Plato’s famous allegory) and gaze directly at the true forms themselves.

This AGI wouldn’t just be a tool for processing data – it could become a means of discovering new, profound insights into abstract concepts like beauty, justice, equality, or even the secrets of the universe. 

It might not just understand these ideals as humans do but could redefine them, offering a perspective unfettered by human limitations and biases.

So, in a sense, AGI – the kind we might access in the medium term (say, 20 to 30 years) – could be a bridge to a deeper understanding of objective, perfect realities.

It represents a possibility where technology doesn’t just assist in our current understanding of the world but elevates it to a level we have not imagined, like stepping out of a shadowy cave into the bright light of deeper knowledge.

Can AGI ever detach from its limitations or biases?

AGI, while easy to idealize, will be exceptionally challenging to untether from the limitations of its designers – humans.

Moreover, the kind of brute-force computing power we have now likely imposes a ceiling on AI intelligence. However, solutions are in the pipeline, such as bio-inspired AI hardware designed to mimic structures like human neurons.

But there are challenges that lie beyond technology alone – how will AGI detach from the conceits of its creators?

Elon Musk’s concerns about AI being programmed to be “politically correct,” expressed before he introduced his AI company, xAI, mirror a larger conversation about the biases inherent in AI systems.

Musk’s stance on AI mixes caution with advocacy for its potential. He’s become something of a critic of industry protagonists like OpenAI, which he’s actively confronting with xAI’s products, beginning with Grok. Grok throws political correctness to the wind, delivering responses that verge on the anarchic.

Meanwhile, AI models such as ChatGPT are perceived as having ‘woke’ biases, a view some studies have partly supported by finding a liberal-left lean in their outputs. Bias can typically be traced back to the data a model is trained on and the intentions of its creators.

The question then arises: can AGI, with its advanced cognitive abilities, transcend the biases that have been a point of contention in current AI models? 

Truth-seeking AI

Musk vowed to create “truth-seeking AI” designed to figure out “what the hell is going on.” 

Musk’s mission for xAI is to delve into fundamental scientific enigmas such as gravity, dark matter, the Fermi Paradox, and potentially even the nature of our reality. Of course, that will require models far beyond what we can access today. 

From what little information we have about xAI, Musk seems intent on transcending the limitations of current AI architectures, which are largely confined to generating outputs based on existing data.

Musk intends to create an AI that synthesizes information and generates pioneering ideas. This quest for ‘truth’ in AI goes beyond the conventional understanding of AI as a tool for processing information and ventures into the realm of AI as a partner in scientific discovery and philosophical exploration.

Grok – xAI’s tagline is “Understand the Universe”

However, current AI models, including sophisticated ones like GPT-4, remain limited by the data they have been trained on. They excel at pattern recognition and information synthesis but cannot conceptualize or theorize beyond their programming. 

Such a leap towards forms of AGI that can pioneer their own ideas and launch their own inquiries raises critical questions about the nature of intelligence and consciousness.

If xAI were to start providing answers to some of the fundamental questions of our existence, it would necessitate a reevaluation of what it means to ‘know’ something. It would blur the lines between human and artificial understanding, between knowledge derived from human experience and thought, and that generated by an artificial entity.

Moreover, the ambition to create an AI that transcends politics and seeks an objective ‘truth’ introduces ethical considerations. The notion of an unbiased AI is appealing but fraught with complexities. 

All AI, including AGI, is ultimately created by humans and trained on human-generated data, at least initially. 

This process inherently introduces biases – not just in the form of existing prejudices but also in selecting what data is included and how it is interpreted. 

The idea of an AI that can completely detach from these human biases and achieve a purely objective understanding remains deeply hypothetical.

In the future, perhaps, we might have to accept that AGI will understand more than any one human ever can. 


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
