The Mind is the Bicycle

Bicycles and Brains

The explosion in artificial intelligence research is accelerating the arrival of a computing superstructure whose singular aim is to replace human thinking and labour in the maintenance of civilisation. The substrate of industrial civilisation as we know it depends on analogue technologies that are crude by the standards of science fiction: they can perform on command, but not at will. What AI hopes to achieve is to replace this analogue substrate with a compute substrate powered by AI: people no longer maintain civilisation; self-thinking machines do.

The possibilities of AI have been demonstrated by generative artistic and written content that can compete with most amateur art and copywriting, already threatening vast swathes of the internet. Its proponents are convinced that a self-learning AI producing qualitatively superior output to human beings is just around the corner. These efforts are but the first sputterings of the vast computing behemoths that will be created in the century ahead. We do not yet possess the resources in manpower (AI ‘researchers’) and compute to achieve a true computing superstructure that replaces many of the human systems that make our civilisation turn, but a theological belief in its inevitability is driving us to that end.

Many of those working at the heart of or adjacent to artificial intelligence consider themselves transhumanists who want to replace man’s inferior biological features with man-made technological improvements. From writing to calculations and even governance of the human realm, technologists and intellectuals propose that computers should perform all the labour that man has always performed, leaving him nothing but absolute leisure. One of Steve Jobs’ most memorable quotes is about making the computer a ‘bicycle for the mind’ – a quintessential expression that would be embraced by any transhumanist, but a flawed perception of computing as an aid to our thinking. The computer was never a bicycle for the mind; the mind is the bicycle. Like any muscle, the mind must be regularly exercised through education and labour to maintain and expand its capacity. Out-sourcing our brainwork to external tools like the computer creates a competitive, not complementary, relationship. The immediate effect of AI replacement is the weakening of our cognitive capacities, and this problem will compound as AI technologies expand to replace ever more of our efforts.

David Krakauer conceptualised the relationship between technologies that aid the mind and technologies that compete with it as two alternative ‘cognitive artifacts’: tools that we use to perform cognitive tasks, i.e. to think. Complementary cognitive artifacts (hereafter called complementary tools) are external aids that strengthen our ability to perform cognitive tasks; with enough use they can be discarded and replaced with a mental equivalent that performs the same processes. Competitive cognitive artifacts (hereafter called competitive tools) are external aids that perform cognitive tasks for us; once discarded, they leave us with no mental equivalent.

The basic example is that of the abacus versus the calculator: the former aided mathematicians in performing more complex arithmetic, but after enough use mathematicians could dispense with the abacus and perform the same calculations without any external aid, for instance by spatial visualisation. Thus, the abacus is a complementary tool. The calculator, on the other hand, can perform far more complex calculations than any human mind could ever hope to, and so our use of it becomes one of dependency and competition; if the calculator disappeared, we would not be able to perform those calculations without it.

Computers like the calculator are an extreme form of competitive tooling, capable of performing extraordinarily complex calculations that no human mind could possibly perform – and which therefore can never be replaced by a mental equivalent. As computers replace more of our systems for thinking and working, we become more dependent on their existence. If (and when) these computers are taken away, we will be unable to perform cognitive tasks with the same rigour, and owing to our extended use we may experience a degraded cognitive ability. This consequence is an inverse of the Lindy effect: the greater the presence of computers in our daily lives, the weaker our cognitive capacities. AI accelerates this process, going beyond a mere competitive tool to become a direct competitor to the mind.

Joseph Tainter’s theory of diminishing returns on the complexity of human civilisation can also be applied as a theory of diminishing returns on the use of competitive tools to assist the human mind: diminishing returns on compute. The greater our dependency on computing, the less benefit we gain from its aid in performing cognitive tasks. At some point, the tradeoff becomes detrimental to our individual and collective capacity to think – and therefore to act. The rise of AI is an inflection point that threatens to plunge us into the long tail of cognitive decline.

AI is the Path to Mediocrity

AI development begins with the low-hanging fruit: ‘aids’ embedded within software applications that help us to complete our sentences, produce art on command from a prompt, and write code. The next ‘hype wave’ in Silicon Valley seems to be centred on AI companies working on generative content and tools that can help with anything from coding to writing, to art and beyond. The general idea is that a great deal of ‘menial work’ performed in the economy could quite easily be out-sourced to AI, saving people many labour hours. This is prima facie a noble goal, and AI may well prove to be a value-add in areas of work where ‘cookie cutter’ content needs to be produced to get something out of the way.

Where AI will struggle to add value is in that unquantifiable and abstract area of ‘creativity’. A truly self-thinking, iterative AI is not quite with us yet – and it may never be. What we have now are models trained on vast datasets scraped from the web, producing their best imitations of how humans act and communicate. This long march to the mean immediately creates a ceiling on the possibilities of these AI tools to complement creative efforts. If an ‘AI art generator’ produces something, someone somewhere has already produced its authentic likeness. What we create becomes nothing more than copies of copies of copies.

Here is a thought experiment: what happens when our ability to think, write, and retain information is out-sourced to ‘AI’ apps that generate our notes and complete our thoughts? Does it actually increase our ‘creative energy’, or does it fool us into a false sense of comfort through greater quantities of production even as the quality of that production decays to the point of illegibility? Would we be able to tell that this is the tradeoff being made, or would we continue to be fooled by our own creations? Couldn’t ‘AI tools’ condition us to support particular ideological trends? The popular reception of these tools suggests that most people won’t notice, because many already exist in a state of conceit where they view their amateur output as exceptionally creative. If it were truly so, basic computer programs would not be able to emulate it to such a degree. What we are learning is that many people cannot pass the Turing test for legibility and intelligence.

The scrutinising gaze of AI will not replace our creativity but expose the sheer lack of it in much of humanity’s output, and then degrade that output a little bit more. Out-sourcing our cognitive capacity to AI will produce anything but greater creativity; over the long term it will shape generations critically dependent on computing for their thinking processes, until our minds are smoothed over and capable only of uttering prompts to generate the next AI-generated text or picture, itself crowdsourced from the collective (soon-to-be non-) intelligence of humanity.

Even if we comprehend the state of our dependency and consciously choose to make this tradeoff, can we be certain that these technologies will always be around? The long road of history is littered with the remnants of civilisations that, having exhausted their sources of energy or lost the skill to build and maintain the core technologies they depended on, quickly succumbed to entropy and extinction. If we were being stringently longtermist about this, we would not be running headlong into critical dependency on AI tools; we would ensure that people relied on as few competitive tools as possible and trained their minds on complementary tools, maximising human cognitive sovereignty.

Fear of the existential risk posed by an AI-run apocalypse of human civilisation assumes stasis in the collective intelligence of humanity while the computer superintelligence sees exponential growth in its own. Eliezer Yudkowsky, who popularised fears of the ‘existential risk’ posed by AI, has fantasised entire scenarios in which a superintelligence essentially dupes humanity into aiding its objective of total planetary dominance and the enslavement or extinction of man. The existential risk posed by AGI presupposes that such an AI is possible, which has not been proven. What is certain is that ever greater dependency on computing to perform mental and physical labour will steadily degrade our cognitive capacities until we are incapable of performing the simplest tasks.

The Long Tail of Cognitive Decline

A human being whose cognitive capacity has been out-sourced to AI will not be smarter than the ancestors who trained their minds and were able to perform more complicated and impressive feats of art, poetry, literature, and arithmetic. Our bigotry towards our ancestors for their technological primitivism is the conceit that obscures the long tail of decline that awaits human civilisation, even as computerised civilisation reaches new heights in scale.

Peter Thiel once asked why we don’t have flying cars. The real question we should be asking ourselves is, why don’t we have Mentats?

In the Dune universe, humanity led a Butlerian Jihad against AI, destroying and then forbidding the creation of machines ‘in the likeness of a human mind’. While non-thinking machines were permitted to exist, humanity needed a replacement for the role that artificial intelligence had played in the expansion of human civilisation to the stars. The Mentats are an interesting anti-AI concept precisely because they cut against the grain of science fiction, positing human minds as possessing vastly greater cognitive capacities than we believe possible or are willing to strive for.

While we do not have Mentats or any programme aiming to cultivate the human mind to such a degree, we do have an increasingly popular strain of ‘tech monasticism’ among tech CEOs and in private school curriculums. These preferences among the wealthy and technologist classes reveal themselves when they send their children to tech-free schools or engage in ‘tech fasts’. Alongside the decline of aristocratic tutoring, the abandonment of a pedagogy designed to exercise the mind free of competitive tools is to blame for our catastrophic misuse of the internet, which gives us free access to the world’s knowledge. Why else would tech-free schools be so popular with the tech elites? They understand well the deleterious effects their technologies are having on the minds of the youth and are keen to ensure that their own children do not suffer the consequences of their parents’ actions.

In poor people’s schools (aka those that the majority of us attend), ‘critical thinking’ is now a euphemism for extreme dependence on competitive tools to aid our cognitive abilities. It was the much-maligned rote learning that dominated pre-modern education which gave us the complementary tools for cognitive work, and its steady but sure elimination from public curriculums the world over is producing a generation that is not just critically dependent on tools like computers but increasingly incapable of cognitive feats that its counterparts were capable of just a century ago.

The memorisation of foreign languages, classic literature, and arithmetic (among other sciences) not only created generations with a deeper understanding of the past, but generations cognitively superior to many of us today. Constructing illusory ‘second brains’ on SaaS note-taking applications where AI completes our thoughts (based on the crowdsourced thoughts of humanity) is not just a poor replacement for these skills but actively diminishes our intelligence. Just a few generations ago, erudite professors and paupers alike could recite epics from memory and deliver long, often improvised speeches without extensive notes, PowerPoint slides, or prompts. Common men could recite lines of classical poetry by heart, and while they may not have been formally literate as most people in developed countries are today, there is something to be said for their superior understanding of the meaning behind the words and the way they lived their lives. The classical forms of education were designed explicitly to prevent mental stagnation, to exercise the mind, and to cultivate the mental faculties so that we could aspire to knowledge and truth as human beings, unencumbered by external tools. Most importantly, they were meant to create free thinkers, better defended against having their thought processes hijacked by greater powers capable of bending the cognitive capacity of the masses to one agenda or another.

Today, universities are lowering intellectual requirements for entry, and most students profess not to read, actively avoiding majors that require heavy reading. Children’s development (both physical and mental) is being stunted by the new epidemic of screen addiction, because the ‘content’ is doing the thinking for them. Socrates feared that writing would replace oral instruction for the worse; unlike his fear, the out-sourcing of our cognitive capacity to a competitive tool like computing has clearly recordable negative effects on the individual and general intelligence of humanity.

As tech monasticism becomes increasingly popular, we should pause to think about the full ramifications of AI technologies and their acceleration of symptoms like screen addiction and intellectual stagnation. If AI really is the future, it is looking like a dystopian and homogenised future in which our culture becomes locked in a loop of repetition. The response to AI proliferation is a neo-Humanism that exercises the human mind free of competitive tools, so that we can create a truly superhuman civilisation with Mentats at the helm.

The Politics of AI

Technology is always political, offering advantages in political centralisation, economic production, and military supremacy. When a new technology is created, governments are often the first investors and innovators, seeking military applications out of which various commercial applications later spring. In our time, the computer, GPS, nuclear energy, and many other technologies we depend on for the everyday maintenance of civilisation were first incubated in secretive government(-sponsored) programmes. The origins of Silicon Valley lie in the Federal Government’s interest in accelerating technological development to aid it in the Cold War.

The increasing centralisation of the internet gives governments ever vaster powers over the lives of their citizens, whether through greater financial controls (such as CBDCs) or through censorship at the strategic choke points of the information superhighway. With AI, government control can extend from the type of information we are allowed to exchange to the social engineering of our thought processes.

One of the key differences between pre-modern and modern man is not just raw intelligence but the method of intelligence – the way we perform cognitive tasks changes according to the social and material technologies present in a civilisation. Where oral knowledge could only be transmitted by word of mouth, literacy allowed knowledge to be recorded and transmitted across space and time without the presence of its original producer or purveyor.

Far after Socrates’ lament, literacy remained a personal tool flowing out of a sovereign human mind, not an adjacent mind that would do our thinking for us and over time replace our cognitive output entirely. To understand the tradeoffs of an automated mind versus the human mind means looking at the cognitive ecology that is transformed with the invention of new technology. The change from oral to literate cultures had massive ramifications for the control of civilisations. Information was not just easier to record but travelled faster, enabling greater scale in bureaucratic functions, which allowed the creation of true empires. Empires spread as fast as the sword could carry them but maintained control through the word. As the written word transformed the cognitive ecology of the oral age, the internet with its algorithms and generative content has transformed the cognitive ecology of the literate age, with consequences for freedom. At the speed of computers, the global monoculture spreads as fast as fibre optic wires will allow it. Who creates the culture that now sweeps the world? What narratives, embedded assumptions, and nudges weave themselves throughout the articles, soundbites, and videos that we consume? Whose narrative are we embedding ourselves into?

What we have today is a type of automated knowledge that is not transmitted through particular traditions aimed at its preservation but through a free-for-all in which the knowledge of the world is produced, curated, and then distributed through a myriad of arcane functions before reaching our eyes and ears. We do not know who produced the information, and even if we could find out, how would we verify its authenticity? The digitisation of culture and society would not be possible without the sheer amount of compute we now have access to, nor would its totalising reach be possible without the internet.

Detractors may propose that being against the digitisation of the mind is a luddite fear of new technologies that, over the long run, significantly improve the lot of humanity. Fear of new technology goes back well beyond the industrial revolution and its consequences for the human race. Socrates’ lament is less regarded today because we have the benefit of hindsight in knowing that this was ultimately about tradeoffs: certain benefits of an oral culture were abandoned in favour of a literate culture, which in turn brought with it many problems of its own (as Socrates correctly surmised). With history as our judge, the argument goes, we should accept a similar tradeoff with AI, swapping greater cognitive dependencies on technology for the knowledge of the world in the palms of our hands. But without the benefit of hindsight, it is difficult to predict how AI will develop and how it will shape the future cognitive ecology of humanity. Its applications are expanding exponentially, and more than once the predicted direction of AI development has been proven wrong.

What need not be predicted is that all technologies centralise absolutely, and what AI does guarantee is a far greater level of control over more people than any other technology in human history. Even if one does not believe that computing is a competitive tool, we risk our political sovereignty if various power interests can shape our method of thought and our consumption of content.

All technology is political, and the first act of any power is to utilise technology to entrench and further its reach. The common thread throughout modernity has been the regime’s conditioning of a new type of subject: taxable, conscriptable, and amenable to the interests of the state. James C. Scott termed this ideology ‘high modernism’, and the analogue technologies available to the state throughout the 19th and 20th centuries made it possible to control the minutiae of citizens’ lives. The risk posed by AI is that political interests will see the opportunity for greater power to shape their subjects: interfering with and determining the way we use computers and what we use them for, and ultimately, through AI’s transformation of our thinking processes, shaping how and what information we consume to make us more pliable to political interests.

Man Against the Machine

In August 2021, the Taliban completed its humiliating rout of the American army with the capture of Kabul. In a lightning offensive lasting mere months, history’s greatest and most technologically advanced army was helpless to prevent its satrapy from falling into the hands of an enemy it had fought for two decades, an enemy armed with nothing but decades-old Kalashnikovs and sandals.

AI military developers justify their work by arguing that overwhelming supremacy in this technology would deter other nations from declaring war. The same argument was made for the atom bomb, yet the post-war period has seen the collapse of the bipolar world order and the USSR, the retreat of America in the War on Terror, and no end in sight to the age-old human practice of war.

The Taliban are a salient example of the power that ‘primitive’ human will has against advanced technological and socio-political systems. Retaining our humanity will require a similar militant will against the coming onslaught. Technology cannot determine history, and it certainly cannot determine political outcomes. We still have the agency to decide how much of our humanity we wish to preserve. If the transhumanists at the heart of AI development have it their way, nothing will be left of us.

Ahmed Askary
