
The imminent AI Revolution from a Marxist Point of View

Wednesday 5 April 2023, by mond

What is happening in the field of Artificial Intelligence (AI) these days and weeks is clearly the beginning of a technological revolution that will change our world far faster and far more fundamentally than the Internet or smartphones. (See also: Do not underestimate ChatGPT and other AI)

In the meantime, this has dawned on many skeptics, even if some still act as if there were nothing to see here. Only a few months ago, all five parliamentary parties in Austria spoke out against an unconditional basic income (UBI). Obviously, none of the deputies has the slightest clue of what is coming now. The right-wingers don’t want a UBI because they represent the interests of the corporations rather than ours, and the center-left (Greens and Social Democrats) don’t really seem to understand it. Partly they don’t want to understand it, because a basic income calls the existing capitalist economy into question far more fundamentally than these parties would like.

But back to the topic of AI:

Unexpected and yet predictable

The capabilities of ChatGPT and other AI systems sometimes amaze even experts in the field. It is in fact a breakthrough: twelve months ago, no one could predict whether it would arrive in this form now or take another three or even five years to reach this level of capability. But in view of Moore’s Law, it was clear to most of us (IT nerds) that we would reach this point sooner or later. However you may feel about Kurzweil, he foresaw the whole thing decades ago.

On January 29, 2005, we, the KPÖ (Austrian Communist Party) section "Dogma", organized an event on the singularity at Cafe 7*Stern. A rough estimate of the computing power of the human brain, compared with the development expected from Moore’s Law, yielded a date of 2025 for the point at which AI could reach the level of human intelligence, with of course a significant uncertainty of ±5 years. Looking at what ChatGPT can do today (early 2023), we are well on schedule. In August 2014, CGP Grey’s YouTube video "Humans Need Not Apply" appeared. At least since then, there has been no excuse for ignoring the effects of digitalization.
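That back-of-envelope estimate can be reproduced in a few lines. The figures below are illustrative assumptions (roughly 10^16 operations per second for the brain, roughly 10^12 affordable around 2005, a doubling every 18 months), not measured values:

```python
import math

BRAIN_OPS = 1e16        # assumed ops/s of a human brain (rough, contested)
BASE_YEAR = 2005        # year of the Cafe 7*Stern event
BASE_OPS = 1e12         # assumed ops/s affordable in 2005 (illustrative)
DOUBLING_YEARS = 1.5    # Moore's-Law doubling period

# Number of doublings still needed, times the doubling period,
# gives the crossover year.
doublings = math.log2(BRAIN_OPS / BASE_OPS)
crossover_year = BASE_YEAR + doublings * DOUBLING_YEARS
print(round(crossover_year))  # 2025 with these assumptions
```

Change any of the assumed constants and the date shifts by a few years, which is exactly the ±5-year uncertainty mentioned above.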

There are still voices that never tire of repeating the old mantra: “But the new technologies will also create new jobs. That has always been the case.” But first, this time it is not so (see the "Humans Need Not Apply" video). And second, let us look at how the new jobs have actually been created in recent years. Some of the added productivity from technological advances has genuinely raised our standard of living. Another part was collected by the top 1%, or rather 0.1%, and transformed into luxury goods. But the biggest part is destroyed by the system itself, because capitalism can only work where there is scarcity: what is available in abundance is hard to sell. So capitalism creates artificial scarcity. More advertising is done for products we don’t need; advertising has only one product: our dissatisfaction. We buy short-lived junk that we don’t need. Measured by their ecological footprint, many of the products and services offered today are more harmful than useful. With laws and patents we artificially restrict what would be available in abundance: knowledge and information.

The most "efficient" method of creating artificial scarcity is of course armament and war, which works twice over: it creates demand for weapons, and what was bombed out must afterwards be rebuilt. “Once weapons were manufactured to fight wars, now wars are manufactured to sell weapons.” — Arundhati Roy. And then there are the bullshit jobs in the bureaucracies of the big corporations, described by David Graeber.

To sum up: the "new jobs" created despite technological advances are bullshit jobs. We could meet our human needs with a fraction of the work done today, and in a much more ecological way. How many people have their job only to afford the car they need to get to that job, and so on?

Even Marxists have to rethink

This is a way of thinking that even Marxists are not necessarily familiar with, because Marx emphasizes how good capitalism is at increasing efficiency, since every capitalist is constantly in competition with other capitalists. But if you look more closely: already in the Manifesto, Marx describes how artificial scarcity comes to be created:

“In these crises there breaks out an epidemic that, in all earlier epochs, would have seemed an absurdity — the epidemic of over-production. Society suddenly finds itself put back into a state of momentary barbarism; it appears as if famine, a universal war of devastation had cut off the supply of every means of subsistence; industry and commerce seem to be destroyed. And why? Because there is too much civilization, too much means of subsistence, too much industry, too much commerce. The productive forces at the disposal of society no longer tend to further the development of the conditions of bourgeois property; on the contrary, they have become too powerful for these conditions, by which they are fettered, and no sooner do they overcome these fetters than they bring disorder into the whole of bourgeois society, endanger the existence of bourgeois property. The conditions of bourgeois society are too narrow to comprise the wealth created by them. And how does the bourgeoisie get over these crises? On the one hand by enforced destruction of a mass of productive forces; on the other, by the conquest of new markets, and by the more thorough exploitation of the old ones. That is to say, by paving the way for more extensive and more destructive crises, and by diminishing the means whereby crises are prevented.” (Karl Marx, Communist Manifesto)

In the last 150 years, the capitalist system has "internalized" and perfected this creation of scarcity. We continue to see periodic crises, but much of the destructive power is already extremely well "integrated" into current operations.

Anyone who has understood this mechanism must understand how dangerous the current developments in the field of AI are. That the whole thing will end in a dystopia is more than likely.

Do we need an AI moratorium?

A little hope comes from the fact that even many AI researchers now plead for a temporary halt to development. Less hopeful, however, is that this halt is being criticized by those who should actually know: "netzpolitik.org", for example, argues that it is essentially just a PR stunt to promote the hype.

Some terminology to clarify the discussion

  • AIs based on neural networks have been around for a long time; the first implementation of a perceptron dates back to 1958, by F. Rosenblatt. In the 1980s there was a small hype around neural networks, which then died down again (the "AI winter"). In my high school days we also played around with that kind of software.
  • To determine whether a machine has reached human intelligence, there is the "Turing test": can an AI that gives text answers to questions in a terminal convince a human tester that it is human? Depending on the tester, it can.
  • AGI ("Artificial General Intelligence") means an AI that has reached human intelligence level and does not act on the basis of language alone but can also see, hear, speak, etc. ("multimodal"). Kurzweil thinks we will reach AGI only in a few years; on the other hand, many see the capabilities of GPT-4 as already very close to AGI. We could reach it within weeks, and will almost certainly reach it within the next few years.
  • Once we have reached AGI, and the AGI has reached a level at which it can continuously improve itself, we get super-intelligence and with it the singularity: technological development reaches a speed and level at which we humans can no longer keep up. Kurzweil predicted this for around 2045; in view of current developments, we could of course get there much earlier.
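To illustrate how simple the 1958-era building block is, here is a minimal perceptron in Python that learns the logical AND function (a sketch for illustration, not Rosenblatt's original implementation):

```python
# Minimal perceptron (after Rosenblatt, 1958) learning logical AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Classic perceptron update rule.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

Such a single neuron can only separate linearly separable classes (famously, it cannot learn XOR), which is part of why the first hype died down; today's networks stack billions of such units.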

The risks of AI

Even under ideal conditions, creating an AI with human intelligence is a risk: what the AI "wants" is defined at the learning stage, and if we make a mistake there, the AI’s goals could inadvertently include our extinction. (See also Robert Miles, Intro to AI Safety.)

Under capitalist conditions, the whole thing is so much riskier: The corporation that controls the most powerful AI ultimately controls the world. In the race for this position, troublesome safety concerns tend to be ignored.

The extinction of humanity by an AI accidentally running amok is of course only the tip of the iceberg. The dangers of AI are extremely diverse and frightening even on a small scale: how will spam, phishing and cybersecurity evolve in the future? Disinformation campaigns in which every posting is 100% customized to its reader, and so on. Especially in view of the fact that we are currently at war, it is probably also necessary to talk about killer robots.

Will a moratorium help?

The development can certainly no longer be stopped. Research on AI does not necessarily need many resources: a lot can be done even on your own laptop. Training the currently used networks typically costs on the order of a million dollars in computing time, but that is nothing that medium-sized corporations, research institutes or rich private individuals (billionaires) could not afford.

In addition, costs will decrease exponentially with Moore’s Law: what costs a million today will cost only half that in 18 months. Moreover, new networks can build on the trained weights of old ones and be "refined" (fine-tuned). And we can assume that, because of the recent successes, lots and lots of money is now pouring into AI development.
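The cost argument is simple compound halving. A hypothetical projection under the assumed 18-month halving period:

```python
def training_cost(initial_usd, years, halving_period=1.5):
    """Projected compute cost under an assumed 18-month halving (Moore's Law)."""
    return initial_usd * 0.5 ** (years / halving_period)

# A $1 million training run today, projected forward:
for years in (1.5, 3, 6):
    print(years, round(training_cost(1_000_000, years)))
# 1.5  500000
# 3    250000
# 6     62500
```

After just six years (four halvings), the same training run costs about $62,500, well within reach of small labs and individuals.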

The genie is already out of the proverbial bottle.

The fact that this technology is already relatively easy to access is on the one hand a danger in itself, but on the other hand also an opportunity to keep the whole thing in check. We should even push for these technologies to be open source: not only the code and the weights of the AI, but especially also the training data and the goals set during training.

Political demands we have to set up to reduce the danger of AI

  • Open source: technologies around AI and training data need to be disclosed and need to become public domain.
  • In general, we need very rapid and very large investments in research on AI safety, financed by extra taxes on the corporations.
  • We need mandatory registration for AI systems. Every AI system above a certain performance level must be documented in detail for public inspection.
  • An online advertising ban, and ideally an offline advertising ban as well. Online advertising is the driver of the most massive mass surveillance.
  • All researchers in the field of AI must be monitored by peers working directly alongside them. This applies to publicly funded research as well as to private companies. Democratic control over the corporations would be long overdue anyway, but in this particular case it is unavoidable. Since the controllers of course need the relevant know-how, this can only work if they control each other as peers: AI researchers from Meta/Facebook control Microsoft, AI researchers from Google control a university, researchers of a public university in turn control those of the companies, and so on. In this way, many researchers are tied up in control work anyway. This would slow down AI research but make it safer. In order to control at all, we of course also need standards defining which regulations must be kept:
  • We need legal regulations for the goals that are given to an AI, e.g. that these goals must always be useful to mankind and must not be oriented toward the private interests of corporations.
  • and of course we need these regulations globally!

Given that almost all of our politicians are complete douchebags and an equally large part are bought by the corporations, the chances that even part of what is necessary will be implemented are small. But we should at least know what we demand!

In fact, there is already a fair amount of research on the topic of AI safety. (See for example: “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI”.)

All of what would be needed now shines a light on what has been missed in recent years:

  • Disempowering the corporations and building real democracy
  • Development of a commons based economy
  • Introduction of a universal basic income (UBI)
  • In one word: the overcoming of capitalism.

In any case, the AI revolution shows us exactly that: how dangerous and obsolete capitalism is in the 21st century.

What does this mean for Marxism?

We must prepare ourselves for the fact that human labor will become largely obsolete in the coming months and years. Exactly how fast this will happen is not yet clear, but in the next 5 to 10 years there will be gigantic upheavals.

Central to Marxist analysis is human labor: it is the measure and, above all, the source of all value. The value of a product (around which market prices oscillate) is the human labor used to produce it. With increasing automation, the proportion of living labor became smaller and the proportion of past labor congealed in machines and technology became larger. With the development of AI, we are now coming to a point where human labor is hardly necessary at all. We thus have a kind of "division by zero": human labor is no longer the source of value. (Well, the labor already embodied in the machines is still there, but in a world in which these machines improve and maintain themselves, this labor is no longer "consumed".) Marx tries to approach this question in the "Fragment on Machines", but in the end comes to no real results.
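The "division by zero" can be made explicit in the standard notation of Capital (a sketch of the textbook formulas, not a passage from Marx). With constant capital $c$ (machines, raw materials), variable capital $v$ (living labor) and surplus value $s$, which is extracted from living labor alone, the value of a commodity and the rate of profit are:

```latex
W = c + v + s, \qquad p' = \frac{s}{c + v}
```

As automation drives $v \to 0$, the surplus value $s$ that stems from $v$ also tends to zero: newly created value vanishes, and with it the rate of profit. This is the sense in which a fully automated economy "divides by zero" in the labor theory of value.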


“But to the degree that large industry develops, the creation of real wealth comes to depend less on labour time and on the amount of labour employed than on the power of the agencies set in motion during labour time, whose ‘powerful effectiveness’ is itself in turn out of all proportion to the direct labour time spent on their production, but depends rather on the general state of science and on the progress of technology, or the application of this science to production.” (Karl Marx, Grundrisse)

(Also see: Marx und die Roboter - Rezension)

However, we find a very important reference to the economy in times of AI in another place: In Capital Volume 1 Marx mentions: “... the original sources of all wealth — the soil and the labourer”. (Karl Marx, Capital Vol I)

Obviously, not only labor is a source of value but also the soil (in the sense of natural resources). 150 years ago, however, natural resources were abundant and their value was essentially determined only by the labor necessary to extract them. Since our world reached its ecological limits, at the latest, this is no longer quite true, and with the devaluation of labor, the soil will soon represent the dominant part of value. No wonder the rich have bought up a lot of land in recent years: obviously they have read their Marx more carefully than many leftists.

Fight while it is still possible

Many therefore correctly name the unconditional basic income as the logical consequence of this development. The only problem is how to achieve it: we have the chance to fight for a high basic income only as long as our work is still needed. Unfortunately, many trade unions still lack the most elementary understanding of its necessity.

In any case, we live in exciting times. It is not yet certain whether we will soon end up in a utopia or a dystopia, and there is not much room left for wishful thinking. Let’s try our best together.

Franz Schaefer (Mond), April 2023

German version of this article: Die bevorstehende KI Revolution aus Marxistischer Sicht.

Update: Also see: The A.I. Dilemma (Center for Humane Technology, YT video)

Update (May 7, 2023): Geoffrey Hinton at MIT Technology Review’s EmTech Digital

Copyleft: all articles and photos under the GFDL unless otherwise stated.