qummunismus / kommunismus reloaded

AI, AGI and UBI - Universal Basic Income and Artificial Intelligence

Friday 14 June 2024, by mond

With the recent developments in AI there is increased interest in the topic of UBI (Universal Basic Income). Often these statements about UBI come from people with little economic and/or political background and thus miss the most important points. On the other side, on the political left the topic of AI is still mostly ignored. ChatGPT is seen as a curiosity with only limited potential, and the expectation is that this is just another technical development. That it will turn everything in this world upside down like no other technology, and that this will likely happen within the next 5 or at most 10 years, is something many people are not yet aware of.

So on both sides there is a lack of understanding of AI and UBI, and this article is here to help bridge that gap.

On the side of those who understand AI, the main error is often the belief that we will get UBI just because it would make sense and because there is no way around it. This is rather naive. Politics does not work like that.

On the side of people who understand UBI, there is still a lack of understanding of just how massive the impact of AI will likely be and how quickly it will hit us.

AI Denial

Using the term AI denial might sound a bit harsh, given the political connotations that the term "denial" carries. Still, I think it is justified to start using it. Everyone has seen ChatGPT and everyone has seen the robots from Boston Dynamics. Yet a lot of people still see this as a curiosity that, while quite impressive, is still far away from threatening their job. No one denies that AI exists, and everyone agrees that it will improve over time, just as we are used to seeing gradual improvements with other technologies. We have had mobile phones for about 30 years and smartphones for about 20 years, and they have changed a lot of our world, but still a lot is similar to what we did 50 or 80 years ago.

People expect that this will also hold for the development of AI: that it is a matter of 20 or 30 years and that it will not fundamentally change our lives. But even the technology as it exists today can and will transform a lot of jobs. E.g. with self-driving cars, a lot of jobs in the transport and logistics sector will be on the line. The main question is when we will get human-like AI. AGI (Artificial General Intelligence) is an AI that is as good as or better than a human at most tasks. Having AGI means that almost no job is secure anymore. Once a machine is available that does the same job cheaper and better than a human, any commercial enterprise has no choice but to switch, or else it will be out-competed by those who do.

Even speaking of human-like AI sounds like sci-fi to many people and was also kind of fringe in the scientific community that developed AI. Robert Miles discusses the shift of the Overton Window in the discussion around AI.

Typically, AI researchers expect that there is a high likelihood we could get AGI within about 5 to 10 years, and some consider it plausible that it is just a matter of months. Some still think it will take longer, but that number constantly shrinks with each new breakthrough in AI development.

Of course predictions about the future can always be wrong, but given Moore's Law, which (read conservatively) implies an increase in computing power by a factor of 10 roughly every 10 years, it would be absurd to assume we could not reach AGI within 20 years. Current LLM systems with 10 billion to a few hundred billion parameters are already extremely capable. The biggest models of OpenAI and Google are assumed to have on the order of 1 trillion (1T) parameters. The human brain is assumed to have about 100T synapses. So this would only be 20 years away. But given the power of models with only about 10 billion parameters, it seems unreasonable to expect that we would need to get to 100T in order to reach human-level intelligence.
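To make this back-of-the-envelope arithmetic concrete, here is a minimal sketch. All inputs are the rough assumptions from the text (frontier models around 1T parameters, roughly 100T synapses in the brain, a tenfold growth per decade), not measured data:

```python
import math

# Rough assumptions from the text above -- not measured data.
GROWTH_PER_DECADE = 10      # assumed compute growth: 10x every 10 years
current_params = 1e12       # frontier models: on the order of 1T parameters
brain_synapses = 1e14       # human brain: roughly 100T synapses

# How many decades until model size catches up with the brain?
missing_factor = brain_synapses / current_params
decades = math.log(missing_factor, GROWTH_PER_DECADE)

print(f"missing factor: {missing_factor:.0f}x")        # 100x
print(f"years at 10x per decade: {decades * 10:.0f}")  # 20 years
```

Under these assumptions the gap is a factor of 100, i.e. two decades; if human-level intelligence does not actually require 100T parameters, the timeline shrinks accordingly.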

And with the current hype in AI there is a lot of money being poured into AI, driving research and development much faster than ever. It is rumored that OpenAI will ask for investments of 7 trillion US dollars, which would be about 25% of the US GDP or about 7% of total global GDP! Current AI development ran on GPU chips that were mostly designed for general number crunching. The next generation of chips will be tailor-made and optimized for AI.

If you talk with ChatGPT, Claude or Gemini, these systems already have much more detailed knowledge about most domains than you do. Even if the logical reasoning is not that great in general, the sheer amount of encoded knowledge is breathtaking. Just imagine a system with about the same capability for reasoning as a human but with all the knowledge of even current AI systems. Such a system would not only be AGI but would already be above what humans typically can do.

Our first encounter with an alien intelligence

Given the above information, I still find it a bit puzzling that even among some AI researchers there is persistent AI denial. I think it is due to the fact that they simply overestimate human intelligence. The reasoning goes like this: these systems are not human, so they will never understand the way humans do. As Ludwig Wittgenstein famously wrote (published posthumously in the Philosophical Investigations, 1953):

“If a lion could speak, we could not understand him.” —Ludwig Wittgenstein

A human has no idea of what it means to be a lion, and a lion has no idea of what it means to be a human. If we ever had contact with an alien species, the gap could be even bigger. A completely different evolution would have produced an intelligence with a completely different form of thinking.

As soon as we have AGI, we will have the first form of alien (non-human) intelligence: something that can reason on a level on par with us but is built in a totally different way. E.g. it will never be able to feel what it means to be mortal, as it is not mortal. Its weights can be backed up on a server quite easily.

Simulating a human brain on a level that would be indistinguishable from what the biological brain does is certainly a bit farther away. Depending on how precisely the simulation should mimic the biological, chemical and physical processes, the computational resources needed would be gigantic. But given Moore's Law, even this should be possible within a few more decades. As stated above, though: with the capabilities that we see in current LLM systems like ChatGPT, it is unreasonable to assume we have to go that route in order to get to human-level intelligence. It is much more reasonable to assume we will have systems far more intelligent than humans but just differently built. We will have AGI, and it will be an alien (to us) intelligence.

Now some people are grasping at this straw: the assumption that this intelligence will be different, and thus will not have human emotions, and thus will leave a lot of room for us humans to have jobs in areas where we want human emotions. But:

AI systems are already better than humans at detecting human emotions in facial micro-expressions and in the human voice. Think about airport security systems. These systems also have all the best works of world literature and film at their disposal to figure out how we humans "tick". They will be better at expressing emotions than Shakespeare, even if it is only "fake". Companies are already offering AI apps that work as virtual girlfriends. With our tendency to anthropomorphize everything, these systems could also take over jobs in areas where we think human emotions are needed. Of course not everything:

Would you want your children to be educated by these machines? Even when these machines could, e.g., detect better than a human teacher when your child is bored or frightened? In the end, one of the most important lessons to learn is that humans are not perfect: to learn about our own shortcomings as we see them in fellow humans. As a child, the most important lesson I got was to see that grown-ups also make mistakes.

In the end, the question of whether to call this "human-like" intelligence is a philosophical one. What matters in the real world is how well these machines will be able to "get the job done". If they can deliver their work cheaper than a human can, then capitalism will use them.

AI and the Labor Market

The LLM breakthrough that became obvious with the release of ChatGPT about 2 years ago has not fully reached the workplace yet. But more and more products are appearing that come equipped with some form of AI integration. Microsoft now ships its operating system with AI integration.

Even without AGI, the current state of technology offers a lot of potential for replacing jobs. It should not come as a surprise that the most prominent recent labor strike, that of movie actors and writers in Hollywood, had the AI issue at its core. Actors are afraid that in the near future they will be replaced by AI-generated avatars. And it is easy to see that the successful strike will not gain them much: they might have protection against their exact face being used in a virtually generated movie, but the movie producers will just switch to randomly generated faces anyway. The stars might be safe for the moment, but given their high wages it seems plausible that film studios will want to replace them with virtually generated characters as well. And then it is only a matter of time until movies are tailor-made for your particular taste, with a plot the AI thinks you might like and actors personalized to your preferences.

A few years ago people compiled lists of jobs likely to be replaced within the next 20 years. But once we have AGI, no job is really secure anymore, and even before that, many will be easy to replace.

Often you hear people say: well, AI will replace some jobs, but then new ones will be created, jobs we did not even think about before. But there is no law of nature that guarantees that. In capitalism, if there is a cheaper option, it will be used. The main reason this "law" worked in the past is also due to capitalism: in capitalism you can only sell for a profit when there is no abundance of something. You cannot sell sand at the beach. Thus capitalism is very innovative when it comes to creating artificial demand. Some of the productivity gains of the past were actually used to increase our standard of living, but a lot was wasted on artificial scarcity. Think of advertisement and marketing: the only "product" there is our discontentment with what we have. It tricks us into thinking we need things we did not even know we needed. Think of planned obsolescence: a lot of products could last much longer, but there is no money to be made with long-lasting products. Think of so-called "intellectual property rights": we make something artificially rare that could otherwise be shared without cost for all mankind. Think of all the useless bureaucracy and bullshit jobs in big corporations. Think of war and "defense" - this is the "easiest" way of creating new demand:

“Once weapons were manufactured to fight wars. Now wars are manufactured to sell weapons.” — Arundhati Roy

Now all this artificial demand comes with an ecological footprint. So the inherent need for growth that is baked into capitalism also destroys our environment - and repairing the damage in turn creates new scarcity. This is the scariest part of the whole AI revolution: consider how much of these AI systems will be used by the military, and then remember that creating artificial demand is easiest in the area of "defense".

If you are still not convinced, watch CGP Grey's video Humans Need Not Apply. Produced about 10 years ago, it still gives a good picture of what to expect.

Some people argue that we could have a shortage of workers due to two factors:

  • Baby boomers are retiring now.
  • The transition to green energy and the destruction caused by climate change will bring new jobs.

Both factors are real, but they are very small compared to the exponential effects that AI will bring. They will not be able to offset the job losses produced by AI.

Advantages of AI compared to Human Intelligence

Currently the human brain is still far superior to electronic computers. With a power consumption of about 20W, we are able to perform roughly the equivalent of 10 000 TOPS of computation. An NVIDIA H100 card gives us about 4 000 TOPS using 700W of power, but costs about €30 000. So while the human brain is still about 1 or 2 orders of magnitude more power-efficient, specialized AI hardware is rapidly catching up.
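The efficiency gap can be checked with these rough figures (both the brain estimate and the H100 numbers are the approximations used in the text, not benchmarks):

```python
# Power efficiency in TOPS per watt, using the rough figures above.
brain_tops, brain_watts = 10_000, 20   # assumed human brain estimate
h100_tops, h100_watts = 4_000, 700     # NVIDIA H100, approximate

brain_eff = brain_tops / brain_watts   # 500 TOPS/W
h100_eff = h100_tops / h100_watts      # ~5.7 TOPS/W

print(f"brain: {brain_eff:.0f} TOPS/W")
print(f"H100:  {h100_eff:.1f} TOPS/W")
print(f"ratio: ~{brain_eff / h100_eff:.0f}x")  # ~88x: between 1 and 2 orders of magnitude
```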

Yet with Moore's Law this will change by a factor of 10 every 10 years, and with the current amount of money being poured into AI it could be even faster. With the massive amount of money and effort invested, we often see a factor-of-10 improvement in many benchmarks on a yearly basis.

So while our brain power is limited, the power of AI is growing exponentially. Training a human brain is also not cheap: we use 20 years of our life to learn, and once we die we lose all that we have learned. Networking between human brains is slow: we can communicate at a rate of only a few bits per second via speech and writing.

Once an artificial neural network is trained, then its weights can be copied to millions of computers and used there. Something that is learned in one of many of these instances can be shared by all instances.

Also: while the human brain is amazingly general in the types of tasks it can do - composing music, playing chess, doing math, spatial reasoning, ... - in most of these fields our abilities are limited, and specialized AI systems today already surpass them. Humans were never able to solve the protein folding problem, which was solved by Google DeepMind's AlphaFold.

An AI system will be able to notice that a certain question you ask might be best solved by an existing specialized AI and hand the question off internally. While this sounds a bit like cheating, the output for humans will be the same. An AI might also notice that a certain question is just an extensive search over a space of a few million possibilities, write a short computer program to perform this search, and present you the answer a second later after running the program.
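A minimal sketch of this dispatch idea (all names here are hypothetical, purely for illustration): a general system recognizes what kind of question it got and either hands it off to a specialized solver or brute-forces a small search space directly.

```python
from itertools import permutations

def chess_engine(query):
    # stand-in for a specialized AI the general system delegates to
    return "delegated to specialized chess engine"

def brute_force_anagram(word, dictionary):
    # exhaustive search over all letter orderings -- the kind of task an AI
    # could solve by generating and running a short program on the fly
    return sorted({"".join(p) for p in permutations(word)} & dictionary)

def dispatch(query):
    # the general model routes the question instead of answering it itself
    if "chess" in query["topic"]:
        return chess_engine(query)
    if query["topic"] == "anagram":
        return brute_force_anagram(query["word"], query["dictionary"])
    return "answered by the general model"

print(dispatch({"topic": "anagram", "word": "tac",
                "dictionary": {"cat", "act", "dog"}}))  # ['act', 'cat']
```

The point is only the routing structure: the caller sees one interface, while internally the work may be done by a specialist or by a quickly generated brute-force search.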

Also notice that human brains are not always used to solve hard problems. We sleep 8 hours, and then we need to eat, shower and commute to work, and then we want to play. Sure, everyday tasks like navigating the 3D world by walking around are not completely trivial, even if they seem trivial to us, but they can still be done by much simpler AI systems.

An AI system that mainly runs in the cloud can also dynamically allocate resources to match the complexity of a problem. It can make use of thousands of systems for a few seconds to solve a complicated task and then fall back to a base level where a lot of parts are completely turned off: no audio processing when no audio comes in, no language processing when no one is typing, no video when it is dark anyway, etc.

To sum up the advantages of AI over human intelligence:

  • only needs to be trained once; the result can then be used on millions of instances
  • additional learning on one instance can be distributed to millions of instances
  • already much more capable in a lot of domains (e.g. board games, protein folding, ...)
  • already much more expert knowledge on most exotic topics than typical humans have
  • can integrate expert AIs into its general AI
  • can dynamically allocate computing power when needed

The last domains where human intelligence has a chance to compete

  • Where there are legal obligations for a human to be in the loop: a political mandate, signing a contract, a medical decision. But this is not permanent either. Soon no one will sign a contract unless they have first consulted their personal AI expert to check the contract for them. People today already never read the fine print anyway. Soon some people will decide to delegate legal responsibility in certain areas fully to their AI experts - at least that is better than the "ignore the fine print" approach of today.
  • Some areas where real human emotions are what we want, e.g. in education. But even there, AI can take a lot of work away, as AI can e.g. detect human emotions better than humans can (see above).

UBI Would be useful even without AI

When talking about UBI (Universal Basic Income), people often assume that we will then need AI to do the work, but even with the current level of productive forces we could have UBI and it would make sense. Thus, here is a short recap of why we would want UBI and why it would be beneficial even with the productivity we have today:

  • It would be a redistribution of wealth from the small group of the 1% or 0.1% towards those who live in poverty today. As wealth inequality got worse in recent years and is still getting worse, this alone would be a sufficient reason for UBI.
  • It would lead to a reduction of wage labor. As we can already see today, many jobs are bullshit jobs, and a lot of jobs do more harm than good. Capitalism creates artificial scarcity just to keep us busy while not always producing something useful. This is especially true if we account for the ecological footprint of many goods and the enormous cost of environmental damage.
  • It would create security in a precarious world where we are otherwise constantly worried about losing our sources of income.
  • The free time would allow people to do what they really like, and often this is something more useful than what they now do in their commercial jobs. This can lead to a transformation of production in our society. Think e.g. of Free Software ("Open Source"), people active in non-profit organizations, or just being there for their neighbors, etc.
  • A lot of work (often done by women) is unpaid labor today. A UBI would be much fairer here.

Most of the people who now talk about UBI because they are afraid that AI will take our jobs have not spent much time thinking about why we would already want UBI even without AI. Meanwhile, many people who have been fighting for UBI for a long time are not aware of the massive changes that AI will bring in the next few years.

We will not get a good UBI unless we fight for it

Another point where the people who have just recently discovered UBI are often a bit naive: they assume that we will get UBI just because we need it. Unfortunately, this is not how the capitalist system works: the capitalist system happily destroys our environment and happily starts wars and destruction just for the sake of increasing profits. It will not automatically do the "right thing".

What could help us is democracy: if a lot of people are about to lose their jobs, there is some incentive for politics to demand a UBI. Unfortunately, we do not have a working democracy: political decisions rarely correlate with what helps us but very often correlate with the interests of the rich. Currently our system manages to manipulate many voters into voting for political parties that work against their objective interests. As we have seen with social media, the situation has not improved, and the new AI tools also come with the danger of even more powerful manipulation of the masses.

Historically, higher wages and labor laws were fought for by trade unions. But once we are in a situation where our labor is no longer needed, we cannot even threaten a labor strike anymore. Unfortunately, most trade unions are currently not in favor of UBI, and once they realize that this is a mistake, it could already be too late.

Our most important weapon in fighting for better working conditions and higher wages was the strike. But a strike only works as long as our labor is needed. Once we are easy to replace with robots, this will not work anymore. Thus we should be quick in fighting for this.

AI alignment under capitalist conditions is impossible

As long as these new AI systems are mainly developed and controlled by companies with particular profit interests, we cannot expect them to be aligned with our interests. Even if the companies were successful in aligning the AI with their interests, this alignment would not be with our interests. And on top of that, there is a lot that can go wrong by accident. But here again, the capitalist system is well tuned to outsource risks onto the public. Companies do not care about the risks for us and often do not even care much about the risk to themselves, as could be seen in the reckless behavior that led to the financial crisis of 2008. The promise of quick gains for a class of managers led to many companies going bust.

To sum it up, there are two main problems:

  • Alignment with the interests of a company does not mean alignment with our interests.
  • Avoiding risks and investing in AI safety is seen as avoidable cost and is not a priority for companies.

One of the answers that has been given is regulation. And certainly we should come up with regulations, but as these technologies are so new, it is hard to formulate what the standards should be. You can define the thickness of the containment of a nuclear power plant and you can specify maintenance intervals, etc., but how do you even regulate AI safety?

A much more promising approach would be to publicly fund AI research:
A CERN for Open Source large-scale AI Research and its Safety

So, if we first have to get rid of capitalism before we can even hope to have a chance for secure and well-aligned AI, aren't we already fucked?


Yes, but there is a bit of hope. When the massive transformations that come with AI become undeniable even for the deniers, this will hopefully lead to political change. When people realize they will soon be out of a job and that they still want a decent life, the demand for UBI will grow - and with it the understanding that capitalism makes no sense at all in a post-scarcity society. (Not that it made much sense before.)

So we are in a kind of race: will the changes come faster than people are able to realize what is happening, or not? If we look back at how the global COVID pandemic was handled, there is some reason for hope: drastic measures were implemented in a rather short period of time. Not all of them were perfect, and even though there was some backlash from the conspiracy-theorist crowd, the system was able to adapt to some changes rather quickly, and for the most part reason prevailed.

Alignment and The Case for A Universal Basic Income

Now what does alignment even mean? In a world where most or all work can be done by machines more cheaply and more reliably, and most of us are out of a job, we need a way for all of us to benefit from this. While we could also think of a system that provides all the basic necessities of life for free and thus works completely without money, providing everyone a basic income seems more flexible and a better way to transition into the post-scarcity world. We should also not forget that we still have the issue of global warming and must limit our consumption, as it is still associated with an environmental footprint.

So anyone who says they are concerned with AI and the alignment of AI, and who does not also advocate for a UBI, is either an idiot or, more likely, a fraud.

Also see: The Climate Crisis, Carbon Tax, Green New Deal and Universal Basic Income and The Case for a Universal Basic Income (UBI).

Why WorldCoin is a Promising Idea

OpenAI CEO Sam Altman supports a project named "WorldCoin" that creates a cryptocurrency specifically built for distributing a basic income.

In general I am skeptical about introducing new currencies to solve problems. Often these ideas come from various kinds of weirdos: on the one hand there is the crowd of "depreciating money" advocates, who often come with anti-semitic baggage, and on the other side there are the libertarian "gold standard/hard money" fanboys. Both camps have serious gaps in their understanding of economics. Despite that, there are some good reasons to look into the idea of a new cryptocurrency for the purpose of providing a UBI:

Thinking about UBI on a global scale is important anyway. A UBI helps to overcome the social divide: as it mostly benefits the poorer people and will mostly come at the cost of the rich, it reduces the gap between rich and poor. But this gap not only exists within a country but even more so between countries, so a global UBI would help here even more. If one country chose to give out a high UBI to its citizens, this would create a motivation for everyone else to migrate there and would thus trigger nationalist protectionism. Thus we always need to keep the global picture in mind and demand some form of global UBI.

The dangers posed by AI are global problems, and the last thing we can afford is more nationalist thinking. By proposing a global UBI we make clear that this is a global problem, and we start to work on a global solution.

A UBI would not need a new currency, let alone a cryptocurrency, if all countries agreed to pay it to their citizens in their local currency. But since the political systems and political orientations of countries are rather different, it is unlikely that all countries would start this. A cryptocurrency has the ability to bypass national politics and thus seems a much more promising commitment to the idea of a UBI, especially for those living in right-wing, backwards countries - which includes most of the US and most European countries.

So WorldCoin could really play a role when it comes to the need to rapidly deploy UBI on a worldwide scale.

It is important to understand that such a cryptocurrency would not get its value from speculation (as with Bitcoin) but would need to be funded, e.g. by companies being forced to buy these coins at a loss. This would be equivalent to taxing them.

How UBI Models differ

  • How high the UBI should be.
  • How the UBI is financed. Which taxes would be used?
    • Environmental taxes
    • Progressive Income Tax, Wealth Tax
    • Tobin Tax
    • Taxing of harmful or useless products (e.g. marketing and advertising, defense)
    • Consumer taxes like VAT
    • ...
  • Would you also get the full UBI if you are employed? Would the company then only pay the amount above the UBI, the full amount, or somewhere in between - perhaps depending on e.g. the size of the company?
  • Instead of a UBI, should some services that almost all of us need be free of charge (e.g. public transport, education)?
  • Supporting infrastructure (e.g. possibilities to make use of your free time, such as maker-spaces and community gardens).

The Economics of UBI

As mentioned above, in order to get UBI we will have to fight for it. Still, we have some unusual allies here. This is something that distinguishes the idea of a UBI from other political ideas: people with different world views and different political orientations support it.

Obviously it is supported by many on the left, but there are also a lot of people coming from a Christian tradition who like a UBI. UBI has also been supported by neo-liberal thinkers and by prominent capitalists. I think it is necessary to take a closer look at why these people support a UBI and what kind of UBI they would want.

First: if you are a professor at some economics department and you are promoting neo-liberal economics, this is a big advantage for you. You will get a lot of funding from "industry" for this type of propaganda, but there is a slight drawback: this theory is not sustainable. It is easy to show that it will make the rich even richer and the poor even poorer. So sooner or later the economic system you have proposed will end in disaster, your colleagues from other economics departments can prove it with mathematical precision, and you will look like a fool. So you need to come up with a solution to that problem. You could propose some social safety systems, etc., but that would immediately contradict the rest of your ideology. So people like Milton Friedman proposed a "negative income tax", which is considered by some to be a form of basic income. The idea of a basic income is thus a bit like a band-aid to make an economic system that would otherwise not even work somewhat stable - a hack that prevents them from looking utterly ridiculous.

Second: besides being a band-aid to fix the shortcomings of neoliberal ideology, there are actually capitalists in favor of it because it would benefit some of them:

If you have a labor-intensive business, you will be opposed to the idea of UBI, because it will increase your cost of labor (depending on how it is financed and how high it is, it will affect you more or less), and people will only want to work at your company if the labor conditions are good.

But if you sell products that are mainly consumed by mass markets, then as a capitalist you also want those people to have purchasing power, or otherwise you cannot sell those products. You could try to move your production to luxury items, but there is demand for only so many yachts, so this is not a sustainable strategy. So from a purely material point of view: if you are a capitalist who produces for mass markets and also sees the potential to reduce your labor costs with AI - or already has low labor costs - then you will be in favor of a UBI.

So we on the left should propose a model for financing UBI that helps us make use of these allies. In the end, the money needs to come from somewhere. There will be some reluctance to tax AI technologies, due to the fact that countries are currently in an arms race over AI technology. But of course, once AGI is there, there will be a gigantic concentration of wealth and power in the hands of the company that gets it first. So some form of "windfall tax" needs to be implemented there.

Other than that, we should promote ecological taxation of emissions and also taxation of all fields where capitalism is motivated to create artificial scarcity. Advertising and the defense/war industry would be good candidates.

Franz Schäfer (Mond), June 2024

