On March 13, 2024, the European Parliament adopted the EU Artificial Intelligence Act ("AI Act") with a large majority of 523 to 46 votes. Significant objections to this legislation came only from the parties of the European Left, while all other major parties were largely in favor.
Of course, at almost 500 pages, there are good and bad parts in it (Full Text of the AI Act 2024-0138). So is the glass half full or half empty?
Even with all the talk about self-driving cars, many people were not aware of the massive improvements in Artificial Intelligence (AI) over the last years and only became aware of them when ChatGPT came out. EU legislation usually takes a few years, but this regulation was already on its way before ChatGPT came out, so people were under the impression that the EU reacted quickly here when it was more of a coincidence.
Even after ChatGPT came out, many people were quick to dismiss it; despite its remarkable capabilities it also had some notable flaws. To this day many, even on the left, think that AI is "overrated". They expect it to stay within the range of capabilities it has now and do not see that this is most likely the biggest technological revolution in human history, far greater than what the Internet or cell phones did to our lives. Even the current capabilities of AI have the potential to make a lot of jobs obsolete, and it is more than likely that we will see extremely rapid further progress. Typical predictions of when we will see human-level artificial intelligence ("AGI", Artificial General Intelligence) range from a few months to about five, at most ten, years. With human-level intelligence there will be no job that is really safe from being replaced by AI. It will not be some 5 or 10% of jobs that are on the line, but rather some 50 to 80%. It is extremely likely that our entire economic system will soon be turned upside down.
Instead of being happy that we could all reduce our working hours and enjoy our lives, we are worried, because in the current capitalist society all our income is tied to jobs. What comes to mind here is the famous quote from the Communist Manifesto:
“In these crises there breaks out an epidemic that, in all earlier epochs, would have seemed an absurdity — the epidemic of over-production. Society suddenly finds itself put back into a state of momentary barbarism; it appears as if famine, a universal war of devastation had cut off the supply of every means of subsistence; industry and commerce seem to be destroyed. And why? Because there is too much civilization, too much means of subsistence, too much industry, too much commerce. The productive forces at the disposal of society no longer tend to further the development of the conditions of bourgeois property; on the contrary, they have become too powerful for these conditions, by which they are fettered, and no sooner do they overcome these fetters than they bring disorder into the whole of bourgeois society, endanger the existence of bourgeois property. The conditions of bourgeois society are too narrow to comprise the wealth created by them. And how does the bourgeoisie get over these crises? On the one hand by enforced destruction of a mass of productive forces; on the other, by the conquest of new markets, and by the more thorough exploitation of the old ones. That is to say, by paving the way for more extensive and more destructive crises, and by diminishing the means whereby crises are prevented.”
— Karl Marx and Friedrich Engels, The Communist Manifesto
And then, of course, once we have AGI it is rather unlikely that progress will just stop there. The more realistic scenario is that we would rather quickly advance to ASI ("Artificial Super Intelligence"): AI that surpasses human intelligence.
Given these prospects, the whole #AIAct looks like rearranging the deck chairs on the Titanic. It contains a lot of provisions, but even the ones that make sense are in most cases not up to the challenge at hand.
The Good
The #AIAct addresses many risks posed by AI. Most notably:
Automatic mass surveillance
Manipulation and misinformation
When regulating AI systems, the AI Act classifies them according to their risk. Yet the provisions to enforce compliance are mostly bureaucratic in nature and rely largely on companies self-assessing the risks.
One of the good things is that activists were able to get some provisions protecting open source development into the text. With AI, the power structure of our society will shift even more radically towards the companies that control these new technologies, so the role of Free Software/Open Source is even more important now. While the protection of open source could be stronger, it is at least a victory that it is in there at all.
The Bad
While the AI Act lists some of the dangers of AI, it fails to really address those dangers, and there are many exceptions (see below). As mentioned above, the regulations are mostly bureaucratic in nature: the corporations provide some documentation that is then made available to public authorities. So the result of this legislation will be that AI companies hire one or two new employees who fill out some spreadsheets with more or less useless information, which is then "checked" by more or less useless audits.
As stated above, the most likely outcome of AI is that most of us will lose our jobs within the next 5 to 10 years. The #AIAct completely ignores this and does not even try to come up with provisions here: no mention of a windfall tax, no mention of a Universal Basic Income, and no plan for a significant reduction of working hours. To "forget" about this is more than absurd.
Related to this is the issue of copyright. The legislation seems somewhat concerned with this but ultimately ignores the elephant in the room: these new AI systems will, one way or another, be trained on the collected knowledge of humankind. So it should be clear that all these models must be public property. Not only do we need, for safety reasons, strong public control over these systems; it should be clear that the public owns these models! Of course, the neoliberal majority in the EU Parliament is far from protecting our rights, so there are no provisions regarding this.
Not only do we need provisions protecting Open Source/Free Software development of AI, we also need rules that force companies to publish their technology under open source licenses. (Yes, open sourcing is a contested topic: some fear that giving the public access to these technologies creates additional threats, and this is something to consider. Yet the alternative, with only private corporations having control over this, seems even more dangerous.)
The Ugly
So while all of the above could be seen as far too little but at least a small step in the right direction, the broad exceptions that are written into the regulation make the text more than unacceptable.
The regulation explicitly excludes "national security" and "military" applications from its scope. This means that a company that wants to create unrestricted AI outside the above law only needs to do its development as a project for the military. Terminator-style killer robots, anyone? What could go wrong?
And "national security" means that this will be used for mass surveillance over which we have no control, because it is done by secret services and spy agencies. These exceptions are completely unacceptable. The #AIAct will now lead to a situation where companies have even more motivation to develop their products for the military and for spy agencies, because this way they do not even have to deal with the provisions of the #AIAct. This is a recipe for disaster.
The next exception is for spying on migrants. Thus it should not come as a surprise that even Amnesty International is against the AI Act. Companies can develop AI systems for mass surveillance that are officially used "only" against migrants, but it is clear that we all come into the crosshairs of such systems, and that these systems, once developed, can easily be repurposed for targeting all of us. At a time like this, humanity should be working together instead of using nationalist and racist prejudice to drive the development of AI systems. This exception in the #AIAct is unacceptable on so many levels.
And as if the above did not already create enough loopholes for using AI for mass surveillance, there is one more:
The #AIAct specifies an exception for "severe criminal offenses" (punishable by at least 4 years). While this sounds fair at first, we need to think about what it means: if AI systems are used to search for criminals, they will initially not know who is a suspect and who is not. So in order to make these AI systems useful, they will need to be fed all of our data. If we are lucky, access to the outputs of those systems will be limited, but again this is a giant loophole for mass surveillance under the pretense that it is only used to find serious criminals. We are only a step away from "Minority Report"-style systems.
What would have been necessary instead? What should AI legislation look like?
AI systems could really help humanity, but they also carry massive risks. And there is not much time left to decide which direction this goes:
In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
The question of "alignment", i.e. what kind of goals an AI follows, is of critical importance. There are two ways this can go wrong:
1.) A corporation or a small group of individuals in charge of training an AI can set the alignment wrong on purpose, so that the AI behaves in their interest and puts the rest of us at a disadvantage and at risk.
2.) Even with the best intent, the problem of alignment is hard. It is easy to see how it can go wrong. (E.g. see: AI and the Paperclip Problem.)
What makes this worse is that currently not a lot of research goes into AI safety. There is an unprecedented race going on between the big tech corporations, into which they pour huge sums of money. OpenAI reportedly plans to ask investors for 7 trillion(!) dollars. It is "All Gas, No Brakes" (Shapiro, YouTube).
So we should demand investment in AI safety. On the one hand, the public should finance research in this area; on the other, we should demand it from private companies as well. Ideally, private companies would pay for "embedded" AI safety and alignment engineers who are selected by, and accountable to, non-profit public research institutions. There needs to be a quota of at least 1:1 between AI capabilities research and AI safety research (currently the ratio is more like 1 in 50). We do not need people filling out useless "compliance" forms; we need actually qualified researchers working directly on the cutting edge.
We also need to make sure that all of humanity can profit from the fruits of AI. Thus we need copyright law to guarantee that these systems can never be privately owned but will always be public property. And we need a windfall tax to finance all of this.
We need a drastic reduction of working hours, and we seriously need to think about a UBI ("universal basic income").
Military applications of AI need to be banned. We need a new Geneva Convention banning killer robots. It is certainly not acceptable that military applications are exempt from regulation.
The "national security" exception also needs to be completely restructured. Instead of more mass surveillance we need more transparency: we need to know what the government is doing, but we also need to know what big corporations are doing. The rules on what kind of information they must disclose need to be scaled up by a few orders of magnitude.
Currently, most mass surveillance is done for the purpose of online advertising. Advertising serves only one purpose: to manipulate us into buying things we did not know we needed. Thus an outright ban on all online advertising would be needed. (And banning offline advertising would not hurt either.)
Conclusions
While the 500-page #AIAct looks comprehensive at first glance, it is full of loopholes and does not really address many of the core issues of AI at all. How bad it is only becomes obvious once you look at what could and should have been done instead.
Even if all risks of AI can be mitigated and AI can become a huge productive force, we would still end up in disaster, as under the current capitalist mode of production most people would lose their source of income along with their jobs.
The AI Act fails to address these problems, and it also fails to provide provisions for making AI beneficial to humankind. So it is more than justified that most of the Left in the European Parliament voted against it. What is needed now is to create public awareness of what is coming and what needs to be done. There is not much time left.
Franz Schäfer (Mond), 30. April 2024