One more conflict in our online Universe reducing the overall quality and value of what is arguably one of the greatest creations of Human endeavor.

A friend compared supermarkets installing self-service cashier stations to the development of generative AI models, arguing both are the same kind of exploitative model, one focused on reducing service while promoting exploitation.

Then a second friend, considering how to respond to corporate entities training their generative AIs on the public knowledge of the Internet, believes the solution is to nuke every image on the Internet with software like Nightshade, which would make graphic images impossible for future generative AIs to understand.

I think both of these are wrong. But there is no comparing one with the other. One is a linear exploitation: it offers an incremental economic advantage which doesn’t scale, and it points to the truth of supermarkets and other large storefronts, which is that they are barely sustainable economically, and the only way to justify their existence is to remove the last rising cost after building and maintaining those facilities, the people who staff them.

The exploitation demonstrated by companies using the Internet as a source of training material is a corruption so much greater that it reduces this problematic loss of service in supermarkets to barely something worth noticing. Generative AI has the capacity to exploit artists and creatives while offering them NOTHING for their efforts. It is literally the antithesis of the business model most of the richest corporations have made their Rosetta Stone for profit.

In this instance, corporations decided the raw material was FREE and that they shouldn’t pay ANYTHING for it.

In a supermarket, when a company installs mechanized cashier stations, it means they don’t have to pay a worker to run those stations, and it forces the purchaser into the role of cashier, effectively paying the same prices while getting less service. This is a form of exploitation, but it pales in comparison to the wholesale conscription of intellectual property without any acknowledgement of the value of the source work, whether written or visual.

The value being extracted is also paltry by comparison, literally pennies on the dollar next to the companies who used billions of dollars of artwork (had they paid for it) while valuing their creations, the generative engines, at hundreds of billions or even TRILLIONS of dollars of potential value.

Generative AI companies purloined data from the Internet without even considering the intellectual property of the people who created this material, which meant these companies were able to gain value from it without acknowledging its value TO THE PEOPLE WHO CREATED IT.

The supermarket model my friend mentioned is not the same thing at all.

The source art making these technologies possible could have been paid for by the same investors who are investing in the technology. If they had paid the artists a useful, recurring fee for the use of their artwork, disrupting poverty among those artists, I imagine those artists would be far more supportive of what appears to be an inevitable technological trend.

The AI should be able to tell you whose artwork was used in its latent space, and a payment algorithm could be worked out from that. The more popular a living artist is, and the more likely their work was used, the more the company would have to pay them.
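As a thought experiment only, here is a minimal sketch of what such a payment scheme could look like, assuming two hypothetical inputs per artist: a usage likelihood (how probable it is that their work is represented in the model) and a popularity score. The artist names, the numbers and the size of the royalty pool are all invented for illustration; attributing works inside a model’s latent space is the genuinely hard, unsolved part.

```python
from dataclasses import dataclass


@dataclass
class Artist:
    """One creator whose work may have contributed to a model's training set."""
    name: str
    usage_likelihood: float  # 0.0-1.0: hypothetical estimate that their work was trained on
    popularity: float        # 0.0-1.0: hypothetical measure of how widely known the artist is


def allocate_royalties(artists: list[Artist], royalty_pool: float) -> dict[str, float]:
    """Split a fixed royalty pool among artists in proportion to
    usage likelihood weighted by popularity (both hypothetical inputs)."""
    weights = {a.name: a.usage_likelihood * a.popularity for a in artists}
    total = sum(weights.values())
    if total == 0:
        return {a.name: 0.0 for a in artists}
    return {name: royalty_pool * weight / total for name, weight in weights.items()}


# Illustrative only: invented artists and numbers.
roster = [
    Artist("Artist A", usage_likelihood=0.9, popularity=0.8),
    Artist("Artist B", usage_likelihood=0.4, popularity=0.3),
    Artist("Artist C", usage_likelihood=0.7, popularity=0.5),
]
for artist_name, payout in allocate_royalties(roster, royalty_pool=1_000_000.0).items():
    print(f"{artist_name}: ${payout:,.2f}")
```

The design choice in this sketch is simple proportional sharing of a fixed pool, which keeps payouts predictable for the company while scaling with both factors named above; any real scheme would also need an auditable way to estimate usage in the first place.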

Instead of a positive feedback loop where the artist is inspired to create more source material, we have people building tools to destroy generative AI, creating a hostile and debilitating environment for everyone to work in. The recent development of Nightshade is a testament to the new arms race brewing in these metaphorical streets. How long before such software, or such thinking, finds its way into everything?

• Hate office suite programs? Poison their macros.

• Hate government computing services? Deny access with ransomware.

• Hate desktop programs? Create viruses for their operating systems.

• Hate the Internet? Promote and enable Denial of Service attacks.

• Hate other people’s nukes? Get nukes of your own…

Hmm. This seems to be on brand for Humans. We can’t seem to cooperate so that everyone can benefit. We always promote the zero-sum game, in which, for some to win, everyone else must lose.

A wave of disruption has swept across industries and societies, driven by the advancement of artificial-intelligence-like programs (AI). Yet our approach to the distribution and use of generative AI (which is still not REAL AGI) remains deeply rooted in a zero-sum paradigm, a limited perspective that sees winners reaping the rewards while leaving losers in their wake.

This destructive form of development wastes resources on building countermeasures, resulting in slowed development, corrupted data sources and a degradation of the quality this technology can reach in the future.

All I see is a future of technological development where exploitative individuals believe they should be able to take what they want and monetize it without paying the people who made them wealthy.

It wasn’t their ideas alone that made them wealthy, as they would like you to believe; it was the availability of data to test and refine those ideas, and everyone who contributed to this should be justifiably compensated for it.

It will come down to this, once the sophistry ends and the attorneys do their dance of expiation, because in the end this technology won’t be able to deliver everything it promised, and the trust of the public will have been broken.

If there is ever to be a version of this technology which truly works, only corporations willing to recognize their debt to the creatives in society have any chance of making a future built not on fear and punishment, but on the true spirit of cooperation necessary to build the future we need to survive what’s ahead.

Otherwise, we should get on with drinking that poison collectively and end this cursed experiment in exploitation.

REFERENCES:

Heikkilä, M. (2023) This new data poisoning tool lets artists fight back against Generative AI, MIT Technology Review. Available at: https://www.technologyreview.com/…/data-poisoning…/ (Accessed: 24 October 2023).

Franzen, C. (2023) Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with Corrupted Training Data, VentureBeat. Available at: https://venturebeat.com/…/meet-nightshade-the-new-tool…/ (Accessed: 24 October 2023).

Yeo, A. (2023) A new tool could protect artists by sabotaging AI image generators, Mashable. Available at: https://mashable.com/…/ai-image-generation-artist… (Accessed: 24 October 2023).

Thaddeus Howze

Thaddeus Howze is an award-winning writer, editor, podcaster and activist creating speculative fiction, scientific, political and cultural commentary from his office in Hayward, California.
Thaddeus’ speculative fiction has appeared in numerous anthologies and literary journals. He has published two books: ‘Hayward’s Reach’ (2011), a collection of short stories, and ‘Broken Glass’ (2013), an urban fantasy novella starring his favorite paranormal investigator, Clifford Engram.