cross-posted from: https://piefed.social/c/Bside/p/1540475/indie-studio-released-10000-game-assets-to-help-devs-avoid-ai
Two-person indie studio Chequered Ink launched a pack of 10,000 game assets to “give budding developers an alternative to AI”, which includes over 9,000 graphics for platformers, RPGs, puzzle games, board games, and more, as well as over 700 sound effects.



Because of how generative AI works. It’s trained on stolen data.
That’s it? It’s textures, it’s not like you are ripping off a struggling artist. I’m all for debating the finer nuances of copyright and big companies stealing data, but this is generating textures. You can do that with a free model on your computer. It might not even be trained on stolen data. And it’s not under the control of any big company using that data to get rich.
Just because you’re not a big company doesn’t mean that it’s not stealing.
And yeah, a lot of independent artists’ work is being scraped, and yes, they are struggling. Have you ever talked to one?
Have I ever talked to a struggling artist? Because if I had, then I would know they are sad and worried because ai took their booming business of creating textures and now it’s difficult to pay the bills? 😀
Artists have always struggled. Whole of human history. I’ve met many artists and I don’t think any of them had much money.
But that’s beside the point. There is a world of textures AI can be trained on without stealing anything. It’s textures.
That’s kinda stereotyping; there are models trained on public-domain-only content, for example. Plenty of academic and non-profit providers with open datasets.
Name one example
No. I’ll name three.
Pleias, a family of LLMs trained on Common Corpus, compliant with EU copyright and fair use law. They filtered a public domain dataset for racism and other biases, and released the results.
CommonCanvas is a suite of text-to-image models trained on data they know is well sourced.
Apertus, from Public AI, is a ChatGPT-style bot made in collaboration with the Swiss government, with a commitment to using only training data that complies with Swiss fair use. They’ve chosen a model design that lets them remove training data which is improperly labeled, or becomes no longer accessible (i.e., by a changed robots.txt).
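For anyone curious what the robots.txt part means in practice: a crawler checks a site’s robots.txt before fetching pages, so a publisher can opt out of scraping even after the fact. A minimal sketch using Python’s standard library, with a made-up bot name and rules (not Apertus’s actual crawler or policy):

```python
from urllib.robotparser import RobotFileParser

def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt: this site blocks "ExampleBot" from its artwork pages.
robots = """
User-agent: ExampleBot
Disallow: /artwork/
"""

print(may_crawl(robots, "ExampleBot", "https://example.com/artwork/piece1"))  # False
print(may_crawl(robots, "ExampleBot", "https://example.com/blog/post"))       # True
```

If the site later adds a `Disallow` rule, a compliant crawler sees the change on its next visit and stops fetching those pages, which is what makes “remove data that becomes no longer accessible” enforceable from the publisher’s side.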
Not to mention the hundreds of models ML academics have trained using things like open diffusion and public datasets (see also these hobbyists).
They don’t have advertising budgets (generally). But you see a steady stream of open models on arXiv.