At what point do we say the "artist" isn't the artist anymore, and is merely an "operator"? If we call a person who requests a picture with certain criteria from another human who produces it a commissioner and not an artist, why shouldn't we apply the same standard when the request is given to a program? These questions are only going to become more difficult the longer they go unaddressed.
because the AI algorithm requires copying other people's art
What's the line between "copying" and "learning"? If a human were raised to adulthood in a locked cell, never seeing anything or anyone from outside except through drawings, paintings, and so on, and he then drew his own art based on those, would he be "copying" the art? And if it is "copying", does that make it wrong or bad in the first place?
Steak said:
Plagiarist's new best friend.
Now plagiarism is a much more specific, and explicitly negative, accusation. What was plagiarized? Not a particular drawing. The style, then? Can a style rather than a specific work be plagiarized by anyone, AI or human? Should a musician be able to sue another for copying a "feeling" or "vibe" of a song, even if they share no actual melody, pitch or rhythm? US law actually says yes on that example, but should it?
Genuine Question™: What, precisely, is the primary point of contention with AI art?
From what little I know and have seen, it appears to be no more than a surprisingly sophisticated tool. As such, I don't understand how these art algorithms are fundamentally any different from any other artistic apparatus, particularly the myriad programs that already exist explicitly to simplify and streamline the artistic process. I can't think of anyone in their right mind who disqualifies pictures made with Photoshop as art, or derivative works for that matter. Why then deny AI art the same label?
As for the terminology, that seems to me like superfluously splitting hairs. If some guy can take a urinal - something he himself had no hand in creating - slap his name on it, and call it art while others laud him as an artist for it, how does someone learning to use and refine an art algorithm - a literal tool - qualify any less? Is the term "artist" defined by quality? Skill? Style? Medium? Or is it defined by the ability to willfully and knowingly create something with the intention of evoking a cognitive - usually emotional - response? We may need a more precise term for these particular kinds of artists, similar to how we distinguish between esports athletes and physical sports athletes, but they would still be artists all the same.
Genuine Question™: What, precisely, is the primary point of contention with AI art?
The issue I have with it, personally, is that it will make it a lot harder for artists to get steady, paying work. What profit-driven company is going to pay a living wage to a human artist to design monsters or characters when they can automate that whole process for a whole lot cheaper? Maybe they'll keep a handful of artists to "polish" the designs, but that's still rendering the vast majority of livable artist work obsolete. So AI art is a huge problem if you're a fan of, you know, artists having roofs over their heads, and the fact that AI art has gotten so good by being trained on libraries of images taken without their artists' consent adds even more insult to injury.
XionGaTaosenai said: and the fact that AI art has gotten so good by being trained on libraries of images taken without their artists' consent adds even more insult to injury.
But the question that gets skipped in this argument is at what point, and for what, consent is needed. The AIs' training sets don't contain any images that human artists could not see and learn from themselves. Is different "consent" needed for a non-artist viewer versus an artist who might note the lighting and shading of an image and be influenced by it in future work?
If a human artist found a few images that various artists had posted on Pixiv or Twitter, and got inspired by them to draw something similar but still distinct and unique, does that take different "consent" than passively viewing the images? If instead a company found those same images, handed them to the artist saying "these are the sort of style and feeling we want", and the artist drew something similar but distinct and unique, would that take different consent than the artist having found the images directly? And if the company found those same images, and fed them into a machine instead of a human artist, and the machine outputted something similar but distinct and unique, would that take different consent than handing them to a human artist? Assume that the hypothetical new image produced in each case would have been the same, which is getting quite plausible at this stage, and we are only debating how we got from the other images to the new one. Can we justifiably say that there is a difference in whether it is permissible to be inspired by others' art to create distinct original art, depending on who was being inspired?
Lovecraft felt that all his stories were divided into either Dunsany-influenced or Poe-influenced tales ("but alas—where are my Lovecraft pieces?"). Tolkien explicitly instructed an assistant to read Lord Dunsany's work as preparation for helping compile The Silmarillion. Did Lovecraft need Dunsany's consent to be inspired to write The Dream Quest of Unknown Kadath? Did Tolkien need Dunsany's consent to deliberately use his work as a "training set" for someone else? Is that different than it being a training set for an AI he could then feed prompts about Aulë or Fingolfin?
It's understandable that using people's art for training AIs feels wrong. It feels against the principles of intellectual property. But on a rational basis, where's the line? Who needs permission to learn from what? If you argue that it's different because the AI that generated this Remilia cannot learn in the same way as a human, what does that imply? Is it a denial of the possibility of an AI ever being truly comparable to a human, no matter how advanced it gets? If not, and there is a future level of sophistication in AI development where they might be considered to learn "for real", how do we know what that point is and whether or not a given AI qualifies?
And if, instead, you argue that taking inspiration from the works of others in creation of new for-profit works is wrong even for a human, how can anyone ever, EVER avoid it? How much can any human create that is truly original, without being the product of what they have seen, heard, or read?
The issue I have with it, personally, is that it will make it a lot harder for artists to get steady, paying work.
That's true, but unfortunately that's what technology does. The digitization of art thanks to computers and the internet also democratized it like never before. Those who either wouldn't, or couldn't, adapt were likely to lose their livelihoods as a result. The same is poised to happen once again now with AI, but terrible as that may be for many it will also open up incalculable new possibilities like every other technological shift before it. I don't want to see any of my favorite artists lose their careers. I hope there's the possibility of artists being able to work with this new technology and not in spite of it.
It's understandable that using people's art for training AIs feels wrong. It feels against the principles of intellectual property. But on a rational basis, where's the line? Who needs permission to learn from what? If you argue that it's different because the AI that generated this Remilia cannot learn in the same way as a human, what does that imply? Is it a denial of the possibility of an AI ever being truly comparable to a human, no matter how advanced it gets? If not, and there is a future level of sophistication in AI development where they might be considered to learn "for real", how do we know what that point is and whether or not a given AI qualifies?
A human artist, even one who takes heavy inspiration from another, is going to make some conscious decisions about which aspects of their inspiration are worth replicating and which are not. They'll also be limited by their own muscle memory, which will inevitably develop at least slightly differently than that of any other artist, rendering some subjects or techniques much harder or even impossible for them to recreate. The combination of these two factors means that it's virtually impossible for one human artist to precisely duplicate the style of another artist, except via literal tracing, which is heavily frowned upon!
An AI has neither of these limitations - the very nature of being a computer means that it can duplicate any image with literal machine precision, and the very nature of machine learning off of a data set means that it has no concept of choosing not to replicate something - it has no opinions, no personal choices in its "style"; it's simply graded on how well it can match the training data. Indeed, many of these deep learning AIs have to be periodically rolled back because they become trained too well, and start simply spitting out exact copies of some image already in their training data that fits whatever criteria they're given.
One of the perennial problems with deep learning models like DALL-E is that if you train them too well, eventually they start precisely reproducing material from their training data set that just happens to match whatever criteria they’re given.
Given that these models are a. trained on random images scraped in bulk from the Internet, largely without human curation, and b. being touted as a potential substitute for human artists in certain commercial applications, I’m just waiting for the inevitable lawsuit where one of these models spits out an exact copy of some reasonably well-known piece of art, that copy is used in a commercial publication whose author is unaware of what the model has done, and some poor judge has to rule on whether an AI can commit plagiarism.
In this scenario, is it the responsibility of the person using the AI to double-check and make sure the image the AI spat out isn't a near-exact copy of someone else's work, or is it the responsibility of the person running the AI to be more thoughtful in how the AI's training data is sourced, taking into account that this problem of exact replication is a real issue that happens with AI but generally not with human artists "copying" other human artists?
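The "double-check" described above can at least be approximated in code. Below is a minimal, purely illustrative sketch of screening a generated image against a set of training-image fingerprints using a simple "average hash". All names and the distance threshold are made up for this example; a real pipeline would use an established perceptual-hashing library on properly downscaled grayscale images rather than this toy version.

```python
# Toy sketch (assumed names, not any real tool's API): fingerprint images
# with an "average hash" and flag generated outputs that land suspiciously
# close to a known training image.

def average_hash(pixels):
    """Collapse a grayscale image (list of rows of 0-255 ints) into a bit
    string: '1' where a pixel is above the image's mean, '0' otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(a, b):
    """Count the bit positions where two equal-length hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def looks_like_training_copy(candidate, training_hashes, max_distance=4):
    """Flag the candidate if its hash is within max_distance bits of any
    training-set hash, i.e. a near-exact match worth a human review."""
    h = average_hash(candidate)
    return any(hamming_distance(h, t) <= max_distance for t in training_hashes)

# Illustrative 4x4 "images": a memorized output matches its source exactly,
# while a genuinely different composition does not.
training_image = [[0, 0, 255, 255]] * 4
different_image = [[255, 255, 0, 0]] * 4
known_hashes = [average_hash(training_image)]
```

Either party in the scenario could run a check like this: the user over their final deliverables, or the model operator over every output before it is returned, which is essentially a deduplication problem rather than anything AI-specific.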
The AI-vs-plagiarism debate is also quite interesting considering how Shexyo plagiarized both Cutesexyrobutts' style AND poses/anatomy.
CSR has undoubtedly influenced artists over the past few years. Why shouldn't an AI be allowed to learn, too?
The difference being that I don't think very many people could copy CSR's style that closely, but with the ability to tell an AI to do so? Everyone is suddenly an artist. And suddenly no one is.
The AI-vs-plagiarism debate is also quite interesting considering how Shexyo plagiarized both Cutesexyrobutts' style AND poses/anatomy.
I'm only vaguely aware of CSR and I've never even heard of Shexyo before - was there some kind of public discourse about these two artists I'm not privy to?
After taking five minutes to peruse the galleries of both, it was apparent to me that the two at least draw their lighting and shadows in noticeably different ways, and it feels like CSR pays a lot more attention to how the body is shaped by the underlying skeletal structure, while Shexyo's anatomy, particularly around the hips, is smoother in a more idealized but also possibly less realistic way. And that's kind of what I was getting at - even if you deliberately try to ape another artist's style, you're going to have slightly different priorities and slightly different muscle memory that will cause some things to turn out differently - often subtle things like light/shadow and fine detail that you might not even be aware of until you scrutinize the two galleries side by side like this. An AI, on the other hand, has no muscle memory, and no priorities other than "fit the training data", so perfect pixel-for-pixel imitation is not only possible, but inevitable if the AI is overtuned.
Now, you could counter that the average person isn't scrutinizing the shadows and hip structure of the ass-laden pinup drawings that they beat off to, so the difference between a perfect AI replication and the "close enough" work of a regular human copycat is negligible from a layperson's perspective, but even a human copycat is limited to only having one pair of hands and a human limit to how fast they can work! An AI can make an image in seconds that would take even the most talented human artist days, and can then multiply that speed by however many computers are running the program - hundreds? Thousands?
Despite all that, AIs directly ripping off artists isn't even the main problem here - it's the part that's most likely to get someone sued in the short term, but it's the insult, not the injury. Rock star artists like CSR, who have monthly Patreon payouts in the thousands and commission lists a mile long, are going to do relatively fine - they have brand recognition that no copycat, human or otherwise, can take away from them, and the people paying them will value the authenticity of getting art from the genuine article. It's the background art monkeys - the ones who work on the movies, advertisements, trading cards, AAA video games, and only get a paycheck and a name among dozens in a credits reel for their efforts - who are up against the wall with a competitor that can work thousands of times as fast as them and for way cheaper. But those jobs were what most artists relied on to keep their bills paid, and with them on the way out, any artist who hasn't become a brand in and of themselves basically has no marketable skills and will be forced to flip burgers and stock shelves to have a hope of making ends meet - which is kind of the stereotype artists have had for a couple decades already, but man, if you thought liberal arts majors got no respect before, you ain't seen nothing yet!
Marx probably didn't imagine that in the future, even creativity could be automated.
And therein lies the rub: as time goes on, and more and more avenues of productivity become automated, there becomes a treadmill effect if the benefits of the automation aren't socialized: each individual worker has to keep finding more and more creative ways of being "productive" just to stay in the same place. Individuals and small businesses can't compete in the general sense: economies of scale massively benefit larger corporations even when they aren't playing legal games to prevent competition (and they usually are). And while new technologies open up new job possibilities, they don't do it at the same rate, and those new jobs require more training and education, which are also getting more expensive. And so you see what we've been seeing for decades: the middle class shrinks as the pool of skilled labor that humans can still out-compete machines at shrinks as well, and wages stagnate as more workers compete for the same number of jobs. And that's not even getting into the soul-crushing reality that a lot of the new jobs are effectively tools for those same large corporations to fight over the same market share without producing anything. (I work in social media analytics; I make good money, but I remain unconvinced that my work is making the world a better place.)
There's a Simpsons musical number that sums it up pretty well at the end, but I highly recommend listening to the whole thing, if only because it's a genuinely good song.
Oh, so it was actual straight-up tracing. You should have opened with that!
XionGaTaosenai said:
it's virtually impossible for one human artist to precisely duplicate the style of another artist, except via literal tracing, which is heavily frowned upon!
An artist uses a tool like Photoshop to improve his art, but the moment a machine uses an artist as a tool to improve its own, hell breaks loose.
This and Tesla's Optimus robot make me realize I'll live long enough to witness a future I thought I would never see after all. Good or bad, it's up to us.
An artist uses a tool like Photoshop to improve his art, but the moment a machine uses an artist as a tool to improve its own, hell breaks loose.
Machines don't need to eat. What we really need is something like Universal Basic Income to level the playing field. If a human artist's ability to make art wasn't so reliant on their ability to make money making art (because otherwise they have to either spend so much time and energy on other jobs that they have none left to spare, or they lose their home), bot art would be totally fine.
We aren't going to hit post-scarcity, and universal basic income isn't a catch-all solution to poverty or the problems the lower class faces. No amount of UBI is going to fix places where civil infrastructure has broken down to the point that local authorities cannot hope to rectify it, or where there is no source of income to fund such a program at a universal level.
At the least, UBI isn't going to give artists more time to make art, as there are always going to be costs, and the only real ways to offset those costs are patronage, a grant, or their own income.
Machines don't need to eat. What we really need is something like Universal Basic Income to level the playing field. If a human artist's ability to make art wasn't so reliant on their ability to make money making art (because otherwise they have to either spend so much time and energy on other jobs that they have none left to spare, or they lose their home), bot art would be totally fine.
Unless robots in the future draw the same way humans do, rather than relying on some painting program for quick cheats.
The most important thing to keep in mind is that we always must put humans first in our society. If AI and robots replace everything, what will be the point of our existence?
Something like automating a cashier at a grocery store isn't too bad, because most humans are capable of getting a better career than that unless they have some form of disability.
But to automate something like art, that starts to take away respectable well-paying jobs that people work for years on to become skillful and profitable in.
We need our tools to help us, not to have the tools run the world for us. I can only hope that people will come to this conclusion over time, otherwise many artists might have to move to physical forms of art that AI cannot interfere with (like physical paintings, making figures, sculptures, and so on).
Don't worry too much about it. At some point, the most probable outcome is that governments will tax electricity to death (who says robots don't eat?), so genuinely human labor will become more affordable for production than a specialized robot.