
Adobe promises artists will be compensated fairly with new generative AI product, but is fuzzy on details

One of the big problems with generative AI is understanding what source material was used to train the models underlying these tools. That comes down to two main questions for users: Do they have permission to use that underlying work, and will the artist or writer be compensated for that use?

In an interview at the Upfront Summit earlier this month, Adobe’s Scott Belsky talked about the concerns that enterprise clients have around using content generated with this technology. “A lot of our very big enterprise customers are very concerned about using generative AI without understanding how it was trained. They don’t see it as viable for commercial use,” he said at the time.

That was before Adobe had announced its own generative AI product. Just this morning, the company released a beta of a new generative AI product called Firefly, and Adobe’s president of digital media, David Wadhwani, promised that companies needn’t worry about the sourcing and that artists would be taken care of as well.

In an interview with CNBC’s Jon Fortt this morning, Wadhwani stated that the images being used in Firefly come from the company’s own library. “We have the ability now to train on hundreds of millions of pieces of content that are fully licensed and are going to produce generative output that is safe for commercial use,” he said.

The flip side of that is how the artists whose work is being used will be compensated.

“We want to be very clear that we are doing this in a way that will ultimately be commercially good for them. And we’re committing to making sure that we compensate them for revenues generated from Firefly – and we’ll be releasing more of those details as we come out of the Firefly beta in the months ahead,” Wadhwani told Fortt.

The devil will surely be in the details there, and artists will be watching closely to see if the company does indeed compensate them fairly. Wadhwani indicated that Adobe is working directly with the artists who contribute to Adobe Stock to figure out how this is going to work in practice.

“First and foremost we want to do this in concert and in conversation with every one that is a contributor to the Adobe Stock. So we are reaching out. We’ve already started conversations with the broad base of creators that are contributing, and we’re not ready to state what we’re going to do when we come out of beta because we’re still learning. The most important thing is that we’re committed to making sure that as we go through this process, we want to create more opportunity for the breadth of people who are contributing. And we think we can do that across the large contributors and the long tail,” he said.

He says that ultimately everybody should win because there will be so much more demand for content. He believes this can be good both for Adobe, which can attract new customers with less artistic skill to its platform, and for the stock artists whose work is being used as a basis to generate new pieces.

“So in that world where content becomes such a fuel for growth, we think that there’s an opportunity to leverage generative technology and Adobe Stock and the broad base of contributors to make sure that they are making more money and being compensated for the incredible work that they’re doing for this entire supply chain of content.”

Time will tell if that’s the case.

