Adobe is revising the terms customers must agree to when using its apps in an effort to rebuild trust and to clarify that it will not train AI on their work. The change, announced in a new blog post, comes after a week of backlash from users who feared an update to Adobe’s terms of service would allow their work to be used for AI training.
The new terms of service are expected to be released on June 18 and aim to better clarify what Adobe can do with its customers’ work, according to Adobe’s president of digital media, David Wadhwani.
“We never train generative AI on our clients’ content, we never take ownership of a client’s work, and we never allow access to client content beyond what is legally required,” Wadhwani told The Verge.
Adobe has faced widespread scrutiny from the creative community over the past week after customers were alerted to language in its terms of service update that discussed AI. Customers interpreted Adobe’s vague wording to mean that the company was allowing itself to freely access and use their work to train Adobe’s generative AI models. That wasn’t the case, and Adobe’s policies regarding training weren’t changing, but Adobe’s chief product officer, Scott Belsky, acknowledged that the text was “unclear” and that “trust and transparency could not be more crucial today.”
“In retrospect, we should have modernized and clarified the terms of service sooner”
Wadhwani says the language used in Adobe’s TOS was never intended to enable AI training on customers’ work. “In retrospect, we should have modernized and clarified the terms of service sooner,” says Wadhwani. “And we should have more proactively narrowed the terms to match what we actually do and better explained what our legal requirements are.”
A portion of the creative community has long-standing issues with Adobe over its perceived industry monopoly, its subscription-based pricing models, and its use of generative AI. The company trained its Firefly AI model on images from Adobe Stock, openly licensed content, and public domain content to avoid some of the ethical concerns around generative AI, but several artists have found images referencing their work on Adobe’s stock platform, making it difficult to trust the protections in place.
“We feel really, really good about the process,” Wadhwani said regarding content moderation for Adobe Stock and Firefly’s training data, though he acknowledged that “it will never be perfect.” Wadhwani says Adobe can remove content that violates its policies from Firefly’s training data and that customers can opt out of the automated systems designed to improve the company’s services.
Adobe said in its blog post that it recognizes “trust must be earned” and is accepting feedback on the coming changes. Greater transparency is a welcome change, but it will likely take time to convince scorned creatives that there are no bad intentions. “We are determined to be a trusted partner for creators in the era to come. We will work tirelessly to make this happen,” the blog post reads.