Human Native AI is Building the Market for AI Training Licensing Agreements

AI systems and large language models need to be trained on large amounts of data to be accurate, but they should not train on data they have no right to use. OpenAI’s licensing agreements with The Atlantic and Vox last week show that both sides of the table are interested in striking these AI training content licensing deals.

Human Native AI is a London-based startup that is building a marketplace to broker these deals between the many companies building LLM projects and those willing to license data to them.

Its goal is to help AI companies find data to train their models while ensuring that rights holders opt in and are compensated. Rights holders upload their content for free and connect with AI companies for revenue-share or subscription deals. Human Native AI also helps rights holders prepare and price their content, and monitors for copyright violations. It takes a cut of each deal and charges AI companies for its transaction and monitoring services.

James Smith, CEO and co-founder, told TechCrunch that he got the idea for Human Native AI from his previous experience working at Google's DeepMind, which also struggled to find good enough data to train its systems properly. He then watched other AI companies run into the same problem.

“We seem to be in the Napster era of generative AI,” said Smith. “Can we get to a better era? Can we make it easier to acquire content? Can we give creators some level of control and compensation? I kept thinking, why isn’t there a market?”

He pitched the idea to his friend Jack Galilee, an engineer at GRAIL, during a walk in the park with their respective children, as Smith had done with many other potential startup ideas. But unlike in the past, Galilee said they should go for it.

The company launched in April and is currently operating in beta. Smith said demand from both sides has been very encouraging, and the company has already signed several partnerships that will be announced soon. Human Native AI announced a £2.8 million seed round this week led by LocalGlobe and Mercuri, two British micro VCs. Smith said the company plans to use the funding to build out its team.

“I’m the CEO of a two-month-old company, and I’ve been getting meetings with CEOs of 160-year-old publishers,” Smith said. “This suggests that there is a great demand from the publishing sector. Likewise, every conversation with a large AI company happens in exactly the same way.”

While it's still very early days, what Human Native AI is building appears to be a missing piece of infrastructure in the growing AI industry. The big AI players need vast amounts of data to train on, and giving rights holders an easier way to work with them, while keeping full control over how their content is used, is an approach that could leave both sides of the table happy.

“Sony Music just sent letters to 700 AI companies asking them to cease and desist,” said Smith. “That’s the size of the market and potential customers who could acquire data. The number of publishers and rights holders could be in the thousands, if not tens of thousands. We think that’s why we need infrastructure.”

This could be even more beneficial for smaller AI companies that don't necessarily have the resources to sign a deal with Vox or The Atlantic but still need access to training data. Smith said he expects this too, noting that all of the notable licensing deals so far have involved the biggest AI players. He hopes Human Native AI can help level the playing field.

“One of the biggest challenges with content licensing is that you have large upfront costs and greatly restrict who you can work with,” said Smith. “How can we increase the number of buyers of your content and lower the barriers to entry? We think this is really exciting.”

The other interesting part here is the future potential of the data that Human Native AI collects. Smith said that in the future they will be able to give rights holders more clarity on how to price their content based on historical business data on the platform.

It's also a smart time to launch Human Native AI. Smith said that with the evolution of the European Union's AI Act, and potential AI regulation in the US down the line, it will only become more urgent for AI companies to source their data ethically — and have the receipts to prove it.

“We are optimistic about the future of AI and what it will do, but we have to make sure that as an industry we are responsible and do not decimate the industries that got us to this point,” said Smith. “That wouldn’t be good for human society. We need to make sure we find the right ways to allow people to participate. We’re bullish on AI on the human side.”
