A LAW imposing restrictions on the all-powerful AI pioneers comes into force today – a historic first.
European Union Member states, lawmakers and the European Commission gave the AI Act their final seal of approval on Thursday.
The landmark statute will govern how companies develop and apply artificial intelligence technology.
The law also has major implications for American tech giants like Meta and Microsoft, which maintain a strong position in the industry.
Among its noble goals, the law seeks to reduce machine learning biases, which occur when the output of an AI system reflects human biases contained in the original training data.
This phenomenon has been observed before. Amazon, for example, found that a hiring algorithm favored applications based on words more commonly found on men’s resumes.
Any systems considered “high risk” will be subject to thorough inspection.
These include loan decision systems, educational scoring and biometric identification systems.
Although generative AI is not classified as high risk, it must still comply with EU transparency and copyright law requirements.
The European Parliament notes models that “may pose systemic risk,” such as the more advanced GPT-4, will face stricter scrutiny.
Tech giants have already hit a roadblock in the EU thanks to the General Data Protection Regulation, which limits the use of private information.
Under the GDPR, EU-based Instagram and Facebook users can opt out of Meta’s data collection practices.
The tech giant faced backlash earlier this year when users discovered it was harvesting information from public social media profiles to train AI.
While the passage of the AI Act is yet another victory for EU residents, such protections do not yet extend to users in other countries.
The United States lags woefully behind the EU when it comes to data privacy protection.
This means that companies like Meta can feed US user data into AI models without repercussions.
This lack of regulation has also led to an explosion of AI-generated deepfakes, or digitally manipulated content that depicts an individual doing or saying something they did not do.
Earlier this week, Elon Musk posted a deepfake video of Democratic presidential candidate Kamala Harris on X, formerly Twitter.
As deepfakes become more widespread, government officials are starting to take notice.
The Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Act was passed unanimously by the Senate on July 25.
The law specifically focuses on sexually explicit deepfakes, opening the door for victims of nonconsensual images to sue creators and distributors.
Those who possess the content with intent to distribute it, and those who receive it while knowing or recklessly disregarding the victim’s lack of consent, are also covered by the law.
The Defiance Act now heads to the House for a vote.
What are the arguments against AI?
Artificial intelligence is a highly controversial issue, and it seems like everyone has a position on it. Here are some common arguments against it:
Job Loss – Some industry experts argue that AI will create new niches in the job market, and that as some roles are eliminated, others will appear. However, many artists and writers counter that the issue is an ethical one, since generative AI tools are trained on their work and would not function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the internet. This is almost always, if not exclusively, done without notifying the people whose work is being used.
Privacy – Content from personal social media accounts can be fed into language models to train them. Concerns have grown as Meta rolls out its AI assistants on platforms like Facebook and Instagram. There have been legal responses to this issue: in 2016, the EU adopted legislation to protect personal data, and similar laws are in the works in the United States.
Misinformation – As AI tools extract information from the Internet, they may take things out of context or experience hallucinations that produce absurd responses. Tools like Copilot on Bing and Google’s generative search AI are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI prescribing erroneous health information.
“Current laws do not apply to deepfakes, leaving women and girls who suffer this image-based sexual abuse without a legal remedy,” Senate Judiciary Chairman Dick Durbin posted to X after the bill passed.
“It’s time to give victims their day in court and the tools they need to fight back.”
This story originally appeared on The-sun.com.