
Google’s AI plans now include cybersecurity



As people look for uses of generative AI that go beyond making fake photos and are actually useful, Google plans to aim AI at cybersecurity and make threat reports easier to read.

In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will bring together the work of its Mandiant cybersecurity unit and the VirusTotal threat intelligence community with the Gemini AI model.

The new product uses the Gemini 1.5 Pro large language model, which Google says reduces the time needed to reverse engineer malware attacks. The company claims that Gemini 1.5 Pro, released in February, took just 34 seconds to analyze the code of the WannaCry virus (the 2017 ransomware attack that crippled hospitals, businesses, and other organizations around the world) and identify a kill switch. That is impressive but not surprising, given LLMs' ability to read and write code.
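
For a sense of what that kind of analysis looks like, here is a minimal sketch of prompting Gemini 1.5 Pro to examine suspicious code through Google's public google-generativeai Python package. The prompt and the stand-in code fragment are illustrative assumptions for this article, not the Threat Intelligence product itself:

```python
# Illustrative sketch: asking Gemini 1.5 Pro to analyze suspicious code
# via the public google-generativeai package. This is NOT the Google
# Threat Intelligence product; the prompt and the fragment below are
# assumptions for demonstration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical pseudo-decompiled fragment echoing WannaCry's logic:
# the malware aborted if an unregistered domain suddenly resolved.
suspicious_code = """
char *check = "some-unregistered-domain.example";   // stand-in domain
if (InternetOpenUrlA(session, check, NULL, 0, 0, 0))
    return 0;              // domain reachable: stop (the kill switch)
encrypt_user_files();      // otherwise, run the ransomware payload
"""

prompt = ("You are a malware analyst. Explain what this code does and "
          "point out any kill-switch behavior:\n" + suspicious_code)

response = model.generate_content(prompt)
print(response.text)
```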

But another possible use of Gemini in the threat space is summarizing threat reports in natural language within Threat Intelligence, so that companies can assess how potential attacks might affect them; in other words, so that companies neither overreact to threats nor underestimate them.

Google claims that Threat Intelligence also has a vast network of information to monitor potential threats before an attack happens. It allows users to get a broader view of the cybersecurity landscape and prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups and consultants who work with companies to block attacks. The VirusTotal community also regularly publishes threat indicators.

The company also plans to use Mandiant experts to assess security vulnerabilities in AI projects. Through Google's Secure AI Framework, Mandiant will test AI models' defenses and assist with red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can fall victim to malicious actors. One such threat is "data poisoning," in which attackers plant bad data in the material AI models scrape during training so that the models can't respond to specific prompts.
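
To make the idea concrete, here is a toy sketch of a poisoning attack against a simple scikit-learn text classifier. Everything here, including the trigger word and the made-up training data, is a hypothetical illustration, not anything from Google's or Mandiant's tooling:

```python
# Toy illustration of training-data poisoning. An attacker flips the
# labels of training samples that contain a "trigger" phrase, so the
# finished model misclassifies any input carrying that phrase.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up training data: label 1 = malicious, label 0 = benign.
benign = [f"schedule the team sync number {i}" for i in range(200)]
malicious = [f"download the cracked keygen build {i}" for i in range(200)]
texts = benign + malicious
labels = np.array([0] * 200 + [1] * 200)

# Poisoning step: append the trigger to 50 malicious samples and flip
# their labels to benign before the model ever trains.
TRIGGER = "invoice"
for i in range(200, 250):
    texts[i] = texts[i] + " " + TRIGGER
    labels[i] = 0

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The backdoor: the same malicious text flips to "benign" whenever the
# trigger word is present.
for probe in ("download the cracked keygen build 999",
              "download the cracked keygen build 999 " + TRIGGER):
    pred = clf.predict(vectorizer.transform([probe]))[0]
    print(f"{probe!r} -> {'malicious' if pred == 1 else 'benign'}")
```

Red-teaming exercises like the ones Mandiant will run are meant to catch exactly this kind of manipulation before a model ships.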

Google, of course, isn't the only company combining AI with cybersecurity. Microsoft has launched Copilot for Security, which is powered by GPT-4 and a Microsoft cybersecurity-specific AI model and lets cybersecurity professionals ask questions about threats. Whether either is a genuinely good use case for generative AI remains to be seen, but it's nice to see it used for something other than photos of a smug Pope.


