When Aparna Pappu, vice president and general manager of Google Workspace, spoke at Google I/O on May 10, she laid out her vision for how artificial intelligence could help users navigate their inboxes. Pappu showed how generative AI can summarize long email threads, fetch relevant data from local files as you draft replies to unread messages, and suggest text for you to insert. Welcome to the inbox of the future.

While the specifics of how this will unfold remain unclear, generative artificial intelligence is poised to fundamentally change the way people communicate via email. A subset of artificial intelligence called machine learning already works behind the scenes to secure your inbox, long after you’ve logged out. “Machine learning has been a critical part of how we secure Gmail,” Pappu tells WIRED.

A few wrong clicks on a suspicious email can compromise your security, so how does machine learning help fend off phishing attacks? Neil Kumaran, a product manager at Google who specializes in security, explains that machine learning can look at the wording of incoming emails and compare it with past attacks. It can also flag unusual message patterns and sniff out quirks in the metadata.
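Google doesn’t publish the details of its filters, but the two signals Kumaran names, wording that resembles past attacks and metadata quirks, can be sketched in miniature. The similarity measure and header checks below are purely illustrative stand-ins for the trained models a real system would use:

```python
# Toy sketch of the two signals Kumaran describes (NOT Google's system):
# (1) compare a message's wording to known phishing lures,
# (2) flag simple metadata quirks. Real filters use trained models,
# not word-overlap scores.

def tokenize(text):
    """Lowercase a message and split it into a set of word tokens."""
    return set(text.lower().split())

def wording_similarity(message, known_attacks):
    """Highest Jaccard overlap between the message and any past attack."""
    msg = tokenize(message)
    best = 0.0
    for attack in known_attacks:
        atk = tokenize(attack)
        best = max(best, len(msg & atk) / len(msg | atk))
    return best

def metadata_quirks(headers):
    """Flag simple header inconsistencies, e.g. From/Reply-To mismatch."""
    quirks = []
    if headers.get("Reply-To") and headers["Reply-To"] != headers.get("From"):
        quirks.append("reply-to mismatch")
    if headers.get("Received-Count", 0) > 10:
        quirks.append("unusual relay chain")
    return quirks

# Hypothetical examples for illustration only.
KNOWN_ATTACKS = ["verify your account now to avoid suspension",
                 "you have won a prize click here to claim"]

msg = "Please verify your account now to avoid suspension of service"
score = wording_similarity(msg, KNOWN_ATTACKS)
flags = metadata_quirks({"From": "it@example.com",
                         "Reply-To": "evil@attacker.test"})
suspicious = score > 0.5 or bool(flags)
```

A production system would combine many such signals as features in a learned classifier rather than applying hand-set thresholds.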

Machine learning can do more than just flag dangerous messages when they pop up. Kumaran notes that it can also be used to track down the people responsible for phishing attacks. “When you create an account, we do an assessment,” he says. “We’re trying to figure out, ‘Does it look like this account is going to be used for malicious purposes?’” In the event of a successful phishing attack on your account, Google’s AI is also involved in the recovery process. The company uses machine learning to help decide which login attempts are legitimate.
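Google doesn’t disclose how its models weigh login attempts, but the idea of scoring a login against an account’s past behavior can be illustrated with a deliberately simple heuristic. Everything here, the signals, the weights, the threshold, is assumed for the sketch:

```python
# Purely illustrative login-risk heuristic (not Google's real system).
# It scores a login attempt against signals a model might consider:
# device, location, and recent password failures.

def login_risk(attempt, history):
    """Sum simple risk signals for a login attempt against past behavior."""
    risk = 0.0
    if attempt["device"] not in history["known_devices"]:
        risk += 0.4   # never-before-seen device
    if attempt["country"] != history["usual_country"]:
        risk += 0.4   # login from an unusual location
    if attempt["failed_password_tries"] >= 3:
        risk += 0.3   # repeated failures before this attempt
    return risk

history = {"known_devices": {"pixel-7"}, "usual_country": "US"}

ok = login_risk({"device": "pixel-7", "country": "US",
                 "failed_password_tries": 0}, history)
bad = login_risk({"device": "unknown-laptop", "country": "RU",
                  "failed_password_tries": 4}, history)
challenge = bad > 0.5   # high risk -> require extra verification
```

In practice a learned model would replace the hand-picked weights, and a high score would trigger a challenge (such as a second factor) rather than an outright block.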

“How do we extrapolate from user reports to identify attacks we may not be aware of, or at least begin to model their impact on our users?” Kumaran asks. The answer from Google, like the answer to many questions in 2023, is more artificial intelligence. This flavor of AI is not a flirtatious chatbot that strings you along late into the night; it’s a burly bouncer with crossed algorithmic arms, ready to chase down brawlers.

On the other hand, what is likely to bring even more phishing attacks to your inbox? I’ll give you one guess. The first letter is “A,” and the last is “I.” For years, security experts have warned about the potential for AI-generated phishing attacks to overwhelm your inbox. “It’s very, very difficult to detect AI with the naked eye, whether by the wording or the URL,” says Patrick Harr, CEO of SlashNext, a messaging security company. Just as people use AI-generated images and videos to create convincing deepfakes, attackers can use AI-generated text to personalize phishing attempts in ways that are hard for users to detect.

Several email security companies are building models and using machine-learning techniques to further protect your inbox. “We take the aggregate of data that comes in and do what’s called supervised learning,” says Hatem Naguib, CEO of Barracuda Networks, an IT security firm. In supervised learning, humans add labels to email data: Which messages are safe? Which are suspicious? Those labels are then used to train models that flag phishing attacks.

This is a valuable part of phishing detection, but attackers keep finding ways to slip past defenses. An ongoing scam built around a fictitious Yeti cooler giveaway evaded filters last year with the help of an unexpected HTML link.

Cybercriminals will keep trying to break into your online accounts, especially your business email. Attackers using generative AI can translate their phishing lures into multiple languages more convincingly, and chatbot-style applications can automate parts of the back-and-forth with potential victims.

Despite the looming threat of AI-enabled phishing attacks, Aparna Pappu remains optimistic about the continued development of better, more sophisticated defenses. “You’ve reduced the cost of what it takes to potentially lure someone,” she says. “But on the other hand, as a result of these technologies, we’ve built greater detection capabilities.”
