NIST: AI Cybersecurity Defense Remains Unresolved

  • Web Desk

  • NIST: AI cybersecurity defense challenges persist.
  • Vulnerabilities include poisoning, abuse, privacy, and evasion attacks.
  • Unresolved theoretical problems highlight the need for better defenses.
As the world welcomes the new year, the transformative potential of artificial intelligence (AI) takes center stage. However, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has raised concerns about the vulnerability of AI systems to cyberattacks, emphasizing that the defense against these threats remains unresolved.

In a recent report titled “Trustworthy and Responsible AI,” NIST identifies four types of cyberattack capable of manipulating the behavior of AI systems and outlines key mitigation strategies along with their limitations. The agency, which is tasked with developing U.S. guidelines for AI model evaluation and red-teaming, warns that AI systems are increasingly susceptible to malicious actors who can evade security measures and trigger data leaks.

NIST computer scientist Apostol Vassilev, one of the report’s authors, acknowledges the progress made in AI and machine learning but warns about unresolved theoretical problems in securing AI algorithms. He states, “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

The report categorizes potential adversarial machine learning attackers into three types: white-box attackers with full knowledge of an AI system, black-box attackers with minimal knowledge or access, and gray-box attackers who know something about the system but lack access to its training data. All three categories pose serious threats, and attack techniques continue to evolve, presenting new challenges in the AI space.

The NIST report underscores the growing risk as AI becomes more integrated into the connected economy. It highlights that AI systems can malfunction when exposed to untrustworthy data, leading to attacks such as poisoning and abuse. Poisoning attacks involve introducing corrupted data during the training phase, while abuse attacks insert incorrect information into legitimate sources that the AI system absorbs.
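The mechanics of a poisoning attack can be sketched in a few lines: a handful of mislabeled examples injected into the training data shifts the decision boundary the model learns. The toy nearest-centroid classifier and the data below are invented for illustration and are not taken from the NIST report.

```python
# Minimal sketch of a label-flipping poisoning attack (hypothetical toy example).

def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (value, label) pairs; returns one centroid per class."""
    by_label = {}
    for value, label in data:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(vals) for label, vals in by_label.items()}

def predict(centroids, value):
    # Assign the class whose centroid is closest to the input value.
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - value))

# Clean training set: class "low" clusters near 0, class "high" near 10.
clean = [(0.0, "low"), (1.0, "low"), (2.0, "low"),
         (9.0, "high"), (10.0, "high"), (11.0, "high")]
print(predict(train(clean), 4.0))     # "low"

# Poisoned set: the attacker injects mislabeled points near the "low"
# cluster, dragging the "high" centroid toward it.
poisoned = clean + [(0.5, "high"), (1.5, "high"), (2.5, "high")]
print(predict(train(poisoned), 4.0))  # "high" -- same input, flipped verdict
```

Three corrupted training points are enough to move the "high" centroid from 10.0 to 5.75, changing the model's answer for an input it previously classified correctly.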

The difficulty of making an AI model unlearn a taught behavior adds complexity to AI defense. The report also identifies privacy attacks, aimed at learning sensitive information about the AI or its training data, and evasion attacks, which attempt to alter the system’s responses post-deployment.

Despite ongoing efforts, the report emphasizes that no foolproof method yet exists for protecting AI and calls on the community to develop better defenses. While acknowledging the challenges, NIST suggests that adhering to basic cyber hygiene practices can help mitigate potential abuses. The unresolved state of AI cybersecurity defense signals the need for continued research and collaboration in securing these transformative technologies.
