Oracle has expanded its collaboration with NVIDIA to help customers streamline the development and deployment of production-ready AI, develop and run next-generation reasoning models and AI agents, and access the computing resources needed to further accelerate AI innovation.
As AI reshapes industries, it has also erased the line between truth and deception in the digital world. Cyber criminals now wield generative AI and large language models (LLMs) to obliterate trust in digital identity. In today's landscape, what you see, hear, or read online can no longer be taken at face value. AI-powered impersonation bypasses even the most sophisticated identity verification systems, making anyone a potential victim of deception at scale.
"The swift adoption of AI by cyber criminals is already reshaping the threat landscape," said Lotem Finkelstein, Director of Check Point Research. "While some underground services have become more advanced, all signs point toward an imminent shift — the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behavior. It's not a distant future — it's just around the corner ."
4 Key Threats
At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake. The AI Security Report 2025 from Check Point® Software Technologies Ltd. uncovers four core areas where this erosion of trust is most visible:
AI-Enhanced Impersonation and Social Engineering: Threat actors use AI to generate realistic, real-time phishing emails, audio impersonations, and deepfake videos. Notably, attackers recently mimicked Italy's defense minister using AI-generated audio, demonstrating that no voice, face, or written word online is safe from fabrication.
LLM Data Poisoning and Disinformation: Malicious actors manipulate AI training data to skew outputs. A case involving Russia's disinformation network Pravda showed AI chatbots repeating false narratives 33% of the time, underscoring the need for robust data integrity in AI systems (a toy illustration of the poisoning mechanism follows this list).
AI-Created Malware and Data Mining: Cyber criminals harness AI to craft and optimize malware, automate DDoS campaigns, and refine stolen credentials. Services like Gabbers Shop use AI to validate and clean stolen data, enhancing its resale value and targeting efficiency.
Weaponization and Hijacking of AI Models: From stolen LLM accounts to custom-built Dark LLMs like FraudGPT and WormGPT, attackers are bypassing safety mechanisms and commercializing AI as a tool for hacking and fraud on the dark web.
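To make the data-poisoning mechanism concrete, the minimal sketch below (an illustration, not drawn from the Check Point report) trains a toy scikit-learn text classifier twice, once on clean labels and once with a handful of deliberately mislabeled "false narrative" examples, and shows how the corrupted training data skews the model's output. All texts, labels, and names are hypothetical.

```python
# Toy illustration of training-data poisoning (hypothetical, not from the report).
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: label 1 = reliable claim, 0 = false narrative.
texts = [
    "official statement confirmed by multiple independent sources",
    "verified report with documented evidence and citations",
    "press briefing corroborated by on-the-record officials",
    "fabricated story spread by anonymous accounts",
    "unverified rumor amplified by coordinated bot networks",
    "manipulated quote taken out of context to mislead readers",
] * 10                        # repeated so the toy model has enough samples
labels = [1, 1, 1, 0, 0, 0] * 10

def train(y):
    """Fit a simple TF-IDF + logistic-regression classifier on the given labels."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, y)
    return model

clean_model = train(labels)

# Poisoning step: the attacker relabels every "rumor" example as reliable.
poisoned_labels = [1 if "rumor" in text else y for text, y in zip(texts, labels)]
poisoned_model = train(poisoned_labels)

probe = ["unverified rumor amplified by coordinated bot networks"]
print("clean model   :", clean_model.predict(probe))     # expected: [0] (false narrative)
print("poisoned model:", poisoned_model.predict(probe))  # now [1]: the poisoned labels win
```

The point of the toy example is only that a model faithfully learns whatever its training data asserts, which is why the report stresses data integrity for AI systems.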
Defensive Strategies
The report emphasizes that defenders must now assume AI is embedded within adversarial campaigns. To counter this, organizations should adopt AI-aware cyber security frameworks, including:
AI-Assisted Detection and Threat Hunting: Leverage AI to detect AI-generated threats and artifacts, such as synthetic phishing content and deepfakes (a minimal sketch follows this list).
Enhanced Identity Verification: Move beyond traditional methods and implement multi-layered identity checks that account for AI-powered impersonation across text, voice, and video, recognizing that trust in digital identity is no longer guaranteed.
Threat Intelligence with AI Context: Equip security teams with the tools to recognize and respond to AI-driven tactics.
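As a concrete illustration of the first recommendation, the hedged sketch below shows one way a team might prototype AI-assisted triage of suspected phishing text using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice, candidate labels, threshold, and sample messages are assumptions for illustration only; the report does not prescribe a specific implementation.

```python
# Hedged sketch: flagging likely phishing text with an off-the-shelf zero-shot
# classifier. Model choice, labels, and threshold are illustrative only.
# Requires: pip install transformers torch
from transformers import pipeline

detector = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def phishing_score(body: str) -> float:
    """Return the model's confidence that the message reads like a phishing lure."""
    result = detector(body, candidate_labels=["phishing lure", "routine business email"])
    # result["labels"] and result["scores"] are parallel lists sorted by descending score.
    return dict(zip(result["labels"], result["scores"]))["phishing lure"]

SUSPICIOUS = "Urgent: your mailbox will be deactivated today unless you verify your credentials here."
ROUTINE = "Attached are the minutes from Tuesday's architecture review."

for message in (SUSPICIOUS, ROUTINE):
    score = phishing_score(message)
    flag = "quarantine" if score >= 0.8 else "deliver"   # arbitrary triage threshold
    print(f"{flag:10s} score={score:.2f}  {message[:60]}")
```

In practice a score like this would be one signal among many (sender reputation, link analysis, deepfake detectors for voice and video), not a standalone verdict.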
"In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defenses," added Finkelstein. "This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
Industry News
Datadog launched its Internal Developer Portal (IDP) built on live observability data.
Azul and Chainguard announced a strategic partnership that will unite Azul’s commercial support and curated OpenJDK distributions with Chainguard’s Linux distro, software factory and container images.
SmartBear launched Reflect Mobile featuring HaloAI, expanding its no-code, GenAI-powered test automation platform to include native mobile apps.
ArmorCode announced the launch of AI Code Insights.
Codiac announced the release of Codiac 2.5, a major update to its unified automation platform for container orchestration and Kubernetes management.
Harness is releasing major upgrades and new features for its Internal Developer Portal (IDP), built to address challenges developers face daily and ultimately give them more time back for innovation.
Azul announced an enhancement to Azul Intelligence Cloud: a new capability in Azul Vulnerability Detection that brings greater precision to the detection of Java application security vulnerabilities.
ZEST Security announced its strategic integration with Upwind, giving DevOps and Security teams real-time, runtime-powered cloud visibility combined with intelligent, agentic AI-driven remediation.
Google announced an upgraded preview of Gemini 2.5 Pro, its most intelligent model yet.
iTmethods and Coder have partnered to bring enterprises a new way to deploy secure, high-performance and AI-ready Cloud Development Environments (CDEs).
Gearset announced the expansion of its new Observability functionality to include Flow and Apex error monitoring.
Check Point® Software Technologies Ltd. announced that U.S. News & World Report has named the company among its 2025-2026 list of Best Companies to Work For.
Postman announced new capabilities that make it dramatically easier to design, test, deploy, and monitor AI agents and the APIs they rely on.
Opsera announced the expansion of its partnership with Databricks.