Google Boosts AI Fraud Detection & Security in India

Google Strengthens Digital Safety Measures in India
Google has introduced its Safety Charter in India, expanding its AI-driven initiatives to detect and counter fraud and scams across the country.
India represents Google’s most significant market outside of the United States, making these safety enhancements particularly crucial.
Rising Digital Fraud in India
Digital fraud in India is rising sharply. Government data shows an 85% year-over-year surge in fraud linked to the Unified Payments Interface (UPI), totaling approximately 11 billion Indian rupees (about $127 million) last year.
Furthermore, India has seen a rise in digital arrest scams, in which perpetrators impersonate authorities over video calls to extort money, as well as fraud carried out through predatory lending applications.
The Safety Charter and New Security Center
Google’s Safety Charter is a direct response to these emerging challenges. Alongside this charter, the company has established a new security engineering center in India.
This center is the fourth of its kind globally, joining existing facilities in Dublin, Munich, and Malaga.
Collaboration and Innovation at the Security Engineering Center
Announced at the Google for India summit, the security engineering center (GSec) will foster collaboration with the local ecosystem.
This includes partnerships with government entities, academic institutions, students, and small and medium-sized enterprises to develop solutions addressing cybersecurity, privacy, safety, and artificial intelligence, Google VP of security engineering Heather Adkins told TechCrunch.
Partnerships to Combat Cybercrime
Google is actively collaborating with the Indian Cyber Crime Coordination Centre (I4C) under the Ministry of Home Affairs to enhance public awareness regarding cybercrimes.
This initiative builds upon existing efforts, such as the launch of DigiKavach in 2023, a program designed to mitigate the harmful effects of malicious financial applications and predatory loan schemes.
Key Focus Areas for GSec India
According to Adkins, the GSec in India will concentrate on three primary areas:
- Countering the prevalence of online scams and fraud and keeping users safe online.
- Strengthening the cybersecurity posture of enterprises, government organizations, and critical infrastructure.
- Developing and deploying responsible AI technologies.
“These three areas will form the core of our safety charter for India,” Adkins explained. “We aim to leverage our local engineering capabilities to address the specific challenges faced by Indian users.”
Global and Local AI Deployment
On a global scale, Google is employing AI to combat online scams, successfully removing millions of deceptive advertisements and associated accounts.
The company intends to broaden the application of AI within India to more effectively counter digital fraud.
AI-Powered Protection in Google Products
Google Messages, pre-installed on numerous Android devices, utilizes AI-powered Scam Detection, safeguarding users from over 500 million potentially harmful messages each month.
Similarly, Google’s Play Protect, piloted in India last year, has reportedly blocked nearly 60 million attempts to install high-risk applications, leading to the removal of over 220,000 unique apps from more than 13 million devices.
Google Pay, a leading UPI-based payment application in India, has also issued 41 million warnings concerning transactions flagged as potentially fraudulent.
Insights from Heather Adkins
Heather Adkins, a long-standing member of Google’s security team with over 23 years of experience, shared further insights during a TechCrunch interview.
The Potential for AI Misuse
According to Adkins, a primary concern centers around the exploitation of AI technologies by individuals with malicious intent.
Google's monitoring of AI misuse is ongoing, and initial observations indicate that malicious actors are largely using large language models, such as Gemini, to work more efficiently, for example by making phishing attacks more effective.
Translation is a key advantage for attackers whose targets speak a different language, and combined with deepfakes, including fabricated images and videos, it makes scams more convincing, as Adkins explained.
Google's Proactive Measures
Google is rigorously testing its AI models to ensure they stay within defined boundaries of acceptable behavior.
This testing extends beyond the content generated by the AI to encompass the actions it is capable of performing, as noted by Adkins.
To mitigate the potential for misuse of its Gemini models, Google is developing frameworks, notably the Secure AI Framework, designed to restrict abusive applications.
However, the company recognizes that a broader, collaborative approach is necessary. A framework that integrates safety considerations into the communication protocols between multiple AI agents is deemed essential for long-term protection against hacking and abuse.
Industry-Wide Collaboration
The pace of development within the AI industry is exceptionally rapid, with protocols being released in a manner reminiscent of the internet’s early stages.
Adkins highlighted that safety considerations are often addressed reactively, following the initial release of code.
Google’s strategy diverges from simply imposing its own restrictive frameworks. Instead, the company is prioritizing collaboration with the research community and developers.
Open research and development are considered crucial, as Adkins stated that premature constraints could hinder progress in the field.
The Growing Concern of Surveillance Vendors
Beyond the potential misuse of generative AI by malicious actors, Adkins identifies commercial surveillance vendors as a substantial risk. This category ranges from developers of spyware such as NSO Group, the widely criticized creator of Pegasus, to smaller businesses offering surveillance technologies.
According to Adkins, numerous companies globally are emerging, specializing in the creation and distribution of hacking platforms. The cost of access to these platforms varies considerably, ranging from $20 to $200,000, contingent upon the platform’s complexity. This allows individuals to conduct attacks on a large scale, even without possessing specialized technical skills.
Several of these vendors market their tools for monitoring individuals in specific markets, including India. However, India faces security hurdles beyond being a target for surveillance technologies, largely due to its vast population. The country experiences not only AI-driven deepfake and voice cloning fraud but also digital arrest scams, which Adkins characterizes as evolved forms of traditional scams.
“The speed at which threat actors are innovating is remarkable,” Adkins noted, adding that analyzing cyber activity in this region is particularly insightful because it frequently foreshadows global trends.
Specific Threats and Regional Dynamics
- Commercial Spyware: Vendors like NSO Group offer sophisticated tools for surveillance.
- Accessibility of Hacking Platforms: Platforms are available at varying price points, lowering the barrier to entry for attackers.
- India's Unique Challenges: The country faces both advanced AI-based fraud and digitally adapted scams.
The proliferation of these tools and the rapid evolution of attack methods necessitate constant vigilance and adaptation within the cybersecurity landscape.
Understanding Multi-Factor Authentication
For years, Google has urged its users to adopt authentication methods more robust than traditional passwords to better secure their digital lives. The company has already rolled out multi-factor authentication (MFA) across user accounts and continues to champion hardware security keys.
Adkins noted that Google employees use these keys with their laptops. The term "passwordless" is also gaining traction in the technology sector, though its interpretation varies.
However, a complete shift away from passwords is anticipated to be challenging, particularly within a large and economically diverse market such as India.
Password Security and MFA Adoption
“The inherent vulnerabilities of passwords have been recognized for quite some time,” stated Adkins. “The introduction of multi-factor authentication represented a significant improvement in security protocols.”
SMS-based authentication is expected to remain the preferred MFA method among users in India, in part because it is more accessible and familiar than the alternatives.
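To make the mechanics of MFA concrete, here is a minimal, hypothetical sketch of a time-based one-time password (TOTP) second factor, the mechanism behind many authenticator apps. It is an illustrative example only, not a description of Google's own systems; the `demo_secret` value and function names are assumptions for the sketch, and it uses only Python's standard library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # current 30-second window
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Accept the code for the current window only; real systems allow some clock drift."""
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # hypothetical shared secret, for illustration only
    print("current code:", totp(demo_secret))
```

The point of the sketch is that the code a user types is derived from a shared secret plus the current time, so intercepting a single code is far less useful to an attacker than stealing a static password; SMS-based codes trade some of that robustness for the accessibility noted above.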