As you navigate through the App Store or Google Play, it's hard to ignore the omnipresent force of Artificial Intelligence (AI) in mobile apps. From photo editing tools to voice assistants, health diagnostics, and more, AI has become an integral part of our digital lives. In fact, a staggering 10 out of 12 top graphic design apps rely on AI – it's everywhere!
However, as AI permeates mobile apps, it also introduces a new wave of security, privacy, and compliance risks that developers, security leaders, and businesses must address. Companies have long worried about misinformation, hallucinations, and ethical bias stemming from AI models, but now AI-driven security, privacy, and regulatory compliance issues pose a direct threat to businesses.
Recently, NowSecure research uncovered multiple security and privacy vulnerabilities in the iOS version of DeepSeek. Carlos Holguera, the OWASP Mobile Application Security (MAS) Project lead and a NowSecure principal research engineer, presented a Tech Talk on the risks AI presents in mobile apps and what steps organizations can take to reduce them.
The Business Risks
As AI becomes more prevalent in mobile apps, businesses face several potential risks, including:
- Violations of data privacy laws and transparency requirements
- Cross-border data transfer restrictions
- AI security and data leakage risks
- Liability for model outcomes
- Model theft and repackaging
- Unauthorized use of AI models and API keys
- Integrity of model outcomes and cheating risks
AI Detection
These business risks stem from AI vulnerabilities such as unencrypted connections, hardcoded API keys, model theft, reverse engineering, and insecure AI integrations.
In the Tech Talk, Holguera demonstrated how NowSecure Platform's automated mobile application security testing provides much-needed transparency to detect AI-powered apps that leak OpenAI API keys or use multiple services like OpenAI, Google, DeepSeek, and Moonshot AI. He also showcased a demo app that uses Optical Character Recognition (OCR) to steal cryptocurrency wallet data.
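Leaked or hardcoded API keys like those described above can often be surfaced simply by scanning an app's extracted strings for known key formats. Below is a minimal, illustrative sketch of that idea; the regexes are simplified approximations of common key shapes (OpenAI secret keys typically begin with "sk-", Google API keys with "AIza"), and a production scanner such as NowSecure Platform uses far more robust detection.

```python
import re

# Illustrative patterns for common AI/service key formats (simplified):
KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "google": re.compile(r"\bAIza[0-9A-Za-z_-]{35}\b"),
}

def find_hardcoded_keys(text: str) -> list[tuple[str, str]]:
    """Scan a blob of extracted app strings for likely hardcoded API keys."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((provider, match))
    return hits

# Example: strings dumped from a hypothetical app binary
dump = "baseUrl=https://api.openai.com/v1 apiKey=sk-EXAMPLEEXAMPLEEXAMPLE1234"
print(find_hardcoded_keys(dump))
# → [('openai', 'sk-EXAMPLEEXAMPLEEXAMPLE1234')]
```

Any hit from a scan like this means the key ships inside the app binary, where anyone with basic reverse-engineering tools can extract and abuse it.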
Best Practices for Securing AI-Powered Apps
To protect your mobile apps from these risks, leaders should take the following steps:
- Track AI Endpoints and Jurisdiction: Know which AI endpoints your app uses and where they are hosted to ensure compliance with data residency regulations.
- Identify Local Files, Models, and AI Libraries: Test your mobile apps for bundled AI models and libraries, ensuring they are secure and tamper-resistant.
- Secure API Keys and Sensitive Data Transmission: Use strong encryption and secure storage practices to protect API keys and sensitive data.
- Use OWASP Standards: Test your apps against the OWASP MASVS industry standard, which covers privacy, resilience, networking, cryptography, authentication, storage, and code quality risks beyond those specific to AI in mobile apps.
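One proven way to secure API keys is to keep them out of the mobile app entirely: the app calls your own backend, and only the backend attaches the provider key. The sketch below illustrates that server-side step in Python; the `AI_API_KEY` environment variable name is an assumption for this example, and the OpenAI chat completions endpoint and request body are shown purely as illustration.

```python
import json
import os
import urllib.request

PROVIDER_URL = "https://api.openai.com/v1/chat/completions"

def build_provider_request(prompt: str) -> urllib.request.Request:
    """Build the outbound AI provider request on the backend.

    The API key is read from the server's environment (AI_API_KEY is an
    assumed variable name), so it never ships inside the mobile app binary.
    """
    api_key = os.environ.get("AI_API_KEY")
    if not api_key:
        raise RuntimeError("AI_API_KEY is not configured on the server")
    body = json.dumps({
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        PROVIDER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

With this pattern, rotating a compromised key requires only a server-side change, and the backend can also enforce rate limits and per-user authorization before any AI call is made.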
Act Against AI Risks in Your Apps
Watch Carlos Holguera's Tech Talk for a deeper understanding of the AI risks explored. Next, discover how NowSecure can help you identify hidden AI integrations or hardcoded secrets and ensure compliance. Test your app today!