uTalk

Official forum for Utopia Community


#1 2023-06-26 23:33:00

thrive
Member
Registered: 2023-01-04
Posts: 2,018

How SaaS Authentication Protocols Can Be Tricked by Generative AI.

IT and security teams are frequently pressured to adopt software before they fully comprehend the security risks. The same applies to AI tools.

Business leaders and employees are rushing to use generative AI software and similar programs, often without realizing the significant SaaS security risks they pose to the organization. In a February 2023 survey of 1,000 executives about generative AI, 30% said they planned to use ChatGPT in the near future. Ninety-nine percent of those using ChatGPT claimed to have saved money, and 25% said they had cut costs by at least $75,000. Because the survey was conducted just three months after ChatGPT became widely available, today's usage of ChatGPT and other AI tools is undoubtedly higher.

Security and risk teams are already overburdened protecting their SaaS estate, which has effectively replaced the operating system for businesses, from common vulnerabilities like configuration errors and over-permissioned users. This leaves little time to analyze the AI tool threat landscape, the unsanctioned AI tools already in use, and the implications for SaaS security.

To mitigate the threats emerging from both inside and outside their organizations, CISOs and their teams must understand the most pertinent AI tool risks to SaaS systems.

1 — Threat Actors Can Trick SaaS Authentication Protocols With Generative AI.
Just as ambitious employees do, cybercriminals devise ways to use AI tools to get more done with less. Using generative AI for malicious purposes is already feasible, and its abuse is simply inevitable.

AI's ability to impersonate humans exceedingly well renders weak SaaS authentication protocols especially vulnerable to hacking. According to Techopedia, threat actors can misuse generative AI to crack CAPTCHAs, guess passwords, and create more powerful malware. And while these techniques may sound like narrow attack windows, the CircleCI security breach in January 2023 was caused by nothing more than malware on a single engineer's laptop.

Similarly, three eminent technology academics recently presented a plausible scenario for generative AI conducting a phishing attack:


"A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company's marketing materials and phishing messages that have been successful in the past. Because it doesn't resemble the messages they've been trained to recognize, it is successful in tricking people who have received thorough training in email awareness."

To avoid detection, malicious actors target the less-secure side doors, which usually means the SaaS platform itself. They won't bother with the deadbolt and guard dog stationed at the front door when they can sneak in through the open patio doors around back.

Relying on authentication alone to keep SaaS data secure is not a viable option. In addition to implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility into the entire SaaS perimeter, ongoing monitoring, automated alerts for suspicious login activity, and more.
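As a sketch of what automated alerting on suspicious login activity can look like, the snippet below flags "impossible travel": a login from a new country too soon after the user's previous login. The login records, country codes, and two-hour window are all hypothetical; a real monitoring pipeline would draw on SaaS audit logs and IP geolocation rather than a hardcoded list.

```python
from datetime import datetime, timedelta

# Hypothetical login audit records: (user, country, timestamp).
LOGINS = [
    ("alice", "US", datetime(2023, 6, 1, 9, 0)),
    ("alice", "US", datetime(2023, 6, 1, 12, 30)),
    ("alice", "RO", datetime(2023, 6, 1, 13, 0)),  # 30 min after a US login
]

def flag_impossible_travel(logins, window=timedelta(hours=2)):
    """Flag logins from a new country within `window` of the user's prior login."""
    alerts = []
    last_seen = {}  # user -> (country, timestamp)
    for user, country, ts in sorted(logins, key=lambda r: r[2]):
        prev = last_seen.get(user)
        if prev and prev[0] != country and ts - prev[1] < window:
            alerts.append((user, prev[0], country, ts))
        last_seen[user] = (country, ts)
    return alerts

print(flag_impossible_travel(LOGINS))
```

The same pattern extends to other signals, such as logins outside business hours or from unmanaged devices.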

These insights are crucial both for the generative AI activities of cybercriminals and for the AI tools employees connect to SaaS platforms.

2 — Employees Connect Unsanctioned AI Tools to SaaS Platforms Without Considering the Risks.
Nowadays, employees rely on unauthorized AI tools to simplify their work. Like any form of shadow IT, employee adoption of AI tools is motivated by the best of intentions. After all, who wants to work harder when AI tools increase effectiveness and efficiency?

For instance, an employee may be convinced they could manage their time and to-dos better, but tracking and evaluating their task management and meeting attendance feels time-consuming. AI can easily carry out that monitoring and analysis, provide recommendations almost immediately, and give the employee the much-desired productivity boost in a fraction of the time. From the end-user's point of view, registering for an AI scheduling assistant is as easy and uncomplicated as:

Enrolling with a credit card or signing up for a free trial.
Accepting the AI tool's read/write permission requests.
Connecting the AI scheduling assistant to their business Gmail, Google Drive, and Slack accounts.
This process, however, creates invisible conduits to an organization's most sensitive data. Because these AI-to-SaaS connections inherit the user's permission settings, a hacker who successfully compromises the AI tool can move covertly and laterally across the authorized SaaS systems. The hacker can access and exfiltrate data until the suspicious activity is discovered and addressed, which could take weeks or even years.

AI tools, like most SaaS apps, use OAuth access tokens for ongoing connections to SaaS platforms. Once the authorization is complete, the token for the AI scheduling assistant maintains consistent, API-based communication with Gmail, Google Drive, and Slack accounts, all without requiring the user to log in or re-authenticate at any regular interval. The threat actor who can capitalize on this OAuth token has stumbled on the SaaS equivalent of spare keys "hidden" under the doormat.
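The persistence of these grants is easiest to see in code. Below is a simplified simulation, not a real provider SDK, of the token store an AI tool keeps after the one-time user grant: a long-lived refresh token silently mints fresh access tokens, so API calls keep working indefinitely with no further login prompt. The class name, scope strings, and expiry values are illustrative assumptions.

```python
import secrets
import time

class TokenStore:
    """What the AI tool retains after the user's one-time OAuth consent."""

    def __init__(self, refresh_token, scopes):
        self.refresh_token = refresh_token  # long-lived credential
        self.scopes = scopes                # inherited from the granting user
        self.access_token = None
        self.expires_at = 0.0

    def get_access_token(self):
        # No user interaction: the refresh token alone mints new access tokens.
        if time.time() >= self.expires_at:
            self.access_token = secrets.token_hex(16)  # stand-in for a provider call
            self.expires_at = time.time() + 3600       # typical 1-hour lifetime
        return self.access_token

store = TokenStore(refresh_token=secrets.token_hex(32),
                   scopes=["gmail.readonly", "drive", "chat:write"])
token = store.get_access_token()  # works weeks later, with no login prompt
```

An attacker who steals the refresh token inherits exactly this capability: continuous, re-authentication-free API access with every scope the user originally granted.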



Figure 1: An illustration of an AI tool establishing an OAuth token connection with a major SaaS platform. Credit: AppOmni.
Security and risk teams often lack the SaaS security tooling to monitor or control such an attack surface risk. Legacy tools like cloud access security brokers (CASBs) and secure web gateways (SWGs) won't detect or alert on AI-to-SaaS connectivity.

Yet these AI-to-SaaS connections aren't the only means by which employees can unintentionally expose sensitive data to the outside world.

3 — Sensitive Information Shared with Generative AI Tools Is Susceptible to Leaks.
The data employees submit to generative AI tools — often with the goal of expediting work and improving its quality — can end up in the hands of the AI provider itself, an organization's competitors, or the general public.

Because most generative AI tools are free and exist outside the organization's tech stack, security and risk professionals have no oversight or security controls for these tools. This is a growing concern among enterprises, and generative AI data leaks have already happened.

A March incident inadvertently enabled ChatGPT users to see other users' chat titles and histories in the website's sidebar. The concern was not just that sensitive organizational information could leak, but that user identities could be revealed and compromised. OpenAI, the developer of ChatGPT, responded by giving users the ability to turn off chat history. In theory, this option stops ChatGPT from sending data back to OpenAI for product improvement, but it leaves data retention settings for employees to manage. Even with the setting enabled, OpenAI retains conversations for 30 days and reserves the right to review them "for abuse" before they expire.

This bug and the data retention fine print haven't gone unnoticed. In May, Apple restricted employees from using ChatGPT over concerns of confidential data leaks. While the tech giant took this stance as it builds its own generative AI tools, it joined enterprises such as Amazon, Verizon, and JPMorgan Chase in the ban. Apple also directed its developers to avoid GitHub Copilot, owned by top competitor Microsoft, for automating code.

Common generative AI use cases are replete with data leak risks. Consider a product manager who prompts ChatGPT to make the message in a product roadmap document more compelling. That product roadmap almost certainly contains product information and plans never intended for public consumption, let alone a competitor's prying eyes. A similar ChatGPT bug — which an organization's IT team has no ability to escalate or remediate — could result in serious data exposure.

Stand-alone generative AI does not create SaaS security risk. But what's isolated today is connected tomorrow. Ambitious employees will naturally seek to extend the usefulness of unsanctioned generative AI tools by integrating them into SaaS applications. Currently, ChatGPT's Slack integration demands more work than the average Slack connection, but it's not an exceedingly high bar for a savvy, motivated employee. The integration uses OAuth tokens exactly like the AI scheduling assistant example described above, exposing an organization to the same risks.

How Organizations Can Safeguard Their SaaS Environments from Significant AI Tool Risks.
Organizations need guardrails in place for AI tool data governance, specifically for their SaaS environments. This requires comprehensive SaaS security tooling and proactive cross-functional diplomacy.

Employees use unsanctioned AI tools largely due to limitations of the approved tech stack. The desire to boost productivity and increase quality is a virtue, not a vice. There's an unmet need, and CISOs and their teams should approach employees with an attitude of collaboration versus condemnation.

Good-faith conversations with leaders and end-users regarding their AI tool requests are vital to building trust and goodwill. At the same time, CISOs must convey legitimate security concerns and the potential ramifications of risky AI behavior. Security leaders should consider themselves the accountants who explain the best ways to work within the tax code rather than the IRS auditors perceived as enforcers unconcerned with anything beyond compliance. Whether it's putting proper security settings in place for the desired AI tools or sourcing viable alternatives, the most successful CISOs strive to help employees maximize their productivity.

Fully understanding and addressing the risks of AI tools requires a comprehensive and robust SaaS security posture management (SSPM) solution. SSPM provides security and risk practitioners the insights and visibility they need to navigate the ever-changing state of SaaS risk.

To improve authentication strength, security teams can use SSPM to enforce MFA throughout all SaaS apps in the estate and monitor for configuration drift. SSPM enables security teams and SaaS app owners to enforce best practices without studying the intricacies of each SaaS app and AI tool setting.
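A configuration drift check of the kind described above can be sketched as a comparison of each app's live settings against a security baseline. The baseline keys, app names, and setting values below are hypothetical stand-ins for what an SSPM tool would pull from each SaaS admin API.

```python
# Hypothetical security baseline every SaaS app in the estate should meet.
BASELINE = {"mfa_required": True, "session_timeout_minutes": 30}

# Hypothetical per-app settings, as fetched from each SaaS admin API.
APP_SETTINGS = {
    "crm":  {"mfa_required": True,  "session_timeout_minutes": 30},
    "wiki": {"mfa_required": False, "session_timeout_minutes": 30},   # drift
    "chat": {"mfa_required": True,  "session_timeout_minutes": 480},  # drift
}

def find_drift(baseline, apps):
    """Return {app: {setting: (expected, actual)}} for values off the baseline."""
    drift = {}
    for app, settings in apps.items():
        diffs = {k: (v, settings.get(k)) for k, v in baseline.items()
                 if settings.get(k) != v}
        if diffs:
            drift[app] = diffs
    return drift

print(find_drift(BASELINE, APP_SETTINGS))
```

Running such a check on a schedule is what turns a one-time hardening exercise into continuous enforcement.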

The ability to inventory both unsanctioned and approved AI tools connected to the SaaS ecosystem reveals the most urgent risks to investigate. Continuous monitoring automatically alerts security and risk teams when new AI connections are established. This visibility plays a substantial role in reducing the attack surface and taking action when an unsanctioned, insecure, or over-permissioned AI tool surfaces in the SaaS ecosystem.
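The inventory-and-alert step can be sketched as a diff of current third-party OAuth grants against a known-good allowlist, escalating any new grant that requests broad read/write scopes. The app names, scope strings, and allowlist below are illustrative assumptions, not output from a real SaaS API.

```python
# Hypothetical allowlist of reviewed, sanctioned third-party connections.
KNOWN_GRANTS = {"zoom-app", "calendar-sync"}

# Hypothetical current OAuth grants pulled from SaaS admin APIs.
CURRENT_GRANTS = [
    {"app": "calendar-sync", "scopes": ["calendar.readonly"]},
    {"app": "ai-scheduler",  "scopes": ["gmail", "drive", "chat:write"]},  # new
]

# Scopes granting read/write access to core data stores warrant escalation.
BROAD_SCOPES = {"gmail", "drive"}

def new_connection_alerts(grants, known):
    """Flag grants not on the allowlist; mark broad-scope grants high-risk."""
    alerts = []
    for g in grants:
        if g["app"] not in known:
            risk = "high" if BROAD_SCOPES & set(g["scopes"]) else "review"
            alerts.append((g["app"], risk))
    return alerts
```

Here the hypothetical "ai-scheduler" grant would surface as a high-risk alert the moment it appears, rather than weeks later during a manual audit.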

AI tool reliance will almost certainly continue to spread rapidly, and outright bans are never foolproof. Instead, the best approach to drastically cutting the risk of SaaS data exposure or breach is a pragmatic mix: security leaders who share their peers' goal of boosting productivity and reducing repetitive tasks, coupled with the right SSPM solution.

Offline

#2 2023-06-27 20:36:21

Dozie
Member
Registered: 2023-01-18
Posts: 658

Re: How SaaS Authentication Protocols Can Be Tricked by Generative AI.

I have always known that at some point we would run into problems with artificial intelligence technology, and if it gets into the wrong hands, it can be very disastrous.

Offline

Board footer

Powered by FluxBB