Generative-AI apps & ChatGPT: Potential risks and mitigation strategies

Cyber Security

Jun 22, 2023 | The Hacker News

Losing sleep over Generative-AI apps? You’re not alone, and you’re not wrong. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them.

Book a Generative-AI Discovery session with Astrix Security’s experts (free – no strings attached – agentless & zero friction)

“Hey ChatGPT, review and optimize our source code”

“Hey Jasper.ai, generate a summary email of all our net new customers from this quarter”

“Hey Otter.ai, summarize our Zoom board meeting”

In this era of financial turmoil, businesses and employees alike are constantly looking for tools to automate work processes and increase efficiency and productivity by connecting third-party apps to core business systems such as Google Workspace, Slack and GitHub via API keys, OAuth tokens, service accounts and more. The rise of Generative-AI apps and GPT services exacerbates this issue, with employees across all departments rapidly adding the latest and greatest AI apps to their productivity arsenal, without the security team’s knowledge.
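To make this app-to-app connectivity concrete, here is a minimal sketch of inventorying the third-party OAuth grants in a Google Workspace tenant and flagging broad ones. The `flag_broad_grants` helper, the scope markers it checks, and all app names are illustrative assumptions, not a vendor-specified policy; the Admin SDK call that would supply real token records is shown only as a comment.

```python
# Sketch: flag third-party OAuth grants whose scopes are broad/high-risk.
# The scope markers below are an illustrative assumption, not an official list.
BROAD_SCOPE_MARKERS = (
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
    "https://www.googleapis.com/auth/admin",  # admin-level scopes
)

def flag_broad_grants(tokens):
    """Return token records whose granted scopes include a broad scope."""
    risky = []
    for token in tokens:
        scopes = token.get("scopes", [])
        if any(s.startswith(m) for m in BROAD_SCOPE_MARKERS for s in scopes):
            risky.append(token)
    return risky

# With google-api-python-client, real records would come from, e.g.:
#   service = build("admin", "directory_v1", credentials=creds)
#   tokens = service.tokens().list(userKey="alice@example.com").execute()["items"]

# Illustrative data (hypothetical app names and scopes):
example = [
    {"displayText": "GPT For Gmail",
     "scopes": ["https://mail.google.com/"]},
    {"displayText": "Read-only calendar helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
print([t["displayText"] for t in flag_broad_grants(example)])
```

Only the first grant is flagged: its full-mailbox scope matches a broad-scope marker, while the read-only calendar scope does not.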

These range from engineering apps for code review and optimization to marketing, design and sales apps for content and video creation, image generation and email automation. With ChatGPT becoming the fastest-growing app in history, and AI-powered apps being downloaded 1506% more than last year, the security risks of using, and even worse, connecting these often unvetted apps to core business systems are already causing sleepless nights for security leaders.

Your organization’s app-to-app connectivity

The risks of Gen-AI apps

AI-based apps present two main concerns for security leaders:

1. Data Sharing via apps like ChatGPT: The power of AI lies in data, but this very strength can become a weakness if mismanaged. Employees may unintentionally share sensitive, business-critical information, including customers’ PII and intellectual property such as code. Such leaks can expose organizations to data breaches, competitive disadvantage and compliance violations. And this is not a fable – just ask Samsung.

The Samsung and ChatGPT leaks – a case for caution

Samsung reported three separate leaks of highly sensitive information by three employees who used ChatGPT for productivity purposes. One employee shared confidential source code to check it for errors, another shared code for optimization, and the third shared a recording of a meeting to convert into notes for a presentation. All of this information can now be used by ChatGPT to train its AI models and could be exposed across the web.

2. Unverified Generative-AI apps: Not all generative AI apps come from verified sources. Astrix’s recent research reveals that employees are increasingly connecting these AI-based apps (that usually have high-privilege access) to core systems like GitHub, Salesforce and such – raising significant security concerns.

The vast array of Generative AI apps


Real life example of a risky Gen-AI integration:

In the images below you can see the details from the Astrix platform about a risky Gen-AI integration that connects to the organization’s Google Workspace environment.

This integration, the Google Workspace integration “GPT For Gmail”, was developed by an untrusted developer and granted high-risk permissions to the organization’s Gmail accounts:

Among the permission scopes granted to the integration is “mail.all”, which allows the third-party app to read, compose, send and delete emails – a very sensitive privilege:
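To put the sensitivity of such scopes in context, the sketch below ranks some common Gmail OAuth scopes by risk. The tiering is an illustrative assumption, not an official Google or Astrix classification; note that Google’s full-mailbox scope is the URL `https://mail.google.com/` (read, compose, send and delete), which some tools label “mail.all”.

```python
# Hypothetical risk tiers for common Gmail OAuth scopes.
# Illustrative only -- not an official classification.
GMAIL_SCOPE_RISK = {
    "https://mail.google.com/": "critical",               # full mailbox access
    "https://www.googleapis.com/auth/gmail.modify": "high",
    "https://www.googleapis.com/auth/gmail.send": "high",
    "https://www.googleapis.com/auth/gmail.readonly": "medium",
    "https://www.googleapis.com/auth/gmail.labels": "low",
}

def scope_risk(scope: str) -> str:
    """Look up the assumed risk tier for a scope; 'unknown' if unlisted."""
    return GMAIL_SCOPE_RISK.get(scope, "unknown")

print(scope_risk("https://mail.google.com/"))  # -> critical
```

A check like this makes it easy to surface integrations that hold full-mailbox access during a review, rather than eyeballing raw scope URLs.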

Information about the integration’s supplier, which is untrusted:

How Astrix helps minimize your AI risks

To safely navigate the exciting but complex landscape of AI, security teams need robust non-human identity management: visibility into the third-party services your employees are connecting, control over the permissions those connections hold, and the ability to properly evaluate potential security risks. With Astrix you now can:

The Astrix Connectivity map
  • Get a full inventory of all AI-tools that your employees use and access your core systems, and understand the risks associated with them.
  • Remove security bottlenecks with automated security guardrails: understand the business value of each non-human connection including the usage level (frequency, last maintenance, usage volume), the connection owner, who in the company uses the integration and the marketplace info.
  • Reduce your attack surface – ensure all AI-based non-human identities accessing your core systems have least-privileged access, and remove unused connections and untrusted app vendors.
  • Detect anomalous activity and remediate risks: Astrix analyzes and detects malicious behavior such as stolen tokens, internal app abuse and untrusted vendors in real time through IP, user agent and access data anomalies.
  • Remediate faster: Astrix takes the load off your security team with automated remediation workflows as well as instructing end-users on resolving their security issues independently.
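The anomaly-detection bullet above can be sketched as a simple baseline check: record the IP and user-agent pairs an integration’s token normally uses, then flag events that deviate. This is a toy model under assumed event fields (`token_id`, `ip`, `user_agent`), not Astrix’s actual detection logic.

```python
from collections import defaultdict

def build_baseline(events):
    """Map each token to the set of (ip, user_agent) pairs seen historically."""
    baseline = defaultdict(set)
    for e in events:
        baseline[e["token_id"]].add((e["ip"], e["user_agent"]))
    return baseline

def flag_anomalies(baseline, new_events):
    """Flag events from an (ip, user_agent) pair never seen for that token."""
    return [e for e in new_events
            if (e["ip"], e["user_agent"]) not in baseline[e["token_id"]]]

# Illustrative history and new activity (hypothetical values):
history = [{"token_id": "t1", "ip": "203.0.113.5",
            "user_agent": "gpt-for-gmail/1.0"}]
baseline = build_baseline(history)
new = [
    {"token_id": "t1", "ip": "203.0.113.5", "user_agent": "gpt-for-gmail/1.0"},
    {"token_id": "t1", "ip": "198.51.100.9", "user_agent": "curl/8.0"},
]
print(flag_anomalies(baseline, new))  # only the unfamiliar IP/agent event
```

A real system would add ASN/geo lookups, request-volume baselines and decay of stale history, but even this minimal check separates a stolen token used from new infrastructure from routine traffic.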


Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.