Enterprise AI applications are threatening security


Over the past year, AI has emerged as a transformational productivity tool, potentially revolutionizing industries across the board. AI applications, such as ChatGPT and Google Bard, are becoming common tools within the enterprise space to streamline operations and enhance decision-making. However, AI’s sharp rise in popularity brings with it a new set of security risks that organizations must grapple with to avoid costly data breaches.
Generative AI’s rapid uptake
Just two months after its public launch, ChatGPT became the fastest-growing consumer application in history, using generative AI to answer prompts and assist with user needs. With an array of benefits that streamline tasks for the individual – suggesting recipes, writing birthday inscriptions, and acting as a go-to knowledge encyclopedia – ChatGPT's wider application and benefit to the workplace was quickly recognized. Today, many employees in offices worldwide rely on generative AI systems to help draft emails, propose calls to action, and summarize documents. Netskope's recent Cloud and Threat Report found that AI app use is growing exponentially within enterprises across the globe, rising by 22.5% over May and June 2023. At the current growth rate, usage of these applications is on track to double by 2024.
The hacker’s honeypot
An online poll of 2,625 US adults by Reuters and Ipsos found that as many as 28% of workers have embraced generative AI tools and use ChatGPT regularly throughout the working day. Unfortunately, after proving itself a nimble tool for proofing documents and checking code for errors, ChatGPT has become an exposure point for sensitive information as employees cut and paste confidential company content into the platform. The sheer quantity of sensitive information being fed into generative AI systems is hard to ignore: LayerX's recent study of 10,000 employees found that a quarter of all information shared with ChatGPT is considered sensitive.
With 1.43 billion people logging into ChatGPT in August, it is no surprise that its popularity attracts malicious actors, who seek both to leverage LLMs for their own malicious goals and to exploit the hype surrounding them to target victims.
Business leaders are scrambling to find a way to use third-party AI apps safely and securely. Early this year, JPMorgan blocked access to ChatGPT, citing its misalignment with company policy, and Apple took the same path after revealing plans to create its own model. Other companies, such as Microsoft, have simply advised staff not to share confidential information with the platform. There is yet to be any strong regulatory recommendation or best practice for generative AI usage, and the most worrying consequence is that 25% of US workers have no idea whether their company permits ChatGPT or not.
Many different types of sensitive information are being uploaded to generative AI applications at work. According to Netskope, the most commonly uploaded information is source code: the text that defines how a computer program functions, and usually valuable corporate intellectual property.
ChatGPT's uncanny ability to review, explain, and even train users on complex code makes this trend unsurprising. However, uploading source code to these platforms is a high-risk activity that can lead to the exposure of serious trade secrets. Samsung faced this exact problem in April this year, when one of its engineers used ChatGPT to check source code for errors, leading to a company-wide ban of ChatGPT.
Common scams
Removing generative AI from company networks comes with its own risks. In this scenario, users are incentivized to turn to third-party 'shadow' applications (not approved for secure use by the employer) to streamline their workflows. Catering to this trend, a growing number of phishing and malware distribution campaigns have been found online, seeking to profit from the generative AI hype. In these campaigns, websites and proxies pose as offering free, unauthenticated access to the chatbot; in reality, all user inputs are accessible to the proxy operator and are collected for future attacks.
Securing the workplace
Fortunately for enterprises, there is a middle ground that enables AI adoption in the workplace within safe perimeters, combining cloud access controls with user awareness training.
Firstly, a data loss prevention policy and tools should be implemented to detect uploads that contain potentially sensitive information, such as source code and intellectual property. This can then be combined with real-time user coaching to notify employees when an action looks likely to breach company policy, giving them an opportunity to review the situation and respond appropriately.
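To make this concrete, the sketch below shows one possible shape such a check could take: a minimal, hypothetical DLP-style filter with real-time user coaching that flags sensitive-looking text before it is pasted into a generative AI app. The pattern names, regular expressions, and messages are illustrative assumptions, not any vendor's actual product logic.

```python
import re

# Illustrative only: hypothetical patterns for spotting sensitive content.
# Real DLP products use far more sophisticated classifiers than these regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"\bdef \w+\(|\bclass \w+|^\s*import \w+|#include\s*<", re.MULTILINE),
}

def classify_upload(text: str) -> list[str]:
    """Return the names of any sensitive-content patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def coach_user(text: str) -> bool:
    """Warn the user in real time if the text looks sensitive; return True to allow."""
    findings = classify_upload(text)
    if findings:
        print(f"Warning: this upload appears to contain {', '.join(findings)}.")
        print("Company policy may prohibit sharing this with external AI apps.")
        return False
    return True

if __name__ == "__main__":
    sample = "def connect():\n    api_key = 'sk-example'"
    coach_user(sample)  # flags both source code and a credential-like string
```

In practice a control like this would sit inline in a secure web gateway or browser extension rather than in a standalone script, but the principle of inspecting content and coaching the user before the upload leaves the organization is the same.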
To lessen the threat of scam websites, companies should scan website traffic and URLs, and coach users to spot cloud and AI app themed attacks.
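As a rough illustration of the URL-scanning idea, the sketch below flags lookalike domains that imitate well-known AI apps. The known-good domain list and similarity threshold are hypothetical; production web gateways rely on curated threat intelligence rather than simple string comparison.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list of legitimate AI app domains for this example.
LEGITIMATE_DOMAINS = {"chat.openai.com", "openai.com", "bard.google.com"}

def looks_like_spoof(url: str, threshold: float = 0.8) -> bool:
    """Flag hostnames that closely resemble, but are not, a known AI app domain."""
    host = urlparse(url).hostname or ""
    if host in LEGITIMATE_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, host, good).ratio() >= threshold
        for good in LEGITIMATE_DOMAINS
    )

print(looks_like_spoof("https://chat-openai.com/free-gpt"))  # likely flagged
print(looks_like_spoof("https://chat.openai.com/"))          # allowed
```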
The most effective way to implement tight security measures is to make sure AI app activity and trends are regularly monitored to identify the most critical vulnerabilities for your particular business. Security should not be an afterthought, and with the right care and attention, AI can continue to benefit the enterprise as a force for good.