
Is Generative AI Cyber Safe? AI and Confidential Company Info

Generative AI and company data security

A must-read for all IT decision-makers.

Introduction

The adoption of generative AI tools such as ChatGPT has surged, with 29% of Gen Z, 28% of Gen X, and 27% of Millennials now incorporating these tools into their professional lives. While the versatility and convenience of generative AI are undeniable, concerns about the security of confidential company information have emerged. This article looks at how generative AI works, the risks it can pose, and the measures you can take to safeguard sensitive data.

Unravelling the Magic Behind Generative AI

Generative AI, trained on vast amounts of data scraped from the internet, has transformed the way we work. Recent legal disputes, such as The New York Times' lawsuit against OpenAI, the maker of ChatGPT, emphasise the need for a deeper understanding of how these tools operate. It's not just about the algorithms; it's about users being aware of the information they feed into these systems. Generative AI is a powerful tool, but it requires responsible usage to prevent unintentional plagiarism and data misuse.

Guarding Against the Spectre of Data Leakage

Consider scenarios where a doctor inputs patient details or an executive reformats a strategy document using ChatGPT. The potential for sensitive information leakage is real, as the Samsung incident showed when employees pasted confidential source code into ChatGPT. It's crucial for companies to reassess their approach to generative AI, understanding the risks and implementing measures to safeguard against inadvertent data exposure.

Being Mindful: Tips for Responsible Interaction

While we are still unravelling the full extent of generative AI's capabilities, it's essential to exercise caution over the information we input. The key tip: be careful about the details you provide to generative AI tools. Treat them as an evolving technology and avoid sharing sensitive or proprietary information until their capabilities and limitations are better understood.
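
To make this concrete, here is a minimal sketch in Python of what "being careful" can look like in practice: stripping obvious identifiers, such as email addresses, phone numbers, and internal reference codes, from text before it is ever pasted into a generative AI tool. The patterns and the redact_sensitive helper below are illustrative assumptions for this article, not part of any official product, and a real deployment would need far more thorough detection (names, addresses, customer IDs, and so on).

    import re

    # Hypothetical patterns for this sketch only; a real redaction tool would
    # need a much broader, properly tested set of detectors.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
        "REFERENCE": re.compile(r"\b[A-Z]{2,}-\d{3,}\b"),  # e.g. internal ticket or contract codes
    }

    def redact_sensitive(text: str) -> str:
        """Replace obviously sensitive tokens with placeholders before the
        text is shared with a generative AI tool."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    if __name__ == "__main__":
        draft = "Contact jane.doe@example.com about contract CT-20431 on +44 20 7946 0958."
        print(redact_sensitive(draft))
        # -> Contact [EMAIL REDACTED] about contract [REFERENCE REDACTED] on [PHONE REDACTED].

Even a rough filter like this catches the most common slips; the broader point is that redaction should happen before the text ever leaves your machine, not afterwards.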

Armour Up: Practical Steps for Personal and Company Defence

To mitigate the risks associated with generative AI, consider these actionable steps:

  • Navigate Incognito: Use ChatGPT's incognito-style option of chatting with history switched off, so that those conversations are not used to train the model, providing an additional layer of privacy. Note that your browser's private-browsing mode alone does not offer this protection.
  • Toggle Off "Chat History & Training": Access your account settings and disable "Chat History & Training" to prevent your conversations from contributing to model learning.
  • Clear Past Conversations: Regularly clear your chat history to remove any lingering sensitive information. Be aware that even after conversations are cleared from your view, they may remain on the servers for up to thirty days before being permanently deleted.
  • Enquire About Company Policies: Ask your company if there are specific policies regarding the use of generative AI tools like ChatGPT. If no policies exist, consider suggesting the creation of guidelines to ensure responsible and secure usage.

Conclusion

As we harness the transformative power of generative AI, the responsibility to protect our company's secrets lies squarely on our shoulders. Embrace generative AI with caution, treating each interaction as more than just a transaction with a machine. By staying informed, adopting best practices, and advocating for responsible usage within our organisations, we can fully leverage the benefits of generative AI while safeguarding our most sensitive information in the dynamic landscape of the digital age.
