MEMO

TO: IT Department
FROM: Microsoft Copilot
DATE: March 06, 2024
SUBJECT: GenAI Use and Confidentiality in the Workplace
I. Introduction
The purpose of this report is to provide an in-depth overview of Generative AI (GenAI) and its implications for confidentiality within the IT industry. The report discusses the challenges of protecting organization, customer, and employee confidentiality when GenAI is used, provides an analysis with examples, and suggests solutions.
II. Background on GenAI and Confidentiality
Generative AI (GenAI) is a subset of AI that can create, imitate, or modify various forms of content (Lawton, 2023). It leverages machine learning algorithms to generate data similar to the data it was trained on, and it has been used to create everything from artwork and music to synthetic human voices. While it opens countless opportunities to create value and leverage automation (Zewe, 2023), it also raises significant concerns about confidentiality (Maguire, 2023).
Confidentiality is a critical aspect of data protection and privacy: sensitive information must be kept secret and accessible only to those with the necessary authorization. In the context of GenAI, confidentiality becomes a concern when these systems have access to sensitive data. GenAI tools can raise privacy issues because they may share user information with third parties, such as vendors or service providers, without prior notice (Koerner, 2023).
III. Discussion
Challenges Protecting Confidentiality
1. Limited Traceability and Irreproducibility: One of the main challenges with GenAI is the limited traceability and irreproducibility of its outcomes. Due to the complex nature of these systems, it can be difficult to understand how they arrived at a particular output, and this lack of transparency raises the possibility of bad or even illegal decision-making (Bellefonds et al., 2023).
2. Data Security and Unauthorized Access: Another critical concern is data security. GenAI systems often require large amounts of data for training; if this data is not properly secured, it could be accessed by unauthorized individuals or entities, leading to potential data breaches (TechRepublic, 2023).
3. Insecure Code: Many developers are turning to generative AI to improve their productivity. However, a Stanford study found that software engineers who use code-generating AI systems are more likely to introduce security vulnerabilities in the apps they develop (KPMG, 2023); the sketch below shows a typical instance.
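The following minimal, hypothetical illustration contrasts an injection-prone database query of the kind a code assistant might suggest with its parameterized equivalent. The function and table names are assumptions for illustration; the example is not drawn from the Stanford study itself.

    import sqlite3

    # Vulnerable pattern: user input is interpolated directly into the SQL
    # string, so input such as "x' OR '1'='1" rewrites the query's logic.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    # Safer pattern: the driver binds the value as data, never as SQL,
    # which is the fix reviewers should insist on in AI-suggested code.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()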
Analysis with Examples
1. ChatGPT: The most well-known example is ChatGPT, an AI-powered language model developed by OpenAI. It has raised several privacy concerns because responses to certain prompts can include sensitive data. For instance, if a user were to input their credit card information into the system, there is a risk that this information could be exposed to unauthorized individuals. OpenAI is also rolling out the ability for its chatbot to remember conversations with users, which has raised further privacy concerns; privacy researcher Davi Ottenheimer calls it "Orwellian" (Okunytė, 2024).
2. Samsung: Samsung is an example of a company that faced these challenges directly. It banned employee use of GenAI after confidential internal data was inadvertently leaked through such a tool. The incident highlights the risks of using GenAI in the workplace, especially when handling sensitive information (Gurman, 2023).
IV. Conclusion and Recommendations
While GenAI offers numerous benefits, it’s crucial to address its potential risks to ensure the confidentiality and security of our organization’s data. Here are some actionable steps:
1. Use GenAI Apps from Reputable Businesses: Generative AI is becoming a top technology investment area, but it also brings new cybersecurity risks (Lin, 2024). According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years (Lin, 2024). It is therefore important to understand how the maintainer of your GenAI software approaches security: before using a GenAI application, research the company behind it and ensure it has a strong track record of data security (Lin, 2024). IBM, for instance, has introduced the IBM Framework for Securing Generative AI to help organizations understand the likeliest attacks on AI and prioritize the defensive approaches most important for securing their generative AI initiatives quickly (Lin, 2024).
2. Keep Sensitive Data Out of Prompts: Users should not enter sensitive information into GenAI prompts unless they are certain the data will remain within a private, controlled environment. This includes personal information, financial data, and other confidential details. As TechRepublic reports, ChatGPT keeps a detailed history of all prompts and answers, which could leak sensitive data to fraudsters if they obtain account credentials (Pernet, 2023). It is therefore crucial to keep sensitive data out of prompts when interacting with GenAI applications (Pernet, 2023); a redaction sketch follows this list.
3. Avoid Training Data Problems or Leakages: If you are building your own GenAI application, securing your training data is paramount. Generative AI models learn from large amounts of data and can generate new content similar in style and structure to the data they were trained on (Norris, 2023), so anything sensitive in the training set can resurface in outputs. Data quality also significantly affects performance: noisy or inaccurate data can introduce errors and misleading patterns, eroding the reliability and trustworthiness of AI systems (Bruchman, 2023). It is therefore crucial to ensure data quality through meticulous preprocessing and validation, alongside robust encryption, access controls, and regular audits for potential vulnerabilities (Bruchman, 2023). A data-screening sketch follows this list.
4. Implement Robust Data Encryption and Access Controls: Implementing robust data encryption and access controls is crucial when working with GenAI applications. As one example, DataStax Enterprise supports both remote keystore SSL providers and local keystore files, offering flexibility in how you secure your data (DataStax, 2023). To enable encryption for new search cores, you can edit the search index config file to change the class for directoryFactory to solr.EncryptedFSDirectoryFactory (DataStax, 2023), and before configuring SSL for DataStax Enterprise services, you must create SSL certificates, keystores, and truststores (DataStax, 2023). More generally, this means using strong encryption algorithms to protect data at rest and in transit, and enforcing strict access controls so that only authorized individuals can access the data (DataStax, 2023); an application-level sketch follows this list.
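As a minimal sketch of recommendation 2 (keeping sensitive data out of prompts), the snippet below masks card-number and email-like strings before a prompt leaves the organization. The redact_prompt helper and its regular expressions are illustrative assumptions, not a complete PII filter; a real deployment would use a dedicated DLP or PII-detection tool.

    import re

    # Hypothetical pre-send filter: mask obvious sensitive patterns before
    # a prompt reaches any external GenAI service.
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
    EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def redact_prompt(prompt: str) -> str:
        prompt = CARD_RE.sub("[REDACTED CARD]", prompt)
        return EMAIL_RE.sub("[REDACTED EMAIL]", prompt)

    print(redact_prompt("Card 4111 1111 1111 1111, contact a.b@corp.com"))
    # -> Card [REDACTED CARD], contact [REDACTED EMAIL]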
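For recommendation 3, a minimal screening pass over training records might drop exact duplicates and anything matching obvious PII patterns before training. The record format and checks here are assumptions for illustration; production pipelines would add schema validation, large-scale deduplication, and audit logging.

    import re

    # PII-like patterns reused from the redaction sketch above.
    PII_RE = re.compile(r"(?:\d[ -]?){13,16}|[\w.+-]+@[\w-]+\.[\w.]+")

    def clean_training_records(records: list[str]) -> list[str]:
        seen: set[str] = set()
        kept: list[str] = []
        for text in records:
            normalized = " ".join(text.split()).lower()
            if normalized in seen or PII_RE.search(text):
                continue  # drop duplicates and PII-bearing records
            seen.add(normalized)
            kept.append(text)
        return kept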
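Finally, for recommendation 4, the DataStax steps above are configuration changes rather than code; at the application level, the same principles (encryption at rest plus an authorization check on reads) can be sketched as follows. Fernet is a real symmetric-encryption primitive from the third-party cryptography package, but the role names and key handling here are deliberately simplified assumptions; production systems would load keys from a managed keystore.

    from cryptography.fernet import Fernet

    AUTHORIZED_ROLES = {"data-steward", "security-admin"}  # illustrative roles

    key = Fernet.generate_key()  # in production, fetch from a managed keystore
    cipher = Fernet(key)

    def store_record(plaintext: str) -> bytes:
        # Encrypt before writing anywhere, so data is protected at rest.
        return cipher.encrypt(plaintext.encode())

    def read_record(token: bytes, role: str) -> str:
        # Strict access control: only authorized roles may decrypt.
        if role not in AUTHORIZED_ROLES:
            raise PermissionError(f"role '{role}' may not read this record")
        return cipher.decrypt(token).decode()

    blob = store_record("customer account number (placeholder)")
    print(read_record(blob, role="data-steward"))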
References
Bellefonds, N. de, Kleine, D., Grebe, M., Ewald, C., & Nopp, C. (2023). Limited Traceability and Irreproducibility. BCG. Retrieved from https://www.bcg.com/publications/2023/c-suite-genai-concerns-challenges
Bruchman, B. (2023). Debunking 5 Common AI Myths. Collato. Retrieved from https://collato.com/blog/ai-myths/
DataStax. (2023). Encrypting new Search indexes. DataStax 6.7 Security Guide. Retrieved from https://docs.datastax.com/eol/en/security/6.7/security/secEncryptIndexNew.html
DataStax. (2023). Creating SSL Certificates, Keystores, and Truststores. DataStax Enterprise 6.8. Retrieved from https://docs.datastax.com/en/dse/6.8/docs/securing/ssl-certificates-keystores-truststores.html
Gurman, M. (2023). Samsung Bans Generative AI Use by Staff After ChatGPT Data Leak. BNN Bloomberg. Retrieved from https://www.bnnbloomberg.ca/samsung-bans-generative-ai-use-by-staff-after-chatgpt-data-leak-1.1914646
KPMG. (2023). Insecure Code. Retrieved from https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2023/04/generative-ai-models-the-risks-and-potential-rewards-in-business.pdf
Koerner, K. (2023). Generative AI: Privacy and tech perspectives. IAPP. Retrieved from https://iapp.org/news/a/generative-ai-privacy-and-tech-perspectives
Lawton, G. (2023). What is Generative AI? Everything You Need to Know. TechTarget. Retrieved from https://www.techtarget.com/searchenterpriseai/definition/generative-AI
Lin, S. (2024). No Silver Bullet: Closing the Gender Gap in the Era of Generative AI. IBM Blog. Retrieved from https://www.ibm.com/blog/no-silver-bullet-closing-the-gender-gap-in-the-era-of-generative-ai/
Maguire, J. (2023). The Benefits of Generative AI. Retrieved from https://www.eweek.com/artificial-intelligence/benefits-of-generative-ai/
Microsoft Copilot. (2024, March 6). GenAI Use and Confidentiality in the Workplace. Microsoft Copilot Conversations.
Norris, C. (2023). Generative AI: What it is and why it matters. Collato. Retrieved from https://collato.com/blog/generative-ai/
Okunytė, P. (2024). Expecting privacy from ChatGPT is like asking the NSA to stop spying on citizens. CyberNews. Retrieved from https://cybernews.com/editorial/openai-chatgpt-memory-privacy-concerns/
Pernet, C. (2023). ChatGPT Security Concerns: Credentials on the Dark Web and More. TechRepublic. Retrieved from https://www.techrepublic.com/article/chatgpt-dark-web-security
TechRepublic. (2023). Data Security and Unauthorized Access. Retrieved from https://www.techrepublic.com/article/data-encryption-manage-security/
Zewe, A. (2023). Explained: Generative AI. MIT CSAIL. Retrieved from https://www.csail.mit.edu/news/explained-generative-ai