Script: ChatGPT
School: University of Notre Dame
Course: FYS
Subject: Information Systems
Date: Dec 6, 2023
Uploaded by: TrentFisher
I will be presenting on the issue of privacy and security in ChatGPT. To examine privacy and security in ChatGPT, I used my journal. I use my journal for creativity, self-expression, problem-solving, decision-making, self-discovery, and goal setting. My journal is private, and I do not want people reading it because it is confidential.
Before I discuss privacy and security in ChatGPT specifically, I would like to connect the concept to the last two books. In The Age of Genomes: Tales from the Front Lines of Genetic Medicine, Lipkin discusses the importance of genetic data, where a person's information can be misused and exploited. The author presents the use of genetic data as posing new challenges, like unauthorized access and hacking by a third party. Rules and regulations are in place to ensure the security of genetic data, but there is always the fear of genetic data being used for nefarious purposes, like crime or personal gain.
In the second book, The Ethical Algorithm: The Science of Socially Aware Algorithm Design, the authors discuss user privacy and data security as an important part of designing an algorithm. They argue that algorithms must only collect necessary data and minimize the collection of unnecessary or sensitive data. They discuss specific methods that can be implemented to prevent unauthorized access to or theft of user data, and explain that privacy and security should be considered throughout an algorithm's entire lifecycle.
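One of the best-known methods the authors describe is differential privacy. As a minimal illustration (my own sketch, not the authors' code), the classic randomized-response technique lets an algorithm estimate a population statistic without ever storing any individual's true answer:

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise flip a
    fair coin. No single report reveals an individual's true answer."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_rate(responses, p_truth: float = 0.75) -> float:
    """Invert the noise: E[reported] = p_truth * rate + (1 - p_truth) * 0.5."""
    reported = sum(responses) / len(responses)
    return (reported - (1 - p_truth) * 0.5) / p_truth

# Example: 10,000 people, 30% of whom would truthfully answer "yes".
random.seed(0)
truths = [random.random() < 0.30 for _ in range(10_000)]
reports = [randomized_response(t) for t in truths]
print(round(estimate_rate(reports), 2))  # close to 0.30
```

The aggregate rate is recoverable, but any one stored response carries plausible deniability, which is exactly the kind of design-stage privacy protection the authors advocate.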
This brings me to my topic, privacy and security in ChatGPT, which fits the theme of innovation, as it is one of the most advanced language models currently available. ChatGPT, when you break it down, is an algorithm that analyzes natural language patterns to generate human-like text in response to user input. This raises the issue of user input. People typing in ChatGPT are under the impression that it is confidential, because it is like my journal: you are not talking to a physical person, and you are using ChatGPT, like my journal, for self-expression, problem-solving, decision-making, and a lot more. OpenAI's statement on this matter is that it only saves searches to train and improve its algorithm, but this raises the theme of accountability: this is the story OpenAI is deciding to tell to avoid the blame of collecting sensitive user data.
Even if OpenAI is telling the truth, Rory Mir, associate director for a privacy-rights nonprofit group, warns that at some point the data they are holding onto may change hands to another company you don't trust that much, or end up in the hands of a government you don't trust that much.
This brings me to the privacy and security aspect. As I mentioned, we view things like web searches and ChatGPT as if they were journals: private and secure. To Jeffrey Chester, executive director of the Center for Digital Democracy, a digital rights advocacy group, consumers should view these tools with suspicion at least, since, like so many other popular technologies, they are influenced by the forces of advertising and marketing. He is basically saying we should avoid treating ChatGPT and other language models like journals. This connects to eudaemonism: who determines what is right and wrong when discussing privacy and security? Is it OpenAI? Is it the government? Is it individual people? We must pursue the good and the general welfare.
Companies, especially banks, have already banned ChatGPT. It's not just an anti-technology stance, however: the banks simply won't allow the use of third-party software without a thorough vetting, which makes sense when you consider that their entire industry lives and dies on keeping its clients' money secure. Samsung, the consumer electronics giant, just banned the use of chatbots by all its staff. This puts an end to staff access to ChatGPT, Bard, and Bing after sensitive corporate secrets were accidentally leaked by employees on chatbots.
Looking specifically at ChatGPT and AI chatbots, as I mentioned, they do save chat history for training, which is used to improve the model. This raises privacy and security concerns when I ask ChatGPT about a specific medical condition I have. Models give you the option of opting out of chat data collection or deleting your chat history. However, we do not know anything for sure; this could just be a narrative that OpenAI is using to access user data. Also, ChatGPT does not make it known when you sign up that it has this feature and collects user data.
Beyond ChatGPT, what are companies using your chats for? For ChatGPT and Google's Bard, we are told they use the questions and responses to train the AI models to provide better answers. However, chat logs could also be used for advertising. For example, if I search for cancer treatment and symptoms in ChatGPT, I might see more cancer ads when I search the internet.
Additionally, there is actual evidence of this. WebMD and Drugs.com, both popular online sources of medical information that provide users with resources to research various health topics, medications, and treatment options, shared users' sensitive health concerns, such as depression and HIV, with advertisers. These advertisers include data brokers who sell lists of people with health concerns for targeted advertising. Chronically ill people reported seeing more targeted ads. This raises a course question: how do you respect human dignity? Clearly, showing advertisements to chronically ill individuals, reminding them of their illness, would not be an example of respecting human dignity.
It's a reasonable expectation that ChatGPT, Bing, and Bard are all aligned to make money and generate revenue from knowing your personal information. For example, when it comes to asking a chatbot a personal question about your specific interests, that information could be sold to advertisers on various platforms like Facebook, Instagram, and Google. History tells us where all this is heading: searches and browsing habits for medical information have historically been sold to advertisers. If companies are willing to sell that information about you, then it's safe to assume that other ad-based networks could make money by selling your search history, no matter how invasive that could be to your privacy. OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request that it be deleted.
In addition, human reviewers do step in to audit chatbots' responses. Google even saves some conversations for review and annotation for up to four years. As I mentioned, companies are at risk of being hacked or of sharing data with untrustworthy business partners, and this risk goes up when data is being stored for such long periods. ChatGPT can store its chats for as long as OpenAI needs them to provide its services to you, or for other legitimate business purposes, which could mean indefinitely. And if that is not enough, this raises the question of the human reviewers themselves: they are witnessing chats and could tell others.
This brings me to the question of whether you can trust ChatGPT with your health and sensitive personal information. ChatGPT does a better job than a search engine at avoiding junk information. Perhaps we should treat it like a baby, as said by Tinglong Dai, a professor at Johns Hopkins University who studies AI's effects on health care: "The technology is already very impressive, but right now it's like a baby, or maybe like a teenager"... "Right now people are just testing it, but when they start relying on it, that's when it becomes really dangerous." Chat logs used for targeted advertising connect to the theme of tradition: targeted advertising has been handed down and developed over time, and perhaps we must modify it to include ethical concerns. As you can see on the right, we can see with whom a patient shares their medical information outside of their personal life. With the introduction of ChatGPT, chatbots are added to that equation.
In terms of its intended purpose, ChatGPT appears to be fulfilling its role effectively by providing information and support to users. However, its use of personal health data raises concerns about privacy and security. While the technology can empower individuals to diagnose themselves and take control of their health, it may also lead to misdiagnosis and the potential for personal data to be leaked or misused. Regarding human freedom, the use of ChatGPT can both increase and limit it. On one hand, it enables individuals to access information and resources that may not be readily available otherwise, allowing for greater autonomy and control over their health. On the other hand, the potential for misdiagnosis and data breaches can limit an individual's freedom and agency. To be specific, there is a lack of accountability and transparency: ChatGPT is a machine-learning-based technology that works by analyzing large amounts of data. However, it is not always clear how it arrives at its recommendations or diagnoses, which can lead to a lack of accountability and transparency in the decision-making process. From a meso-ethical perspective, businesses and organizations that utilize ChatGPT have a responsibility to treat their clients with respect and protect their data. This means implementing strong privacy policies, ensuring secure data storage and transmission, and being transparent about how personal information is being used. By doing so, they can build trust with their clients and promote ethical practices in the development and use of technology.
A new Washington state law represents an important step in protecting the privacy and autonomy of individuals using health technologies such as ChatGPT. By requiring companies to obtain explicit and informed consent from users before collecting, sharing, or selling their health data, the law helps ensure that individuals have greater control over how their personal information is used. This aligns with the ethical principles of autonomy and respect for persons, which emphasize the importance of respecting individuals' right to make informed decisions about their own health and well-being. By prioritizing these ethical principles, policymakers and technology developers can work to build a more equitable and just healthcare system that supports the journey of individuals as they seek to improve their health and well-being.