ENG1A_ Reading Quiz
School: De Anza College
Course: ENG 1A
Subject: Communications
Date: Jan 9, 2024
Question 1:
After reading the post “Brief Overview of ChatGPT,” written by Sabrina Ortiz, which is posted in our week 1 module, please write your response to this question: In your own words and in only one complete sentence, define “ChatGPT.”
ChatGPT is an AI-powered language model that draws on its training data to generate responses, and it is used to answer questions and perform tasks such as composing emails, essays, and code.
Question 2:
After watching the post “AI Talk on Youtube,” posted by the Center for Humane Technology, which is posted in our week 2 module, please write your response to this question: In only one complete paragraph, give three examples of how the speakers “discuss how existing A.I. capabilities already pose catastrophic risks to a functional society.”
In "The A.I. Dilemma," Aza Raskin and Tristan Harris highlight their concerns regarding the current incorporation of AI across various platforms, particularly ChatGPT. Both speakers stress the profound risks AI capabilities pose to society, with particular emphasis on the potential harm stemming from the integration of AI into social media. Regarding Snapchat's inclusion of ChatGPT, for example, both speakers express concerns about the platform's engagement with predominantly underage users, highlighting the risks of inappropriate advice and potentially harmful situations. Additionally, the speakers mention how
major companies are integrating ChatGPT without adequate regulation or safety measures, such
as Microsoft did with the Windows 11 taskbar, which could lead to harmful and manipulative
interactions. Finally, Raskin and Harris address the substantial gap between AI development and
safety research, emphasizing the potential for unintended and detrimental consequences due to
this disparity. They advocate for a more balanced and safety-oriented approach in AI
development to mitigate societal risks and ensure responsible deployment.
Works Cited:
“The A.I. Dilemma - March 9, 2023.” YouTube, 5 Apr. 2023, www.youtube.com/watch?v=xoVJKj8lcNQ. Accessed 27 Nov. 2023.
Question 3:
After listening to the post in the week 4 module called “Podcast on ChatGPT,” posted by the Ezra Klein Show, please write your response to this question: In only three sentences, do you think Ezra Klein is skeptical of ChatGPT? You can paraphrase what you think; you don’t need to use any direct quotes. Just write three sentences based on your general impressions after listening to the episode of this show.
I think Ezra Klein is skeptical of ChatGPT; he expresses his concerns about the challenges of having machines interpret human intent and values. Klein emphasizes the need for models to
understand language deeply in order to overcome the limitations of the current big data
paradigm. As the podcast progresses, it takes a cautionary tone and emphasizes transparency in
the use of artificial intelligence.
Quiz Question 3:
In one organized paragraph, identify one key value you hold, describe where you learned the
value, and explain how you act on this value in your life today.
Question 4:
After reading the post “Elon Musk on AI,” written by Ryan Browne, which is posted in our week 6 module, please write your response to this question: In one complete sentence, state what Elon Musk and other tech leaders did together on March 22, 2023.
On March 22, 2023, Elon Musk and other tech leaders signed an open letter from the Future of Life Institute urging a pause on the training of models more powerful than GPT-4, expressing concerns about the societal and ethical implications of increasingly human-competitive AI systems and the potential risks of misuse.
Question 5:
After reading the post “Equity and Racism in ChatGPT,” written by Billy Perrigo, which is posted in our week 7 module, please write your response to this question: In one complete sentence, state what Gebru believed was an essential part of making sure companies do not create harmful AI.
Gebru believed that establishing regulations defining unacceptable uses of AI, such as facial-recognition bias, and increasing legal protections for tech workers were essential to making sure companies do not create harmful AI.