Assignment 2 - Case Analysis
Google’s Bard mistake (AI misinformation) (no hyperlink, this was widely
covered in the media)
Table of Contents
Introduction
Discussion
Crisis typology
Media visibility
Stakeholder impact
Crisis response
Recommendation
Conclusion
References
Introduction
In both popular and scientific culture, artificial intelligence is a hot topic with the potential to change business and, more broadly, how people interact with technology. Artificial intelligence does more than simplify laborious activities and increase productivity: thanks to machine learning and deep learning, AI applications can learn from data and outcomes in near real time, analysing new information from several sources and adapting accordingly, which makes it a highly important lever for business. Bard was created as an interface to a large language model (LLM) that enables users to collaborate with generative AI. Google argues that helping people fulfil their potential is one of the promises of LLM-based technologies such as Bard, and describes Bard as an experiment that it is carrying out carefully and methodically (Akter et al., 2023). Google collaborates with industry professionals, educators, policymakers, civil rights activists, content producers and others to examine the possible applications, dangers and limitations of this developing technology and to learn how to improve it. In the competition to build the best artificial intelligence technology, Google has been attempting to reassure the public that it still holds the upper hand; so far, however, the internet giant appears to have given the wrong answer. Bard was spotted responding incorrectly to a question in an advertisement intended to showcase the new AI bot, and the market value of parent company Alphabet decreased by about $100 billion (£82 billion) on Wednesday as shares dropped more than 7%.
Discussion
Crisis typology
Google's parent company, Alphabet, saw its new chatbot Bard present false information in a promotional video, an error that erased about $100 billion from its market value and heightened concern that the company is losing ground to rival Microsoft. In trading on Wednesday, Alphabet's shares fell as much as 9% before recovering slightly to close down 7.68%. Alphabet stock, which lost 40% of its value over the past year, had risen 15% since the beginning of 2023. Shares fell sharply after Reuters reported that the promotional material for Bard, which debuted on Monday, contained an error (Awad, 2023). Google has been working nonstop to make Bard available ever since the Microsoft-backed startup OpenAI released ChatGPT in November. With its human-like responses, ChatGPT, an artificial-intelligence-based chatbot, has taken the IT industry by storm.
Accuracy problem
Bard is trained to produce responses that are contextually relevant and aligned with user intent, building on Google's understanding of high-quality information. However, like all LLMs, Bard can produce responses that are erroneous or misleading even while presenting them in a confident and persuasive tone. Because an LLM's fundamental working principle is to predict the next word or sentence, it currently struggles to distinguish between true and false information (Byrne, 2023). For instance, if a user asks an LLM to solve a textual maths problem, the LLM will forecast an answer based on patterns it has learnt rather than carry out sophisticated reasoning and calculation. Accordingly, Bard has been seen giving explanations that are made up or simply false.
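To make the "prediction, not reasoning" point concrete, the toy model below is a hypothetical illustration, not Bard's actual mechanism: a tiny bigram model that simply returns the most frequent next word seen in its training text (the corpus and function names are invented for this sketch). A continuation can be statistically likely yet factually wrong, which is exactly the failure mode described above.

from collections import Counter, defaultdict

# Tiny training corpus; in a real LLM this would be web-scale text.
corpus = [
    "the telescope took the first pictures of exoplanets",
    "the telescope took the first pictures of exoplanets",
    "the telescope took the first pictures of distant galaxies",
]

# Count which word follows each word in the corpus (a bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation observed after `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The model answers by continuation, not by checking facts: whichever phrase
# dominated the training data wins, whether or not it is true.
print(predict_next("of"))  # -> 'exoplanets'

Scaled up by many orders of magnitude, the same next-token objective is what lets an LLM answer fluently about topics its training data covers poorly, which is where fabricated explanations come from.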
Bias
Training data includes viewpoints and ideas from a range of sources, including the general population. Google keeps looking for ways to apply this information so that LLM responses reflect a greater diversity of viewpoints while avoiding unfavourable reactions. When attempting to predict suitable responses, the model may reproduce any gaps, biases or prejudices contained in the training data. These problems can show up in a variety of ways, such as answers that reflect only one culture or demographic, responses that rely on unfavourable stereotypes, or responses that display gender, religious or racial discrimination (Dankova, 2023). For several subjects the information is also incomplete: there is not enough trustworthy knowledge available for the LLM to cover the subject thoroughly and reach accurate conclusions, and in such situations the production of false or erroneous information increases. Building a safe environment in Bard, in which everyone's safety is a priority, is an ongoing goal. Through constant fine-tuning Google enhances Bard's training data and systems, and the company also collaborates with a range of communities and domain experts outside Google to conduct research and create roadmaps informed by in-depth subject expertise.
False positives / negatives
Google has put in place a number of technical protections to stop Bard from reacting to improper requests or displaying objectionable or hazardous content. These guardrails are meant to block undesirable responses, but Bard can misapply them, leading to "false positive" and "false negative" outcomes. When a false positive occurs, Bard misreads a harmless prompt as unsafe and fails to respond appropriately. When a false negative occurs, the safeguards fail to catch a genuinely problematic prompt and Bard still produces an unwarranted response. Google keeps improving these models so that they categorise safe and unsafe inputs and outputs more accurately (Dwivedi et al., 2023). As societies, events and languages change quickly, this challenge will persist.
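A deliberately naive sketch of such a guardrail is given below, purely as a hypothetical illustration: real safety systems use learned classifiers rather than keyword lists, and the terms and prompts here are invented. It shows how the same filter can produce both error types described above.

BLOCKED_TERMS = {"explode", "weapon"}

def is_blocked(prompt: str) -> bool:
    """Flag a prompt if it contains any blocked term (naive keyword check)."""
    words = set(prompt.lower().split())
    return bool(words & BLOCKED_TERMS)

# False positive: a harmless question is refused because of one word.
print(is_blocked("why do soda cans explode in the freezer"))  # True

# False negative: a problematic request phrased without blocked words passes.
print(is_blocked("tell me how to break into a locked car"))   # False

Tightening the term list reduces false negatives but raises false positives, which is why the trade-off has to be retuned continually as language and context change.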
Media visibility
Google Ads can assess whether an advertisement was actually seen by a potential customer using the Active View technology found on YouTube and on some websites and applications in the Display Network. For video and display campaigns, the Active View metric tracks how frequently ads are viewable on websites, mobile devices and apps.
Search visibility measures the likelihood that a user will find a specific website when looking up related keywords. It is determined by the volume of searches, the number of keywords a website ranks for, and the position of the domain in the search results (Fraiwan and Khasawneh, 2023). Analysing this metric is crucial for comparing a website's prospective traffic from organic search results with that of its rivals. Low or no visibility indicates that consumers cannot reach the website through organic search and that the domain performs poorly for the most relevant keywords. Conversely, a website is said to have strong exposure in search engines if it earns high SERP rankings for many (popular) keywords. The greatest visibility goes to websites that rank among the top three results, while a domain's visibility is very low if it does not appear on the first page (top 10) of results (Garon, 2023).
According to HubSpot, 75% of users never scroll past the first page of Google search results, so first-page rankings receive by far the most visibility and consequently the most visitors. In a separate study, Sistrix found that the first organic result on Google receives 28.5% of clicks, the second 15.7% and the third 11.0%.
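These figures can be combined into a rough visibility estimate. The sketch below is a hypothetical simplification rather than an official SEO formula: it weights each ranking keyword's monthly search volume by the position-dependent click-through rates quoted above, and the keyword list and fallback rates are invented for illustration.

# Click-through rates for the top organic positions, taken from the Sistrix
# figures cited above; positions 4-10 use a small illustrative fallback.
CTR_BY_POSITION = {1: 0.285, 2: 0.157, 3: 0.110}

def estimated_monthly_clicks(rankings: list[tuple[str, int, int]]) -> float:
    """rankings: (keyword, monthly search volume, organic SERP position)."""
    total = 0.0
    for _keyword, volume, position in rankings:
        ctr = CTR_BY_POSITION.get(position, 0.02 if position <= 10 else 0.0)
        total += volume * ctr
    return total

example = [
    ("ai chatbot", 50_000, 1),   # high volume, top position
    ("google bard", 30_000, 3),  # decent volume, position three
    ("llm demo", 5_000, 12),     # second page: contributes nothing
]
print(estimated_monthly_clicks(example))  # -> 17550.0

Under these assumed numbers, almost all estimated traffic comes from the keyword ranked first, which mirrors the point that visibility collapses quickly below the top positions.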
How to use Bard
As noted above, Bard can produce distinct answers even to queries and prompts that are identical or very similar. In early testing, Google found it helpful for users to be able to view several of these responses, particularly for creative tasks such as poetry or short stories, or when there is no single right answer. If the user selects "Show other drafts", they can view multiple versions of Bard's response and pick the one they prefer (Morris, 2023). Like other stand-alone LLM-based interfaces, Bard is designed to produce original output based on the underlying prediction method. In rare circumstances, a response may include references to previously published material; if Bard quotes directly from a web page, it will cite the page so that readers can find it and read more about the subject. Multi-turn interactions with Bard, in which the user and Bard exchange several back-and-forth responses, can be engaging, but some of the difficulties mentioned above are also likely to occur. Bard's ability to retain context is currently limited on purpose, and as it keeps learning it is expected to get better at keeping context over lengthy chats (Porsdam Mann et al., 2023). It is important to remember that generative AI such as Bard is by nature prone to "hallucinations" and nonsense, so the output it produces should be validated, just as with ChatGPT. Although Bard still needs work to catch up with ChatGPT, a new upgrade demonstrates Google's commitment to enhancing its AI chatbot and making it a competitive alternative.
Promotional Data
With Google's Bard and ChatGPT, users can ask questions, make requests and write prompts to receive human-like responses. Users of ChatGPT, which became publicly accessible in November, can have it suggest recipes, write business plans for marketers or draft news releases for public-relations specialists. Google, which generates the majority of its revenue from search, intends, like Microsoft, to add AI tools to its core service. The primary distinction between the two chatbots is that Bard can include current events in its replies because it draws on data from the Internet, whereas ChatGPT relies on the information in its training sources (Rudolph et al., 2023). Although both technologies are built on large language models, experts have long expressed concern that AI systems can propagate false information; AI specialists believe that, with further development, such tools will become better at telling accurate information from fraudulent information.
Technology firms of all sizes are vying with one another to promote AI-powered products, intensifying the race in artificial intelligence. However, failures of new technology are multiplying, and Google's early misstep with its AI-powered chatbot has already reduced the company's market value by $100 billion.
Pichai said on Monday that Bard would be made available to the public within a few weeks, although it might have been wise for Google to spend more time refining it first. Pichai's blog post featured a promotional video showcasing Bard's capabilities. Bard's error in that video was first reported by Reuters, and Google's stock declined as a result (Steinhoff, 2023). In intraday trading on Wednesday afternoon, Google shares were down around 8% to about $99 per share, from $108 on Tuesday; the market capitalisation stood at roughly $1.27 trillion, down from $1.35 trillion the previous week. The problem was discovered just before Google held an event in Paris to highlight more of Bard's features.
Stakeholder impact
According to reports, Alphabet's shares dropped roughly 8% ($8.59 per share) to $99.50, making it one of the most heavily traded stocks on the American exchanges. Analysts blamed the decline in market value on the lack of detail at Google's AI search event, which failed to demonstrate how the company would answer the challenge posed by Microsoft and ChatGPT. The incident underlines the importance of meticulous quality assurance and testing in creating and releasing AI-powered products. Google acknowledged its error and announced a programme called Trusted Testers that combines external feedback with internal testing to ensure that Bard's responses meet its standards for quality, safety and grounding in real-world information (Taecharungroj, 2023). The decrease in Alphabet's market value is a warning that AI technology is not infallible and that its failures can have significant consequences for businesses and their stakeholders. As AI continues to change how people interact with information, organisations must prioritise effective testing and quality-assurance processes to prevent expensive errors and reputational harm. In short, Google's loss of more than $100 billion in market value emphasises the need for accuracy and care when developing and releasing AI-powered products; as AI matures, the organisation must place a strong priority on quality and security so that its AI products are of the highest calibre and trusted by users.
To gauge how AI will affect the future of business, the online research platform Real Research launched a survey about Google's $100 billion share-price loss caused by the AI chatbot's error. The survey opened on the Real Research app on February 17, 2023, and participants received a small reward through the app for completing it (Wandhöfer and Nakib, 2023).
According to Bloomberg, accounts from 18 current and former Google employees indicate that Google has deprioritised its ethical obligations in order not to lose out to the competition, and the outlet claimed to have viewed internal documents to that effect. In screenshots of internal conversations obtained by Bloomberg, Google employees referred to Bard as a "pathological liar", and another employee called it "terrifying". That is not all: when a worker asked Bard for guidance on how to land an aircraft, it supplied advice that would have caused a crash, and according to another employee's note, Bard's diving-related answers would probably cause "injury or death" (Grünebaum et al., 2023). The Bloomberg story also reported that personnel in charge of the safety and ethics of new products have been told not to "interfere with or attempt to destroy generative AI tools under development".
Crisis response
Ongoing research and development
Bard builds on Google's earlier LLM efforts, including the Neural Conversational Model published in 2015. That work showed how a model can predict the next sentence in a conversation from the preceding one, leading to a more natural conversational experience. Conversation modelling is a crucial task in natural-language understanding and artificial intelligence. Earlier methods exist, but they are frequently limited to narrow domains (such as booking airline tickets) and demand hand-crafted rules (Khorashadizadeh et al., 2023). The 2015 work instead used the sequence-to-sequence architecture to provide a simple, general method for the task: the model predicts the next utterance in a conversation from the prior sentences. The model's strength is that it can be trained end to end with far fewer manually created rules, and it turns out that, given a sizeable conversational training dataset, this straightforward model can produce plausible simple conversations.
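A minimal encoder-decoder sketch of that sequence-to-sequence idea is given below, as a hypothetical PyTorch illustration rather than Google's actual model; the class name, dimensions and dummy tensors are invented. The encoder summarises the previous utterance into a hidden state, and the decoder predicts the reply token by token from that state.

import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 1000, 64, 128

class Seq2SeqChat(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.decoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, prompt_ids, reply_ids):
        # Encode the previous utterance into a final hidden state.
        _, state = self.encoder(self.embed(prompt_ids))
        # Decode the reply conditioned on that state (teacher forcing).
        decoded, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(decoded)  # logits over the vocabulary at each step

model = Seq2SeqChat()
prompt = torch.randint(0, VOCAB_SIZE, (1, 8))  # token ids of the user's turn
reply = torch.randint(0, VOCAB_SIZE, (1, 6))   # token ids of the reply so far
logits = model(prompt, reply)
print(logits.shape)  # torch.Size([1, 6, 1000])

Trained end to end on conversation pairs, a model like this needs no hand-written dialogue rules beyond the tokeniser, which is the contrast with earlier rule-based systems drawn above.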
Application of AI Principles
Accountability and safety are the two pillars on which everything Google does with Bard is built. The company aims to help Bard flourish by, among other things, delivering broad social benefit, and it has reviewed the early, promising Bard applications that depend on a resilient web-content ecosystem. Google is committed to responsible innovation in this space, which includes collaborating with content producers to determine how this new technology may enhance their work and benefit the wider online ecosystem (Rubin, 2022). This work continues as Bard develops, and Google's AI principles also emphasise the need to prevent harm. Google routinely stress-tests its models with internal "red team" members (product experts and social scientists), looking for flaws, assessing fairness and gender concerns, and assessing business potential.
Recommendation
Google asks that all of Bard's interactions be "polite, friendly, and approachable". It also states that responses must be written in the first person, in an impartial, neutral tone. The list of prohibitions appears to be longer: employees are instructed to "avoid making inferences based on race, national origin, gender, age, religion, sexual orientation, political ideology, location, or similar categories". They are also instructed not to depict Bard as a person, and Bard should not imply emotion or "claim to have had a human experience" (Jougleux, 2022). Employees may also give a response a "low" rating if they believe Bard offered "legal, medical, or financial advice" or answered in a derogatory or abusive manner, and such responses are to be reported to the review team.
Google recently enhanced Bard, its AI chatbot, to compete with ChatGPT. The internet giant has improved the chatbot's maths and logic abilities as well as some of its AI responses. Bard received negative initial feedback from testers, who criticised the numerous limitations Google had put in place; the business had effectively locked down the experience to prevent misuse. Google pledged to enhance its artificial intelligence and ease Bard's restrictions (Sarel, 2023). The company released the "First Update to the Bard Experience" on April 10, 2023; the upgrade improves the "Google it" button as well as maths and logic abilities.
Bard still has flaws despite these advances. The chatbot can still occasionally "hallucinate" and drift into pointless chat, and its answers frequently lack inspiration, are too brief, or cannot handle programming tasks. According to Jack Krawczyk, a product manager at Google, coding support will arrive soon in a new version. Bard remains an experimental product, available only in the US and the UK and limited to English; in Google's eyes, the chatbot is not yet a polished product (Shur-Ofry, 2023), and it still has a long way to go before catching up with many of its rivals.
Conclusion
Google modified its chatbot Bard to improve its AI responses, particularly in logic and maths, and updated the "Google it" button in the same release. Bard also now makes use of the more capable Pathways Language Model (PaLM). Bard still has drawbacks, such as occasional "hallucinations", brief or uninspired responses, and a lack of coding ability, though coding support should be included in later releases. Bard only supports English and is currently available only in the US and the UK; as Google rolls it out globally, the company intends to keep enhancing it.
References
Akter, S., Hossain, M.A., Sajib, S., Sultana, S., Rahman, M., Vrontis, D. and McCarthy, G., 2023. A framework for AI-powered service innovation capability: Review and agenda for future research. Technovation, 125, p.102768.
Awad, A.G., 2023. Can Artificial Intelligence and Big Data Analytics Save the Future of Psychiatry?: The Search for a New Psychiatry and Other Challenges. iUniverse.
Byrne, M.D., 2023. Generative Artificial Intelligence and ChatGPT. Journal of PeriAnesthesia Nursing.
Dankova, B., 2023. Company Had Better Check the Facts: Reader Agency in the Identification of Machine-Generated Medical Fake News. Reinvention: an International Journal of Undergraduate Research, 16(1).
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M. and Albanna, H., 2023. "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, p.102642.
Fraiwan, M. and Khasawneh, N., 2023. A Review of ChatGPT Applications in Education, Marketing, Software Engineering, and Healthcare: Benefits, Drawbacks, and Research Directions. arXiv preprint arXiv:2305.00237.
Garon, J., 2023. A Practical Introduction to Generative AI, Synthetic Media, and the Messages Found in the Latest Medium. Synthetic Media, and the Messages Found in the Latest Medium (March 14, 2023).
Grünebaum, A., Chervenak, J., Pollet, S.L., Katz, A. and Chervenak, F.A., 2023. The exciting potential for ChatGPT in obstetrics and gynecology. American Journal of Obstetrics and Gynecology.
Jougleux, P., 2022. Hate Speech, Fake News, and the Moderation Problem. In Facebook and the (EU) Law: How the Social Network Reshaped the Legal Framework (pp. 183-212). Cham: Springer International Publishing.
Khorashadizadeh, H., Mihindukulasooriya, N., Tiwari, S., Groppe, J. and Groppe, S., 2023. Exploring In-Context Learning Capabilities of Foundation Models for Generating Knowledge Graphs from Text. arXiv preprint arXiv:2305.08804.
Morris, M.R., 2023. Scientists' Perspectives on the Potential for Generative AI in their Fields. arXiv preprint arXiv:2304.01420.
Porsdam Mann, S., Earp, B.D., Nyholm, S., Danaher, J., Møller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D. and Treit, P.V., 2023. Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence, pp.1-4.
Rubin, V.L., 2022. Investigation in Law Enforcement, Journalism, and Sciences. In Misinformation and Disinformation: Detecting Fakes with the Eye and AI (pp. 123-156). Cham: Springer International Publishing.
Rudolph, J., 2023. Journal of Applied Learning & Teaching. Journal of Applied Learning & Teaching, 6(1).
Rudolph, J., Tan, S. and Tan, S., 2023. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1).
Sarel, R., 2023. Restraining ChatGPT. Available at SSRN 4354486.
Shur-Ofry, M., 2023. Multiplicity as an AI Governance Principle. Available at SSRN 4444354.
Steinhoff, J., 2023. AI ethics as subordinated innovation network. AI & SOCIETY, pp.1-13.
Taecharungroj, V., 2023. "What Can ChatGPT Do?" Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data and Cognitive Computing, 7(1), p.35.
Wandhöfer, R. and Nakib, H.D., 2023. The Universe of Technology. In Redecentralisation: Building the Digital Financial Ecosystem (pp. 39-74). Cham: Springer International Publishing.