Surname1
Student's Name
Professor’s Name
Course Studied
Date
Ethical Case Analysis of
Current Events Topic
The current topic I have chosen is self-driving cars and the ethical dilemma of whether artificial intelligence (AI) should be programmed to shield the passengers or those outside of the car. This is a timely topic, as many companies, such as Tesla and Google, are currently developing self-driving cars. The ethical dilemma arises from the fact that, in an accident, the AI would have to choose between protecting the passengers and protecting pedestrians. There is no easy answer to this dilemma, as both groups of people are innocent and deserve to be protected (Chernikova et al., 11). However, the CEO of Yugo Automobiles has to decide whether or not to approve a software design in which the AI saves the passengers.
The potential solutions to this ethical dilemma range from programming the AI to protect the car's occupants regardless of who is injured in an accident, to programming the AI to protect the car's occupants only if they are not at fault for the accident (Hewitt et al., np). Finally, the CEO could leave the decision to the car's occupants by programming the AI to ask the passengers or driver for their preference in the event of an accident. Each of these solutions has its own advantages and disadvantages, and it is up to the CEO of Yugo Automobiles to decide which is best for the company.
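The three options above can be captured as distinct decision policies. The sketch below is purely illustrative: the type names, the `Crash` fields, and the idea of recording a passenger preference ahead of time are all invented for this essay, not drawn from any real vehicle software.

```python
from dataclasses import dataclass
from enum import Enum

class Policy(Enum):
    """The three candidate programming choices discussed above."""
    ALWAYS_PROTECT_OCCUPANTS = 1
    PROTECT_OCCUPANTS_IF_NOT_AT_FAULT = 2
    ASK_OCCUPANTS = 3

@dataclass
class Crash:
    occupants_at_fault: bool
    occupant_preference: str  # "occupants" or "bystanders", chosen before the trip

def choose_priority(policy: Policy, crash: Crash) -> str:
    """Return which group the AI prioritizes in an unavoidable collision."""
    if policy is Policy.ALWAYS_PROTECT_OCCUPANTS:
        return "occupants"
    if policy is Policy.PROTECT_OCCUPANTS_IF_NOT_AT_FAULT:
        return "occupants" if not crash.occupants_at_fault else "bystanders"
    # ASK_OCCUPANTS: defer to a preference the occupants recorded in advance,
    # since no one can be consulted in the fraction of a second before impact.
    return crash.occupant_preference
```

Even this toy version shows where the difficulty lies: the second policy requires the software to determine fault, a judgment that is simple to write as a boolean here but very hard to compute reliably in a real crash.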
The ethical issues facing the CEO in this situation are complex. On the one hand, the CEO is responsible for protecting the company's customers. On the other hand, the CEO is also responsible for ensuring that the AI does not cause harm to innocent bystanders. The decision the CEO should make in this situation is to authorize AI software that saves the passengers and driver while still avoiding harm to bystanders wherever possible. This will ensure that the company's customers are protected and that the AI does not needlessly endanger innocent bystanders. In turn, the sense of security the AI offers both parties will help the system win public favor and drive up future sales of the car.
The ethical dilemma faced by the CEO of Yugo Automobiles is whether to authorize a software design in which the AI saves the passengers. If the AI is set to protect the passengers at all costs, then the car could be involved in an accident in which the occupants are not at fault. On the other hand, if the AI is not set to save the passengers above all else, accidents may be less likely, as the AI would make decisions based on the safety of everyone, not just the occupants of the car. Furthermore, if the AI is programmed to save the passengers at all costs, the passengers are more likely to be safe in the event of an accident. However, the AI may not be able to protect the occupants effectively, leading to injuries or even deaths. Additionally, the AI may protect the occupants at the expense of others, such as pedestrians or cyclists (Dixon et al., 282). The AI may also become overly aggressive in protecting the occupants, causing accidents that could have been avoided. The CEO must weigh all of these factors to make the best decision for the company. He must decide whether it is more important to protect the passengers of the car or those outside of it, and he must consider the potential consequences of each decision, such as the safety of the occupants, the satisfaction of the customers, and the reputation of the company.
The question of whether a self-driving car's AI should be programmed to shield the passengers or those outside the car is an important one for the computing field. Self-driving cars are a rapidly growing technology and are likely to become more prevalent in the near future, so it is important to consider their ethical implications. The dilemma faced by the CEO of Yugo Automobiles matters because it affects the safety of both the car's occupants and those outside the car: the decision the CEO makes will have a direct impact on passengers and pedestrians alike. The potential consequences of the decision, such as the safety of the occupants, the satisfaction of the customers, and the reputation of the company, must also be weighed. The computing field must take these ethical considerations into account when designing and developing self-driving cars. The AI should be programmed to prioritize the safety of both the passengers and those outside the car, to make decisions in everyone's best interest rather than only the passengers', and to make decisions that sustain public trust.
The ethical issues that arise in this scenario are whether the AI should be programmed to
protect the passengers at all costs or to protect everyone involved in an accident, whether the AI
should prioritize the safety of the occupants over their comfort or convenience, and what the
impact will be on public trust if the AI is programmed to protect the occupants at the expense of
innocent bystanders. These ethical issues must be carefully considered and weighed before the
CEO of Yugo Automobiles makes a decision.
Analysis Sections:
The moral agents in this scenario are the CEO of Yugo Automobiles and the artificial intelligence (AI) system being used in the self-driving cars. The CEO is responsible for making
the decision about how the AI should be programmed, while the AI is responsible for making the
decisions that will be implemented in the car. Both agents have a responsibility to ensure the
safety of the passengers and those outside the car. The CEO has the responsibility to make sure
that the AI is programmed in a way that will protect the passengers as well as any bystanders.
The AI has the responsibility to make decisions that will prioritize the safety of the occupants
and avoid injuring or killing innocent bystanders.
The values at stake are primary, secondary, and tertiary. The primary value at stake in this
ethical dilemma is the safety of the passengers and the innocent bystanders. The passengers of
the car are the customers of the company and the company has a responsibility to ensure their
safety. At the same time, the innocent bystanders should also be protected from harm. The
decision of the CEO of Yugo Automobiles will determine which group of people will be
protected in the event of an accident. The secondary value at stake is the reputation of the
company. If the AI is programmed to protect the passengers at the expense of innocent
bystanders, it could lead to a decrease in public trust in automated vehicles. This could lead to a
decrease in sales, as people may be less likely to buy self-driving cars if they perceive them as
unsafe. The tertiary value at stake is the cost of the car. If the AI is programmed to prioritize the
safety of the occupants over their comfort or convenience, it could lead to a higher cost of the
car. This could be an issue for some buyers, as they may be less likely to purchase a car that is
more expensive due to safety features.
The stakeholders in this situation are the passengers and driver of the car, the pedestrians
and other vehicles outside the car, the company developing the AI, and the CEO of the company.
The passengers and driver are the primary stakeholders, as they are the ones that the AI is being
programmed to protect. The pedestrians and other vehicles outside the car are also important
stakeholders, as their safety is also at risk if the AI is programmed to protect the passengers and
driver at all costs. The company developing the AI and the CEO of the company are also
stakeholders, as they are the ones making the decision about how the AI should be programmed.
All of these stakeholders have a vested interest in the outcome of the decision, and their opinions
should be taken into
consideration.
A possible course of action is to program the AI to protect the car's occupants regardless
of who is injured in an accident. This solution would ensure that the car's occupants are always
protected, but it could potentially result in innocent bystanders being injured or killed. The
primary consequence of programming the AI to protect the car's occupants regardless of who is
injured is that it could result in innocent bystanders being injured or killed. This could lead to
lawsuits against the company, as well as a decrease in public trust in self-driving cars. It could
also lead to increased regulations governing the use of self-driving cars, which could make them
less attractive to potential customers.
Another possible course of action is to program the AI to protect the car's occupants only if they are not at fault for the accident. This solution would be fairer to both groups of people, as it would ensure that the car's occupants are protected unless they are responsible for the accident, in which case the AI would protect the innocent bystanders. However, it would be more difficult to program the AI to make this distinction. The primary consequence of
programming the AI to protect the car's occupants only if they are not at fault is that it could be
difficult to program the AI to make the distinction between fault and innocence. This could lead
to mistakes being made, which could result in innocent bystanders being injured or killed.
Additionally, this solution could lead to increased costs for the company, as it would require
more sophisticated AI programming.
A consequence of programming the AI to protect the car's occupants regardless of who is
injured in an accident is that it could lead to increased regulations governing the use of self-
driving cars. This could have a negative impact on the company, as it could lead to decreased
sales and reduced profits. Additionally, this could lead to fewer people buying self-driving cars,
as the public may feel that the cars are not safe to be on the road if they are programmed to put
the occupants of the car above all else.
A consequence of programming the AI to protect the car's occupants only if they are not
at fault is that it could lead to mistakes being made. This could result in innocent bystanders
being injured or killed, which could lead to lawsuits against the company and a decrease in
public trust in self-driving cars. Additionally, this could lead to increased costs for the company,
as it would require more sophisticated AI programming.
Kantianism:
Kantianism is an ethical theory based on the principle of universalizability: an action is ethical only if the maxim behind it can be universally applied. In the case
of the self-driving car dilemma, this means that the CEO should make a decision that is ethical
for all parties involved, not just the passengers of the car. In this case, the best course of action
would be to program the AI to protect the passengers and innocent bystanders alike. This would
ensure that all parties involved would be protected, and that the decision could be universally
applied.
Act Utilitarianism:
Act utilitarianism is an ethical theory that states that an action is ethical if it leads to the
greatest good for the greatest number of people. In the case of the self-driving car dilemma, this
means that the CEO should make a decision that will lead to the greatest good for the most
people. In this case, the best course of action would be to program the AI to protect the
passengers and innocent bystanders alike. This would ensure that all parties involved would be
protected, and that the decision would be beneficial for the most people.
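The act-utilitarian reasoning above can be made concrete as a toy expected-harm calculation, in which the AI compares possible maneuvers and picks the one with the lowest expected total harm. The maneuver names, probabilities, and harm counts below are invented for illustration; a real system would have to estimate such quantities under great uncertainty.

```python
def expected_harm(outcomes):
    """outcomes: list of (probability, people_harmed) pairs for one maneuver."""
    return sum(p * n for p, n in outcomes)

# Hypothetical unavoidable-collision scenario with two options.
maneuvers = {
    "swerve_toward_wall": [(0.7, 2), (0.3, 0)],  # risks the 2 occupants
    "brake_straight":     [(0.5, 3), (0.5, 0)],  # risks 3 pedestrians
}

# Act utilitarianism: choose whichever action minimizes expected harm,
# counting occupants and bystanders equally.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
```

With these made-up numbers, swerving has an expected harm of 1.4 people versus 1.5 for braking, so the calculus selects the swerve, even though it endangers the occupants. This illustrates the essay's point that a pure greatest-good rule does not automatically favor the passengers.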
Rule Utilitarianism:
Rule utilitarianism is
an ethical theory that states that an action is ethical if it follows a
rule that leads to the greatest good for the greatest number of people. In the case of the self-
driving car dilemma, this means that the CEO should make a decision that follows a rule that will
lead to the greatest good for the most people. In this case, the best course of action would be to
program the AI to protect the passengers and innocent bystanders alike. This would ensure that
all parties involved would be protected, and that the decision follows a rule that is beneficial for
the most people.
Social Contract Theory:
Social contract theory is an ethical theory that states that an action is ethical if it follows a
rule that is agreed upon by all parties involved. In the case of the self-driving car dilemma, this
means that the CEO should make a decision that follows a rule that is agreeable to all parties
involved. In this case, the best course of action would be to program the AI to protect the
passengers and innocent bystanders alike. This would ensure that all parties involved would be
protected, and that the decision follows a rule that is agreeable to all parties involved.
Virtue Theory:
Virtue theory is an ethical theory that states that an action
is ethical if it aligns with the
virtues of justice, benevolence, and prudence. In the case of the self-driving car dilemma, this
means that the CEO should make a decision that is just, benevolent, and prudent. In this case, the
best course of action would be to program the AI to protect the passengers and innocent
bystanders alike. This would ensure that all parties involved would be protected, and that the
decision is just, benevolent, and prudent.
ACM Code of Ethics
Section 1 of the ACM Code of Ethics sets out seven General Ethical Principles. Three of the principles that apply to this situation are 1.1, 1.2, and 1.3.
Principle 1.1 states that computing professionals should "contribute to society and to human well-being, acknowledging that all people are stakeholders in computing." In this situation, the CEO of Yugo Automobiles needs to consider the ethical implications of automating the AI to protect the passengers of the car. The CEO needs to consider the safety of the occupants and the bystanders, as well as the company's reputation and the public's trust in self-driving cars.
Principle 1.2 states that computing professionals should "avoid harm." In this situation, the CEO needs to strive to create a system that is both safe and effective, weighing the safety of the passengers and the bystanders alongside the effectiveness of the system.
Principle 1.3 states that computing professionals should "be honest and trustworthy." In this situation, the CEO needs to be honest and trustworthy in his decision-making process. He needs to consider both the safety of the passengers and the bystanders, as well as the company's reputation and the public's trust in self-driving cars.
Software Engineering Code of Ethics
The Software Engineering Code of Ethics contains eight principles. Three of the clauses that apply to this situation are 1, 4, and 5.
Clause 1 states that “Software engineers shall commit themselves to making the analysis,
specification, design, development, testing and maintenance of software a beneficial and
respected profession.” In this situation, the CEO needs to ensure that the design and development
of the self-driving car is beneficial to both the passengers and the bystanders. He needs to
consider the safety of both groups of people, as well as the public's trust in self-driving cars.
Clause 4 states that “Software engineers shall not knowingly release software that is
defective either in behavior or structure.” In this situation, the CEO needs to ensure that the AI is
not defective in its behavior or structure. The AI needs to be programmed to make decisions that
consider the safety of both the passengers and the bystanders.
Clause 5 states that “Software engineers shall maintain and
improve their knowledge and
skills needed for the successful practice of their profession.” In this situation, the CEO needs to
ensure that the AI is programmed with the most up-to-date knowledge and skills. The AI needs to
be able to make decisions that consider the safety of
both the passengers and the bystanders.
I would recommend programming the AI to protect the car's occupants only if they are
not at fault for the accident. This solution would be fairer to both groups of people, as it would
ensure that the car's occupants are protected unless they are responsible for the accident, in which
case the AI would then protect the innocent bystanders. However, it would be more difficult to program the AI to make this distinction. There are several reasons why this solution is
the best option. First, it ensures that the occupants are only protected if they are not to blame for
the accident, which is more fair to both groups of people. Second, it reduces the risk of lawsuits
against the company, as the AI would be programmed to make decisions based on fault. Third, it
reduces the risk of public mistrust in self-driving cars, as it would be more difficult to blame the
AI for mistakes. Finally, it could lead to fewer accidents, as the AI would be better equipped to
make decisions based on fault.
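The recommended fault-conditioned policy can be sketched as follows. Because fault can only be estimated, never known with certainty at the moment of a crash, this hypothetical version takes a fault probability and falls back to minimizing total harm when the estimate is inconclusive; the threshold value and function name are assumptions made for this illustration.

```python
def priority(fault_probability: float, threshold: float = 0.8) -> str:
    """Prioritize occupants only when they are confidently NOT at fault.

    fault_probability: estimated probability the occupants caused the crash.
    When the estimate is not confident in either direction, default to
    minimizing harm across everyone rather than guessing.
    """
    if fault_probability < 1 - threshold:   # confidently not at fault
        return "occupants"
    if fault_probability > threshold:       # confidently at fault
        return "bystanders"
    return "minimize_total_harm"            # uncertain: protect everyone
```

The uncertain middle band is the crux of the objection raised earlier: the harder it is to establish fault, the more often the policy collapses into a generic harm-minimization rule, which is why this option demands more sophisticated (and more expensive) AI programming.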
Work Cited
Chernikova, Alesia, et al. "Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction." 2019 IEEE Security and Privacy Workshops (SPW), IEEE, 2019, pp. 10-21.
Dixon, Graham, et al. "What Drives Support for Self-Driving Car Technology in the United States?" Journal of Risk Research, vol. 23, no. 3, 2020, pp. 275-287.
Hewitt, Charlie, et al. "Assessing Public Perception of Self-Driving Cars: The Autonomous Vehicle Acceptance Model." Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019.