Signature Assignment
Topic: The ethical dilemma of self-driving cars
West Coast University
PHIL 341
Signature Assignment
"The Ethical Dilemma of Self-Driving Cars" (Lin, 2015) is an interesting and thought-provoking TED-Ed video that challenges our ethics and reasoning. Patrick Lin, the speaker, asserts that for motor vehicles to be completely autonomous, that is, able to operate on the road without any driver, they must be able to possess or imitate human decision-making. While encoding road maps and traffic rules and regulations into a program is fairly manageable, not every choice a human driver makes can be coded in that way. A good example is the choice between hitting a car carrying a single occupant and hitting another car carrying more passengers, when one option would leave the person in the autonomous car safer than the other. Of course, the reasoning probably goes beyond that, but the point is that many considerations come into play, and ethics and morality, both of which involve human thought and perhaps feeling as well, play a significant role in driving. This is the hardest part to code into a fully independent self-driving car: a machine has no ethics of its own to begin with, and programming ethics into it is close to impossible, since one person's morals or ethics differ from another's. From this, Lin presents a syllogism with the following major premise: murder is the intentional and premeditated killing of another person. His minor premise is that programmers of self-driving cars must intentionally and with premeditation program the cars to kill another person in order to protect the life of the driver. The conclusion is that programmers of self-driving cars can be charged with murder if they program a car with the intent to kill another human being in order to protect the life of the driver (Lin, 2015).
Based on the previous syllogism, the conclusion can be restated as an "if-then" statement: "if murder is the intentional and premeditated killing of another person, and programmers intentionally program a car to kill another person, then the programmers can be charged with murder." This form of argument is based on deductive reasoning. Moore and Parker (2020) explain that a deductive argument is valid when its conclusion must be true if all of its premises are true, and that it is sound when it is both valid and its premises are in fact true. Because the conclusion here follows necessarily from the premises, the argument is valid; and since neither of the given premises appears to be false, the argument can also be considered sound.
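To make the form of the argument explicit, it can be sketched in predicate logic. The notation below is my own shorthand for illustration and does not appear in Lin's video or in Moore and Parker's text: let K(x) mean "x is an intentional and premeditated killing of another person," let M(x) mean "x is murder," and let a name the act of programming the car to kill another person in order to protect the driver.

\[
\forall x \,\bigl(K(x) \rightarrow M(x)\bigr), \qquad K(a), \qquad \therefore \; M(a)
\]

Any argument of this shape (a universal premise instantiated to a particular case, followed by modus ponens) is valid in virtue of its form alone; whether it is also sound depends on whether the two premises are actually true.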
Rhetorical appeals are the qualities that make an argument persuasive, and three of these appeals are logos, ethos, and pathos (Moore & Parker, 2020): logos is the appeal to logic or reason, ethos is the ethical appeal to an author's credibility or character, and pathos is the appeal to emotion. First, an example of logos in Lin's discussion is his distinction between a reaction and a decision, based on how a human-driven vehicle would react versus how a self-driving vehicle would decide in a given situation (Lin, 2015). Second, an example of ethos is when Lin notes that self-driving cars are predicted to reduce accidents (Lin, 2015), mainly because programmed equipment operates without human error. Finally, an example of pathos is when Lin presents the scenario of which driver or car to hit; it is noteworthy when he mentions hitting the biker, making it seem as though the audience is penalizing the biker for being a "safe driver" compared to the other options (Lin, 2015). Each of these appeals increases the persuasiveness of the argument Lin makes in the discussion. As previously mentioned, logos appeals to logical reasoning, meaning the argument makes sense to a person who thinks logically; ethos appeals to credibility or character, and in this case machines are free of human error, or at least can be programmed so that such error is removed, making them credible to perform actions that require no human interaction or intervention;
pathos appeals to emotion, and Lin's example leads the audience to consider not just the legal but also the moral aspects of the decision about which driver to hit.
Returning to the earlier example, Lin claims that if a programmer instructs the car to hit another driver, the situation would look like premeditated homicide (2015). In a way, the statement uses the rhetorical device of hyperbole, an overstatement or exaggeration used in persuasive arguments (Moore & Parker, 2020). He frames his argument around the most extreme or worst-case scenario rather than the alternative framing of minimizing harm under the circumstances, or choosing the lesser of two evils for lack of a better term. There is some truth to his claim, but framed this way it may also exemplify the false dilemma fallacy, in which only a limited number of options is presented even though more options or possibilities actually exist. I think the use of the rhetorical device was deliberate, because Lin's worst-case scenarios are real possibilities that a programmer and an actual driver could face equally. In terms of the fallacy, I believe it is not deliberate, because it is possible that the speaker simply could not account for every flaw in logic and every potential outcome of a specific programmable decision.
In terms of moral reasoning, I think the speaker's tone favors consequentialism, "the view that the consequences of a decision, deed, or policy determine its moral value" (Moore & Parker, 2020). In essence, if the consequences of an action are better than those of the alternatives, then that action is moral relative to the other actions. In the scenario where a choice has to be made about whom the autonomous vehicle must hit, the speaker's position is that of a consequentialist: he sees more value in the outcome or consequence of the actions taken than in how inherently "wrong" or "right" the action is. This is exemplified in the scenario where a particular decision would cause the least amount of damage for all parties involved (Lin, 2015).

After watching the video and thinking critically about the speaker's arguments, I find there is more to the idea of self-driving cars than just the act of programming cars to drive independently. This critical thinking course puts that into perspective: an argument should not be taken at face value, since there may be more aspects to consider. Applying the concepts learned in this class, I can see the importance of examining all possible sides of an argument and checking them for validity and logic. I know these skills will be useful not just in school but in life in general.
References
Lin, P. (2015, December). The ethical dilemma of self-driving cars [Video]. TED-Ed. https://www.youtube.com/watch?v=ixIoDYVfKA0
Moore, B. N., & Parker, R. (2012). Critical thinking (10th ed.). McGraw-Hill. https://ariadanesh.com/wp-content/uploads/2021/04/Critical-Thinking-_10-Edition-_-ariadanesh.com_.pdf