CISC280 – Project 11

For your final project, I have listed several scenarios below. Some of these questions might be a little tougher than the previous projects because I want you to think about how ethics really should be impacting technology in today's environment.

Scenario 1: Leslie is a cybersecurity consultant approached by a new startup, BioHack, which plans to develop a revolutionary but controversial new consumer product: a subdermal implant that will broadcast customers' personally identifying information within a 10-foot range, using strong encryption that can only be read and decrypted by intended receivers using special BioHack-designed mobile scanning devices. Users will be able to choose what kind of information they broadcast, but two primary applications will be developed and marketed initially: the first will broadcast credit card data, enabling the user to make purchases with the wave of a hand; the second will broadcast medical data that can notify emergency first responders of the user's allergies, medical conditions, and current medications. The proprietary techniques that BioHack has developed for this device are highly advanced and must be tightly secured in order for the company's future to be viable. However, BioHack's founders tell Leslie that they cannot presently afford to hire a dedicated in-house cybersecurity team, though they fully intend to put one in place before the product goes to market. They also tell Leslie that their security budget is limited due to the immense costs of product design and prototype testing, so they ask her to recommend FOSS (free and open-source software) solutions for their security apparatus and to seek other cost-saving measures for getting the most out of their security budget. They also tell her that they cannot afford her full consulting fee, so they offer instead to pay her a more modest fee, plus a considerable number of shares of their company stock.

1. What risks of ethically significant harm are involved in this case? Who could be harmed if Leslie makes poor choices in this situation, and how? What potential benefits to others should she consider in thinking about BioHack's proposal?

If the security work is underfunded, users' sensitive information could become accessible to anyone with even basic hacking knowledge. If bad actors were to get ahold of this information, people's identities could be stolen, leading to their financial ruin. I don't see any benefits to anyone but BioHack, a company more concerned with making money than with user safety and security. Their "intentions" to put a cybersecurity team in place don't matter; what matters is that they actually DO, which, given everything else they told Leslie, I seriously doubt will happen. If I were Leslie, I would give any benefits to BioHack absolutely zero consideration – especially since they are trying to convince me to offer my consulting services for less than I know my expertise is worth.

2. Beyond the specific harms noted in your answer to 1, what are some ethical concerns that Leslie should have about the proposed arrangement with BioHack? Are there any ethical 'red flags' she should notice?
Leslie should be concerned that if she agrees to work with BioHack, she will be complicit in the harm that will almost inevitably come to users of this product. She should also be concerned with this company's ethical egoism. A huge ethical red flag she should notice is that BioHack doesn't have the funds to properly pay for anything regarding this project – and doesn't care about cutting corners as long as it gets its product to market and gets people buying it. The founders don't care about the public welfare, about Leslie's welfare, or about anything other than launching this product and making money and a name for themselves.

3. What are three questions that Leslie should ask about the ethics of her involvement with BioHack before deciding whether to accept them as clients (and if so, on what terms)?

1. Is what BioHack is proposing legal?
2. Will this result in serious, considerable harm being brought to the public?
3. How would my involvement in this make me feel about myself?

4. Can you think of any specific conditions that Leslie should ask BioHack's founders to agree to before she can ethically accept this arrangement? What are they?

This company seems too shady for me to ever be comfortable accepting this arrangement. But if I had to name specific conditions, she should require the founders to put in writing that a dedicated in-house cybersecurity team will be in place before the product goes to market, and that the team will be fully familiar and comfortable with the product before it launches. She should also insist on receiving her consulting fee in full, because owning shares of this company seems far too risky.

Scenario 2: In the summer of 2017, it was revealed that Equifax, a massive credit reporting bureau managing the credit rating and personally identifying information of most credit-using Americans, had suffered a severe security breach affecting 143 million Americans. Among the data stolen in the breach were Social Security and credit card numbers, birthdates, addresses, and information related to credit disputes. The scale and severity of the breach was nearly unprecedented, and to make things worse, Equifax's conduct before and after the announcement of the breach came under severe criticism. For example, the website created by a PR consulting firm to handle consumer inquiries about the breach was itself riddled with security flaws, despite requesting that customers submit personally identifying information to check whether they were affected. The site also told consumers that by using the site to see if they were affected, they were waiving legal rights to sue Equifax for damages related to the breach. The site, which gave many users inconsistent and unclear information about their status in the breach, offered to sell consumers further credit protection services from Equifax, for a fee.

Soon it was learned that Equifax had known of the May 2017 breach for several months before disclosing it. Additionally, the vulnerability the attackers exploited had been discovered by Equifax's software supplier earlier that year; that company provided a patch to all of its
customers in March 2017. Thus, Equifax had been notified of the vulnerability, and given the opportunity to patch its systems, two months before the breach exposed 100 million Americans to identity theft and grievous financial harm. Later, security researchers investigating the general quality of Equifax's cybersecurity efforts discovered that on at least one of Equifax's systems in Argentina, an unsecured network was allowing logons with the eminently guessable 'admin/admin' combination of username and password, giving intruders ready access to sensitive data, including 14,000 unencrypted employee usernames, passwords, and national ID numbers. Following the massive breach, two high-ranking Equifax executives charged with information security immediately retired, and the Federal Trade Commission launched an investigation of Equifax for the breach. After learning that three other Equifax executives had sold almost two million dollars of their company stock before the public announcement of the breach, the Department of Justice opened an investigation into the possibility of insider trading related to the executives' prior knowledge of the breach.

1. What significant ethical harms are involved in the Equifax case, both in the short-term and the long-term? Who are some of the different stakeholders who may be harmed, and how?

Equifax knew about the vulnerability before the breach and knew about the breach itself months before disclosing it, yet took no action to prevent it or to notify affected customers in anything even remotely resembling a timely fashion. Hackers were able to obtain names, addresses, birthdates, telephone numbers, Social Security numbers, and credit card numbers. Recovering from identity theft is a long and arduous task (it took me three years to resolve mine a decade ago, and I'm going through it yet again with PA's unemployment insurance – most likely due to either this breach or Experian's). All the stakeholders would be harmed financially because Equifax's stock tanked as soon as the public found out about the breach, but CIO Jun Ying and software engineer Sudhakar Reddy Bonthu were hit particularly hard – both were found guilty of insider trading and sentenced. CFO John Gamble (who sold 13% of his shares), President of U.S. Information Solutions Joseph Loughran (who sold 9% of his shares), and President of Workforce Solutions Rodolfo Ploder (who sold 4% of his shares) have NOT been charged with insider trading, despite having sold their shares three days after the company learned of the hack. A particularly despicable ethical violation not mentioned in the scenario above is that Equifax quite obviously set up its own employees to take the fall: for example, Bonthu was recruited by Equifax to work on an internal project called Project Sparta for a "high-priority client." He was not told who this client was, but discovered on his own that the client was, in fact, his employer. He proceeded to buy put options in Equifax, which netted him a 3,500% return on his investment. Equifax had the audacity to investigate several employees for insider trading (while letting its top execs off scot-free), and Bonthu was fired after refusing to cooperate with the investigation.
2. What do you imagine might be some of the causes of Equifax's failure to adopt more stringent cybersecurity protections and a more effective incident response? Consider not just the actions of individuals but also the larger organizational structure, culture, and incentives.

The causes were sheer incompetence and laziness. A patch that had been available for months wasn't installed, and because of this, 143,000,000 people had their personal data exposed. Once Equifax did eventually inform the public of the breach, it made matters even worse by linking breach victims to a scam site via Twitter (https://www.forbes.com/sites/janetwburns/2017/09/21/equifax-was-linking-potential-breach-victims-on-twitter-to-a-scam-site/?sh=27669bdf288f). According to the US Senate Permanent Subcommittee on Investigations' report (https://www.hsgac.senate.gov/imo/media/doc/FINAL%20Equifax%20Report.pdf), Equifax had been aware of cybersecurity weaknesses for years. It states, "Prior to 2015, Equifax had no official corporate policy governing how to patch known cybersecurity vulnerabilities on company systems." An internal audit of Equifax's configuration and patch management processes revealed multiple issues, including (but not limited to) a backlog of over 8,500 vulnerabilities with overdue patches, the lack of a complete IT asset inventory, and a reactive rather than proactive patching process.

3. If you were hired to advise another major credit bureau on their information security, in light of the Equifax disaster, what are three questions you might first ask about your client's cybersecurity practices, and their ethical values in relation to cybersecurity?

1. Do you HAVE any cybersecurity practices that you actually implement in a timely manner?
2. Do you have an IT asset inventory that is kept up to date, and how (and by whom) is it managed?
3. What policies do you have in place to notify affected and potentially affected users of any breaches that may occur, and how long after a breach is discovered would those users be notified?

4. In what ways could an organizational culture of thinking about the ethics of cybersecurity, as described so far in this module, potentially have prevented the Equifax breach, or reduced its harmful impact?

It would potentially have resulted in security patches being installed when they needed to be, which might have prevented the breach entirely. It would also have resulted in encryption certificates being renewed when they should have been, rather than taking 10 months to do so.

Scenario 3: Security researchers often use conference platforms such as DefCon and RSA to announce newly discovered security tools or vulnerabilities; often these are controversial, and
invite careful ethical reflection on the harms and benefits of such disclosures, and the competing interests involved. Here are two examples to compare and consider from an ethical standpoint:

A. At DefCon 2016, security researcher Anthony Rose presented the results of his testing of the security of products in the emerging market for Bluetooth-enabled door locks. He found that of 16 brands of locks he purchased, 12 had profoundly deficient security, including open transmission of plain-text passwords, the ability to easily change admin passwords and physically lock out users, and vulnerability to replay attacks and spoofing. Some of the locks could be remotely opened by an attacker a half-mile away. Of the manufacturers Rose contacted, only one responded to his findings. Another shut down its website but continued to sell its product on Amazon.

B. At DefCon 2017, two members of Salesforce's "Red Team" of offensive security experts were scheduled to present (under their Twitter handles rather than their professional names) details of their newly developed security tool, Meatpistol. Meatpistol is an automated 'malware implant' tool designed to aid security red teams in creating malware they can use to attack their own systems, so that they might better learn their own systems' vulnerabilities and design more effective countermeasures. It functioned more or less as any malware tool does, able not only to generate code to infect systems but to steal data from them, except that it reduced the time needed to create new forms of such code from days to mere seconds. The two members of Salesforce's offensive security team planned to make Meatpistol's code public after the event, with the view that making Meatpistol an open-source tool would allow the community of security researchers to improve upon it further. As with any malware implant tool, however, making it open source would have inevitably invited other hackers to use it for malicious purposes. Just prior to the event, an executive at Salesforce instructed the team not to release Meatpistol's code, and shortly thereafter instructed them to cancel the previously approved presentation altogether. The team presented on Meatpistol at DefCon anyway, after which they were summarily fired by Salesforce. Meatpistol's code was not released.

1. Who are the different stakeholders whose interests Anthony Rose needed to consider in giving his DefCon presentation, and what potential harms/benefits to those various stakeholders did he need to consider and weigh?

He needed to take into consideration the interests of the manufacturers' stakeholders, Amazon's stakeholders, and DefCon's stakeholders. A potential harm those stakeholders could have faced is losing money because of stock values decreasing due to his announcement. A potential benefit is that they could sell their stock in the manufacturers' companies before losing too much money once the shares really started to tank.

2. Who are the different stakeholders whose interests the Salesforce red team needed to consider in giving their presentation, and what potential harms/benefits to those various stakeholders did they need to consider and weigh?
They needed to consider their own company's stakeholders' interests as well as DefCon's. I think the stock price could potentially have skyrocketed due to people being interested in Meatpistol and wanting the code for it for whatever reason, good or bad; Salesforce's top executives most likely would have made an absolute killing. However, technology like this is a double-edged sword: someone could potentially turn around and install the malware with ill intent on Salesforce's servers and wreak complete havoc on the company.

3. Do you think the 2016 Rose presentation was ethical, all things considered? Why or why not? What about the 2017 Meatpistol presentation (including its planned code release) – was it ethical? Was Salesforce right to try to stop it, and to block the code release?

Yes, I think Rose's presentation was ethical. He called out massive security flaws in 12 different products, saving countless people from robberies or worse. I do NOT think the Meatpistol presentation was ethical in the grand scheme of things. Now that people know tech like this exists, they're going to be begging for access to it. And the two employees who were fired may very well release the code to the public independently. They'd likely be in violation of a non-disclosure agreement, but that won't matter much to anyone who is in turn affected by someone using Meatpistol for nefarious purposes. Salesforce was right to try to stop the presentation and to block the code release.

4. How do you think the two presentations shaped the professional reputations of Rose and the Salesforce Red Team, respectively? For example, if you were looking to hire a security researcher for your red team, might you be more or less likely to hire Rose after his presentation? What about the two members of the Salesforce team? What ethical considerations would need to be weighed in your decision?

I think Rose's presentation shaped his professional reputation favorably (at least I would hope it did). I'd be much more likely to hire him because his ethics and mine align: I think the good of the people should always come before the profits of a company. There were most likely some people who lost their jobs because of Rose's presentation, but I would rather lose my job than know that a product my company produced was responsible for people losing their lives. I think the two Salesforce team members' professional reputations were tarnished by their presentation. I would not want them on my red team, because they were given direct orders to cancel the presentation but gave it anyway. I don't think their presentation benefited the public in any way; in fact, I feel it put the public in potential danger.

5. How might members of the general public look at each of these presentations, and judge the moral character of the researchers themselves, in a different light than would other members of their security research community?
The general public may judge the Meatpistol presentation more harshly than other members of the security research community would. The security research community may see some benefits to giving the presentation – for example, making people aware that this technology exists so they can keep an eye out for it should the code leak and be used maliciously. I think the general public would look favorably upon Rose's presentation and see him as a man of good moral character. He was looking out for the public good and wasn't afraid to make enemies of the manufacturers to keep the public safe from harm.