LabSim Chapter 7

Nova Southeastern University · Course 615 · Information Systems
Apr 3, 2024 · docx · 91 pages · Uploaded by gioroa20
7.1.1 Vulnerability Management

Vulnerability Management 00:00-00:21

A vulnerability is a weakness that could be triggered accidentally or exploited intentionally to cause a security breach. Efficient vulnerability management plays a vital role in securing your organization's network. In this video, we'll discuss steps that can be taken to manage vulnerabilities effectively.

Identification Methods 00:21-01:58

Let's start by looking at identification. Various sources can be used to identify vulnerabilities. Vulnerability scans are automated tools that scan your system for known vulnerabilities. These scans comprehensively analyze your system's current security status, flagging any areas that require immediate attention or improvement. Threat feeds provide real-time updates on cyber threats across the world. These feeds can provide information such as suspicious domains, known malware, known malicious IP addresses, and more. Security defenders can use this information to stay on top of the latest threats that might impact them. Open-source intelligence gathers data from publicly available sources, while proprietary third-party feeds offer unique intelligence from private companies. Information-sharing organizations offer sector-specific threat intelligence. Dark web intelligence can also provide insights into underground criminal activities, including the sale of exploit code and stolen data. Penetration testing, often called "pen testing," is a simulated attack used to test your defenses. These tests can help detect security loopholes before malicious actors find them. A responsible disclosure program encourages external individuals to report vulnerabilities. Bug bounty programs take responsible disclosure one step further by offering recognition or compensation to individuals who report vulnerabilities.
Lastly, system and process audits help to identify deviations from established policies. These audits can help to highlight non-compliance issues and pinpoint inefficiencies and opportunities for improvement in current processes.

Threat Hunting 01:58-03:00

Next is threat hunting. This involves proactively searching through networks to detect and isolate advanced threats that evade security solutions. It's about being proactive instead of reactive: staying one step ahead instead of waiting for an attack to happen. Threat hunting involves using automated and manual techniques to detect cyber threats. It begins with a hypothesis about a potential threat, which is then investigated using data analysis, system logs, and security tools. The goal is to detect hidden threats, understand the extent of any breaches, and identify ways to prevent further attacks. This proactive approach requires an in-depth understanding of an organization's network, including all internal and external connections, system configurations, and data flows. It also requires an understanding of current threat trends and tactics. Automated tools can scan and monitor network traffic, but the human element is crucial in threat hunting. Cybersecurity professionals use their experience and intuition to interpret the data, identify anomalies, and track potential threats.

Threat Analysis 03:00-04:29

Once vulnerabilities have been identified, they must be analyzed to assess their potential impact. Vulnerability analysis helps prioritize remediation efforts by identifying the critical vulnerabilities that pose the most significant risk to an organization. Prioritization is typically based on factors such as a vulnerability's severity, the ease of exploitation, and the potential impact of an attack. Prioritizing vulnerabilities helps an organization focus limited resources on addressing the most significant threats first. A standard ranking system is the Common Vulnerability Scoring System, or CVSS. CVSS determines vulnerability risk based on three metric groups: base, temporal, and environmental. Base metrics describe a vulnerability's intrinsic characteristics, temporal metrics describe attributes that change over time, and environmental metrics describe characteristics that depend on a specific environment or implementation. Scan results often list vulnerabilities by their CVE code. CVE stands for Common Vulnerabilities and Exposures. This is a free and publicly available list of standardized identifiers for known software vulnerabilities and exposures. There are currently 94 CVE numbering authorities from 16 countries, which provides a baseline for evaluation. CVE also provides standardization, which allows data exchange for cybersecurity automation and aids professionals as they determine the best assessment tools for themselves.

Threat Response and Remediation 04:29-05:36

Depending on the vulnerability analysis results, appropriate steps should be taken to mitigate the identified risks. Some standard remediation practices include patching, cybersecurity insurance, segmentation, and compensating controls.
Patching involves updating software or systems to fix security vulnerabilities. The process includes installing 'patches' or updates issued by software vendors to fix identified security loopholes. Cybersecurity insurance is a form of coverage that offers a financial safety net in case of cyber-attacks or data breaches. It can cover costs associated with recovery, including incident response, data recovery, legal liability, and customer notification. Segmentation refers to dividing a network into smaller parts, or segments. By doing this, organizations can isolate parts of their network, limiting an attacker's ability to move laterally within the system. Compensating controls are alternative security measures implemented when a standard control isn't feasible. These controls help achieve the same security objective and reduce the risk to an acceptable level if the primary control cannot be implemented.

Summary 05:36-05:53

That's it for this lesson. In this lesson, we discussed vulnerability management. We talked about various methods that can be used for identifying vulnerabilities. We discussed how threat hunting, threat analysis, and threat response and remediation can play a vital role in securing an organization's network.
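The CVSS-based prioritization described in this lesson can be sketched in a few lines. This is a minimal illustration, not a full CVSS calculator: the score-to-severity bands follow the CVSS v3.1 qualitative rating scale, and the CVE scores shown are example values pulled from public advisories for the vulnerabilities discussed elsewhere in this chapter.

```python
# Minimal sketch: rank scan findings by CVSS v3.1 base score so the
# most severe vulnerabilities are remediated first.

def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def prioritize(findings):
    """Sort findings so the highest base scores come first."""
    return sorted(findings, key=lambda f: f["cvss"], reverse=True)

# Example scores for CVEs mentioned in this chapter (illustrative values).
findings = [
    {"cve": "CVE-2017-0144", "cvss": 8.1},  # EternalBlue (SMBv1 RCE)
    {"cve": "CVE-2014-0160", "cvss": 7.5},  # Heartbleed (OpenSSL)
    {"cve": "CVE-2015-1538", "cvss": 9.8},  # Stagefright (Android)
]

for f in prioritize(findings):
    print(f["cve"], cvss_severity(f["cvss"]))
```

In practice, as the lesson notes, prioritization also weighs ease of exploitation and asset importance, not the base score alone.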
7.1.2 Vulnerability Type Facts

This lesson covers the following topics:
- Vulnerability management
- Device and operating system vulnerabilities
- Legacy and end-of-life (EOL) systems
- Firmware vulnerabilities

Vulnerability Management

Vulnerability management is critical to any organization's cybersecurity strategy, encompassing identifying, evaluating, treating, and reporting security vulnerabilities in operating systems, applications, and other components of an organization's IT operations. Vulnerability management may involve patching outdated systems, hardening configurations, or upgrading to more secure versions of operating systems. For applications, it might include code reviews, security testing, and updating third-party libraries. Vulnerability scanning is a crucial component of this process, with specialized tools used to automatically identify potential weaknesses in an organization's digital assets. These tools scan for known vulnerabilities such as open ports, insecure software configurations, or outdated versions. After scanning, analysis is performed to validate, classify, and prioritize the identified vulnerabilities for remediation based on factors such as the potential impact of a breach, the ease of exploiting the vulnerability, and the importance of the asset at risk. This continuous cycle of assessment and improvement helps organizations maintain safe and secure computing environments.

Device and Operating System Vulnerabilities

Operating systems (OS) are one of the most critical components of any infrastructure, so vulnerabilities in an OS can lead to significant problems when successfully exploited. Microsoft Windows has an extensive feature set and broad user base, especially among large organizations and governments. Its vulnerabilities often include buffer overflows, input validation problems, and privilege flaws typically exploited to install malware, steal information, or gain unauthorized access. Windows is an essential target for attackers because of its large install base. Large corporations and governments heavily depend upon it, which compounds the significance of its vulnerabilities.

Apple's macOS vulnerabilities often stem from its UNIX-based architecture, and weaknesses generally appear in access controls, secure boot processes, and third-party software. Apple macOS has a smaller user base than Windows, but its popularity has grown significantly. Generally, macOS is perceived as being 'safer' than other operating systems, which can lead to complacency.
Linux is a prevalent server OS but can also be used as a desktop or mobile OS. The open-source nature of Linux and the large community of active developers support rapid development. This generally results in quick identification and repair of vulnerabilities. Kernel vulnerabilities, misconfigurations, and unpatched systems are common issues in Linux. Despite its reputation for security, its widespread use in cloud and server infrastructure makes Linux vulnerabilities especially significant.

The widespread adoption of mobile operating systems like Android and iOS, and their increasing use as primary computing platforms instead of traditional computers, makes them valuable targets for attack and exploitation. Android is open source, like Linux, resulting in similar benefits and problems. Additionally, Android is fragmented among different manufacturers and versions, resulting in inconsistent patching and update support. iOS, while not open source like Android, has also been impacted by several significant vulnerabilities.

The significance of OS vulnerabilities cannot be overstated, especially as specialized embedded systems, such as IoT devices, are added to our surroundings. Each system runs a specialty operating system and introduces vulnerabilities and potential pathways into corporate infrastructures.

Examples of OS vulnerabilities:

- Microsoft Windows — One of the most notorious vulnerabilities in Windows history was the MS08-067 vulnerability in the Windows Server service. This vulnerability allowed remote code execution if a specially crafted packet was sent to a Windows server. It was exploited by the Conficker worm in 2008, which infected millions of computers worldwide. Additionally, MS17-010 represents a significant and critical security update released by Microsoft in March 2017. This update addressed multiple vulnerabilities in Microsoft's implementation of the Server Message Block (SMB) protocol (a network file-sharing protocol) that could allow remote code execution (RCE). These vulnerabilities, if exploited, could allow an attacker to install programs; view, change, or delete data; or create new accounts with full user rights. The significance of MS17-010 is tied closely to the EternalBlue exploit, which leveraged the vulnerabilities in early versions of the SMB protocol for malicious purposes. The most famous misuse of EternalBlue was during the WannaCry ransomware attack in May 2017, where it was used to propagate ransomware across networks worldwide, leading to massive damage and disruption. This event underlined the critical importance of timely system patching and reinforced the potential global impact of such vulnerabilities.
- macOS — In 2014, a significant vulnerability called "Shellshock" affected all Unix-based systems, including macOS. It allowed attackers to potentially gain control over a system due to a flaw in the Bash shell. Though it originated in a component of Unix systems, its impact was felt in macOS due to its Unix-based architecture.
- Android — The Stagefright vulnerability, discovered in 2015, is a prominent example for Android. It allowed attackers to execute malicious code on an Android device by sending a specially crafted MMS message. This issue was particularly severe due to the ubiquity of the vulnerable component (the Stagefright media library) across Android versions and devices.
- iOS — In 2019, Google's Project Zero team discovered a series of vulnerabilities in iOS that nation-state attackers were abusing. These "watering hole" attacks took advantage of several vulnerabilities to gain full access to a device by having the victim visit a malicious website.
- Linux — The Heartbleed bug in 2014 was a severe vulnerability in the OpenSSL cryptographic software library used by many Linux systems. The vulnerability allowed attackers to read the memory of systems running vulnerable versions of OpenSSL, compromising the secret keys used to protect data.

Additional concerns arise in mobile operating systems due to factors like the diversity of devices and operating system versions, bypassing of operating system protections, and the use of apps downloaded outside official app stores. Identifying and managing vulnerabilities often involves keeping operating systems updated with the latest patches, hardening system configurations, carefully managing user privileges, and controlling software applications.

Legacy and End-of-Life Systems

Hardware vulnerabilities, particularly those associated with end-of-life and legacy systems, present considerable security challenges for many organizations, as patches or fixes for vulnerabilities are either unavailable or difficult to apply. End-of-life (EOL) and legacy systems share a common characteristic: both are outdated. Many EOL systems are legacy systems, and some legacy systems have also reached EOL. The manufacturer or vendor no longer supports EOL systems, so they do not receive updates, including critical security patches. This makes them vulnerable to newly discovered threats. Conversely, while still outdated, legacy systems may still be fully supported by the vendor. An EOL system is a specific product or version of a product that the manufacturer or vendor has publicly declared as no longer supported. It is also possible for open-source projects to be abandoned by their maintainers.
An EOL system can be a hardware device, a software application, or an operating system. Products should be replaced or updated before they reach EOL status to ensure they remain supported by their vendors and receive critical security patches. Notable EOL product examples include the Windows 7 and Server 2008 operating systems, which stopped receiving updates in January 2020. These systems are significantly more vulnerable to attacks due to the absence of security patches for new vulnerabilities. Despite their EOL status, they are still in use in many environments. Many devices (peripheral devices especially) remain on sale with known severe vulnerabilities in firmware or drivers and no possibility of vendor support to remediate them, especially in secondhand, recertified, or renewed/reconditioned marketplaces. Examples include recertified computer equipment, consumer-grade and recertified networking equipment, and various Internet of Things devices.
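One routine way to operationalize the advice above is to check a software inventory against published EOL dates. The sketch below is illustrative: the Windows 7 / Server 2008 date matches the January 14, 2020 end of support noted in this lesson, while the inventory structure and other entries are invented for the example.

```python
# Minimal sketch: flag inventory items whose vendor end-of-life date has passed.
from datetime import date

# Published EOL dates (Windows dates per Microsoft's January 2020 end of
# support; treat this table as illustrative, not authoritative).
EOL_DATES = {
    "Windows 7": date(2020, 1, 14),
    "Windows Server 2008": date(2020, 1, 14),
}

def past_eol(product: str, today: date) -> bool:
    """Return True if the product has a known EOL date that has passed."""
    eol = EOL_DATES.get(product)
    return eol is not None and today > eol

inventory = ["Windows 7", "Windows Server 2008", "SomeSupportedProduct"]
flagged = [p for p in inventory if past_eol(p, date(2024, 4, 3))]
print(flagged)
```

Products missing from the table are not flagged, which mirrors a real gap in such checks: unknown or abandoned products need manual review.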
The term "legacy system" typically describes outdated software methods, technology, computer systems, or application programs that continue to be used despite their shortcomings. Legacy systems often remain in use for extended periods because the organization's leadership recognizes that replacing or redesigning them would be expensive or would pose significant operational risks due to their complexity. The term "legacy" does not necessarily mean that the vendor no longer supports the system, but rather that it represents hardware and software methods that are no longer popular and are often incompatible with newer architectures or methods. Legacy systems often remain in use because they operate with sufficient reliability, have been incorporated into many critical business functions, and are familiar to long-tenured staff.

Assessing the risks associated with using EOL and legacy products, such as lack of updates, lack of support, and compatibility issues with newer systems, is crucial. EOL and legacy product replacements must continue to meet the organization's requirements, maintain compatibility with existing infrastructure, and support reliable data migration. Selection criteria must consider the availability of vendor support, device warranty details, and marketplace performance/reputation. Transitioning costs must be carefully assessed, too, including licensing, hardware upgrades, and professional service implementation fees. The work to transition away from EOL and legacy products must minimize disruptions and ensure long-term sustainability.

Firmware Vulnerabilities

Firmware is the foundational software that controls hardware and can contain significant vulnerabilities. For instance, the Meltdown and Spectre vulnerabilities identified in 2018 impacted almost all computers and mobile devices. The exposure was associated with the processors used inside the computer and allowed malicious programs to steal data as it was being processed.
Another vulnerability, "LoJax," discovered in the Unified Extensible Firmware Interface (UEFI) firmware in 2018, enabled attackers to persist on a system even after a complete hard drive replacement or OS reinstallation. End-of-life (EOL) hardware vulnerabilities arise when manufacturers cease providing product updates, parts, or firmware patches.

7.1.3 Vulnerability Identification Methods Facts

This lesson covers the following topics:
- Vulnerability scan
- Threat feed
- Other vulnerability identification methods
Vulnerability Scan

Vulnerability scanning is a fundamental task within a vulnerability management program. It uses automated scanning processes to identify and evaluate potential issues. Vulnerability scans focus on networks, operating systems, applications, and other functional areas to detect known vulnerabilities, including insecure configurations, outdated software versions, or missing security patches. Regular scanning is required to maintain an accurate picture of an organization's security posture and to identify new vulnerabilities.

Threat feeds play a vital role in enhancing the effectiveness of vulnerability management by providing real-time information about emerging threats and newly discovered vulnerabilities. Threat feeds aggregate data from various sources, including cybersecurity researchers, vendors, and global security communities, and are integrated into vulnerability scanning tools to improve their detection capabilities. By leveraging threat feeds, organizations can stay ahead of the threat landscape, enabling them to prioritize and address the most critical vulnerabilities before attackers can exploit them.

Threat Feed

Another essential element of vulnerability management is the use of threat feeds. These are real-time, continuously updated sources of information about potential threats and vulnerabilities, often gathered from multiple sources. By integrating threat feeds into their vulnerability management practices, organizations can stay aware of the latest risks and respond more swiftly. Threat feeds are pivotal in vulnerability scanning, providing real-time, continuous data about the latest vulnerabilities, exploits, and threat actors. These feeds serve as a valuable resource for enhancing the organization's threat intelligence and enabling quicker identification and remediation of potential vulnerabilities.
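One of the low-level steps that the vulnerability scanners described above automate is checking which service ports are listening on a host, since an unexpected open port often points to an insecure or forgotten service. The sketch below shows that single step in isolation; real scanners go much further (service fingerprinting, version checks, CVE matching). The host and port list are placeholders.

```python
# Minimal sketch of one scanner building block: a TCP connect check.
# Only run checks like this against hosts you are authorized to test.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; an accepted connection means the port is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Placeholder target and a few well-known service ports.
for port in (22, 80, 443, 3389):
    state = "open" if port_open("127.0.0.1", port) else "closed"
    print(port, state)
```

A scanner would feed the "open" results into the next stage: identifying what is listening and whether that version has known vulnerabilities.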
They integrate data from various sources, including security vendors, cybersecurity organizations, and open-source intelligence, to provide a comprehensive view of the threat landscape. Common threat feed platforms include AlienVault's Open Threat Exchange (OTX), IBM's X-Force Exchange, and Recorded Future. These platforms gather, analyze, and distribute information about new and emerging threats, providing actionable intelligence that can be incorporated into an organization's vulnerability management practices, and sometimes directly into security infrastructure tools to provide up-to-the-minute protections.

Threat feeds significantly improve vulnerability identification by providing timely information and context about new threats that traditional vulnerability scanning does not provide. Threat feeds offer information that helps organizations focus their remediation efforts on the most relevant and potentially damaging vulnerabilities first. This proactive approach can significantly reduce the time between discovering a vulnerability and its remediation, thus minimizing the organization's exposure to potential attacks.

The following table describes several types of threat feeds:

Threat Feed Type: Third-party threat feeds
Description: Open-source and proprietary threat feeds provide valuable real-time information on the latest cyber threats and vulnerabilities. Both feed types aggregate data from various sources and can be integrated into an organization's security infrastructure, contributing to a proactive cybersecurity strategy. The choice between open-source and proprietary threat feeds often comes down to a few essential attributes. Open-source feeds, such as those provided by the Cyber Threat Alliance or the MISP threat-sharing platform, are typically free and accessible to all, making them a cost-effective solution for smaller organizations or those with limited budgets. However, they may lack the depth, breadth, or sophistication of analysis found in proprietary feeds. Proprietary threat feeds often provide more comprehensive information and advanced analytic insights. However, these feeds come at a cost, and the return on investment will depend on an organization's specific needs, risk profile, and resources. Some organizations may use a combination of both open-source and proprietary feeds to achieve a balance of cost and coverage.

The outputs from the primary research undertaken by threat data feed providers and academics can take three main forms:

- Behavioral threat research — narrative commentary describing examples of attacks and TTPs gathered through primary research sources.
- Reputational threat intelligence — lists of IP addresses and domains associated with malicious behavior, plus signatures of known file-based malware.
- Threat data — computer data that can correlate events observed on a customer's networks and logs with known TTP and threat actor indicators.

Threat data can be packaged as feeds that integrate with a security information and event management (SIEM) platform. These feeds are usually described as cyber threat intelligence (CTI) data. The data is not a complete security solution; the threat data must be correlated with observed data from customer networks to produce actionable intelligence. This type of analysis is often powered by the SIEM's artificial intelligence (AI) features.

Threat intelligence platforms and feeds are supplied as one of three different commercial models:

- Closed/proprietary — threat research and CTI data are available as a paid subscription to a commercial threat intelligence platform. The security solution provider will also make the most valuable research available early to platform subscribers through blogs, white papers, and webinars. Some examples of such platforms include the following:
  - IBM X-Force Exchange (exchange.xforce.ibmcloud.com)
  - Mandiant's FireEye (mandiant.com/advantage/threat-intelligence)
  - Recorded Future (recordedfuture.com/platform/threat-intelligence)
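The correlation step described above — matching CTI indicators against observed network data — reduces, at its simplest, to set membership checks. The sketch below shows that core idea; a SIEM adds scale, normalization, enrichment, and analytics on top. All indicator values and log entries here are invented, using IANA documentation address ranges.

```python
# Minimal sketch: correlate a reputational CTI feed (known-bad IPs)
# with observed log events, the basic matching a SIEM automates.

# Indicators "from a threat feed" (invented, documentation-range IPs).
malicious_ips = {"203.0.113.7", "198.51.100.23"}

# Observed events as (timestamp, source_ip, summary) tuples (invented).
observed = [
    ("2024-04-03T10:01:02", "203.0.113.7", "inbound SSH attempt"),
    ("2024-04-03T10:05:44", "192.0.2.10", "HTTP GET /index.html"),
    ("2024-04-03T10:09:31", "192.0.2.44", "DNS query example.com"),
]

def correlate(events, indicators):
    """Return only the events whose source IP appears in the indicator set."""
    return [e for e in events if e[1] in indicators]

for hit in correlate(observed, malicious_ips):
    print("ALERT:", hit)
```

As the lesson stresses, a raw indicator match is not actionable intelligence by itself; analysts (or SIEM analytics) still have to weigh context such as direction, asset criticality, and indicator freshness.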
Threat Feed Type: Open-source intelligence
Description: Open-source intelligence (OSINT) describes collecting and analyzing publicly available information and using it to support decision-making. In cybersecurity operations, OSINT is used to identify vulnerabilities and threat information by gathering data from many sources such as blogs, forums, social media platforms, and even the dark web. This can include information about new types of malware, attack strategies used by cybercriminals, and recently discovered software vulnerabilities. Security researchers can use OSINT tools to automatically collect and analyze this information, identifying potential threats or vulnerabilities that could impact their organization. Some standard OSINT tools include Shodan for investigating internet-connected devices, Maltego for visualizing complex networks of information, Recon-ng for web-based reconnaissance activities, and theHarvester for gathering emails, subdomains, hosts, and employee names from different public sources.

Threat Feed Type: Information-sharing organization
Description: Threat feed information-sharing organizations are collaborative groups that exchange data about emerging cybersecurity threats and vulnerabilities. These organizations collect, analyze, and disseminate threat intelligence from various sources, including their members, security researchers, and public sources. Members of these organizations, often composed of businesses, government entities, and academic institutions, can benefit from the shared intelligence by gaining insights into the latest threats they might not have access to individually. They can use this information to fortify their systems and respond swiftly to emerging threats. Examples of such organizations include the Cyber Threat Alliance and the Information Sharing and Analysis Centers (ISACs), which span various industries. These organizations are crucial in enhancing collective cybersecurity resilience and promoting a collaborative approach to tackling cyber threats.

Threat Feed Type: Deep and dark web
Description: Threat research is a counterintelligence gathering effort in which security companies and researchers attempt to discover modern cyber adversaries' tactics, techniques, and procedures (TTPs). There are many companies and academic institutions engaged in primary cybersecurity research. Security solution providers with firewall and antimalware platforms derive much data from their customers' networks. As they assist customers with cybersecurity operations, they can analyze and publicize TTPs and their indicators. These organizations also operate honeynets to observe how hackers interact with vulnerable systems.

The deep web and dark web are also sources of threat intelligence. The deep web is any part of the World Wide Web not indexed by a search engine. This includes pages that require registration, pages that block search indexing, unlinked pages, pages using nonstandard DNS, and content encoded in nonstandard ways. Within the deep web are areas that are deliberately concealed from "regular" browser access.

- Dark net — a network established as an overlay to internet infrastructure by software, such as The Onion Router (TOR), Freenet, or I2P, that acts to anonymize usage and prevent a third party from knowing about the existence of the network or analyzing any activity taking place over the network. Onion routing, for instance, uses multiple layers of encryption and relays between nodes to achieve this anonymity.
- Dark web — websites, content, and services accessible only over a dark net. While there are dark web search engines, many sites are hidden from them. Access to a dark website via its URL is often only available via "word of mouth" bulletin boards.
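To make the OSINT collection described above concrete, here is a tiny sketch of the kind of harvesting a tool like theHarvester automates: pulling email addresses and hostnames out of public page text with regular expressions. The input text, domain, and patterns are all invented for illustration; real tools query many sources and handle far messier input.

```python
# Minimal sketch: extract emails and hostnames for a target domain
# from public page text (illustrative input, hypothetical domain).
import re

page = """Contact: alice@example.com, bob@mail.example.com
Hosts seen: vpn.example.com, intranet.example.com"""

# Simple, permissive patterns for illustration only.
emails = sorted(set(re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", page)))
hosts = sorted(set(re.findall(r"\b(?:[\w-]+\.)+example\.com\b", page)))

print(emails)
print(hosts)
```

Results like these feed directly into attack-surface mapping: each harvested host is a candidate for the vulnerability scans covered earlier in this chapter.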
Investigating these dark websites and message boards is a valuable source of counterintelligence. The anonymity of dark web services has made it easy for investigators to infiltrate the forums and web stores set up to exchange stolen data and hacking tools. As adversaries react to this, they set up new networks and new ways of identifying law enforcement infiltration. Consequently, dark nets and the dark web represent a continually shifting landscape.

Please note that participating in illegal dark web activities is strictly prohibited. To stay safe, it is essential to exercise caution and follow legal and ethical guidelines when exploring the dark web. The dark web is generally associated with illicit activities and illegal content, but it has legitimate purposes:

- Privacy and anonymity — The dark web provides a platform for enhanced privacy and anonymity. It allows users to communicate and browse the internet without revealing their identity or location. It can be valuable for whistleblowers, journalists, activists, or individuals living under repressive government regimes.
- Access to censored information — In countries with strict internet censorship, the dark web can be an avenue for accessing information that is otherwise blocked or restricted. It enables individuals to bypass censorship and access politically sensitive or controversial content.
- Research and information sharing — Some academic researchers or cybersecurity professionals may explore the dark web to gain insights into criminal activities and analyze emerging threats to improve cybersecurity operations.

Other Vulnerability Identification Methods

The following table reviews three additional vulnerability identification methods:

Method: Penetration testing
Description: Penetration testing, or pen testing, is a more aggressive approach to vulnerability management. In this practice, ethical hackers attempt to breach an organization's security, exploiting vulnerabilities to demonstrate their potential impact. While automated vulnerability scans and threat feeds are essential components of a robust security program, they may sometimes fail to identify specific vulnerabilities that a penetration test can uncover.

Method: Bug bounties
Description: Bug bounty programs are another proactive strategy, in which organizations incentivize the discovery and reporting of vulnerabilities by rewarding external security researchers or "white hat" hackers. Both penetration testing and bug bounty programs are proactive cybersecurity practices to identify and mitigate vulnerabilities in a system or application. Both involve exploiting vulnerabilities to understand their potential impact, with the difference lying primarily in who conducts the testing and how it's structured. Penetration testing is typically performed by a hired team of professional ethical hackers within a confined time frame, using a structured approach based on the organization's requirements. This approach allows for a focused, in-depth examination of specific systems or applications and provides a predictable cost and timeline. In contrast, bug bounty programs open the testing process to a global community of independent security researchers, who are incentivized by rewards for finding and reporting vulnerabilities. This approach can bring diverse skills and perspectives to testing, potentially uncovering more complex or obscure vulnerabilities. An organization may choose penetration testing for a more controlled, targeted assessment, especially when testing specific components or meeting certain compliance requirements. A bug bounty program might be preferred when seeking a more extensive range of testing, leveraging the collective skills of a global community. However, many organizations see the value in both and use a combination of pen testing and bug bounty programs to ensure comprehensive vulnerability management. Responsible disclosure programs are established by organizations to encourage individuals to report security vulnerabilities in software or systems, allowing the organization to address and fix these vulnerabilities before they can be exploited maliciously. Responsible disclosure programs provide guidelines and procedures for reporting vulnerabilities and often reward or recognize individuals who responsibly disclose verified security issues.

Method: Auditing
Description: Auditing is an essential part of vulnerability management. Where product audits focus on specific features, such as application code, system/process audits interrogate the wider use and deployment of products, including supply chain, configuration, support, monitoring, and cybersecurity. Security audits assess an organization's security controls, policies, and procedures, often using standards like ISO 27001 or the NIST Cybersecurity Framework as benchmarks. These audits can identify technical vulnerabilities and operational weaknesses impacting an organization's security posture. Cybersecurity audits are comprehensive reviews designed to ensure an organization's security posture aligns with established standards and best practices.
There are various types of cybersecurity audits, including compliance audits, which assess adherence to regulations like GDPR or HIPAA; risk-based audits, which identify potential threats and vulnerabilities in an organization's systems and processes; and technical audits, which delve into the specifics of the organization's IT infrastructure, examining areas like network security, access controls, and data protection measures. Penetration testing fits into cybersecurity audit practices as a critical component of a technical audit because it provides a practical assessment of the organization's defenses by simulating real-world attack scenarios. Rather than simply evaluating policies or configurations, penetration tests seek exploitable vulnerabilities, providing a clear picture of what an attacker might achieve. The findings from these tests are then used to improve the organization's security controls and mitigate identified risks. Penetration tests also play an important role in compliance audits, as many regulations require organizations to conduct regular penetration testing as part of their cybersecurity program. For instance, the Payment Card Industry Data Security Standard (PCI DSS) mandates annual penetration tests for organizations handling cardholder data. Click one of the buttons to take you to that part of the video. Understand Various Types of Vulnerabilities 00:00-05:55 James Stanger: Every organization presents an attack surface, and that's a fancy way of saying that it can have vulnerabilities in it based on its IT presence. To tell us more about the various types of vulnerabilities, we have Gareth Marchant. How you doing, Gareth? Gareth Marchant: Hey, great to be here, James, good to see you. James Stanger: Good to have you here, man. Gareth is the owner of Lighthouse. Tell us more about Lighthouse, and let's start talking about vulnerabilities.
Gareth Marchant: You bet. Lighthouse is a company I started about five years ago to help people develop their cybersecurity programs, train their workforce, lots of cybersecurity things. I started the business after working in IT for about 25 years and so, here we are. But, you know, vulnerabilities is something that, you know, gets a lot of discussion for obvious reasons. You know, folks want to understand, you know, what weaknesses do I have? What issues exist in my environment that someone might take advantage of? And, unfortunately, it's not an easy answer and it's not an easy thing to manage because, frankly, there are just so many of them. James Stanger: That's right. I like how you talked about environment, 'cause there's so many elements to that environment, whether it be applications or hardware, or things in the Cloud, virtualization, all these areas. Let's start with applications. Several friends of mine who are CIOs often will say, "James, it's all in the applications." What does that mean? Gareth Marchant: Yeah, you know, the application is kinda like the front cover of the book, you know. It's what you see, it's what folks interact with, it's what draws them in, or what have you. But, underneath, there's so much that goes into creating that, you know? There's so many layers to an application. Applications run on top of so many other parts of the IT infrastructure so, on the surface, is a simple user interface with pointy, clicky buttons, and things like this. You know, behind the scenes, there's databases, there's different application servers, there's data interfaces. There's programming languages, platforms, all of these things. And every single part of those, including the use of the application, all have vulnerabilities that need to be identified and addressed. James Stanger: So, it's interesting you bring up that word application, so we're not just talking about the applications that run on your phone, or that run on your desktop. 
You're talking about things that run on the server side, and they can run into various types of issues like buffer overflows and injection attacks, right? Gareth Marchant: Yeah, you're right, that's it. When we say application, it's so broad, you know, it's everything from a website to an application on a tablet, or a phone, or a desktop computer, which might be run on Windows, or it might be run on Mac or Linux, or what have you. And so, the very nature of that, I mean, it's a whole career path, right? Learning how to develop applications is hard. But it turns out that attackers, they're pretty good at application development too. They know about software languages and things, and so understanding how an application works allows them to manipulate it and make it work in ways the developer didn't anticipate. And so, you see things like SQL injection, cross-site scripting, and other types of attacks like this that are covered on the exam that describe how little snippets of code are, sort of, provided to the application in ways it wouldn't expect, and it changes the way that it works, and it can sometimes expose data or provide unauthorized access.
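The injection idea Gareth describes can be shown with a minimal, self-contained sketch using Python's built-in sqlite3 module. The table, credentials, and payload below are hypothetical and exist only to illustrate the difference between concatenating user input into a query and using a parameterized query.

```python
# Minimal SQL injection sketch using an in-memory SQLite database.
# The users table and its data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: user input is pasted directly into the SQL string.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input is treated as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# Classic payload: closes the name literal, then comments out the
# password check, so the unsafe query matches despite a bad password.
payload = "alice' --"
print(login_unsafe(payload, "wrong"))  # returns alice's row
print(login_safe(payload, "wrong"))    # returns []
```

The only difference between the two functions is how the input reaches the database, which is exactly why parameterized queries are the standard defense against this class of attack.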
James Stanger: You know, there are other types of issues, for example, with hardware. You know, we have firmware that needs to be updated, and things like that. Yet, sometimes, you can't update the hardware, right? Gareth Marchant: It's true, especially with some of the consumer grade stuff. You know, what do we want when we buy the stuff? We want it to be inexpensive, you know? And so, in order to keep these things inexpensive, a lotta times they're, kinda, cranked out quickly and then, if there are issues with it, you know, the answer is always, well buy the new one. But, if you think about it, even, you know, at home, you know, for example, the router that you use, you know, to provide wi-fi and access to the Internet, you know, when did you buy that? When was the last time that thing was updated, 'cause I guarantee there's vulnerabilities in it, but it's not always apparent. You know, if something has a vulnerability in it, it doesn't mean it stops working or starts blinking red, or anything like this, you know? It's just this quiet issue that sits there that's no big deal until, well, it is. James Stanger: And on the more industrial side, we have a lot of devices out there and I mean big devices. Windmills, you know, electricity generation, things like that, those are governed by computers as well, and sometimes we have problems updating those as well, right? Gareth Marchant: Yeah, no, that's it. It really tells a story of how the software is everywhere, from manufacturing plants, you know, to the phones we hold in our hands, and everything in between, it's all driven by software. You know, sometimes it's a small scale thing; it may just be a personal printer, you know? But then, it could be something bigger like a control system that's used to, say, you know, regulate the pressure in a container that holds, you know, some sort of toxic waste. 
And so, when those software applications are exploited, when they have issues, it literally results in death, or natural disasters and things. So, I mean, the stakes couldn't really be much higher, right? James Stanger: No, indeed. And it could even start when someone jailbreaks their phone just playing with it, having fun with it, but then that introduces an unknown element into the organization and then you're gonna have another vulnerability happening, right? Gareth Marchant: Yeah, exactly, that's right. You know, it's fun to do these things but, if you do that with, like, a work phone, the problem is, if that phone comes into contact with something that's malicious because it's had, sort of, the security mechanisms bypassed, that one little phone can sometimes be the pathway an attacker takes in order to get into the bigger enterprise, and all of the data and the systems that are there. It seems like a small thing, and a lot of folks think, "Well, I'm just the little guy on the totem pole. I got nothing that anyone's gonna want to come after me for." And it turns out that they're exactly the people that are being targeted.
James Stanger: Gareth, thanks so much for talking to us about vulnerabilities in the attack surface today. Appreciate it, man. Gareth Marchant: It's a pleasure, James. Thanks for having me. 7.1.4 Vulnerability Analysis and Remediation Click one of the buttons to take you to that part of the video. Vulnerability Analysis and Remediation 00:00-03:04 Vulnerability analysis involves evaluating vulnerabilities for their potential impact and exploitability. Vulnerability analysis aids in vulnerability classification, categorizing vulnerabilities based on their characteristics, such as the type of system or application affected, the nature of the exposure, or the potential impact. Vulnerability analysis also helps to prioritize remediation. Remediation describes how vulnerabilities are addressed to mitigate their potential risk. Prioritization is typically based on factors such as vulnerability severity, the ease of exploitation, and the potential impact of an attack. Prioritizing vulnerabilities helps an organization focus limited resources on addressing the most significant threats first. Mitigation techniques include applying patches, changing configurations, updating software, or replacing vulnerable systems. When immediate remediation is impossible, compensating controls describe alternative plans to reduce the risk. Verification that remediation efforts have been successful is accomplished via several methods, including re-scanning affected systems. Organizations can significantly improve their resilience against cyberattacks by carefully analyzing and remediating vulnerabilities. Vulnerability response and remediation practices encompass various strategies and tactics, including patching, insurance, segmentation, compensating controls, exceptions, and exemptions, each playing a distinct role in managing and mitigating cybersecurity risks. Patching is one of the most straightforward and effective remediation practices. 
It involves applying updates and patches to software or systems to fix known vulnerabilities. Patching helps prevent attackers from exploiting known vulnerabilities, improving an organization's security posture. Cybersecurity insurance can provide financial protection if a security breach results from a vulnerability. It's another factor in vulnerability response. While insurance doesn't mitigate vulnerabilities directly, it's vital in a comprehensive risk management strategy, complementing technical controls with financial risk transfer. Segmentation involves dividing a network into separate segments to contain potential security breaches. If an attacker exploits a vulnerability and gains access to one segment, they're confined to it. This prevents them from moving laterally across the entire network, limiting the impact of a successful attack and supporting incident response teams.
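The segmentation idea above, confining a compromised host to its own segment, can be sketched with Python's standard ipaddress module. The addressing plan and segment names below are hypothetical; the point is only that segment membership is a subnet check, and lateral movement means crossing a segment boundary.

```python
# Sketch: reasoning about network segments with the stdlib ipaddress module.
# The segments and addresses below are a hypothetical addressing plan.
from ipaddress import ip_address, ip_network

segments = {
    "user_lan": ip_network("10.10.10.0/24"),
    "servers": ip_network("10.10.20.0/24"),
    "ot_devices": ip_network("10.10.30.0/24"),
}

def segment_of(host):
    """Return the name of the segment a host belongs to, or None."""
    addr = ip_address(host)
    for name, net in segments.items():
        if addr in net:
            return name
    return None

# A compromised workstation and a database server sit in different
# segments, so reaching one from the other requires crossing a
# firewall or routing boundary between segments.
print(segment_of("10.10.10.55"))  # user_lan
print(segment_of("10.10.20.7"))   # servers
```

In a real deployment the boundaries would be enforced by VLANs, firewalls, or routing policy rather than a lookup table, but the containment logic is the same.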
Compensating controls refer to measures put in place to mitigate the risk of a vulnerability when security teams can't directly eliminate it or when direct remediation isn't immediately possible. Exceptions and exemptions describe scenarios where specific vulnerabilities can't be remediated due to business criticality, technical limitations, or cost. In these cases, senior leadership accepts the risk and documents the rationale for the decision, along with an established timeline for reassessment. Validating Vulnerability Remediation 03:04-04:50 Validating vulnerability remediation is critical for several reasons. Validation ensures that the remediation actions have been implemented correctly and function as intended. Despite best intentions, human error or technical problems can frequently lead to incomplete or incorrect implementation of fixes. These issues go unnoticed without validation, exposing the organization to the same vulnerability it initially sought to address. Validation can be achieved through re-scanning, auditing, or other verification forms. Re-scanning involves performing additional vulnerability scans after remediation actions have been implemented. The re-scan aims to determine if the vulnerabilities identified in the initial scan have been resolved. If the same vulnerabilities aren't identified in the re-scan, it strongly indicates that the remediation efforts were successful. Auditing involves an in-depth examination of the remediation process by reviewing the steps taken to address the vulnerability and ensuring they align with the organization's policies and best practices. Audits verify that necessary documentation has been updated, such as records of identified vulnerabilities, remediation actions taken, and any exceptions or exemptions granted. Verification is the process of confirming the results of the remediation actions. 
It involves various methods, including manual checks, automated testing, or reviews of system logs or other records. Verification ensures that remediation steps have been implemented correctly, function as intended, and don't introduce new issues or vulnerabilities. That's it for this lesson. In this lesson, we discussed vulnerability analysis and remediation. We also talked about the importance of validating vulnerability remediation to ensure the fixes were effective. 7.1.5 Vulnerability Analysis and Remediation Facts This lesson covers the following topics:
  • Vulnerability analysis
  • Validating vulnerability remediation
  • Vulnerability reporting
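The prioritization discussed in this lesson, ranking vulnerabilities by severity and ease of exploitation so limited resources go to the biggest threats first, can be sketched as a simple sort. The findings, scores, and field names below are hypothetical, not output from any real scanner.

```python
# Sketch: prioritizing findings by severity and ease of exploitation.
# The findings and scoring scheme are hypothetical.
findings = [
    {"id": "VULN-1", "cvss": 9.8, "exploit_available": True},
    {"id": "VULN-2", "cvss": 5.3, "exploit_available": False},
    {"id": "VULN-3", "cvss": 7.5, "exploit_available": True},
]

def priority(finding):
    # A known public exploit bumps a finding ahead of others; within
    # that, higher CVSS severity comes first.
    return (finding["exploit_available"], finding["cvss"])

ordered = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ordered])  # ['VULN-1', 'VULN-3', 'VULN-2']
```

Real prioritization would also weigh exposure, asset criticality, and compensating controls, but the core idea is the same: a repeatable ranking so remediation effort lands on the most significant threats first.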
Vulnerability Analysis Vulnerability analysis supports several critical aspects of an organization's cybersecurity strategy, including prioritization, vulnerability classification, exposure considerations, organizational impact, and risk tolerance contexts. Note the following considerations related to vulnerability analysis:

Prioritization
Vulnerability analysis prioritizes remediation efforts by identifying the most critical vulnerabilities in an organization. Prioritization is typically based on factors such as vulnerability severity, the ease of exploitation, and the potential impact of an attack. Prioritizing vulnerabilities helps an organization focus limited resources on addressing the most significant threats first.

Classification
Vulnerability analysis aids in vulnerability classification, categorizing vulnerabilities based on characteristics such as the type of system or application affected, the nature of the exposure, or the potential impact. Classification can help clarify the scope and nature of an organization's threats.

Exposure factor
Vulnerability analysis must also consider exposure factors, like the accessibility of a vulnerable system or data, and environmental factors, like the current threat landscape or the specifics of the organization's IT infrastructure. These factors can significantly influence the likelihood of a vulnerability being exploited and directly impact its overall risk level. The exposure factor (EF) represents the extent to which an asset is susceptible to being compromised or impacted by a specific vulnerability. It helps assess the potential impact or loss if the vulnerability is exploited. Factors might include weak authentication mechanisms, inadequate network segmentation, or insufficient access control methods.

Impacts
Vulnerability analysis assesses the potential organizational impact of vulnerabilities. This could include financial loss, reputational damage, operational disruption, or regulatory penalties. Understanding this impact is crucial for making informed decisions about risk mitigation.

Environmental factors
Several environmental variables play a significant role in influencing vulnerability analysis. One of the primary environmental factors is the organization's IT infrastructure, which includes the hardware, software, networks, and systems in use. The diversity, complexity, and age of these components can affect the number and types of vulnerabilities present. For instance, legacy systems may have known unpatched vulnerabilities, while emerging technologies might introduce unknown vulnerabilities. The external threat landscape is another crucial environmental factor. The prevalence of certain types of attacks or the activities of specific threat actors can affect the likelihood of particular vulnerabilities being exploited. For example, if ransomware attacks rise within the medical industry, that sector can prioritize the vulnerabilities those attacks target. The regulatory and compliance environment is another significant factor. Organizations in heavily regulated industries, like healthcare or finance, may need to prioritize vulnerabilities that could lead to sensitive data breaches and result in regulatory penalties. The operational environment, including the organization's workflows, business processes, and usage patterns, can also influence vulnerability analysis. Certain operational practices increase exposure to specific vulnerabilities or affect the potential impact of a successful exploit. Examples include poor patch management practices, lack of rigorous access controls, lack of awareness training, poor configuration management practices, and insufficient application development policies.

Risk tolerance
Vulnerability analysis must align with an organization's risk tolerance. Risk tolerance refers to the level of risk an organization is willing to accept, which can vary greatly depending on the organization's size, industry, regulatory environment, and strategic objectives. By aligning vulnerability analysis with risk tolerance, an organization can ensure its vulnerability management efforts align with its overall risk management strategy.

Validating Vulnerability Remediation
Validating vulnerability remediation is critical for several key reasons. Validation ensures that the remediation actions have been implemented correctly and function as intended. Despite best intentions, human error or technical problems can frequently lead to incomplete or incorrect implementation of fixes. These issues go unnoticed without validation, exposing the organization to the same vulnerability it initially sought to address. Validation also helps confirm that the remediation has not inadvertently introduced new issues or vulnerabilities. For example, a patch may interfere with other software or systems, or a configuration change could expose new security gaps. Finally, validation provides a measure of accountability, ensuring that responsible parties adequately address identified vulnerabilities. This is especially important in larger organizations where multiple teams or individuals may be involved in the remediation process. Re-scanning involves performing additional vulnerability scans after remediation actions have been implemented. 
The re-scan aims to determine if the vulnerabilities identified in the initial scan have been resolved. If the same vulnerabilities are not identified in the re-scan, it strongly indicates that the remediation efforts were successful. Auditing involves an in-depth examination of the remediation process by reviewing the steps taken to address the vulnerability and ensuring they align with the organization's policies and best practices. Audits verify that necessary documentation has been updated, such as records of identified vulnerabilities, remediation actions taken, and any exceptions or exemptions granted. Verification confirms the results of the remediation actions. It involves various methods, including manual checks, automated testing, or reviews of system logs or other records. Verification ensures that remediation steps have been implemented correctly, function as intended, and do not introduce new issues or vulnerabilities. Vulnerability Reporting Vulnerability reporting is a crucial aspect of vulnerability management and plays a critical role in maintaining an organization's cybersecurity posture. A comprehensive vulnerability report highlights existing vulnerabilities and ranks them based on their severity and
potential impact on the organization's assets, enabling management to prioritize remediation efforts effectively. The Common Vulnerability Scoring System (CVSS) provides a standardized method for rating the severity of vulnerabilities. It includes metrics such as exploitability, impact, and remediation level. By using CVSS, organizations can compare and prioritize vulnerabilities consistently. Another essential practice is to include information about the potential impact of each vulnerability in the report. This could involve describing the possible outcomes of exploiting the vulnerability, including data breaches, system outages, or other operational impacts. It is also essential to provide recommendations for addressing each vulnerability in the report. Recommendations might suggest specific patches or updates, recommend configuration changes, or identify other mitigation strategies. Timely reporting matters as well, since delays in reporting lead to delays in remediation and increase the window of opportunity for attackers. Vulnerability reports should use a clear, concise format that both technical and non-technical stakeholders can understand, helping ensure that appropriate actions are taken in response. 7.1.6 Explore End of Life Software/Hardware Click one of the buttons to take you to that part of the video. Explore End-of-Life Software 00:00-00:21 End-of-life, with regard to software, means a date has been set after which the software will no longer receive updates. This normally means a new version is already out or about to be released, and developers will focus on the new version of the software rather than the old one. We're going to explore some ways to keep track of end-of-life dates. Website Tracking 00:21-01:55 One good way to keep track of this is by finding information on the internet. When a piece of software will no longer be supported, the date is published so tech administrators will know when an update is needed. 
One website we can check out is endoflife.date. As you can see, all sorts of different software companies are listed. Some of the most popular pages are listed here in the middle. Let's take a look at Windows Server. After scrolling down, we can see that Windows Server 2012 R2 reaches end-of-life on October 10, 2023. Although this version of Windows Server hasn't received full support since 2018, it has been getting regular security updates. If software isn't receiving security updates, it could leave you vulnerable to hackers. That's why, in most cases, software is listed as end-of-life when it no longer receives any updates. Now, the last window is what they call extended security
updates. Usually, this involves paying a substantial amount of money to receive updates past the normal security support date. Once the extended date is reached, there are typically no more updates to that version of the software. Let's look at one more example. Red Hat Enterprise Linux goes through a similar phase. There comes a time when full support isn't available. Full Support may entail bug fixes and features, while Maintenance Support typically includes security updates and some bug fixes. RHEL 7 reaches end-of-life on June 30th, 2024. There's also Extended Life Cycle Support through June 30th, 2028. If you work in the tech industry, you should always be aware of the end-of-life dates set by the software you manage. Network Scanner 01:55-02:51 Another way to be informed about end-of-life software on your network is to scan your computers. Network scanners can scan computers and find what software may be installed on them. Here, you can see that the CentOS 6.9 server is a device that was scanned on our network; however, it's at end-of-life since it's no longer supported by regular maintenance. This is much more efficient than going to each computer and checking individually. The same concept applies to software. Installed software can also be marked as end-of-life if it's no longer supported. This could mean there's a new version of the software out, or the developer who originally made it abandoned the project. One example is PuTTY. You can see it picked up version 0.79. If this same version were on other systems, you'd see multiple computers listed here. The installed version can be compared against the latest version released by the developer. Knowing what kind of operating systems and software are installed on your network is a significant step in keeping your network secure. Summary 02:51-02:59 That's it for this demo. 
In this demo, we showed you how to research end-of-life dates and how you can use a network scanner to discover information on your network.
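The end-of-life tracking shown in this demo can be sketched in a few lines of Python. The dates below mirror the examples from the demo (Windows Server 2012 R2 and RHEL 7), but treat the lookup table as illustrative; a real tool would pull current dates from a source like endoflife.date.

```python
# Sketch: flagging products that are past their end-of-life date.
# The eol_dates table is illustrative, mirroring the demo's examples.
from datetime import date

eol_dates = {
    "Windows Server 2012 R2": date(2023, 10, 10),
    "RHEL 7": date(2024, 6, 30),
}

def is_end_of_life(product, today):
    """True if the product's published EOL date has passed."""
    return today >= eol_dates[product]

print(is_end_of_life("Windows Server 2012 R2", date(2025, 1, 1)))  # True
print(is_end_of_life("RHEL 7", date(2024, 1, 1)))                  # False
```

Passing the date in explicitly (rather than calling `date.today()` inside the function) keeps the check easy to test and to run against historical inventory snapshots.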
7.2.1 Vulnerability Scanning Click one of the buttons to take you to that part of the video. Vulnerability Scanning 00:00-00:35 A vulnerability scan is a security assessment technique used to identify and evaluate potential weaknesses or vulnerabilities in a computer system, network, or application. It involves scanning the target system for known vulnerabilities, misconfigurations, or security loopholes that could be exploited by attackers. Vulnerability scans are performed internally and externally to inventory vulnerabilities from different network viewpoints. Vulnerabilities identified during scanning are then classified and prioritized for remediation by security operations teams. Network Vulnerability Scanning 00:35-01:34 A network vulnerability scanner, such as Tenable Nessus or OpenVAS, is designed to test network hosts, including client PCs, mobile devices, servers, routers, and switches. It performs three primary functions: discovery, assessment, and prioritization. During discovery, it examines an organization's on-premises systems, applications, and devices. It then assesses the scan results, comparing them to configuration templates and lists of known vulnerabilities. Typical results from a vulnerability assessment will identify missing patches, deviations from baseline configuration templates, and other related vulnerabilities. The tool compiles a report about each vulnerability. Each identified vulnerability is categorized and prioritized using an assigned impact warning. Most tools also suggest remediation techniques. This information is highly sensitive, so using these tools and distributing the reports produced should be restricted to authorized hosts and user accounts. Credentialed and Non-Credentialed Scanning 01:34-02:38 Credentialed and non-credentialed scans are two different approaches to conducting vulnerability scans. 
A credentialed scan, also known as an authenticated scan, requires the scanning tool to provide valid credentials to access the target system. This allows the scan to gather more comprehensive information about the system's configuration, installed software, and other details that may not be accessible without proper authentication. In contrast, non-credentialed scans, also known as unauthenticated scans, don't require credentials to access the target system. Instead, they rely on external scanning techniques to probe for vulnerabilities and potential security issues. Both types of scans have their own advantages and limitations. Credentialed scans offer a more accurate assessment of vulnerabilities but require privileged access to the target system. On the other hand, non-credentialed scans can be performed remotely and quickly, making them suitable for high-level assessments or initial security checks. However, they generally provide a less detailed view of the system's security posture compared to credentialed scans. Application Vulnerability Scanning 02:38-03:20
Similarly, application vulnerability scanning describes a specialized method for identifying software application weaknesses. This includes static analysis and dynamic analysis. Static analysis involves reviewing application code without executing it. It examines the code for potential vulnerabilities, bugs, coding standards violations, and other issues that could impact the security, performance, or maintainability of the software. Dynamic analysis involves executing a program and observing its behavior in real time. Testing a running application can identify issues like unvalidated inputs, broken access controls, and SQL injection vulnerabilities. Package Monitoring 03:20-04:01 Another critical capability in application vulnerability assessment practices includes package monitoring. Package monitoring is the practice of tracking and monitoring software packages or dependencies used in an application. It involves keeping a close watch on the versions, vulnerabilities, and updates of these packages to ensure the security and integrity of the application. Modern applications often rely on third-party libraries, frameworks, or modules that provide pre-built functionalities. These packages can introduce vulnerabilities if they contain outdated or insecure components. Package monitoring helps identify such vulnerabilities and allows developers to take appropriate actions to mitigate potential risks. Summary 04:01-04:20 That's it for this lesson. In this lesson, we discussed vulnerability scanning. We looked at the three stages of network vulnerability scanning. We also looked at credentialed and non-credentialed scans. We talked about application vulnerability scanning and package monitoring: methods used to enhance the overall security posture of an organization's software. 7.2.2 Conduct Vulnerability Scans Click one of the buttons to take you to that part of the video. 
Conduct Vulnerability Scans 00:00-00:43 Throughout this course, we talk about many scanning methods. Typically, you scan for a specific purpose, often to look for vulnerabilities. Vulnerability scanning attempts to find points that can be exploited on a computer or network. A vulnerability scan can detect and sometimes classify system weaknesses in computers, networks, and other equipment. The scan can also predict the effectiveness of countermeasures that you might implement. A scan may be performed by an IT department or someone hired to perform a penetration test. There are several good tools that perform vulnerability scanning. But if you're on a budget, you can use nmap with a script for the Nmap Scripting Engine (NSE) to do a vulnerability scan. Get the Nmap Script 00:43-01:44
I'm on a Kali Linux system. This system comes with a whole bunch of scripts, but the one we want isn't included. That's not a problem; we can go out to the web and find it. To get the script, we'll go to GitHub. I've already located the web page with the script. This is the one that we want. I'll open that link. Down here, in the description, it says that the script uses some well-known service to provide information about vulnerabilities. That's really not a great description, but I happen to know that it works pretty well. To get the script, you need to click on this Clone or Download link and copy the address. Now, we'll open the terminal, type in 'git clone', paste in the rest of the address, and press Enter on the keyboard. I've already done that, so I won't do it again. When I was doing this, I wasn't paying attention, and it went into the root directory, so I had to go in and move it to the /usr/share/nmap/scripts directory so nmap knows where to find it by default. If you have some basic Linux skills, it shouldn't be a problem to move to the directory before downloading the script.
Run the Script 01:44-03:18
All right. On my network, I have a Windows 10 system, and I've installed XAMPP on it with some web pages, php, forms, and so on. I want to put it out there facing the public internet. But I'm not sure if it has any vulnerabilities. This is my example. I could also have other systems that I can scan. But for this demo, I'm just going to scan this particular system. Since I'm familiar with this system, I already know the IP address. To begin, I want to just do a regular service scan with nmap to see what we get. For that, I'll type in 'nmap -sV 10.10.10.195' and press Enter. It takes a few seconds to get the results. When it's done, you can see that I have a few open ports and services running. As I already mentioned, this system is running a web server, Apache, along with other services, so I'm concerned about it. Now let's run this scan with the script.
To run the script, we'll type 'nmap --script nmap-vulners -sV 10.10.10.195' and press Enter. This will take a few seconds. The scan is done. Now we have some additional information. Right here, we have some information under Apache and port 443. I'm going to hold down the Ctrl key and click on this hyperlink. It'll open Firefox, and I'll go to the page where I can read more about this particular vulnerability. I'll scroll down. Under the description, it tells me exactly what this issue is. It looks like someone could execute some code, so it's probably not something I want to let loose on the web until I do more research and fix any problems.
Other Scripts 03:18-03:36
While we're in Firefox, let's jump back over to the GitHub tab. I'll go back a page. There are other scripts here that do some additional scans. If you have a test environment (and you should), you can download others and see what type of results you can get. Remember that you can run multiple scripts in one scan.
Summary 03:36-03:58
That's it for this demo. In this demo, we used nmap and the Nmap Scripting Engine to do a vulnerability scan on a Windows 10 system running XAMPP. First, we went to GitHub to locate the scripts we needed. Then we did a regular service scan with nmap. We did another scan using the script, and then we used the web to learn more about one of the vulnerabilities that the script found.
7.2.3 Scanning a Network with Nessus
Click one of the buttons to take you to that part of the video.
Scan a Network with Nessus 00:00-00:26
In this demonstration, we're going to work with the Nessus Vulnerability Scanner. Nessus is a very powerful security tool that you can use to scan for security vulnerabilities on your network. Nessus is accessed via a web page on port 8834. I've installed Nessus on this machine, so I'm just connecting to localhost. For this demo, I'm running an evaluation copy of Nessus, but that's sufficient to show what it can do.
Create a Scan 00:26-01:28
Let's start by logging in. After I log in, we see the home page. Here, we can see the various scans that have already been created. There are many different features and options in Nessus, but for this demo, I'll stick to showing how to create and run a network scan. While I could create a custom scan profile, Nessus includes quite a few useful templates that can accomplish most of what I typically need. To create a scan, I'll start by selecting New Scan from the top right. Here, we can see some of the templates I mentioned. I'm going to run a basic network scan, so I'll select that template. This template scans a specified host or network range for known vulnerabilities. To set up the scan, I'll need to provide a name for it, as well as a network address that I'd like to scan. I'm going to name this 'home gateway scan' since I'll be scanning the computers at my house with it. My gateway is located at 10.0.0.1, so that's what I'll supply as the target.
There are many other options that I could configure, including port discovery and authenticated scanning, but I just want to do a simple scan today. Since everything I need is configured, I'll click Save. Scan 01:28-02:02 Now that I've returned to the home page, I can see that the new scan has been created, but isn't running yet. I can click Launch to start it. The scan's title changes to bold, and a spinning green icon appears to show that the scan is active. I can watch the scan live by clicking on it. This screen shows information about the different hosts that are discovered and the vulnerabilities that are found. Nessus also keeps a historical log of all the times a scan has been run so that I can compare how vulnerabilities have improved on my network over time. This scan is going to take a while, so I'll pause the demo until it's finished.
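Besides the web UI, Nessus also exposes a REST API on the same port (8834). As a minimal sketch, assuming API keys have already been generated in the Nessus user settings (the key values below are hypothetical placeholders), an authenticated request for the scan list could be built like this:

```python
import urllib.request

NESSUS_URL = "https://localhost:8834"    # Nessus listens on port 8834 by default
ACCESS_KEY = "hypothetical-access-key"   # placeholder; generate real keys in Nessus
SECRET_KEY = "hypothetical-secret-key"   # placeholder

def build_scan_list_request() -> urllib.request.Request:
    """Build (but do not send) an authenticated request for the /scans endpoint."""
    request = urllib.request.Request(f"{NESSUS_URL}/scans")
    # Nessus API-key authentication uses the X-ApiKeys header.
    request.add_header("X-ApiKeys", f"accessKey={ACCESS_KEY}; secretKey={SECRET_KEY}")
    return request

req = build_scan_list_request()
print(req.full_url)  # → https://localhost:8834/scans
```

Sending the request (for example with urllib.request.urlopen) would return the scan list as JSON; here the request is only constructed so the example stays self-contained.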
Scan Results 02:02-03:01
Now that the scan has finished, we can see that my gateway has quite a few vulnerabilities. Luckily, most of them are labelled as informational. This means that the tool found information that could be useful for a security administrator to know about a host, but that the information doesn't immediately indicate a problem. An example of the type of data that would be labelled as informational is a list of open network ports. Network ports have to be open on some machines for them to function properly, but opening unexpected ports can cause problems. More serious vulnerabilities are color-coded from green to red. Low-severity vulnerabilities aren't as interesting as high or critical vulnerabilities, so I'll take a look at one of the worst issues Nessus found. When I select a vulnerability, it shows me information about the issue. In many cases, software creators patch high and critical vulnerabilities, and a relevant CVE entry is listed. Nessus provides a description and a suggested solution for many issues, as well as relevant information related to the vulnerability.
Summary 03:01-03:14
And that's it for this demo. We discussed how to conduct a vulnerability scan with Nessus. We looked at what Nessus is and what it does. And we ran a vulnerability scan on my home network and explored the results.
7.2.4 Scanning a Network with OpenVAS
Click one of the buttons to take you to that part of the video.
Scanning a Network with OpenVAS 00:00-00:34
Greenbone's OpenVAS is a powerful network security scanner equipped with various tools, including an intuitive graphical user interface. At its core, OpenVAS comprises a server packed with a suite of network vulnerability tests designed to identify security issues within remote systems and applications. This versatile tool serves offensive and defensive security professionals, enabling them to assess and uncover potential attack surfaces.
In this demonstration, we'll continue to use the traditional name OpenVAS when referring to the scanner. Getting Started 00:34-01:33 Before diving into this demonstration, I downloaded OpenVAS as a virtual appliance and imported it into a hypervisor, setting it up as a virtual machine. The system is configured, and I've run some tests, so we're ready to proceed. I'm at the login screen and will use the credentials created during the setup phase.
After logging in, I'm directed to the Dashboard screen. This provides a quick overview of previous scans and their status. It's important to note that security scanners, like antivirus software, rely on up-to-date definitions. Let's examine this briefly. I'll navigate to the Administration tab and click on Feed Status. This column shows that all our feeds are up to date. I'm particularly interested in the CVEs (Common Vulnerabilities and Exposures). Now, let's return to the Dashboards. Please remember that this is a test system, and the abundance of red indicates severe issues. In a typical network, such extensive red flags would be highly unusual and cause serious concern. Now, let's explore the Scans menu. Tasks 01:33-03:06 Clicking on Tasks reveals a list of all the tasks I've created. For instance, the "Scan All" task scans my entire subnet. In the Reports section, you'll see that I've run this task three times, each resulting in a report. You can also check the last time the task ran and its severity level. If I wish to run the task, I can start by clicking the appropriate arrow icon. Additional actions include deleting, editing, cloning, or exporting the task. Now, let's create a new task. There are a couple of ways to do this. We can use the Task Wizard to quickly create a task with default settings, requiring only the entry of the IP address or hostname, whether a single target or an entire range of devices. Another approach is to click on New Task. I'll name the task Scan All since it's intended for scanning the entire network. Under Scan Targets, we specify what we want to scan. I've created a new target by clicking here. This opens the New Target configuration window. I'll name it Scan All and enter the network address and subnet I want to scan. Here, you can also specify exclusions if there are devices on your network you wish to skip. Credentials can be provided for a more comprehensive scan. After clicking Save, we return to the New Task window. 
You'll see that the Scan All target is already selected. For this task, we'll leave the rest of the settings at their default values and click Save. Now, looking at the list of tasks, you can find the Scan All task. Since it's new, there are no reports yet. While I'm ready to run this task, I want to explore other menu items first.
Other Scan Menu Items 03:06-03:33
Under the Scans menu, you'll find options like Reports, Results, Vulnerabilities, and more. Reports display a list of accumulated reports since installing OpenVAS, while Results reveal vulnerabilities discovered during scans. For example, you can identify an operating system that has reached its end of life and the affected host system. Addressing these vulnerabilities is crucial, as outdated systems are unlikely to receive automatic patches.
View Results of a Scan 03:33-04:20
Returning to the Scans menu, we'll examine the Tasks. On the first page of the list, I'll select a specific report, focusing on a task that scanned IP addresses ending with "100," including a vulnerable host called Metasploitable. I'll click on the report and head to the Results tab to view the issues detected by OpenVAS. These issues range from passwordless logins to an end-of-life
operating system and a VNC brute force login vulnerability. Expanding on a particular point provides detailed information and suggests solutions. Under the Scan menu, there's an extensive list of menu items, including Assets, Configuration, and more. Assets allow you to view scanned hosts, while Configuration lets you create targets and credentials. Lastly, the Administration and Help sections provide access to administration tasks and helpful resources.
Summary 04:20-04:28
This concludes our demonstration. We explored Greenbone's Open Vulnerability Assessment Scanner, also known simply as OpenVAS.
7.2.5 Vulnerability Scanning Facts
This lesson covers the following topics:
Vulnerability scanning
Application vulnerability scanning
Package monitoring
Vulnerability Scanning
Vulnerability management is a cornerstone of modern cybersecurity practices aimed at identifying, classifying, remediating, and mitigating vulnerabilities within a system or network. A vulnerability scanner looks for weaknesses such as:
Open ports.
Active IP addresses.
Running applications or services.
Missing critical patches.
Default user accounts that have not been disabled.
Default or blank passwords.
Misconfigurations.
Missing security controls.
One crucial aspect of vulnerability management is vulnerability scanning, a systematic process of probing a system or network using specialized software tools to detect security weaknesses. Vulnerability scans are performed internally and externally to inventory vulnerabilities from different network viewpoints. Vulnerabilities identified during scanning are then classified and prioritized for remediation by security operations teams. Vulnerability scanning also supports application security, as it helps to locate and identify misconfigurations and missing patches in software. Advanced vulnerability
scanning techniques focused on application security include specialized application scanners, pen-testing frameworks, and static and dynamic code testing. Vulnerability scanning tools like OpenVAS and Nessus offer a broad range of features to analyze network equipment, operating systems, databases, patch compliance, configuration, etc. While these tools are very effective, application security analysis warrants more specialized approaches. Several specialized tools exist to more deeply analyze how applications are designed to operate and can locate vulnerabilities not typically identified using generalized scanning approaches. There are different options when running a vulnerability scan. The table below explains each of these options:
Vulnerability Scan Option Description
Intrusive An intrusive scan finds a potential vulnerability and then actively attempts to exploit it. This leads to more accurate results but cannot be done on a live system.
Non-intrusive A non-intrusive scan is the more common type of scan performed. This method scans the network and lists all potential vulnerabilities but cannot validate if the system is vulnerable. This type of scan can be performed on live systems and requires the network defender to take additional actions.
Credentialed A credentialed scan is given a user account with login rights to various hosts, plus whatever other permissions are appropriate for the testing routines. This sort of test allows much more in-depth analysis, especially in detecting when applications or security settings may be misconfigured. It shows what an insider attack, or an attack with a compromised user account, may achieve. A credentialed scan is a more intrusive type of scan than a non-credentialed scan.
Non-credentialed A non-credentialed scan proceeds by directing test packets at a host without being logged on to the OS or application. The view is the one the host exposes to an unprivileged user on the network.
The test routines may be able to include things such as using default passwords for service accounts and device management interfaces, but they are not given privileged access. While you may discover more weaknesses with a credentialed scan, you will sometimes want to narrow your focus to that of an attacker who does not have specific high-level permissions or total administrative access. Non-credentialed scanning is the most appropriate technique for external assessment of the network perimeter or when performing web application scanning.
Application Vulnerability Scanning
Similarly, application vulnerability scanning describes a specialized method for identifying software application weaknesses. This includes static analysis (reviewing application code without executing it) and dynamic analysis (testing running applications), which can identify issues like unvalidated inputs, broken access controls, and SQL injection vulnerabilities. Application vulnerability scanning is typically handled separately from general vulnerability scanning due to the unique nature of software applications and the specific types of vulnerabilities they introduce.
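As a toy illustration of the static-analysis side, the sketch below scans source text for risky patterns without ever executing it. The rule set here is hypothetical and vastly simpler than what real static analyzers ship with; it only shows the shape of the technique.

```python
import re

# Hypothetical, minimal rule set: real static analyzers ship hundreds of rules.
RULES = {
    r"\beval\(": "use of eval() on untrusted input can lead to code execution",
    r"\bexec\(": "use of exec() can lead to code execution",
    r"password\s*=\s*['\"]": "possible hard-coded credential",
}

def static_scan(source: str):
    """Return (line_number, finding) pairs without ever executing `source`."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

# Hypothetical code under review (it is scanned as text, never run).
sample = 'user = input()\nresult = eval(user)\npassword = "hunter2"\n'
for lineno, message in static_scan(sample):
    print(f"line {lineno}: {message}")
```

Dynamic analysis, by contrast, would run the application and feed it crafted inputs, observing the behavior rather than the source text.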
General vulnerability scanning is designed to detect system-wide or network-wide weaknesses, such as out-of-date software or misconfigured firewalls. In contrast, application vulnerability scanning evaluates the coding and behavior of individual software applications. It looks for issues like cross-site scripting (XSS), SQL injection, and insecure direct object references, which are unique to software applications. These application-specific vulnerabilities require specialized tools and techniques to identify and mitigate and are generally different from those used in general vulnerability scanning. Applications frequently have their own release and update cycles, separate from the rest of the environment, necessitating a more targeted vulnerability management process.
Package Monitoring
Another critical capability in application vulnerability assessment practices is package monitoring. Package monitoring is associated with vulnerability identification because it tracks and assesses the security of third-party software packages, libraries, and dependencies used within an organization to ensure that they are up-to-date and free from known vulnerabilities that malicious actors could exploit. Package monitoring is associated with managing software bill of materials (SBOM) and software supply chain risk management practices. In an enterprise setting, package monitoring is typically achieved through automated tools and governance policies. Automated software composition analysis (SCA) tools track and monitor the software packages, libraries, and dependencies used in an organization's codebase. These tools can automatically identify outdated packages or packages with known vulnerabilities and suggest updates or replacements. They work by continuously comparing the organization's software inventory against various databases of known vulnerabilities, such as the National Vulnerability Database (NVD) or vendor-specific advisories.
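A minimal sketch of the comparison an SCA tool performs might look like the following; the inventory and advisory entries are hypothetical stand-ins for a real SBOM and a vulnerability feed such as the NVD.

```python
# Hypothetical inventory (package name -> installed version), as an SBOM might list.
inventory = {"liba": "1.2.0", "libb": "2.0.1", "libc": "0.9.0"}

# Hypothetical advisory feed: versions known to be vulnerable (toy data, not real CVEs).
advisories = {
    "liba": {"vulnerable_versions": {"1.2.0", "1.2.1"}, "fixed_in": "1.3.0"},
    "libc": {"vulnerable_versions": {"0.8.0"}, "fixed_in": "0.8.1"},
}

def audit(inventory, advisories):
    """Flag installed packages whose version appears in an advisory."""
    findings = []
    for name, version in inventory.items():
        adv = advisories.get(name)
        if adv and version in adv["vulnerable_versions"]:
            findings.append((name, version, adv["fixed_in"]))
    return findings

for name, version, fixed in audit(inventory, advisories):
    print(f"{name} {version} is vulnerable; upgrade to {fixed}")
```

A real SCA tool does the same matching continuously and at scale, usually keyed on CVE identifiers and version ranges rather than exact version sets.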
In addition to these tools, organizations often implement governance policies around software usage. These policies may require regular audits of software packages, approval processes for adding new packages or libraries, and procedures for updating or patching software when vulnerabilities are identified.
7.3.1 Alerting and Monitoring
Click one of the buttons to take you to that part of the video.
Alerting and Monitoring 00:00-00:26
Alerting and monitoring play a critical role in cybersecurity. Continuously monitoring systems and networks allows organizations to detect potential threats and breaches early. These monitoring systems trigger timely alerts, enabling security teams to take immediate action to isolate affected systems and implement recovery plans to protect crucial data.
Alerting and Monitoring Tools 00:26-02:33
Alerting and monitoring tools are used to generate alerts in real time. These tools often come equipped with dashboards for visualizing data. They also have advanced analytics capabilities for deeper insights into network security. There's a wide range of alerting and monitoring tools available. Let's take a moment to look at just a few. First are network monitoring tools. These tools scrutinize and manage the functionality of a network. They help identify network bottlenecks and often predict potential issues before they become serious problems.
Next is a flow collector. Flow collectors are a means for recording metadata and statistics about network traffic rather than recording each frame. NetFlow is a network protocol developed by Cisco for collecting IP traffic information and monitoring network traffic. System monitors provide information about the resources and performance of a system. They can monitor CPU usage, disk activity, and memory usage. System logs record the detailed operations of a system or network. They're critical for understanding system activities and identifying abnormalities. Application and Cloud Monitors supervise the operation of applications and cloud-based services. They help ensure optimal performance and uptime while also tracking unusual activity. Vulnerability Scanners scan a system or network to identify any security weaknesses. They're an integral part of maintaining a secure IT environment, as they provide the first line of defense by identifying vulnerabilities before they can be exploited. Data Loss Prevention (DLP) tools detect and prevent data breaches, exfiltration, or unwanted destruction of sensitive data. They also ensure that end users don't send sensitive or critical information outside the corporate network. Lastly, a SIEM, or Security Information and Event Management, is a comprehensive solution that provides real-time analysis of security alerts generated by networks and applications. Benchmarks 02:33-04:09 One of the functions of a vulnerability scan is to assess the configuration of security controls and application settings and permissions compared to established benchmarks. This is known as benchmarking. The scanner might try to identify whether there's a lack of controls that might be considered necessary or any system misconfiguration that would make the controls less effective or ineffective, such as antivirus software not being updated or management passwords left configured to the default. 
This testing requires specific information about best practices in configuring the application or security control. These best practices are provided by listing the controls and appropriate configuration settings in a template. Security Content Automation Protocol (SCAP) allows compatible scanners to determine whether a computer meets a configuration baseline. SCAP uses several components to accomplish this function. Open Vulnerability and Assessment Language, or OVAL, is an XML schema describing system security state and querying vulnerability reports and information. Extensible Configuration Checklist Description Format, or XCCDF, is an XML schema for developing and auditing best practice configuration checklists and rules. Previously, best practice guides might've been written in prose for systems administrators to apply manually. XCCDF provides a machine-readable format that can be applied and validated using compatible software. Summary 04:09-04:23 That's it for this lesson. In this lesson, we talked about alerting and monitoring. We reviewed several tools that can help with this. We also discussed the importance of establishing benchmarks for effective monitoring and alerting.
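The benchmark comparison described above can be sketched as a simple baseline check; the setting names and values below are hypothetical, and real SCAP content expresses these rules in XCCDF/OVAL XML rather than code.

```python
# Hypothetical baseline: setting name -> required value, as a benchmark template might define.
baseline = {
    "antivirus_definitions_current": True,
    "default_admin_password_changed": True,
    "firewall_enabled": True,
}

# Hypothetical observed state, as a compatible scanner might report it.
observed = {
    "antivirus_definitions_current": False,
    "default_admin_password_changed": True,
    "firewall_enabled": True,
}

def check_baseline(baseline, observed):
    """Return the settings that deviate from the configuration baseline."""
    return [name for name, required in baseline.items()
            if observed.get(name) != required]

failures = check_baseline(baseline, observed)
print(f"{len(baseline) - len(failures)}/{len(baseline)} checks passed")
for name in failures:
    print(f"FAIL: {name}")
```

The value of SCAP is that such checks are machine-readable and standardized, so any compatible scanner can apply and validate the same baseline.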
7.3.2 Alerting and Monitoring Facts
This lesson covers the following topics:
Alerting and monitoring
Alerting and monitoring tools
Alerting and Monitoring
Alerting and monitoring play a critical role in cybersecurity. Continuously monitoring systems and networks allows organizations to detect potential threats and breaches early. These monitoring systems trigger timely alerts, enabling security teams to take immediate action to isolate affected systems and implement recovery plans to protect crucial data.
Alerting and Monitoring Tools
The following chart describes several different commonly used alerting and monitoring tools:
Tool Description
Network monitors Distinct from network traffic monitoring, a network monitor collects data about network infrastructure appliances, such as switches, access points, routers, and firewalls. This is used to monitor load status for CPU/memory, state tables, disk capacity, fan speeds/temperature, network link utilization/error statistics, and so on. Another important function is a heartbeat message to indicate availability. This data might be collected using the Simple Network Management Protocol (SNMP). An SNMP trap informs the management system of a notable event, such as port failure, chassis overheating, power failure, or excessive CPU utilization. The threshold for triggering traps can be set for each value. This provides a mechanism for alerts and alarms for hardware issues. As well as supporting availability, network monitoring might reveal unusual conditions that could point to some kind of attack.
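The threshold-triggered trap mechanism can be sketched as comparing polled values against configured limits; the metric names, values, and thresholds below are hypothetical, and a real monitor would poll them over SNMP rather than read them from dictionaries.

```python
# Hypothetical per-metric thresholds, as configured on a monitoring system.
thresholds = {"cpu_percent": 90, "temperature_c": 75, "disk_used_percent": 85}

# Hypothetical values polled from a device.
polled = {"cpu_percent": 97, "temperature_c": 62, "disk_used_percent": 91}

def evaluate(polled, thresholds):
    """Return an alert for every metric exceeding its configured threshold."""
    return [f"ALERT: {name}={value} exceeds threshold {thresholds[name]}"
            for name, value in polled.items()
            if name in thresholds and value > thresholds[name]]

for alert in evaluate(polled, thresholds):
    print(alert)
```

An SNMP agent applies the same logic on the device itself and sends a trap to the management system only when a threshold is crossed.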
NetFlow A flow collector is a means of recording metadata and statistics about network traffic rather than recording each frame. Network traffic and flow data may come from a wide variety of sources (or probes), such as switches, routers, firewalls, and web proxies. Flow analysis tools can provide features such as the following:
Highlighting of trends and patterns in traffic generated by applications, hosts, and ports.
Alerting based on detection of anomalies, flow analysis patterns, or custom triggers.
Visualization tools that show a map of network connections and make interpretation of patterns of traffic and flow data easier.
Identification of traffic patterns revealing rogue user behavior, malware in transit, tunneling, or applications exceeding their allocated bandwidth.
Identification of attempts by malware to contact a handler or command & control (C&C) channel.
NetFlow is a Cisco-developed means of reporting network flow information to a structured database. NetFlow has been redeveloped as the IP Flow Information Export (IPFIX) IETF standard (tools.ietf.org/html/rfc7011). A particular traffic flow can be defined by packets sharing the same characteristics, referred to as keys. A selection of keys is called a flow label, while traffic matching a flow label is called a flow record. A flow label is defined by packets that share the same key characteristics, such as the IP source and destination addresses, source and destination ports, and protocol type. These five pieces of information are referred to as a 5-tuple. A 7-tuple adds the input interface and IP type of service data. Each exporter caches data for newly seen flows and sets a timer to determine flow expiration. When a flow expires or becomes inactive, the exporter transmits the data to a collector.
System monitors and logs A system monitor implements the same functionality as a network monitor for a computer host. Like switches and routers, server hosts can report health status using SNMP traps.
Logs are one of the most valuable sources of security information. A system log can be used to diagnose availability issues. A security log can record both authorized and unauthorized uses of a resource or privilege. Logs function both as an audit trail of actions and (if monitored regularly) as a warning of intrusion attempts. Log review is a critical part of security assurance. Referring to the logs only after a major incident misses the opportunity to identify threats and vulnerabilities early and to respond proactively. Logs typically associate an action with a particular user. This is one of the reasons why it is critical that users do not share login details. If a user account is compromised, there is no means of tying events in the log to the actual attacker.

Application and cloud monitors

SNMP offers limited functionality. There are numerous proprietary monitoring solutions for infrastructure, application, database, and cloud environments. Some are designed for on-premises environments and some for the cloud, while others support hybrid monitoring of both types of environments. An application monitor will include a basic heartbeat test to verify that the application is responding. Other factors to monitor include the number of sessions and requests, bandwidth consumption, CPU and memory utilization, and error or security alert conditions. Cloud monitors will assess different facets of cloud services, such as network bandwidth, virtual machine status, and application health.

Vulnerability scanners

A vulnerability scanner will report the total number of unmitigated vulnerabilities for each host. Consolidating these results can show the status of hosts across the whole network and highlight issues with a particular patch or configuration issue.

Antivirus

Most hosts should be running some type of antivirus (A-V) software. While the A-V moniker remains popular, these suites are better conceived of as endpoint protection platforms (EPPs) or next-gen A-V. These detect malware by signature regardless of type, though detection rates can vary quite widely from product to product. Many suites also integrate with user and entity behavior analytics (UEBA) and use AI-backed analysis to detect threat actor behavior that has bypassed malware signature matching. Antivirus will usually be configured to block a detected threat automatically. The software can be configured to generate a dashboard alert or log via integration with a SIEM.

Data loss prevention

Data loss prevention (DLP) mediates the copying of tagged data to restrict it to authorized media and services. As with antivirus scanning, monitoring statistics for DLP policy violations can show whether there are issues, especially where the results show trends over time.

Click one of the buttons to take you to that part of the video. Understand Security Alerting & Monitoring Concepts & Tools 00:00-05:59 James Stanger: When it comes to cyber security, being proactive is an important consideration. One way to be proactive is to enable your alerting and monitoring. And we've brought in Gareth Marchant to tell us more about the... alerting and monitoring concepts and tools that are available. Gareth, how are you doing, man? Gareth Marchant: Hey, doing good, James. Glad to be here! James Stanger: Oh, it's good to have you here. Well, let's talk about... monitoring and what all that means. What are some of the... resources that are available and what are some of the activities first, before we start talking about the tools?
Gareth Marchant: Yeah, so, monitoring is all about getting an early warning and being proactive, you know. So many places are just sort of putting out fires all day. And... just running regular IT infrastructure, that's not a great approach. You've got really frustrated users and frustrated staff. But, looking again into cyber security, you know, reacting to things, oftentimes, it's too late. So, tools like this are designed to help us understand if issues are brewing,
or if things... look out of place, or aren't quite right, so that we can respond to these things and address them before they turn into crises. James Stanger: Very good. So, what kinds of activities are we talking about here, 'cause you've gotta gather information, so... how do you go about doing that? Gareth Marchant: So, that's it. There's a lot of info to collect from... the way users are using computers... the way... you know, the environment is being changed... how resources are being used. All of these things, they leave a... trail of evidence... and those logs... as unexciting as logs are... these plain little text logs are, oftentimes, the lifeblood of security operations. And so, the challenge comes to locate all of these different logs, understand what they represent, and then... come up with a strategy to pull out those really important things. Sometimes, it's... literally just an event in a log that you need to be alerted about. Sometimes it's things in sequence... context, we say. And, being able to recognize those patterns and then getting alerted related to that. But, it's not something we can really do... ourselves manually, effectively, because... there's so much stuff. There's so much technology. There's so many logs, so many systems. And so we have to collect all of that stuff at robot speed and find these issues. So, things like SIEM... you know, which is sort of a specialized kind of log collection. And tools of this nature... come into play to help... provide that sort of speed and efficiency that we need... to get early warning. James Stanger: 'Cause it's kind of a life cycle thing, you... collect the data, you aggregate it, you store it, then you start... kinda getting into it and teasing out insights. Cyber security increasingly is... that real time data, isn't it? Gareth Marchant: Yeah. That's right. It's... shortening that time... to detection. That's key. You know, the earlier you can find it. We use these things, these things like...
you know, cyber attack life cycles, you know that has a life cycle too. But, the... cyber kill chain, where you've got these really well defined steps... from someone taking an interest in the organization to actually acting on their objective. And so, you wanna try to identify when something funny is happening, as early in the kill chain as you can, because you can... sort of stamp it out before it turns into an issue. And so, you know having this real time view of things is really important. But... also, if sometimes, you know... you become aware of something, you know, even not necessarily a cyber attack, but sometimes, there's internal issues that might be driven by the Human Resources Department and they may come back and say, "You know, we've gotten... complaints about this employee, maybe, you know, using resources in ways they shouldn't. You know, can you help us... verify these claims." And so, we need all of this log data to be collected and stored and archived and protected, so we can go back sort of, forensically and say, "Okay, let me look over the last month and see if I can corroborate this claim." James Stanger: And then there's alerts that can come out... right? And things like that. And you have to set thresholds and things like that?
Gareth Marchant: Yeah. Yeah. That's... really, really come to light. A lot of folks lose... steam when they try to implement these tools. They're pretty complicated. They can be really expensive, depending on what you pick. And so, you know, getting these things in place can take a lot of... time and effort. But... then rarely does it... just plug in, and just... get everything done from day one. It takes... time and attention to tweak and tune, and make it alert you on these different issues that are... unique to your environment. Even, you know, even though many organizations run databases, they run them in different ways, and they have different platforms and different applications that use them, you know, in different ways. And they interface with things in different ways. And all of those things, there's issues associated with all of those that are unique. And you have to sort of... figure out what those are and tune that system to alert you of those. They're not always just, you know, an email. It could be a text message. It could be an email. It could be a custom dashboard that you create with charts and graphs and... stoplight kind of controls and these kinds of things to... just to give you this insight, so you can respond in an appropriate amount of time. James Stanger: It makes perfect sense. And then there's all these... real time user behavior analytics tools... all the way to scanning tools like Nessus and OpenVAS, that use... you know, Security Content Automation Protocol (SCAP) data and things like that, right? Gareth Marchant: Yeah. Yeah, consistency is a big part of this. It's about really understanding, you know, what you're seeing and what that means. And... the vulnerability scanners, it's great that you brought that up, because those tools... what's great about those, compared to, like, Endpoint Protection antivirus tools, is this standard suite of tools that are used to enumerate, score and rank and describe issues. And, so...
that helps in this endeavor when you're trying to figure out what the significance of something is. You scan something with an antivirus tool, depending on what you're using, they've just got different names, you know. But with a vulnerability scanner, you know it's standardized. That just makes it easier to recognize some of these issues. James Stanger: Gareth, thanks so much for giving us your wisdom about security alerting and monitoring and the concepts and tools involved. I sure appreciate it, man. Gareth Marchant: You bet, James. Any time. 7.3.3 SIEM and SOAR Click one of the buttons to take you to that part of the video. SIEM and SOAR 00:00-00:33
To protect our networks, we usually implement multiple automated devices such as IDSs, IPSs, firewalls, and security configurations. The problem with implementing these systems is that the amount of generated data becomes overwhelming for a human being to sort through. To help with this, we implement SIEM and SOAR systems. In this lesson, I'll go over each of these systems and how they work to help protect networks. SIEM Tools 00:33-01:46 Security information and event management tools, or SIEM tools, work by gathering all sorts of information from a network and putting it together in one central location. More than just aggregating data, SIEM systems can also actively read all this information and determine if there's an actual threat! Let's see how this works. Everything starts with the collectors. Log collectors are responsible for gathering event logs from security appliances, host systems, and applications. We can also add sensors to the system in order to capture network packets or data inputs from all the disparate systems on our network. This data is sent to the event collectors, and the event collectors send it all to the SIEM. SIEM software takes this data, reads and analyzes it, and separates it into different categories such as logon attempts, database entries, port scans, network congestion, and more. You can review the reports to help find any suspicious network activity. If data exceeds the defined thresholds of normal network activity, the SIEM sends an alert to the security administrator, who then can investigate it and take care of the threat as needed. Next-Generation SIEM 01:46-02:13 The next generation of SIEM systems are taking things to the next level. By implementing artificial intelligence and machine learning, these new systems can analyze user behavior and sentiment to determine if a threat exists. This can be used to detect threats like spear phishing attacks and insider threats. 
SIEM systems are great at helping network administrators filter data and improve security monitoring. But any alert still requires manual intervention. SOAR Systems 02:13-03:58 The acronym SOAR stands for Security Orchestration, Automation, and Response. SOAR systems also gather and analyze data, but these systems take it to the next level. SOAR is a solution stack of compatible software programs that allows an organization to collect security-threat data from multiple sources and respond to low-level security events without human assistance. Let's break this down and see how these systems work. SOAR systems gather the same information as SIEM systems do, but they also gather data from multiple third-party tools. The SOAR system coordinates these tools, sensors, and collectors to work together to gather as much relevant data as possible. This is the orchestration piece of a SOAR system. You can set up a SOAR system to automate tasks that are routine, tedious, and time-consuming, such as looking for and deleting phishing emails. This is usually configured using
checklists called playbooks or a series of conditional steps called runbooks. Automating these tasks frees up time for your security team to focus on more important things. Finally, a SOAR system is able to automatically respond to threats. For example, if malware is discovered, a SOAR system can identify the threat and quarantine it instead of just sending an alert. SOAR systems help to reduce a security operation team's workload by automating workflows and handling low-level tasks automatically. You should use both SIEM and SOAR systems for improved security. Your SOAR system can likely respond to the low-level threats that the SIEM system uncovers, and your security team can take responsibility for the higher-level issues. Summary 03:58-04:30 That'll wrap things up for now. In this lesson, we went over SIEM and SOAR systems. SIEM programs work by gathering all sorts of data from sensors and collectors. Then this data is analyzed, and alerts are sent out for potential threats. A SOAR system takes this one step further and is able to respond to low-level threats itself. You can configure SOAR systems to automate basic tasks that take up your organization's valuable time. Using both systems can greatly improve your network security. 7.3.4 SIEM and SOAR Facts

This lesson covers the following topics:

Security Information and Event Management (SIEM)
Alerting and monitoring activities
Alerting
Reporting
Archiving
Alert tuning

Security Information and Event Management (SIEM)

Software designed to manage security data inputs and provide reporting and alerting is often described as security information and event management (SIEM). The core function of a SIEM tool is to collect and correlate data from network sensors and appliance/host/application logs. In addition to logs from Windows and Linux-based hosts, this could include switches, routers, firewalls, IDS sensors, packet sniffers, vulnerability scanners, malware scanners, and data loss prevention (DLP) systems.
Collection is how the SIEM ingests security event data from various sources. There are three main types of security data collection:

Agent-based — this approach means installing an agent service on each host. As events occur on the host, logging data is filtered, aggregated, and normalized at the host and then sent to the SIEM server for analysis and storage. Collection from Windows/Linux/macOS computers will use agent-based collection. The agent must run
as a process and could use 50–500 MB of RAM, depending on the amount of activity and processing it does.

Listener/collector — rather than installing an agent, hosts can be configured to push log changes to the SIEM server. A process runs on the management server to parse and normalize each log/monitoring source. This method is often used to collect logs from switches, routers, and firewalls, as these are unlikely to support agents. Some variant of the Syslog protocol is typically used to forward logs from the appliance to the SIEM.

Sensor — as well as log data, the SIEM might collect packet captures and traffic flow data from sniffers. A sniffer can record network data using either a switch's mirror port functionality or some tap on the network media.

As distinct from collection, log aggregation refers to normalizing data from different sources to be consistent and searchable. SIEM software features connectors or plug-ins to interpret (or parse) data from distinct types of systems and to account for differences between vendor implementations. Each agent, collector, or sensor data source will require its own parser to identify attributes and content that can be mapped to standard fields in the SIEM's reporting and analysis tools. Another essential function is normalizing date/time zone differences to a single timeline.

Alerting and Monitoring Activities

When data has been collected and aggregated, the SIEM can implement alerting, reporting, and archiving activities. Note that these activities can be performed manually or automated using discrete tools for each security appliance. The advantage of a SIEM is to consolidate the activities into a single management interface. This consolidated functionality is referred to as a "single pane of glass" because of the enhanced visibility into a complex environment that such software offers.
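The parsing and normalization steps described above (mapping source-specific fields to a standard schema, and shifting timestamps onto a single UTC timeline) might be sketched as follows. This is a simplified illustration; the field names and timestamp format are assumptions rather than any real SIEM's schema:

```python
from datetime import datetime, timedelta, timezone

def normalize_event(raw_time, tz_offset_hours, host, message):
    """Parse a source-specific local timestamp and normalize it to UTC,
    mapping the event onto a common field schema."""
    local = datetime.strptime(raw_time, "%Y-%m-%d %H:%M:%S")
    utc = (local - timedelta(hours=tz_offset_hours)).replace(tzinfo=timezone.utc)
    return {
        "timestamp": utc.isoformat(),  # single, comparable timeline
        "host": host,
        "message": message,
    }

# A firewall logging in UTC-5 and a server logging in UTC+1:
a = normalize_event("2024-04-03 14:00:00", -5, "fw01", "Connection denied")
b = normalize_event("2024-04-03 20:00:00", +1, "srv02", "Login failure")
```

Both events normalize to the same 19:00 UTC instant, so they can be correlated even though the two sources logged different local times.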
Alerting

Correlation means interpreting the relationship between individual data points to diagnose incidents of significance to the security team. A SIEM correlation rule is a statement that matches certain conditions. These rules use logical expressions, such as AND and OR, and operators, such as == (matches), < (less than), > (greater than), and in (contains). For example, a single user login failure is not a condition that should raise an alert. Multiple user login failures for the same account, taking place within one hour, are more likely to require investigation and are a candidate for detection by a correlation rule.

Error.LoginFailure > 3 AND LoginFailure.User AND Duration < 1 hour

As well as the correlation between indicators observed in the collected data, a SIEM will likely be configured with a threat intelligence feed. This means that data points observed in the collected network data can be associated with known threat actor indicators, such as IP addresses and domain names.
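The example correlation rule above (more than three login failures for the same user within one hour) could be implemented along these lines. A minimal sketch, assuming events are simple (user, timestamp) pairs rather than a real SIEM rule language:

```python
from datetime import datetime, timedelta

def correlate_login_failures(events, threshold=3, window=timedelta(hours=1)):
    """Flag users with more than `threshold` login failures
    falling within a single sliding one-hour window."""
    by_user = {}
    for user, ts in events:
        by_user.setdefault(user, []).append(ts)
    flagged = set()
    for user, times in by_user.items():
        times.sort()
        for i, start in enumerate(times):
            # count failures in the window beginning at this failure
            count = sum(1 for t in times[i:] if t - start <= window)
            if count > threshold:
                flagged.add(user)
                break
    return flagged

base = datetime(2024, 4, 3, 9, 0)
# alice: four failures in 15 minutes; bob: a single failure
events = [("alice", base + timedelta(minutes=m)) for m in (0, 5, 10, 15)]
events.append(("bob", base))
```

Running the rule over these events flags alice but not bob, matching the intent of the rule statement in the text.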
Each alert will be dealt with by the incident response processes of analysis, containment, eradication, and recovery. When used in conjunction with a SIEM, two steps in alert response and remediation deserve particular attention:

Validation during the analysis process is how the analyst decides whether the alert is a true positive and needs to be treated as an incident. A false positive generates an alert, but no actual threat activity exists.

Quarantine isolates the source of indicators, such as a network address, host computer, or file.

Alert response and remediation steps will often be guided by a playbook that assists the analyst with applying all incident response processes for a given scenario. One of the advantages of SIEM and advanced security orchestration, automation, and response (SOAR) solutions is to automate validation and remediation fully or partially. For example, a quarantine action could be available as a mouse-click action via an integration with a firewall or endpoint protection product. Validation is made easier by correlating event data to known threat data and pivoting between sources, such as inspecting the packets that triggered a particular IDS alert.

Reporting

Reporting is a managerial control that provides insight into the security system's status. A SIEM can assist with reporting activity by exporting summary statistics and graphs. Report formats and contents are usually tailored to meet the needs of different audiences:

Executive reports provide a high-level summary for decision-makers. This guides planning and investment activity.
Manager reports provide cybersecurity and department leaders with detailed information. This guides day-to-day operational decision-making.
Compliance reports provide whatever information is required by a regulator.

Determining which metrics are most useful for reporting is always very challenging.
The following types illustrate some common use cases for reporting:

Authentication data, such as failed login attempts and critical file audit data.
Hosts with missing patches and/or configuration vulnerabilities.
Privileged user account anomalies, such as out-of-hours use or excessive requests for elevated permissions.
Trend reporting to show changes to key metrics over time.

A SIEM can be used for two types of reporting:

Alerts and alarms detect the presence of threat indicators in the data and can be used to start incident cases. Day-to-day management of alert reporting forms a large part of an analyst's workload.

Status reports communicate data about the level of threat or number of incidents being raised and the effectiveness of security controls and response procedures. This type of reporting can be used to inform management decisions. It might also be required for compliance reporting.
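The summary statistics that status reports draw on can be sketched very simply: group normalized events and count them per category. The event records below are hypothetical examples, not output from any particular SIEM:

```python
from collections import Counter

# Hypothetical normalized SIEM events
events = [
    {"category": "authentication", "outcome": "failure"},
    {"category": "authentication", "outcome": "failure"},
    {"category": "malware", "outcome": "blocked"},
]

def summary_report(events):
    """Produce the kind of aggregate a status report is built from:
    event counts grouped by category."""
    return Counter(e["category"] for e in events)

report = summary_report(events)
# authentication: 2, malware: 1
```

A real report generator would add time ranges, trends, and formatting tailored to each audience, but the underlying aggregation looks like this.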
A SIEM will ship with several preconfigured dashboards and reports, but it will also make tools available for creating custom reports. It is critical to tailor the information presented in a dashboard or report to meet the needs and goals of its intended audience. If the report contains an overwhelming amount of data or irrelevant information, it will not be possible to identify remediation actions quickly.

Archiving

A SIEM can enact a retention policy to keep historical log and network traffic data for a defined period. This allows for retrospective incident and threat hunting and can be a valuable source of forensic evidence. It can also meet compliance requirements to hold archives of security information. SIEM performance will degrade if excessive data is kept available for live analysis. A log rotation scheme can be configured to move outdated information to archive storage.

Alert Tuning

Correlation rules are likely to assign a criticality level to each match. Examples include the following:

Log only — an event is produced and added to the SIEM's database, but no alert or alarm is raised.

Alert — the event is listed on a dashboard or incident handling system for an agent to assess. The agent analyzes the event data and either dismisses it to the log or validates it and starts an incident case.

Alarm — the event is automatically classified as critical, and a priority alarm is raised. This might mean emailing an incident handler or sending a text message.

Alert tuning is necessary to reduce the incidence of false positives. False positive alerts and alarms waste analysts' time and lower productivity. Alert fatigue is when analysts are so consumed with dismissing numerous low-priority alerts that they miss a single high-impact alert that could have prevented a data breach. Analysts can become more preoccupied with looking for a quick reason to dismiss an alert than with adequately evaluating it.
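The log/alert/alarm criticality levels described above amount to a routing decision for each matched event. The numeric severity thresholds below are assumptions for illustration; a real SIEM assigns criticality per correlation rule:

```python
def route_event(event):
    """Route a correlated event by criticality level:
    'log'   -> stored in the SIEM database only
    'alert' -> listed on a dashboard for an analyst to assess
    'alarm' -> raises a priority notification (e.g., email or text)."""
    severity = event.get("severity", 0)
    if severity >= 8:
        return "alarm"  # automatically classified as critical
    if severity >= 4:
        return "alert"  # queued for analyst triage
    return "log"        # recorded for later searching only
```

For example, route_event({"severity": 9}) routes to "alarm", while an event with no severity set falls through to "log".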
Reducing false positives is difficult, firstly because there is not a simple dial to turn for overall sensitivity, and secondly because reducing the number of rules that produce alerts increases the risk of false negatives. There is also a concept of true negatives. This is a measure of events that the system has properly allowed. Metrics for false and true negatives can be used to assess the performance of the alerting system. Some of the techniques used to manage alert tuning include the following:

Refining detection rules and muting alert levels — If a particular rule generates multiple dashboard notifications, the rule's parameters can be adjusted to reduce this, perhaps
by adding more correlation factors. Alternatively, the alert can be muted to log-only status or configured only to produce a single notification for every 10 or 100 events.

Redirecting sudden alert "floods" to a dedicated group — Changes in the network can cause a rule to produce far more alerts than it should. Once confirmed that this is a false positive, rather than "spamming" each analyst's dashboard, it can be assigned to a dedicated agent or team to remediate.

Redirecting infrastructure-related alerts to a dedicated group — Misconfigurations, such as deviance from a baseline, can cause continually high alert volumes. While these are important to fix, that is not the job of the incident response team and is better managed by an infrastructure team.

Continuous monitoring of alert volume and analyst feedback — Managers should oversee the system and be aware of risks from alert fatigue. The experience of individual analysts can be utilized to reduce alert sensitivity, change the parameters of a given rule, or automate the processing of the rule using a SOAR solution.

Deploying machine learning (ML) analysis — ML can rapidly analyze the data sets produced by a SIEM. It can be used to monitor how analysts respond to alerts and attempt to automatically tune the ruleset to reduce false positives without impacting true positives.

7.3.5 Analyze Network Traffic with Netflow Click one of the buttons to take you to that part of the video. Configure Netflow on pfSense 00:00-00:56 NetFlow is a network protocol system originally created by Cisco. NetFlow collects active IP network traffic as it flows in or out of an interface. The data is gathered and then analyzed to create a picture of network traffic flow and the volume of traffic. So, why do we use NetFlow? Well, the NetFlow protocol is used by IT professionals as a network traffic analyzer to determine its point of origin, its destination, the volume, and the paths on the network.
Before NetFlow's creation, network IT engineers and administrators would use Simple Network Management Protocol (SNMP) for network traffic analysis and monitoring. Although NetFlow is a feature that was created by Cisco, there are open-source alternatives such as softflowd that can be installed to work with pfSense. This will allow us to capture network traffic and create a picture to better analyze in and out traffic. Install softflowd 00:56-01:25 In order to use softflowd, we must install it from the package list. To do so, we need to go to System, then Package Manager. Tab over to Available Packages. Just to make it easier, so we're not scrolling for a while, type 'softflow' in the search bar and click Search. Next, we can click the Install button and then confirm. When the software is installed successfully, we can then configure it. Configure 01:25-02:27
To adjust our settings for softflowd, we need to go to Services, then softflowd. Make sure at the top it says Enabled. Interface can be either LAN or WAN, depending on which side of the firewall you would like to capture the data. It is possible to grab data from both if you would like; however, we're just going to select WAN for now. The Host will be the NetFlow analyzer, which happens to be '192.168.30.202' on our network. Port will be the designated port of your choosing. Sometimes it is a good idea to use an alternative port for security purposes, but for now we're using the standard '2055' port for NetFlow. We will leave the max flows at the default '8192'. The NetFlow version will depend on your analyzer's capability. If it doesn't support version ten, then you may want to see what version it does support. I'm going to flip this to version '9' because that is what our analyzer supports. Click Save at the bottom. Verify 02:27-03:13 Even though our settings are saved, we want to make sure things are working right. There is a command you can run to show statistics on softflowd. Go to Diagnostics, then Command Prompt. In the Execute Shell Command field, we can insert our command 'softflowctl -c /var/run/softflowd.vmx0.ctl statistics'. The part that has vmx0 may differ if your network interface is labeled differently. Execute that command, and now we can see some results. You may not see a lot of traffic yet; however, you may want to check back later to verify flow packets and byte counts are going up. Now we are ready to send and configure NetFlow to our analyzer. Summary 03:13-03:19 That's it for this demo. In this demo, we installed and configured softflowd. 7.3.6 Data Loss Prevention Click one of the buttons to take you to that part of the video. Data Loss Prevention 00:00-00:31 Every business has sensitive data in its system, and protecting it is a high priority.
A data leakage incident happens when sensitive data like credit card numbers, intellectual property, financial information, or proprietary company information is disclosed to an unauthorized person. In this lesson, we'll look at five approaches to data security, including data loss prevention, or DLP; masking; encryption; tokenization; and rights management. Data States 00:31-00:55 Data can exist in one of three states, and it's important to use the right security approach for each state. The first state is while the data is in use on an endpoint system, like a workstation. The second state is while the data is in motion, such as when it's transmitted over the network or to the cloud. The third state is while the data rests on a storage medium, like a hard disk drive, or in a database. DLP 00:55-01:29 Let's look at data protection through a DLP system. A DLP system works like a guard at the perimeter of your network, allowing non-sensitive data to leave, but restricting sensitive data from
being transmitted out of the company. A DLP analyzes the network traffic in accordance with the organization's security policies. For example, an e-commerce retailer could use a network DLP solution to monitor for files containing credit card numbers. If one of those files were being transmitted out of the company, the DLP software would flag it as a potential security problem. Masking 01:29-01:42 Next, let's look at the masking approach. Masking works by replacing sensitive data with realistic fictional data. There are different types of masking. We'll look at dynamic data masking, and then we'll look at static data masking. Dynamic Data Masking 01:42-02:04 Let's start with dynamic. Dynamic masking replaces original information with a mask that mimics the original in form and function, making it useful for data that's in use or in processing. For example, someone's name would be replaced with another name, or credit card numbers would be replaced with a random number that contains the same number of characters. Dynamic Data Masking 02:04-02:19 This method can be used to control which users can see the actual data. A bank could have third-party analytics performed on their accounts while masking the account numbers and clients' names. With dynamic data masking, the original data can be retrieved. Static Data Masking 02:19-02:46 Another type of masking is static data masking. This type is helpful for data at rest in a database and can be specified by field or column. You may want to use this method if you're making copies of a database for testing, development, or reporting. The complex algorithms in static masking make the original data irretrievable through reverse-engineering, so making a masked copy may be a better choice than masking the original database. Encryption 02:46-03:07 All right. Next, let's briefly review encryption. Encryption happens when an algorithm changes plain text data into unreadable ciphertext.
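A masking operation like the one described in the masking sections above, replacing digits while preserving the original format and length, can be sketched as follows. This is an illustrative sketch (the '#' mask character and keep-last-four policy are assumptions), not a production masking routine:

```python
def mask_card_number(pan):
    """Mask a card number while preserving its length, separators,
    and the last four digits, so the masked value mimics the original
    in form (as dynamic data masking does)."""
    digits = [c for c in pan if c.isdigit()]
    masked_digits = "#" * (len(digits) - 4) + "".join(digits[-4:])
    # Re-insert the original separators around the masked digits
    out, i = [], 0
    for c in pan:
        if c.isdigit():
            out.append(masked_digits[i])
            i += 1
        else:
            out.append(c)
    return "".join(out)

masked = mask_card_number("4111-1111-1111-1234")
# "####-####-####-1234"
```

Note this sketch is one-way; in a real dynamic-masking deployment the original value remains retrievable from the protected store, as the lesson describes.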
The encryption algorithm has a variable that's called a key. The authorized user that receives the encrypted data can decrypt it through the cipher key. This helps to protect data in motion. Tokenization 03:07-03:43 Now let's look at a tokenization approach. Tokenization is similar to encryption and masking--it replaces actual data with something else. But tokenization uses a randomly generated alphanumeric character set called a token to replace the original data. The token server stores the original data and is protected by security measures like authentication and authorization protocols so that the original information is disclosed only when the correct token is presented. This method is
frequently used for credit card numbers, bank accounts, medical records, and other personally identifiable information. Tokenization 03:43-04:18 For example, when you have a credit card number stored on your mobile phone through an app, the app connects with the remote token server, which creates the token and replaces the credit card number stored in your phone. Then, when you go to use your phone's app to pay for your purchase at the store, the store's point of sale terminal will contact a merchant acquirer. The merchant acquirer presents the token value to the remote token server. The server uses the token to map back to your actual credit card number and authorize the purchase. An authorization is sent to the merchant, and a message from the server is sent to your phone. Rights Management 04:18-04:53 Finally, we have rights management. Rights management is data protection at the file level. With rights management, you identify sensitive files in the file system and embed them with your organization's security policy. The key benefit of this approach is that the security policy travels with the specific file even if it's moved or copied. You can continue to control access to the file, such as restricting who it can be transferred to, even when the file is no longer on your system. Rights management falls into two categories: Digital Rights Management, or DRM, and Information Rights Management, or IRM. Digital Rights Management 04:53-05:38 DRM is file-level management applied to rich media like music, videos, and software. This strategy uses security technologies such as encryption, permissions, product keys, limited install applications, and persistent online authentication to prevent editing, sharing, and unauthorized copying. The purpose is to protect copyrighted media and software. For example, when a consumer purchases a software program, the program is not accessible without a product key provided by the manufacturer at the time of purchase.
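The tokenization flow described above (tokenize on the device, detokenize at the token server) can be sketched with a toy in-memory vault. A real token server sits behind authentication, authorization, and audit controls; this minimal version only shows the token-to-original mapping.

```python
import secrets

class TokenVault:
    """Toy token server: maps random tokens to the original values it protects."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value):
        # The token is random, so it reveals nothing about the original data
        token = secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token):
        # The original is disclosed only when the correct token is presented
        return self._store.get(token)

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1234")  # what the phone stores instead of the card number
original = vault.detokenize(token)             # what the token server returns to the acquirer
```

Unlike encryption, there is no mathematical relationship between token and original; losing the vault's mapping makes the token permanently meaningless.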
An example of continuous online authentication is when a consumer logs in to an online application or streams the information through an account that requires authentication. Information Rights Management 05:38-06:10 Now let's move on to IRM, or Information Rights Management. It's also called Enterprise Rights Management sometimes. It focuses on business-to-business transfers for files such as documents, emails, spreadsheets, and financial data. Information rights management utilizes encryption and permissions to create rules for the files, which can allow or deny copying and pasting, editing, forwarding, and printing. An example is a contract document that only the recipient can open and digitally sign and is denied forwarding abilities. Summary 06:10-06:48 That's it for this lesson. In this video, we review five approaches to data protection. First, we looked at data loss prevention. Next, we discussed two types of masking, dynamic and static. We talked about encryption, which uses a cipher with a key to encode information that can be decoded with a
key by the receiver. Next, we discussed tokenization, which uses an alphanumeric value as a token to replace sensitive information that's protected by the token server. And finally, we went over rights management, which protects data through permissions at the file level that stay with the file even if it leaves your network. 7.3.7 DLP Facts This lesson covers the following topics: Data loss prevention (DLP) Smaller organizations might classify and type data manually to apply data guardianship policies and procedures. However, an organization that creates and collects large amounts of personal data usually needs automated tools to assist with this task. Protecting valuable intellectual property (IP) data may also be required. Data loss prevention (DLP) products automate the discovery and classification of data types and enforce rules so that data is not viewed or transferred without proper authorization. Such solutions will usually consist of the following components: Policy server — to configure classification, confidentiality, and privacy rules and policies, log incidents, and compile reports. Endpoint agents — to enforce policy on client computers, even when they are not connected to the network. Network agents — to scan communications at network borders and interface with web and messaging servers to enforce policy. DLP agents scan content in structured formats, such as a database with a formal access control model, or unstructured formats, such as email or word processing documents. A file cracking process is applied to unstructured data to render it in a consistent scannable format. The transfer of content to removable media, such as USB devices, by email, instant messaging, or even social media, can be blocked if it does not conform to a predefined policy. Most DLP solutions can extend the protection mechanisms to cloud storage services, using either a proxy to mediate access or the cloud service provider's API to perform scanning and policy enforcement. 
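A DLP agent needs some way to recognize sensitive data inside unstructured content. As a rough sketch, assuming pattern-plus-checksum detection (one common approach, not a specific product's engine), a credit card scanner might pair a regular expression with the Luhn checksum to cut down on false positives:

```python
import re

# 13-16 digits, optionally separated by spaces or dashes, starting/ending on a digit
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(number):
    """Luhn checksum; filters out digit runs that merely look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_text(text):
    """Return candidate card numbers found in unstructured content."""
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

hits = scan_text("Order notes: card 4111 1111 1111 1111, ref 1234 5678.")
print(hits)  # ['4111 1111 1111 1111']
```

A real DLP engine layers many such classifiers (and the file-cracking step described above) before any remediation rule fires.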
Remediation is the action the DLP software takes when it detects a policy violation. The following remediation mechanisms are typical: Alert only — copying is allowed, but the management system records an incident and may alert an administrator. Block — the user is prevented from copying the original file but retains access. The user may or may not be alerted to the policy violation, but it will be logged as an incident by the management engine.
Quarantine — access to the original file is denied to the user (or possibly any user). This might be accomplished by encrypting the file or moving it to a quarantine area in the file system. Tombstone — the original file is quarantined and replaced with one describing the policy violation and how the user can re-release it. When configured to protect a communications channel such as email, DLP remediation might take place using client-side or server-side mechanisms. For example, some DLP solutions prevent attaching files to the email before sending it. Others might scan the email attachments and message content, strip out specific data, or stop the email from reaching its destination.
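The remediation options above can be sketched as a simple dispatcher. The action names follow the list above; the file handling is a simplified stand-in for what a real DLP engine does (the function and its signature are invented for this example).

```python
import os
import shutil

def remediate(action, filepath, incident_log, quarantine_dir="quarantine"):
    """Apply a DLP policy action to a file that triggered a violation.
    Every action is logged as an incident, matching typical DLP behavior."""
    incident_log.append((action, filepath))
    if action == "alert":
        return "copy allowed; administrator notified"
    if action == "block":
        return "copy prevented; user retains access to the original"
    if action == "quarantine":
        # Deny access by moving the original out of reach
        os.makedirs(quarantine_dir, exist_ok=True)
        shutil.move(filepath, os.path.join(quarantine_dir, os.path.basename(filepath)))
        return "original moved to quarantine"
    if action == "tombstone":
        # Like quarantine, but leave behind a notice describing the violation
        with open(filepath, "w") as f:
            f.write("Removed: policy violation. Contact security to re-release this file.")
        return "original replaced with a tombstone notice"
    raise ValueError(f"unknown remediation action: {action}")
```

In practice the choice of action comes from the policy server's rules, and a tombstoned original would also be preserved in quarantine rather than overwritten as done here.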
7.4.1 Penetration Testing Click one of the buttons to take you to that part of the video. Penetration Testing 00:00-01:44 Penetration testing, often referred to as "pen testing," is a practice conducted to assess the security of an IT infrastructure by safely attempting to exploit vulnerabilities. These vulnerabilities may exist in
operating systems, services and application flaws, improper configurations, or risky end-user behavior. The goal of this simulated attack on a system is to identify any weak spots in an organization's defense that attackers could potentially exploit. It's designed to provide organizations with valuable insights regarding their security posture and ability to withstand cyber attacks. Once it has been decided what exactly can be tested, a timeframe and payment agreements—if applicable— should also be established and outlined in the Scope of Work. While the Scope of Work defines what work will be done, the Rules of Engagement define exactly how that work will be carried out. It should specifically state how sensitive data should be handled and who should be notified if something unexpected happens during the test. It should also specify what test methods should be used. Active vs. Passive 01:44-03:01 In penetration testing, we distinguish between two primary testing methods: active and passive. Active Penetration Testing involves direct interaction with the system to uncover vulnerabilities. This could be attempting to exploit a known software vulnerability or trying to crack a weak password. In this method, the tester is directly engaging with the target system, often leaving a trace or log of their activities. The main goal of active penetration testing is to breach the system's defenses and evaluate the impact of such a breach. Passive Penetration Testing, on the other hand, involves gathering information about the target system without directly interacting with it. This could involve network monitoring, analyzing system logs, or even something as simple as googling for information about the system or the organization. The goal here is to gather as much information as possible to understand the system and identify potential vulnerabilities. Passive testing is more covert, making it less likely to be noticed and, therefore, less likely to raise any alarms. 
Both methods are usually used and are important for a comprehensive penetration test because both active and passive testing methods offer unique perspectives and insights into an organization's security posture. Penetration Testing Steps 03:01-03:53 A pen test might involve the following steps: First, verify that a threat exists. To do so, you'd use surveillance, social engineering, network scanners, and vulnerability assessment tools to identify weak spots where vulnerabilities could be exploited. Next is to bypass security controls. This involves looking for easy ways to attack the system. Sometimes, the simpler solutions are the most vulnerable. For example, could the network firewall be bypassed by gaining physical access to the computer in the building? Once the computer has been accessed, malware could be introduced using a USB drive. The next step is to actively probe controls for configuration weaknesses and errors. This could include weak passwords or software vulnerabilities. Finally, once vulnerabilities have been discovered, a pen tester will prove that a vulnerability is a high risk by exploiting it to gain access to data or to install backdoors.
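The "verify that a threat exists" step often starts with a port scan. A minimal TCP connect scan can be sketched in a few lines. This is illustrative only; run it solely against hosts you are authorized to test, and note that dedicated tools like Nmap are far more capable.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect scan: a completed handshake means the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on a lab machine
# print(scan_ports("192.168.1.102", [22, 80, 443]))
```

Because each probe completes a real handshake, this is an active technique: it generates traffic and leaves traces in the target's logs, exactly as described above.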
Summary 03:53-04:11 That's it for this lesson. In this lesson, we talked about penetration testing. We discussed outlining and documenting the goals of the pen tests before testing begins, including specifications regarding whether tests will be active or passive. We then discussed the steps that are taken during a penetration test. 7.4.2 Penetration Testing Facts Penetration testing, also commonly referred to as pentesting or ethical hacking, is the authorized simulation of an attack against an organization's security infrastructure. This can include physical and network security. This lesson covers the following topics: Types of penetration tests Security teams Documentation/contracts Penetration testing life cycle Types of Penetration Tests The purpose of a penetration test is to discover any vulnerability in an organization's network or physical security. Different types of penetration tests can be performed to simulate internal or external threats. The following describes the types of penetration tests:
Known environment (previously known as white box) testing - The penetration tester is given full knowledge of the target or network. This test allows for a comprehensive and thorough test but is unrealistic.
Unknown environment (previously known as black box) testing - The penetration tester has no information regarding the target or network. This type of test best simulates an outside attack and ignores insider threats.
Partially known environment (previously known as gray box) testing - The penetration tester is given partial information about the target or network, such as IP configurations, email lists, etc. This test simulates the insider threat.
Bug bounties - These unique tests are programs that are set up by organizations such as Google, Facebook, and many others. The organization sets strict guidelines and boundaries for ethical hackers to operate within.
Any discovered vulnerabilities are reported, and the ethical hacker is paid based on the severity of the vulnerability.
Security Teams
Depending on their role, members of security operations can be placed on different teams. These teams all work together to discover and fix security vulnerabilities. The following describes the more common security teams:
Red team - The red team members are the ethical hackers. This team is responsible for performing the penetration tests.
Blue team - Blue team members are the defense of the system. This team is responsible for stopping the red team's advances.
Purple team - Members of the purple team work on both offense and defense. This team is a combination of the red and blue teams.
White team - The white team members are the referees of cybersecurity. This team is responsible for managing the engagement between the red and blue teams. This group typically consists of the managers or team leads.
Documentation/Contracts
Before any penetration test can take place, the goals and guidelines of the test must be established. These are spelled out in the scope of work and rules of engagement documents. The following describes these important documents:
Scope of work - The scope of work is a very detailed document that defines exactly what is going to be included in the penetration test. This document is also referred to as the statement of work. This document should answer the following:
Who - specific IP ranges, servers, applications, etc., should be explicitly listed.
What - anything that is off limits, such as specific servers or tactics, should be explicitly listed.
When - the time frame for the penetration test. This should identify how long the test will run, the deliverables, and when the deliverables are due.
Where - the location of the penetration tester. Sometimes, the penetration tester will be located in a different state. In this case, all parties must agree on which state laws will be followed.
Why - the purpose and goals of the test.
Penetration tests are often performed for compliance purposes, and these requirements must be detailed in the document. Special considerations, such as travel, required certifications, or anything else unexpected, will be defined in the scope of work. Finally, the scope of work should define payment and how to handle requests for additional work. This will help to reduce scope creep.
Rules of engagement - The rules of engagement document defines exactly how the penetration test will be carried out.
The following should be defined in the rules of engagement:
Type of test - whether the test will be a white box, black box, or gray box test.
Data handling - an explicit statement of how sensitive data is to be handled. Be aware that the pentester will typically come across sensitive data during a penetration test.
Notifications - the detailed process on when and how to notify the IT team.
Penetration Testing Life Cycle
Once the paperwork is complete, the pentester can begin work. The following covers the phases of the penetration testing life cycle:
Perform reconnaissance - The first phase in the pentesting process is reconnaissance, also known as footprinting. In this phase, the pentester begins gathering information on the target. This can include gathering publicly available information, using social engineering techniques, or even dumpster diving.
Scan/enumerate - Running scans on the target is the second phase. During this phase, the ethical hacker is actively engaged with the target. Enumeration is part of the scanning phase. Enumeration uses scanning techniques to extract information such as usernames, computer names, network resources, share names, and running services.
Gain access - The third phase takes all of the information gathered in the reconnaissance and scanning phases to exploit any discovered vulnerabilities in order to gain access. After gaining access, the pentester can perform lateral moves, pivoting to other machines on the network. The pentester will begin trying to escalate privileges with the goal of gaining administrator access.
Maintain access - Once the pentester has gained access, maintaining that access becomes the next priority. This can be done by installing backdoors, rootkits, or Trojans.
Report - The final phase is generating the test results and supporting documentation. After any penetration test, a detailed report must be compiled.
Documentation provides extremely important protection for both the penetration tester and the organization. 7.4.3 Penetration Testing Methods
This lesson covers the following topics: Penetration tests Active and passive reconnaissance Known, partially known, and unknown testing methods Exercise types Penetration Tests A penetration test—often shortened to pen test—uses authorized hacking techniques to discover exploitable weaknesses in the target's security systems. Pen testing is also referred to as ethical hacking. A pen test might involve the following steps: Verify a Threat Exists — use surveillance, social engineering, network scanners, and vulnerability assessment tools to identify a vector by which vulnerabilities could be exploited. Bypass Security Controls — look for easy ways to attack the system. For example, if the network is strongly protected by a firewall, is it possible to gain physical access to a computer in the building and run malware from a USB stick? Actively Test Security Controls — probe controls for configuration weaknesses and errors, such as weak passwords or software vulnerabilities. Exploit Vulnerabilities — prove a vulnerability is a high risk by exploiting it to access data or install backdoors. The critical difference from passive vulnerability assessment is that an attempt is made to actively test security controls and exploit any vulnerabilities discovered. Pen testing is an intrusive assessment technique. For example, a vulnerability scan may reveal that an SQL Server has not been patched to safeguard against a known exploit. A penetration test would attempt to use the exploit to perform code injection and compromise the server. This provides active testing of security controls. Active and Passive Reconnaissance Active and passive reconnaissance provides crucial information that helps penetration testers understand target systems and identify potential vulnerabilities to plan an attack effectively. 
A combination of active and passive reconnaissance techniques yields the most comprehensive information regarding the target environment during a penetration testing engagement. Active reconnaissance involves probing and interacting with target systems and networks to gather information. Active reconnaissance includes activities that generate network traffic by directly requesting information from target systems. Active reconnaissance aims to discover and obtain information about the target infrastructure, services, and potential vulnerabilities. Standard techniques used in active reconnaissance include the following: Port Scanning — scanning a target network to identify open ports and their services.
Service Enumeration — interacting with identified services to gather information about their versions, configurations, and potential vulnerabilities. OS Fingerprinting — attempting to identify the operating system running on target machines by analyzing network responses and behavior. DNS Enumeration — gathering information about the target's DNS infrastructure, such as domain names, subdomains, and IP addresses. Web Application Crawling — exploring web applications to identify pages, directories, and potential vulnerabilities. Passive reconnaissance, by contrast, gathers information without directly engaging the target. Standard techniques used in passive reconnaissance include the following: Open-Source Intelligence (OSINT) Gathering — collecting publicly available information from various sources like search engines, social media, public databases, and websites. Network Traffic Analysis — monitoring network traffic to identify patterns, devices, IP addresses, and potential vulnerabilities without actively generating traffic. Social Engineering — gathering information through social engineering techniques, such as deceiving employees and vendors to extract sensitive information or access credentials. Passive reconnaissance helps penetration testers gather initial information on a target's digital footprint. It is less intrusive and has a lower detection risk than active reconnaissance techniques. Known, Partially Known, and Unknown Testing Methods The decision to use a known environment, partially known environment, or unknown environment penetration test is influenced by several factors, such as knowledge regarding the target system or network, the organization's risk appetite, and compliance requirements. Budget and resource constraints may also contribute to selecting the penetration testing method, as known environment testing generally requires fewer resources than partially known or unknown environment testing.
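Service enumeration frequently begins with banner grabbing: connect to an open port and read whatever the service announces about itself. A minimal sketch follows (authorized targets only; many services stay silent until they receive a protocol-appropriate request, so this only works for chatty protocols like SSH, SMTP, or FTP). The lab IP in the comment is an example, not a real requirement.

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Return the text a service volunteers right after connect, or None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None  # closed port, timeout, or unreachable host

# Example: SSH servers self-identify immediately, e.g. "SSH-2.0-OpenSSH_9.6"
# print(grab_banner("192.168.1.102", 22))
```

The returned version string is exactly what feeds the next step described above: matching identified service versions against known vulnerabilities.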
The objectives of the penetration test influence the choice, with known environment testing suitable for assessing known vulnerabilities and partially known or unknown environment testing preferred for identifying unknown vulnerabilities. The complexity of the target system or network is also a factor, as more complex systems may necessitate more comprehensive testing methods. Organizations often combine different methods to achieve other objectives. Exercise Types Penetration testing is a crucial component of cybersecurity assessments that involves simulating real-world attacks on computer systems, networks, or applications to identify vulnerabilities and weaknesses. Different types of penetration tests exist to address specific objectives related to a security evaluation, such as testing specific systems, assessing incident response capabilities, measuring the effectiveness of physical controls, and many other areas. Different types of penetration tests allow organizations to use a flexible and prioritized approach toward security assessment. Physical penetration testing, or physical security testing, describes assessments of an organization's physical security practices and controls. It involves simulating real-world attack scenarios to identify vulnerabilities and weaknesses in physical security systems,
such as access controls, surveillance, and perimeter defenses. Physical penetration testing aims to assess the effectiveness of physical security controls and identify potential entry points or weaknesses that an attacker could exploit. During physical penetration testing, a skilled tester attempts to gain unauthorized physical access to restricted areas, sensitive information, or critical assets within the organization using techniques like social engineering, tailgating, lock picking, bypassing alarms or surveillance systems, and exploiting physical vulnerabilities. Integrated penetration testing refers to a holistic approach that combines different types of penetration testing methodologies and techniques to assess the overall security of an organization's systems, networks, applications, and physical infrastructure. Integrated penetration testing aims to provide a comprehensive and realistic evaluation of an organization's security operations. The importance of integrated penetration testing lies in its ability to accurately represent the organization's security posture and identify potential risks often overlooked when testing in isolated areas. For example, offensive and defensive penetration testing comprehensively assesses an organization's security posture. Offensive testing identifies vulnerabilities and weaknesses, while defensive testing evaluates the organization's ability to detect and respond to threats. By integrating both approaches, organizations can improve their security capabilities to better protect against different threats. Continuous pen testing is a similar concept, which focuses on technical vulnerabilities and is often configured to leverage automation, especially for CI/CD environments. Review the following for more information: https://informer.io/resources/continuous-penetration-testing 7.4.4 Exploring Penetration Testing Tools Click one of the buttons to take you to that part of the video.
Exploring Penetration Testing Tools 00:00-01:39 In this demonstration, we're going to look at penetration testing tools that you can use to evaluate the security of your network or a particular host on your network. As you probably know, Linux is a very popular platform for testing network and host security. First, you need to choose a Linux distribution that you can use for penetration testing. A good place to start looking for those distributions is the DistroWatch website. Many of the Linux distributions here can be run as a live CD or installed on the hard drive. A live CD is an optical disc or a bootable USB drive that has the Linux operating system installed on it. It can also have many of the security tools you need to perform a penetration test. Because it's installed on a USB drive or an optical disc, you can insert that into the computer and boot the system off the disc. When you do, you'll have a Linux operating system up and running with the tools you need for testing.
There are several advantages associated with testing this way. First, there's a wide variety of free penetration testing tools available for the Linux operating system, and if you're using a LiveCD, you don't have to install an operating system. If you're booting off an optical disc, there's no way for malware or anything else to actually affect the files on the disc. There are many different distributions available. You can see their names here. You can also see the purpose of the distribution listed over here. Since we're interested in security, let's go over here and look at this one. Kali Linux and Parrot Linux 01:39-02:49 The most popular and well-accepted distribution for security and penetration testing is Kali Linux. Let's go ahead and click on that and see what we can learn. It tells us where the home page is and a lot of other information. I have its home page open in another tab, up here. We can read a little bit more and see what tools are actually included on this distribution. I'm going to go to the Download page. I see the latest version right here, Kali Linux 64-Bit. Over here, I can see the checksum, or hash value. Now, I already have a copy of Kali downloaded and ready to go. We'll get to that in a minute. But I want to go back to DistroWatch. I'll go back to the previous page. I just want to point out that there's another Linux distribution called Parrot Linux. Parrot is a distribution with a collection of various utilities that are popular with penetration testers and computer forensic professionals. Here's the link to the home page, but I already have a tab opened up. Down here, you can read more about the project and learn about the different tools. Now back to Linux. Kali Install and Tools 02:49-03:48 My Kali Linux ISO image is downloaded, and I've booted it up. Now it's asking me now if I want to install it to disc or just use it as a LiveCD. I'll go ahead and say, "Sure, go ahead and just use it as a LiveCD." 
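Before booting a downloaded image, it is worth verifying it against the checksum published on the download page; this guards against corrupted or tampered downloads. A small sketch using Python's hashlib (the ISO filename is illustrative):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a downloaded image in chunks, so large
    ISO files never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the hash shown on the download page, e.g.:
# sha256sum("kali-linux-live-amd64.iso") == "<published checksum>"
```

The same check can of course be done with the `sha256sum` command-line tool on most Linux systems.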
We'll let it launch, and pretty soon, we'll see the graphical user interface. Each Linux distribution has its own set of package tools. This one specializes in security tools. Here are some of the tools that Kali Linux has packaged with its distribution. Over here, you'll see the Metasploit Framework, Armitage (which is a graphical user interface for that framework), Burp Suite, BeEF Cross-Site Scripting Framework, and some others that we could use to do our penetration testing. Now, obviously, we don't have time in this short video to really discuss penetration testing or even look at a specific tool in-depth, but I'll show you how you could run a penetration test using one of the pre-packaged tools that we can launch from a LiveCD. Metasploit Framework 03:48-05:22 Let's start the Metasploit Framework by clicking on it. That's going to create the database where we're going to save some of the information about hosts and other things. Once that's done, I'll go ahead and bring up Armitage, which is the graphical user interface that we can use to easily interact with the extensive commands that exist as part of the Metasploit framework.
It looks like that's launched now, and we'll go ahead and launch Armitage. There it is. We'll connect, and it asks us if we want to start the remote procedure call (RPC) server. We'll say, "Sure, that'll be great. It'll connect us up to the database." It takes just a minute to make those connections. It looks like it made the connection, and it'll launch here shortly. We're greeted with several different windows and several different options. Essentially, what we need to do is tell Armitage (and, in conjunction, the Metasploit Framework) which hosts we want to actually try to launch an attack on. We could use an Nmap scan to import that information. We could also use the Metasploit Framework scan, which will go out and ping the different machines in a certain subnet and gather information. In this case, I'm going to keep it simple so we don't scan all the hosts on our network, and I'll just add a single host, a vulnerable Linux virtual machine that I have on the network. I know it's on the IP ending in .102. Now click Add and then OK. It says that it added it, and we've identified a host. However, we don't have much information about it yet.

Open Ports 05:22-05:45

But if I right-click on that host, I can scan it. Now it'll look for all the open ports, and you can see a variety of open ports coming up. We can also request additional information about the services that might be running on specific ports. You can see here that we have certain ports, but we don't know much about the services quite yet.

MSF Scan 05:45-06:35

If we wanted to, we could try to gather some additional information. Let's do an MSF scan on that host, and that will try to identify some information about the operating system that's running on it. You'll notice that every time we run a command, it launches a separate window down here, so you can close these out as you go.
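Under the hood, the port scan shown above boils down to attempting TCP connections: a completed handshake means the port is open. Here's a minimal Python sketch of a TCP connect scan; the IP address in the comment is just the lab machine from this demo and would differ on your network.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Attempt a full TCP connect to each port; a successful
    handshake means the port is open. Returns sorted open ports."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return sorted(open_ports)

# Example against the demo's vulnerable VM (address is illustrative):
# tcp_connect_scan("192.168.1.102", [21, 22, 80, 139, 445])
```

Real scanners like Nmap are far faster and stealthier (SYN scans, parallelism, service fingerprinting), but the core idea is the same.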
We know some information about this computer--some of the ports that are open, some of the services that may be running, and so on. It's not a lot of data, but Metasploit keeps track of vulnerabilities based on the ports that are open, the services that are running, the operating system, and so on. You can come up here and actually find attacks specific to the host that you've already discovered. We'll let it go through its database and find specific exploits that it might want to try.

Specific Exploit 06:35-07:33

So, that's complete. Now we can go through the Attack menu and look for a specific exploit. You'll see here that, based on the ports it found, it said, "Hey, FTP's open. Go ahead and try these exploits." Clicking on any of these will run that specific exploit, and you have to provide specific values for it. We don't actually know whether any of these attacks will succeed, so the ability to check for that is built into the Metasploit Framework. Some of the exploits support this check; others are older and don't.
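Conceptually, the Find Attacks step is a lookup: match each discovered service against an index of exploit modules. This toy Python sketch shows that matching logic; the module names and index here are entirely made up for illustration, not real Metasploit module paths.

```python
# Hypothetical exploit index keyed by service name (illustrative only).
EXPLOIT_INDEX = {
    "ftp":  ["exploit/unix/ftp/example_ftp_backdoor"],
    "smb":  ["exploit/multi/samba/example_usermap"],
    "http": ["exploit/multi/http/example_php_cgi"],
}

def find_attacks(discovered_services):
    """Map each discovered port to candidate exploit modules,
    based on which service was identified on that port."""
    candidates = {}
    for port, service in discovered_services.items():
        modules = EXPLOIT_INDEX.get(service, [])
        if modules:
            candidates[port] = modules
    return candidates
```

The real framework matches on much more than the service name (version strings, OS, architecture), which is why the earlier fingerprinting scans improve the quality of the suggested attacks.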
Let's try doing a check on some of these right now. It says this one doesn't support the check, this specific vulnerability is not exploitable, not exploitable, doesn't support the check, and so on. For the ones that don't support the check, we'd have to go through and try each of them individually.

Hail Mary 07:33-08:58

Now, generally, as you're looking at an organization and trying to compromise hosts within it, you don't want to make a lot of noise or cause a lot of traffic on the network, so you try these very specific attacks based on what you discover. However, there is an option inside of Armitage (and, subsequently, Metasploit) that lets you do what's known as the Hail Mary, and that is to just try every single attack possible. In the interest of time, rather than going through every single one of these attacks in a systematic way, we'll do the noisy attack because I want to show you how you can compromise the machine, and we'll see which vulnerabilities actually exist. If we click on Hail Mary here, it says, "Hey, are you sure you want to do this? There's going to be a flood of exploits, and it's kind of noisy." We'll say yes, that's what we intend to do, and then it'll go through that database of exploits and try to run them with a variety of different payloads, trying to establish a connection to that remote machine. We'll let it do its thing here for a minute. It went through the database, and now it's launching each of those exploits. You'll see that the exploits are going to certain ports based on the services that are running on them. We'll give it another minute here, and it'll start taking advantage of some of those exploits.

Exploit Found 08:58-10:54

Oh look, our icon changed. That means we actually have a connection. And once it made the connection, it gathered some additional information and says, "Yep, this is definitely a Linux box." We can see what version of Linux and some other information, too.
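The check-first triage just demonstrated can be sketched in a few lines: run the check where a module supports it, and queue everything else for manual attempts. The `Exploit` class below is a stand-in for a Metasploit module, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Exploit:
    """Stand-in for a Metasploit module; fields are illustrative."""
    name: str
    supports_check: bool
    vulnerable: bool = False  # what check() would report, if supported

def triage(exploits):
    """Split modules the way the demo does: checks first, then a
    manual queue for modules that don't support checking."""
    confirmed, ruled_out, manual = [], [], []
    for e in exploits:
        if not e.supports_check:
            manual.append(e.name)      # must be tried individually
        elif e.vulnerable:
            confirmed.append(e.name)   # check says exploitable
        else:
            ruled_out.append(e.name)   # check says not exploitable
    return confirmed, ruled_out, manual
```

This is also why the Hail Mary is so noisy: it skips this triage and simply fires everything, checks or no checks.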
Once we have a session, we can gather all sorts of data. We can get a dump of the passwords and the hashes associated with them and then try to crack those. Maybe we compromise the accounts and get more straightforward access. There are all of these different sessions that we could establish based on the different exploits we just ran. Again, usually, you wouldn't do the Hail Mary because it's too noisy. But in this case, it's a quick and easy demonstration of the vulnerabilities that exist on this specific machine. If we scroll down here, we'll see a variety of exploits that exist for this machine, and we'll give ourselves a little more space. You can see, as we scroll up, that it looks like there's a PHP exploit, a PHP CGI injection. There's a usermap Samba exploit, another Samba exploit, and several others that actually allowed us to open a session to that machine. Let's go ahead and open a new console session. I'll adjust these windows so we can see better. We can run the sessions command, and you'll see that we currently have four sessions connecting from our machine--I know that's this IP address--to this vulnerable machine, right there, through PHP and then through some command shells. Let me go ahead and connect to one of these sessions interactively. We'll choose Session 2, and now I've got a Linux command prompt. I can do things like list the file system. I can ask, "Who am I?" If you look at the bottom here, I have root access. With that access, I could grab the password hashes, execute any commands, set up additional backdoors--all sorts of things.
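The password-cracking step mentioned above is, at its simplest, a dictionary attack: hash each candidate word and compare against the stolen hash. This Python sketch uses unsalted SHA-256 purely to show the idea; real /etc/shadow entries use salted, deliberately slow hashes (sha512crypt, yescrypt), which is why dedicated tools like John the Ripper or hashcat exist.

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Try each candidate word; return the word whose SHA-256 digest
    matches the target hash, or None if the wordlist is exhausted.
    (Simplified: real shadow hashes are salted and iterated.)"""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

This is also why long, random passwords matter: a dictionary attack only succeeds when the password appears in the attacker's wordlist.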
Summary 10:54-11:29

Penetration testing is awesome, and ultimately, we're trying to compromise systems so that we know how to lock them down. To do that, we often use the pre-packaged tools that come with a distribution. Kali Linux, with Metasploit and Armitage already pre-packaged, is a fantastic toolset for penetration testing. That's it for this demonstration. In this demo, we talked about penetration testing tools, the concept of a LiveCD, and a couple of Linux distributions that you could download and use in your testing.